[jira] [Commented] (HADOOP-9972) new APIs for listStatus and globStatus to deal with symlinks

2013-09-17 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770496#comment-13770496
 ] 

Binglin Chang commented on HADOOP-9972:
---

Regarding the API, I think we should differentiate between core APIs and extended/legacy APIs. IMO, there should be three core APIs:

getFileStatus      resolves symlinks
getFileLinkStatus  does not resolve symlinks
readdir            does not resolve symlinks, just like the current HDFS listStatus

These core APIs should be implemented by each FS.

All other related APIs can be built on the core APIs and implemented in 
FSContext/FileSystem once and for all:
{code}
FS.listStatus(path):
  readdir(path).map(s => if (s.isSymlink) getFileStatus(s.path) ignoring exceptions else s)

FS.listStatus(path, PathOptions):
   readdir(path).map(process PathOptions)

glob(pattern):
  if pattern matches none, return pattern
  else return matched paths
  ignore all exceptions

globStatus(pattern):
  glob(pattern).map(getFileStatus)
{code}
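
For illustration, here is a minimal Java sketch of the proposed layering, assuming the core readdir behaves like today's HDFS listStatus and that errors from dangling links are swallowed (the class and helper names are hypothetical):
{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListStatusSketch {
  // Builds the symlink-resolving listStatus out of readdir + getFileStatus.
  public static FileStatus[] listStatusResolving(FileSystem fs, Path dir)
      throws IOException {
    List<FileStatus> out = new ArrayList<FileStatus>();
    for (FileStatus s : fs.listStatus(dir)) {       // stands in for the core readdir
      if (s.isSymlink()) {
        try {
          out.add(fs.getFileStatus(s.getPath()));   // resolve the link target
        } catch (FileNotFoundException e) {
          // dangling symlink: skip it, per the proposed default behavior
        }
      } else {
        out.add(s);
      }
    }
    return out.toArray(new FileStatus[out.size()]);
  }
}
{code}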


> new APIs for listStatus and globStatus to deal with symlinks
> 
>
> Key: HADOOP-9972
> URL: https://issues.apache.org/jira/browse/HADOOP-9972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.1.1-beta
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>
> Based on the discussion in HADOOP-9912, we need new APIs for FileSystem to 
> deal with symlinks.  The issue is that code has been written which is 
> incompatible with the existence of things which are not files or directories. 
>  For example,
> there is a lot of code out there that looks at FileStatus#isFile, and
> if it returns false, assumes that what it is looking at is a
> directory.  In the case of a symlink, this assumption is incorrect.
> It seems reasonable to make the default behavior of {{FileSystem#listStatus}} 
> and {{FileSystem#globStatus}} be fully resolving symlinks, and ignoring 
> dangling ones.  This will prevent incompatibility with existing MR jobs and 
> other HDFS users.  We should also add new versions of listStatus and 
> globStatus that allow new, symlink-aware code to deal with symlinks as 
> symlinks.
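
For illustration, a minimal sketch of the fragile pattern the description refers to (the handler methods are hypothetical):
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class IsFileAssumption {
  static void walk(FileSystem fs, Path dir) throws IOException {
    for (FileStatus st : fs.listStatus(dir)) {
      if (st.isFile()) {
        handleFile(st);
      } else {
        // Assumes "not a file" means "directory"; a symlink breaks this.
        handleDirectory(st);
      }
    }
  }

  static void handleFile(FileStatus st) { /* hypothetical */ }
  static void handleDirectory(FileStatus st) { /* hypothetical */ }
}
{code}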



[jira] [Commented] (HADOOP-9972) new APIs for listStatus and globStatus to deal with symlinks

2013-09-17 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770471#comment-13770471
 ] 

Binglin Chang commented on HADOOP-9972:
---

Hi Colin, 
Regarding the globStatus example: if we follow the Linux practice, 
globStatus(pattern) = glob(pattern).map(path => getFileStatus(path)), with glob 
defined as:
{code}
String[] glob(pattern):
  if pattern matches none, return pattern
  else return matched paths
  ignore all exceptions
{code}

I did some experiments; you can see that {{ls *}} indeed shows an error message, 
but {{ls */cc}} should not show one.
{code}
[root@master01 test]# mkdir -p aa/cc/foo
[root@master01 test]# mkdir -p bb/cc/foo
[root@master01 test]# chmod 700 bb
[root@master01 test]# ll /home/serengeti/.bash
[root@master01 test]# su serengeti
[serengeti@master01 test]$ ll
total 8
drwxr-xr-x 3 root root 4096 Sep 18 08:30 aa
drwx------ 3 root root 4096 Sep 18 08:31 bb
[serengeti@master01 test]$ ls *
aa:
cc
ls: bb: Permission denied
[serengeti@master01 test]$ ls */cc
foo
{code}

Separating globStatus into glob and getFileStatus seems a more proper way of 
implementing globStatus than adding new classes/interfaces and a callback handler; 
since this follows the Linux practice, it should also be more robust.
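
A minimal Java sketch of the glob semantics argued for above, where an unmatched pattern is returned literally just as a POSIX shell leaves an unmatched glob unexpanded (the method shape is an assumption, not an existing FileSystem API):
{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GlobSketch {
  // Returns matched paths, or the pattern itself when nothing matches.
  static List<Path> glob(FileSystem fs, Path pattern) throws IOException {
    List<Path> result = new ArrayList<Path>();
    FileStatus[] matches = fs.globStatus(pattern);  // stands in for the matcher
    if (matches == null || matches.length == 0) {
      result.add(pattern);  // shell behavior: hand back the literal pattern
    } else {
      for (FileStatus m : matches) {
        result.add(m.getPath());
      }
    }
    return result;
  }
}
{code}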







> new APIs for listStatus and globStatus to deal with symlinks
> 
>
> Key: HADOOP-9972
> URL: https://issues.apache.org/jira/browse/HADOOP-9972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.1.1-beta
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>
> Based on the discussion in HADOOP-9912, we need new APIs for FileSystem to 
> deal with symlinks.  The issue is that code has been written which is 
> incompatible with the existence of things which are not files or directories. 
>  For example,
> there is a lot of code out there that looks at FileStatus#isFile, and
> if it returns false, assumes that what it is looking at is a
> directory.  In the case of a symlink, this assumption is incorrect.
> It seems reasonable to make the default behavior of {{FileSystem#listStatus}} 
> and {{FileSystem#globStatus}} be fully resolving symlinks, and ignoring 
> dangling ones.  This will prevent incompatibility with existing MR jobs and 
> other HDFS users.  We should also add new versions of listStatus and 
> globStatus that allow new, symlink-aware code to deal with symlinks as 
> symlinks.



[jira] [Created] (HADOOP-9978) Support range reads in s3n interface to split objects for mappers to read

2013-09-17 Thread Amandeep Khurana (JIRA)
Amandeep Khurana created HADOOP-9978:


 Summary: Support range reads in s3n interface to split objects for 
mappers to read
 Key: HADOOP-9978
 URL: https://issues.apache.org/jira/browse/HADOOP-9978
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Amandeep Khurana






[jira] [Updated] (HADOOP-9776) HarFileSystem.listStatus() returns "har://-localhost:/..." if port number is empty

2013-09-17 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9776:
--

Assignee: shanyu zhao

> HarFileSystem.listStatus() returns "har://-localhost:/..." if port 
> number is empty
> --
>
> Key: HADOOP-9776
> URL: https://issues.apache.org/jira/browse/HADOOP-9776
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-9776-2.patch, HADOOP-9776.patch
>
>
> If the given har URI is "har://-localhost/usr/my.har/a", the result 
> of HarFileSystem.listStatus() will have a ":" appended after localhost, like 
> this: "har://-localhost:/usr/my.har/a". It should return 
> "har://-localhost/usr/my.har/a" instead.
> This creates a problem when running the hive unit test TestCliDriver 
> (archive_excludeHadoop20.q), generating the following error:
>   java.io.IOException: cannot find dir = 
> har://pfile-localhost:/GitHub/hive-monarch/build/ql/test/data/warehouse/tstsrcpart/ds=2008-04-08/hr=12/data.har/00_0
>  in pathToPartitionInfo: 
> [pfile:/GitHub/hive-monarch/build/ql/test/data/warehouse/tstsrcpart/ds=2008-04-08/hr=11,
>  
> har://pfile-localhost/GitHub/hive-monarch/build/ql/test/data/warehouse/tstsrcpart/ds=2008-04-08/hr=12/data.har]
>   [junit] at 
> org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:298)
>   [junit] at 
> org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:260)
>   [junit] at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat$CombineHiveInputSplit.(CombineHiveInputFormat.java:104)
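
For illustration, a minimal sketch of the kind of string formatting that produces the stray colon when no port is given (this is not the actual HarFileSystem code):
{code}
import java.net.URI;
import java.net.URISyntaxException;

public class EmptyPortDemo {
  public static void main(String[] args) throws URISyntaxException {
    URI uri = new URI("har://pfile-localhost/usr/my.har/a");  // no port
    // uri.getPort() is -1 here; gluing host + ":" + path back together
    // reproduces the bad "har://pfile-localhost:/..." form.
    String bad = uri.getScheme() + "://" + uri.getHost() + ":" + uri.getPath();
    // Using the authority as-is omits the colon when there is no port.
    String good = uri.getScheme() + "://" + uri.getAuthority() + uri.getPath();
    System.out.println(bad);   // har://pfile-localhost:/usr/my.har/a
    System.out.println(good);  // har://pfile-localhost/usr/my.har/a
  }
}
{code}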



[jira] [Commented] (HADOOP-9976) Different versions of avro and avro-maven-plugin

2013-09-17 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770436#comment-13770436
 ] 

Karthik Kambatla commented on HADOOP-9976:
--

Haven't included any tests because this was just a pom change. The javac 
warnings are due to deprecations in the new version of avro. Given we are 
already using 1.7.4, I think we should get this in.

> Different versions of avro and avro-maven-plugin
> 
>
> Key: HADOOP-9976
> URL: https://issues.apache.org/jira/browse/HADOOP-9976
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.1-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: hadoop-9976-1.patch
>
>
> Post HADOOP-9672, the versions for avro and avro-maven-plugin are different - 
> 1.7.4 and 1.5.3 respectively. 



[jira] [Commented] (HADOOP-9971) Pack hadoop compress native libs and upload it to maven for other projects to depend on

2013-09-17 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770358#comment-13770358
 ] 

Liu Shaohui commented on HADOOP-9971:
-

[~cnauroth] Thanks for pointing out these problems. The native libs vary across 
platforms and even across glibc versions. hadoop-snappy does not solve these 
problems either; you need to compile it for the platform and glibc version you 
use. 

This patch is useful for companies whose online servers share the same platform 
and glibc version: one team compiles the common native libs and uploads them to a 
maven repository, and every other project can use them without worrying about 
compatibility.

> Pack hadoop compress native libs and upload it to maven for other projects to 
> depend on
> ---
>
> Key: HADOOP-9971
> URL: https://issues.apache.org/jira/browse/HADOOP-9971
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Liu Shaohui
>Priority: Minor
> Attachments: HADOOP-9971-trunk-v1.diff
>
>
> Currently, if other projects like hbase want to use the hadoop common native 
> libs, they must copy the native libs into their distribution, which is not 
> agile. Following the idea of 
> hadoop-snappy (http://code.google.com/p/hadoop-snappy), we can pack the hadoop 
> common native libs and upload them to a maven repository for other projects to 
> depend on.



[jira] [Commented] (HADOOP-9976) Different versions of avro and avro-maven-plugin

2013-09-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770357#comment-13770357
 ] 

Hadoop QA commented on HADOOP-9976:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603693/hadoop-9976-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

  {color:red}-1 javac{color}.  The applied patch generated 1527 javac 
compiler warnings (more than the trunk's current 1147 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3106//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3106//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3106//console

This message is automatically generated.

> Different versions of avro and avro-maven-plugin
> 
>
> Key: HADOOP-9976
> URL: https://issues.apache.org/jira/browse/HADOOP-9976
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.1-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: hadoop-9976-1.patch
>
>
> Post HADOOP-9672, the versions for avro and avro-maven-plugin are different - 
> 1.7.4 and 1.5.3 respectively. 



[jira] [Updated] (HADOOP-9976) Different versions of avro and avro-maven-plugin

2013-09-17 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated HADOOP-9976:
---

Status: Patch Available  (was: Open)

> Different versions of avro and avro-maven-plugin
> 
>
> Key: HADOOP-9976
> URL: https://issues.apache.org/jira/browse/HADOOP-9976
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.1-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: hadoop-9976-1.patch
>
>
> Post HADOOP-9672, the versions for avro and avro-maven-plugin are different - 
> 1.7.4 and 1.5.3 respectively. 



[jira] [Commented] (HADOOP-9976) Different versions of avro and avro-maven-plugin

2013-09-17 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770341#comment-13770341
 ] 

Sandy Ryza commented on HADOOP-9976:


+1 pending jenkins

> Different versions of avro and avro-maven-plugin
> 
>
> Key: HADOOP-9976
> URL: https://issues.apache.org/jira/browse/HADOOP-9976
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.1-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: hadoop-9976-1.patch
>
>
> Post HADOOP-9672, the versions for avro and avro-maven-plugin are different - 
> 1.7.4 and 1.5.3 respectively. 



[jira] [Updated] (HADOOP-9669) Reduce the number of byte array creations and copies in XDR data manipulation

2013-09-17 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-9669:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Reduce the number of byte array creations and copies in XDR data manipulation
> -
>
> Key: HADOOP-9669
> URL: https://issues.apache.org/jira/browse/HADOOP-9669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Haohui Mai
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9669.001.patch, HADOOP-9669.002.patch, 
> HADOOP-9669.003.patch, HADOOP-9669.patch
>
>
> XDR.writeXxx(..) methods ultimately use the static XDR.append(..) for writing 
> each data type.  The static append creates a new array and copies data.  
> Therefore, for a single reply such as RpcAcceptedReply.voidReply(..), there 
> are multiple array creations and array copies.  For example, there are at 
> least 6 array creations and array copies for RpcAcceptedReply.voidReply(..).
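
A minimal sketch of the buffering idea behind the change, assuming the fix takes this general shape: write into one growable buffer instead of allocating and copying a fresh array on every append.
{code}
import java.io.ByteArrayOutputStream;

public class GrowableWriteDemo {
  public static void main(String[] args) {
    // One buffer with amortized growth: repeated writes no longer cost an
    // allocation plus a full copy each, unlike the static append above.
    ByteArrayOutputStream buf = new ByteArrayOutputStream(256);
    byte[] xid = {0, 0, 0, 1};            // e.g. one XDR-encoded int
    buf.write(xid, 0, xid.length);
    byte[] acceptState = {0, 0, 0, 0};    // another 4-byte XDR field
    buf.write(acceptState, 0, acceptState.length);
    byte[] reply = buf.toByteArray();     // a single copy at the end
    System.out.println(reply.length);     // 8
  }
}
{code}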



[jira] [Commented] (HADOOP-9669) Reduce the number of byte array creations and copies in XDR data manipulation

2013-09-17 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770264#comment-13770264
 ] 

Brandon Li commented on HADOOP-9669:


I've committed the patch. Thank you, Haohui.

> Reduce the number of byte array creations and copies in XDR data manipulation
> -
>
> Key: HADOOP-9669
> URL: https://issues.apache.org/jira/browse/HADOOP-9669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Haohui Mai
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9669.001.patch, HADOOP-9669.002.patch, 
> HADOOP-9669.003.patch, HADOOP-9669.patch
>
>
> XDR.writeXxx(..) methods ultimately use the static XDR.append(..) for writing 
> each data type.  The static append creates a new array and copies data.  
> Therefore, for a single reply such as RpcAcceptedReply.voidReply(..), there 
> are multiple array creations and array copies.  For example, there are at 
> least 6 array creations and array copies for RpcAcceptedReply.voidReply(..).



[jira] [Updated] (HADOOP-9669) Reduce the number of byte array creations and copies in XDR data manipulation

2013-09-17 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-9669:
---

Fix Version/s: 2.1.1-beta

> Reduce the number of byte array creations and copies in XDR data manipulation
> -
>
> Key: HADOOP-9669
> URL: https://issues.apache.org/jira/browse/HADOOP-9669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Haohui Mai
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9669.001.patch, HADOOP-9669.002.patch, 
> HADOOP-9669.003.patch, HADOOP-9669.patch
>
>
> XDR.writeXxx(..) methods ultimately use the static XDR.append(..) for writing 
> each data type.  The static append creates a new array and copies data.  
> Therefore, for a single reply such as RpcAcceptedReply.voidReply(..), there 
> are multiple array creations and array copies.  For example, there are at 
> least 6 array creations and array copies for RpcAcceptedReply.voidReply(..).



[jira] [Commented] (HADOOP-9669) Reduce the number of byte array creations and copies in XDR data manipulation

2013-09-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770219#comment-13770219
 ] 

Hadoop QA commented on HADOOP-9669:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603718/HADOOP-9669.003.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3105//console

This message is automatically generated.

> Reduce the number of byte array creations and copies in XDR data manipulation
> -
>
> Key: HADOOP-9669
> URL: https://issues.apache.org/jira/browse/HADOOP-9669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Haohui Mai
> Attachments: HADOOP-9669.001.patch, HADOOP-9669.002.patch, 
> HADOOP-9669.003.patch, HADOOP-9669.patch
>
>
> XDR.writeXxx(..) methods ultimately use the static XDR.append(..) for writing 
> each data type.  The static append creates a new array and copies data.  
> Therefore, for a single reply such as RpcAcceptedReply.voidReply(..), there 
> are multiple array creations and array copies.  For example, there are at 
> least 6 array creations and array copies for RpcAcceptedReply.voidReply(..).



[jira] [Commented] (HADOOP-9669) Reduce the number of byte array creations and copies in XDR data manipulation

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770214#comment-13770214
 ] 

Hudson commented on HADOOP-9669:


SUCCESS: Integrated in Hadoop-trunk-Commit #4432 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4432/])
HADOOP-9669 Reduce the number of byte array creations and copies in XDR data 
manipulation. Contributed by Haohui Mai (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1524259)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpClient.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpServerHandler.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/XDR.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsTime.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/nfs3/TestFileHandle.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestXDR.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/security/TestCredentialsSys.java


> Reduce the number of byte array creations and copies in XDR data manipulation
> -
>
> Key: HADOOP-9669
> URL: https://issues.apache.org/jira/browse/HADOOP-9669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Haohui Mai
> Attachments: HADOOP-9669.001.patch, HADOOP-9669.002.patch, 
> HADOOP-9669.003.patch, HADOOP-9669.patch
>
>
> XDR.writeXxx(..) methods ultimately use the static XDR.append(..) for writing 
> each data type.  The static append creates a new array and copies data.  
> Therefore, for a single reply such as RpcAcceptedReply.voidReply(..), there 
> are multiple array creations and array copies.  For example, there are at 
> least 6 array creations and array copies for RpcAcceptedReply.voidReply(..).



[jira] [Updated] (HADOOP-9669) Reduce the number of byte array creations and copies in XDR data manipulation

2013-09-17 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-9669:
---

Attachment: HADOOP-9669.003.patch

A new patch that addresses the problems found by [~brandonli]. Thanks very much, 
[~brandonli].

> Reduce the number of byte array creations and copies in XDR data manipulation
> -
>
> Key: HADOOP-9669
> URL: https://issues.apache.org/jira/browse/HADOOP-9669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Haohui Mai
> Attachments: HADOOP-9669.001.patch, HADOOP-9669.002.patch, 
> HADOOP-9669.003.patch, HADOOP-9669.patch
>
>
> XDR.writeXxx(..) methods ultimately use the static XDR.append(..) for writing 
> each data type.  The static append creates a new array and copies data.  
> Therefore, for a single reply such as RpcAcceptedReply.voidReply(..), there 
> are multiple array creations and array copies.  For example, there are at 
> least 6 array creations and array copies for RpcAcceptedReply.voidReply(..).



[jira] [Commented] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2013-09-17 Thread Lohit Vijayarenu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770111#comment-13770111
 ] 

Lohit Vijayarenu commented on HADOOP-9631:
--

[~cnauroth] Can you please help review the latest patch?

> ViewFs should use underlying FileSystem's server side defaults
> --
>
> Key: HADOOP-9631
> URL: https://issues.apache.org/jira/browse/HADOOP-9631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 2.0.4-alpha
>Reporter: Lohit Vijayarenu
> Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, 
> HADOOP-9631.trunk.3.patch, HADOOP-9631.trunk.4.patch, TestFileContext.java
>
>
> On a cluster with ViewFS as default FileSystem, creating files using 
> FileContext will always result with replication factor of 1, instead of 
> underlying filesystem default (like HDFS)
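
For illustration, a minimal sketch of the direction the title suggests: ask the resolved target filesystem for its server-side defaults (e.g. HDFS replication) instead of falling back to client-side defaults. The helper is hypothetical, not the actual patch:
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.fs.Path;

public class ServerDefaultsSketch {
  // Given the target filesystem a viewfs path resolves to, use its defaults.
  static short replicationFor(FileSystem target, Path p) throws IOException {
    FsServerDefaults d = target.getServerDefaults(p);
    return d.getReplication();
  }
}
{code}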



[jira] [Commented] (HADOOP-9639) truly shared cache for jars (jobjar/libjar)

2013-09-17 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769725#comment-13769725
 ] 

Sangjin Lee commented on HADOOP-9639:
-

I'd like to get started on implementing this feature, assuming that the feature 
is considered useful and at least the core of the design is reasonable. We could 
still make subsequent changes to the design as the implementation progresses.

I would very much like to do this out in the open, rather than creating a large 
patch at the end of the implementation and proposing it. That way, we could get 
more fine-grained feedback. What are good ways to do it for something like this?

Is the branch committer concept potentially applicable here? Please let me know 
what you think. Thanks!

> truly shared cache for jars (jobjar/libjar)
> ---
>
> Key: HADOOP-9639
> URL: https://issues.apache.org/jira/browse/HADOOP-9639
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: filecache
>Affects Versions: 2.0.4-alpha
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: shared_cache_design.pdf, shared_cache_design_v2.pdf, 
> shared_cache_design_v3.pdf
>
>
> Currently there is the distributed cache that enables you to cache jars and 
> files so that attempts from the same job can reuse them. However, sharing is 
> limited with the distributed cache because it is normally on a per-job basis. 
> On a large cluster, sometimes copying of jobjars and libjars becomes so 
> prevalent that it consumes a large portion of the network bandwidth, not to 
> speak of defeating the purpose of "bringing compute to where data is". This 
> is wasteful because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared 
> cache so that multiple jobs from multiple users can share and cache jars. 
> This JIRA is to open the discussion.



[jira] [Commented] (HADOOP-9854) Configuration.set() may be called before all the deprecated keys are registered, causing inconsistent state

2013-09-17 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770104#comment-13770104
 ] 

Joep Rottinghuis commented on HADOOP-9854:
--

Marking as blocker. Without this, Cascading was not able to properly replace 
deprecated keys with new ones in the config, causing jobs to fail.

> Configuration.set() may be called before all the deprecated keys are 
> registered, causing inconsistent state
> ---
>
> Key: HADOOP-9854
> URL: https://issues.apache.org/jira/browse/HADOOP-9854
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.0.5-alpha
>Reporter: Sangjin Lee
>Priority: Blocker
>
> Currently deprecated keys are registered at various times. Some are 
> registered  when the Configuration class itself is initialized, but the vast 
> majority are registered when the JobConf class is initialized.
> Therefore, it is entirely possible (and does happen) that Configuration.set() 
> is called for a key before its deprecation mapping is registered, thus 
> leaving the internal state of Configuration in an inconsistent state.
> We actually had this problem occur in real life, causing the set value not to 
> be recognized.
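
For illustration, a minimal sketch of the ordering problem (the key pair is hypothetical, not an actual Hadoop mapping):
{code}
import org.apache.hadoop.conf.Configuration;

public class DeprecationOrderDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // set() runs before the deprecation mapping is registered...
    conf.set("old.key", "value");
    Configuration.addDeprecation("old.key", new String[] {"new.key"});
    // ...so the earlier set() was never remapped and this lookup can miss.
    System.out.println(conf.get("new.key"));  // may print null
  }
}
{code}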



[jira] [Updated] (HADOOP-9975) Adding relogin() method to UGI

2013-09-17 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-9975:
--

Attachment: HADOOP-9975.patch

Attached a patch based on HADOOP-9926; I will submit it when the dependency is resolved.

> Adding relogin() method to UGI
> --
>
> Key: HADOOP-9975
> URL: https://issues.apache.org/jira/browse/HADOOP-9975
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-9975.patch
>
>
> The current Hadoop UGI implementation has API methods like 
> reloginFromKeytab() and reloginFromTicketCache().  However, such methods are 
> too Kerberos-specific and also involve login implementation details; it 
> would be better to add a generic relogin() method regardless of the 
> authentication mechanism. This is possible since relevant 
> authentication-specific parameters like principal and keytab are already 
> passed and saved in the UGI object after the initial login.



[jira] [Updated] (HADOOP-9854) Configuration.set() may be called before all the deprecated keys are registered, causing inconsistent state

2013-09-17 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated HADOOP-9854:
-

Priority: Blocker  (was: Major)

> Configuration.set() may be called before all the deprecated keys are 
> registered, causing inconsistent state
> ---
>
> Key: HADOOP-9854
> URL: https://issues.apache.org/jira/browse/HADOOP-9854
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.0.5-alpha
>Reporter: Sangjin Lee
>Priority: Blocker
>
> Currently deprecated keys are registered at various times. Some are 
> registered  when the Configuration class itself is initialized, but the vast 
> majority are registered when the JobConf class is initialized.
> Therefore, it is entirely possible (and does happen) that Configuration.set() 
> is called for a key before its deprecation mapping is registered, thus 
> leaving the internal state of Configuration in an inconsistent state.
> We actually had this problem occur in real life, causing the set value not to 
> be recognized.



[jira] [Updated] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-09-17 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-9912:


Affects Version/s: (was: 2.3.0)
   2.1.0-beta

Updating affects version to 2.1 since HADOOP-8040 subtasks are in 2.1 (modulo 
HADOOP-9417).

> globStatus of a symlink to a directory does not report symlink as a directory
> -
>
> Key: HADOOP-9912
> URL: https://issues.apache.org/jira/browse/HADOOP-9912
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.1.0-beta
>Reporter: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-9912-testcase.patch, new-hdfs.txt, new-local.txt, 
> old-hdfs.txt, old-local.txt
>
>
> globStatus for a path that is a symlink to a directory used to report the 
> resulting FileStatus as a directory but recently this has changed.



[jira] [Updated] (HADOOP-9954) Hadoop 2.0.5 doc build failure - OutOfMemoryError exception

2013-09-17 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated HADOOP-9954:
-

Environment: CentOS 5, Sun JDK 1.6 (but not on CentOS 6 + OpenJDK 7).

> Hadoop 2.0.5 doc build failure - OutOfMemoryError exception
> ---
>
> Key: HADOOP-9954
> URL: https://issues.apache.org/jira/browse/HADOOP-9954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.5-alpha
> Environment: CentOS 5, Sun JDK 1.6 (but not on CentOS 6 + OpenJDK 7).
>Reporter: Paul Han
> Fix For: 2.0.5-alpha
>
> Attachments: HADOOP-9954.patch
>
>
> When running the Hadoop build with the command-line options:
> {code}
> mvn package -Pdist,native,docs -DskipTests -Dtar 
> {code}
> The build failed and an OutOfMemoryError exception was thrown:
> {code}
> [INFO] --- maven-source-plugin:2.1.2:test-jar (default) @ hadoop-hdfs ---
> [INFO] 
> [INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default) @ hadoop-hdfs ---
> [INFO] ** FindBugsMojo execute ***
> [INFO] canGenerate is true
> [INFO] ** FindBugsMojo executeFindbugs ***
> [INFO] Temp File is 
> /var/lib/jenkins/workspace/Hadoop-Client-2.0.5-T-RPM/rpms/hadoop-devel.x86_64/BUILD/hadoop-common/hadoop-hdfs-project/hadoop-hdfs/target/findbugsTemp.xml
> [INFO] Fork Value is true
>  [java] Out of memory
>  [java] Total memory: 477M
>  [java]  free memory: 68M
>  [java] Analyzed: 
> /var/lib/jenkins/workspace/Hadoop-Client-2.0.5-T-RPM/rpms/hadoop-devel.x86_64/BUILD/hadoop-common/hadoop-hdfs-project/hadoop-hdfs/target/classes
>  [java]  Aux: 
> /home/henkins-service/.m2/repository/org/codehaus/mojo/findbugs-maven-plugin/2.3.2/findbugs-maven-plugin-2.3.2.jar
>  [java]  Aux: 
> /home/henkins-service/.m2/repository/com/google/code/findbugs/bcel/1.3.9/bcel-1.3.9.jar
>  ...
>  [java]  Aux: 
> /home/henkins-service/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar
>  [java] Exception in thread "main" java.lang.OutOfMemoryError: GC 
> overhead limit exceeded
>  [java]   at java.util.HashMap.<init>(HashMap.java:226)
>  [java]   at 
> edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefSet.<init>(UnconditionalValueDerefSet.java:68)
>  [java]   at 
> edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefAnalysis.createFact(UnconditionalValueDerefAnalysis.java:650)
>  [java]   at 
> edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefAnalysis.createFact(UnconditionalValueDerefAnalysis.java:82)
>  [java]   at 
> edu.umd.cs.findbugs.ba.BasicAbstractDataflowAnalysis.getFactOnEdge(BasicAbstractDataflowAnalysis.java:119)
>  [java]   at 
> edu.umd.cs.findbugs.ba.AbstractDataflow.getFactOnEdge(AbstractDataflow.java:54)
>  [java]   at 
> edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.examineNullValues(NullDerefAndRedundantComparisonFinder.java:297)
>  [java]   at 
> edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.execute(NullDerefAndRedundantComparisonFinder.java:150)
>  [java]   at 
> edu.umd.cs.findbugs.detect.FindNullDeref.analyzeMethod(FindNullDeref.java:278)
>  [java]   at 
> edu.umd.cs.findbugs.detect.FindNullDeref.visitClassContext(FindNullDeref.java:205)
>  [java]   at 
> edu.umd.cs.findbugs.DetectorToDetector2Adapter.visitClass(DetectorToDetector2Adapter.java:68)
>  [java]   at 
> edu.umd.cs.findbugs.FindBugs2.analyzeApplication(FindBugs2.java:979)
>  [java]   at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:230)
>  [java]   at edu.umd.cs.findbugs.FindBugs.runMain(FindBugs.java:348)
>  [java]   at edu.umd.cs.findbugs.FindBugs2.main(FindBugs2.java:1057)
>  [java] Java Result: 1
> [INFO] No bugs found
> {code}



[jira] [Updated] (HADOOP-9976) Different versions of avro and avro-maven-plugin

2013-09-17 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9976:
-

Attachment: hadoop-9976-1.patch

Straight-forward patch. Grepped through to make sure there are no other 
versions. 

> Different versions of avro and avro-maven-plugin
> 
>
> Key: HADOOP-9976
> URL: https://issues.apache.org/jira/browse/HADOOP-9976
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.1-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: hadoop-9976-1.patch
>
>
> Post HADOOP-9672, the versions for avro and avro-maven-plugin are different - 
> 1.7.4 and 1.5.3 respectively. 



[jira] [Updated] (HADOOP-9977) Hadoop services won't start with different keypass and keystorepass when https is enabled

2013-09-17 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated HADOOP-9977:
---

Description: 
Enable SSL in the configuration. While creating the keystore, give different 
keypass and keystore passwords (here, keypass = hadoop and storepass = hadoopKey):

keytool -genkey -alias host1 -keyalg RSA -keysize 1024 -dname 
"CN=host1,OU=cm,O=cm,L=san jose,ST=ca,C=us" -keypass hadoop -keystore 
keystore.jks -storepass hadoopKey

In ssl-server.xml, set the two properties below:
ssl.server.keystore.keypassword = hadoop
ssl.server.keystore.password = hadoopKey

Namenode, ResourceManager, Datanode, Nodemanager, and SecondaryNamenode fail to 
start with the error below.

2013-09-17 21:39:00,794 FATAL namenode.NameNode (NameNode.java:main(1325)) - Exception in namenode join
java.io.IOException: java.security.UnrecoverableKeyException: Cannot recover key
        at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:222)
        at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:174)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer$1.<init>(NameNodeHttpServer.java:76)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:74)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:626)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:488)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
Caused by: java.security.UnrecoverableKeyException: Cannot recover key
        at sun.security.provider.KeyProtector.recover(KeyProtector.java:328)
        at sun.security.provider.JavaKeyStore.engineGetKey(JavaKeyStore.java:138)
        at sun.security.provider.JavaKeyStore$JKS.engineGetKey(JavaKeyStore.java:55)
        at java.security.KeyStore.getKey(KeyStore.java:792)
        at sun.security.ssl.SunX509KeyManagerImpl.<init>(SunX509KeyManagerImpl.java:131)
        at sun.security.ssl.KeyManagerFactoryImpl$SunX509.engineInit(KeyManagerFactoryImpl.java:68)
        at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:259)
        at org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:170)
        at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:121)
        at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:220)
        ... 9 more

  was:
Enable SSL in the configuration. While creating the keystore, give different 
keypass and keystore passwords (here, keypass = hadoop and storepass = hadoopKey):

keytool -genkey -alias host1 -keyalg RSA -keysize 1024 -dname 
"CN=host1,OU=hw,O=hw,L=palo alto,ST=ca,C=us" -keypass hadoop -keystore 
keystore.jks -storepass hadoopKey

In ssl-server.xml, set the two properties below:
ssl.server.keystore.keypassword = hadoop
ssl.server.keystore.password = hadoopKey

Namenode, ResourceManager, Datanode, Nodemanager, and SecondaryNamenode fail to 
start with the error below.

2013-09-17 21:39:00,794 FATAL namenode.NameNode (NameNode.java:main(1325)) - Exception in namenode join
java.io.IOException: java.security.UnrecoverableKeyException: Cannot recover key
        at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:222)
        at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:174)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer$1.<init>(NameNodeHttpServer.java:76)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:74)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:626)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:488)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
Caused by: java.security.UnrecoverableKeyException: Cannot recover key
        at sun.security.provider.KeyProtector.recover(KeyProtector.java:328)
        at sun.security.provider.JavaKeyStore.engineGetKey(JavaKeyStore.java:138)
        at sun.security.provider.JavaKeyStore$JKS.engineGetKey(JavaKeyStore.java:55)
        at java.security.KeyStore.getKey(KeyStore.java:792)
        at sun.security.ssl.SunX509KeyManagerImpl.<init>(SunX509KeyManagerImpl.java:131)
        at sun.security.ssl.KeyManagerFactoryImpl$SunX509.engineInit(KeyManagerFactoryImpl.java:68)
        at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:259)
        at org.apache.hado

[jira] [Updated] (HADOOP-8040) Add symlink support to FileSystem

2013-09-17 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8040:


Fix Version/s: (was: 3.0.0)

> Add symlink support to FileSystem
> -
>
> Key: HADOOP-8040
> URL: https://issues.apache.org/jira/browse/HADOOP-8040
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Eli Collins
>Assignee: Andrew Wang
> Fix For: 2.1.0-beta
>
> Attachments: hadoop-8040-1.patch, hadoop-8040-2.patch, 
> hadoop-8040-3.patch, hadoop-8040-4.patch, hadoop-8040-5.patch, 
> hadoop-8040-6.patch, hadoop-8040-7.patch
>
>
> HADOOP-6421 added symbolic links to FileContext. Resolving symlinks is done 
> on the client-side, and therefore requires client support. An HDFS symlink 
> (created by FileContext) when accessed by FileSystem will result in an 
> unhandled UnresolvedLinkException. Because not all users will migrate from 
> FileSystem to FileContext in lock step, and we want users of FileSystem to be 
> able to access all paths created by FileContext, we need to support symlink 
> resolution in FileSystem as well, to facilitate migration to FileContext.



[jira] [Created] (HADOOP-9977) Hadoop services won't start with different keypass and keystorepass when https is enabled

2013-09-17 Thread Yesha Vora (JIRA)
Yesha Vora created HADOOP-9977:
--

 Summary: Hadoop services won't start with different keypass and 
keystorepass when https is enabled
 Key: HADOOP-9977
 URL: https://issues.apache.org/jira/browse/HADOOP-9977
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Yesha Vora


Enable SSL in the configuration. While creating the keystore, give different 
keypass and keystore passwords (here, keypass = hadoop and storepass = hadoopKey):

keytool -genkey -alias host1 -keyalg RSA -keysize 1024 -dname 
"CN=host1,OU=hw,O=hw,L=palo alto,ST=ca,C=us" -keypass hadoop -keystore 
keystore.jks -storepass hadoopKey

In ssl-server.xml, set the two properties below:
ssl.server.keystore.keypassword = hadoop
ssl.server.keystore.password = hadoopKey

Namenode, ResourceManager, Datanode, Nodemanager, and SecondaryNamenode fail to 
start with the error below.

2013-09-17 21:39:00,794 FATAL namenode.NameNode (NameNode.java:main(1325)) - Exception in namenode join
java.io.IOException: java.security.UnrecoverableKeyException: Cannot recover key
        at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:222)
        at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:174)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer$1.<init>(NameNodeHttpServer.java:76)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:74)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:626)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:488)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
Caused by: java.security.UnrecoverableKeyException: Cannot recover key
        at sun.security.provider.KeyProtector.recover(KeyProtector.java:328)
        at sun.security.provider.JavaKeyStore.engineGetKey(JavaKeyStore.java:138)
        at sun.security.provider.JavaKeyStore$JKS.engineGetKey(JavaKeyStore.java:55)
        at java.security.KeyStore.getKey(KeyStore.java:792)
        at sun.security.ssl.SunX509KeyManagerImpl.<init>(SunX509KeyManagerImpl.java:131)
        at sun.security.ssl.KeyManagerFactoryImpl$SunX509.engineInit(KeyManagerFactoryImpl.java:68)
        at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:259)
        at org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:170)
        at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:121)
        at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:220)
        ... 9 more



[jira] [Created] (HADOOP-9976) Different versions of avro and avro-maven-plugin

2013-09-17 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-9976:


 Summary: Different versions of avro and avro-maven-plugin
 Key: HADOOP-9976
 URL: https://issues.apache.org/jira/browse/HADOOP-9976
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.1-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


Post HADOOP-9672, the versions for avro and avro-maven-plugin are different - 
1.7.4 and 1.5.3 respectively. 



[jira] [Updated] (HADOOP-9669) Reduce the number of byte array creations and copies in XDR data manipulation

2013-09-17 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-9669:
---

Summary: Reduce the number of byte array creations and copies in XDR data 
manipulation  (was: There are multiple array creations and array copies for a 
single nfs rpc reply)

> Reduce the number of byte array creations and copies in XDR data manipulation
> -
>
> Key: HADOOP-9669
> URL: https://issues.apache.org/jira/browse/HADOOP-9669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Haohui Mai
> Attachments: HADOOP-9669.001.patch, HADOOP-9669.002.patch, 
> HADOOP-9669.patch
>
>
> XDR.writeXxx(..) methods ultimately use the static XDR.append(..) for writing 
> each data type.  The static append creates a new array and copies data.  
> Therefore, for a single reply such as RpcAcceptedReply.voidReply(..), there 
> are multiple array creations and array copies.  For example, there are at 
> least 6 array creations and array copies for RpcAcceptedReply.voidReply(..).



[jira] [Updated] (HADOOP-9418) Add symlink resolution support to DistributedFileSystem

2013-09-17 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-9418:


 Target Version/s:   (was: 3.0.0)
Affects Version/s: (was: 3.0.0)
Fix Version/s: (was: 3.0.0)

Updating the fix version to reflect this was merged to 2.1.

> Add symlink resolution support to DistributedFileSystem
> ---
>
> Key: HADOOP-9418
> URL: https://issues.apache.org/jira/browse/HADOOP-9418
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.1.0-beta
>
> Attachments: hadoop-9418-10.patch, hadoop-9418-1.patch, 
> hadoop-9418-2.patch, hadoop-9418-3.patch, hadoop-9418-4.patch, 
> hadoop-9418-5.patch, hadoop-9418-6.patch, hadoop-9418-7.patch, 
> hadoop-9418-8.patch, hadoop-9418-9.patch
>
>
> Add symlink resolution support to DistributedFileSystem as well as tests.



[jira] [Updated] (HADOOP-9758) Provide configuration option for FileSystem/FileContext symlink resolution

2013-09-17 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-9758:


 Target Version/s:   (was: 2.1.0-beta)
Affects Version/s: (was: 2.3.0)
   (was: 3.0.0)

> Provide configuration option for FileSystem/FileContext symlink resolution
> --
>
> Key: HADOOP-9758
> URL: https://issues.apache.org/jira/browse/HADOOP-9758
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.3.0
>
> Attachments: hadoop-9758-4.patch, hadoop-9758-5.patch, 
> hadoop-9758-6.patch, hdfs-4968-1.patch, hdfs-4968-2.patch, hdfs-4968-3.patch
>
>
> With FileSystem symlink support incoming in HADOOP-8040, some clients will 
> wish to not transparently resolve symlinks. This is somewhat similar to 
> O_NOFOLLOW in open(2).
> The rationale is a security model where a user can invoke a third-party 
> service running as a service user to operate on the user's data. For 
> instance, users might want to use Hive to query data in their homedirs, where 
> Hive runs as the Hive user and the data is readable by the Hive user. This 
> leads to a security issue with symlinks:
> # User Mallory invokes Hive to process data files in {{/user/mallory/hive/}}
> # Hive checks permissions on the files in {{/user/mallory/hive/}} and allows 
> the query to proceed.
> # RACE: Mallory replaces the files in {{/user/mallory/hive}} with symlinks 
> that point to user Ann's Hive files in {{/user/ann/hive}}. These files aren't 
> readable by Mallory, but she can create whatever symlinks she wants in her 
> own scratch directory.
> # Hive's MR jobs happily resolve the symlinks and accesses Ann's private data.
> This is also potentially useful for clients using FileContext, so let's add 
> it there too.
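
For illustration, a minimal sketch of how a client might opt out of transparent resolution once such an option exists (the configuration key name is an assumption here):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class NoFollowClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumed option name for the O_NOFOLLOW-like switch described above.
    conf.setBoolean("fs.client.resolve.remote.symlinks", false);
    // A FileSystem created from this conf would surface symlinks unresolved
    // instead of transparently following them.
    FileSystem fs = FileSystem.get(conf);
    System.out.println(fs.getUri());
  }
}
{code}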



[jira] [Commented] (HADOOP-9669) There are multiple array creations and array copies for a single nfs rpc reply

2013-09-17 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769940#comment-13769940
 ] 

Brandon Li commented on HADOOP-9669:


+1. New patch looks good. 
Nit: I noticed that most NFS-related RPC call responses are smaller than 256 
bytes, so we could make DEFAULT_INITIAL_CAPACITY 256 instead of 512. 

> There are multiple array creations and array copies for a single nfs rpc reply
> --
>
> Key: HADOOP-9669
> URL: https://issues.apache.org/jira/browse/HADOOP-9669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Haohui Mai
> Attachments: HADOOP-9669.001.patch, HADOOP-9669.002.patch, 
> HADOOP-9669.patch
>
>
> XDR.writeXxx(..) methods ultimately use the static XDR.append(..) for writing 
> each data type.  The static append creates a new array and copies data.  
> Therefore, for a single reply such as RpcAcceptedReply.voidReply(..), there 
> are multiple array creations and array copies.  For example, there are at 
> least 6 array creations and array copies for RpcAcceptedReply.voidReply(..).



[jira] [Commented] (HADOOP-9975) Adding relogin() method to UGI

2013-09-17 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770003#comment-13770003
 ] 

Kai Zheng commented on HADOOP-9975:
---

Based on the work in HADOOP-9797 and its family, this is easy to implement: the 
new code almost supports it already, having introduced a kerberosRelogin() method 
when refactoring the code for reloginFromKeytab() etc., as follows.
{code}
public interface HadoopLogin {
  public void login() throws HadoopLoginException;
  public void logout() throws HadoopLoginException;
  public void relogin() throws HadoopLoginException;
  ...
}

public class AbstractHadoopLogin implements HadoopLogin {
  ...
  public void relogin() throws HadoopLoginException {
    logout();
    login();
  }
  ...
}
{code}

UGI saves a HadoopLogin object, which can be used to do relogin(), as follows:
{code}
public class UserGroupInformation {
  private HadoopLogin login;

  public synchronized void kerberosRelogin() throws IOException {
    login.relogin();
  }
  public synchronized void reloginFromKeytab() throws IOException {
    kerberosRelogin();
  }
  public synchronized void reloginFromTicketCache() throws IOException {
    kerberosRelogin();
  }
  ...
}
{code}
Here we would only need to rename the kerberosRelogin() method to relogin().
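
A sketch of how a caller might then use the generic method; doPrivilegedRpc() 
is a hypothetical stand-in for any RPC that can fail on ticket expiry:
{code}
UserGroupInformation ugi = UserGroupInformation.getLoginUser();
try {
  doPrivilegedRpc(ugi);   // hypothetical RPC call
} catch (IOException e) {
  // No need to know whether the login came from a keytab or a ticket
  // cache; the UGI's saved HadoopLogin decides how to re-authenticate.
  ugi.relogin();
  doPrivilegedRpc(ugi);
}
{code}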


> Adding relogin() method to UGI
> --
>
> Key: HADOOP-9975
> URL: https://issues.apache.org/jira/browse/HADOOP-9975
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> The current Hadoop UGI implementation has API methods like 
> reloginFromKeytab() and reloginFromTicketCache().  However, such methods are 
> too Kerberos-specific and also expose login implementation details; it would 
> be better to add a generic relogin() method that works regardless of the 
> authentication mechanism. This is possible since the relevant 
> authentication-specific parameters, like principal and keytab, are already 
> passed in and saved in the UGI object after the initial login.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9975) Adding relogin() method to UGI

2013-09-17 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770001#comment-13770001
 ] 

Kai Zheng commented on HADOOP-9975:
---

This was inspired by related discussion in HDFS-3676. See 
https://issues.apache.org/jira/browse/HDFS-3676?focusedCommentId=13417152&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13417152,
where Daryn mentioned:
bq. On an aside, it seems odd there isn't a UGI#relogin() that determines if it 
should use a keytab, ticket cache, etc... Not sure why all the callers that 
want to relogin should have knowledge of the login method.


> Adding relogin() method to UGI
> --
>
> Key: HADOOP-9975
> URL: https://issues.apache.org/jira/browse/HADOOP-9975
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> The current Hadoop UGI implementation has API methods like 
> reloginFromKeytab() and reloginFromTicketCache().  However, such methods are 
> too Kerberos-specific and also expose login implementation details; it would 
> be better to add a generic relogin() method that works regardless of the 
> authentication mechanism. This is possible since the relevant 
> authentication-specific parameters, like principal and keytab, are already 
> passed in and saved in the UGI object after the initial login.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9975) Adding relogin() method to UGI

2013-09-17 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-9975:
-

 Summary: Adding relogin() method to UGI
 Key: HADOOP-9975
 URL: https://issues.apache.org/jira/browse/HADOOP-9975
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kai Zheng
Assignee: Kai Zheng


The current Hadoop UGI implementation has API methods like 
reloginFromKeytab() and reloginFromTicketCache().  However, such methods are 
too Kerberos-specific and also expose login implementation details; it would 
be better to add a generic relogin() method that works regardless of the 
authentication mechanism. This is possible since the relevant 
authentication-specific parameters, like principal and keytab, are already 
passed in and saved in the UGI object after the initial login.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8040) Add symlink support to FileSystem

2013-09-17 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8040:


Affects Version/s: (was: 2.0.3-alpha)
   (was: 3.0.0)
   (was: 0.23.0)
Fix Version/s: (was: 2.3.0)
   2.1.0-beta

Updating the fix version to reflect that these subtasks (modulo HADOOP-9417) 
are already in branch-2.1-beta.

> Add symlink support to FileSystem
> -
>
> Key: HADOOP-8040
> URL: https://issues.apache.org/jira/browse/HADOOP-8040
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Eli Collins
>Assignee: Andrew Wang
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: hadoop-8040-1.patch, hadoop-8040-2.patch, 
> hadoop-8040-3.patch, hadoop-8040-4.patch, hadoop-8040-5.patch, 
> hadoop-8040-6.patch, hadoop-8040-7.patch
>
>
> HADOOP-6421 added symbolic links to FileContext. Resolving symlinks is done 
> on the client-side, and therefore requires client support. An HDFS symlink 
> (created by FileContext) when accessed by FileSystem will result in an 
> unhandled UnresolvedLinkException. Because not all users will migrate from 
> FileSystem to FileContext in lock step, and we want users of FileSystem to be 
> able to access all paths created by FileContext, we need to support symlink 
> resolution in FileSystem as well, to facilitate migration to FileContext.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9669) There are multiple array creations and array copies for a single nfs rpc reply

2013-09-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769912#comment-13769912
 ] 

Hadoop QA commented on HADOOP-9669:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603652/HADOOP-9669.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3104//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3104//console

This message is automatically generated.

> There are multiple array creations and array copies for a single nfs rpc reply
> --
>
> Key: HADOOP-9669
> URL: https://issues.apache.org/jira/browse/HADOOP-9669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Haohui Mai
> Attachments: HADOOP-9669.001.patch, HADOOP-9669.002.patch, 
> HADOOP-9669.patch
>
>
> XDR.writeXxx(..) methods ultimately use the static XDR.append(..) for writing 
> each data type.  The static append creates a new array and copies the data.  
> Therefore, for a single reply such as RpcAcceptedReply.voidReply(..), there 
> are multiple array creations and array copies.  For example, there are at 
> least 6 array creations and array copies for RpcAcceptedReply.voidReply(..).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9669) There are multiple array creations and array copies for a single nfs rpc reply

2013-09-17 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-9669:
---

Attachment: HADOOP-9669.002.patch

A patch that addresses Brandon's comments.

I'll address 4/5 in HADOOP-9966. 

> There are multiple array creations and array copies for a single nfs rpc reply
> --
>
> Key: HADOOP-9669
> URL: https://issues.apache.org/jira/browse/HADOOP-9669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Haohui Mai
> Attachments: HADOOP-9669.001.patch, HADOOP-9669.002.patch, 
> HADOOP-9669.patch
>
>
> XDR.writeXxx(..) methods ultimately use the static XDR.append(..) for writing 
> each data type.  The static append creates a new array and copies the data.  
> Therefore, for a single reply such as RpcAcceptedReply.voidReply(..), there 
> are multiple array creations and array copies.  For example, there are at 
> least 6 array creations and array copies for RpcAcceptedReply.voidReply(..).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-09-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769869#comment-13769869
 ] 

Colin Patrick McCabe commented on HADOOP-9912:
--

I posted a proposed API based on our WebEx discussion at 
https://issues.apache.org/jira/browse/HADOOP-9972

> globStatus of a symlink to a directory does not report symlink as a directory
> -
>
> Key: HADOOP-9912
> URL: https://issues.apache.org/jira/browse/HADOOP-9912
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-9912-testcase.patch, new-hdfs.txt, new-local.txt, 
> old-hdfs.txt, old-local.txt
>
>
> globStatus for a path that is a symlink to a directory used to report the 
> resulting FileStatus as a directory but recently this has changed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9974) Trunk Build Failure at HDFS Sub-project

2013-09-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769876#comment-13769876
 ] 

Arpit Agarwal commented on HADOOP-9974:
---

Moved to Hadoop since it appears to be a general build issue. I can reproduce 
it when building trunk with '-Pdist -Dtar'. The only workaround is to use 
protobuf 2.4 and pass -Dprotobuf.version=2.4.1.

> Trunk Build Failure at HDFS Sub-project
> ---
>
> Key: HADOOP-9974
> URL: https://issues.apache.org/jira/browse/HADOOP-9974
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Mac OS X
>Reporter: Zhijie Shen
>
> Recently Hadoop upgraded to Protobuf 2.5.0. To build trunk, I updated my 
> installed Protobuf to 2.5.0. With this upgrade, I no longer hit the build 
> failure due to protoc, but the build failed in the HDFS sub-project. Below is 
> the failure message. I'm using Mac OS X.
> {code}
> INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main  SUCCESS [1.075s]
> [INFO] Apache Hadoop Project POM . SUCCESS [0.805s]
> [INFO] Apache Hadoop Annotations . SUCCESS [2.283s]
> [INFO] Apache Hadoop Assemblies .. SUCCESS [0.343s]
> [INFO] Apache Hadoop Project Dist POM  SUCCESS [1.913s]
> [INFO] Apache Hadoop Maven Plugins ... SUCCESS [2.390s]
> [INFO] Apache Hadoop Auth  SUCCESS [2.597s]
> [INFO] Apache Hadoop Auth Examples ... SUCCESS [1.868s]
> [INFO] Apache Hadoop Common .. SUCCESS [55.798s]
> [INFO] Apache Hadoop NFS . SUCCESS [3.549s]
> [INFO] Apache Hadoop MiniKDC . SUCCESS [1.788s]
> [INFO] Apache Hadoop Common Project .. SUCCESS [0.044s]
> [INFO] Apache Hadoop HDFS  FAILURE [25.219s]
> [INFO] Apache Hadoop HttpFS .. SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
> [INFO] Apache Hadoop HDFS-NFS  SKIPPED
> [INFO] Apache Hadoop HDFS Project  SKIPPED
> [INFO] hadoop-yarn ... SKIPPED
> [INFO] hadoop-yarn-api ... SKIPPED
> [INFO] hadoop-yarn-common  SKIPPED
> [INFO] hadoop-yarn-server  SKIPPED
> [INFO] hadoop-yarn-server-common . SKIPPED
> [INFO] hadoop-yarn-server-nodemanager  SKIPPED
> [INFO] hadoop-yarn-server-web-proxy .. SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager  SKIPPED
> [INFO] hadoop-yarn-server-tests .. SKIPPED
> [INFO] hadoop-yarn-client  SKIPPED
> [INFO] hadoop-yarn-applications .. SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell . SKIPPED
> [INFO] hadoop-mapreduce-client ... SKIPPED
> [INFO] hadoop-mapreduce-client-core .. SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher  SKIPPED
> [INFO] hadoop-yarn-site .. SKIPPED
> [INFO] hadoop-yarn-project ... SKIPPED
> [INFO] hadoop-mapreduce-client-common  SKIPPED
> [INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
> [INFO] hadoop-mapreduce-client-app ... SKIPPED
> [INFO] hadoop-mapreduce-client-hs  SKIPPED
> [INFO] hadoop-mapreduce-client-jobclient . SKIPPED
> [INFO] hadoop-mapreduce-client-hs-plugins  SKIPPED
> [INFO] Apache Hadoop MapReduce Examples .. SKIPPED
> [INFO] hadoop-mapreduce .. SKIPPED
> [INFO] Apache Hadoop MapReduce Streaming . SKIPPED
> [INFO] Apache Hadoop Distributed Copy  SKIPPED
> [INFO] Apache Hadoop Archives  SKIPPED
> [INFO] Apache Hadoop Rumen ... SKIPPED
> [INFO] Apache Hadoop Gridmix . SKIPPED
> [INFO] Apache Hadoop Data Join ... SKIPPED
> [INFO] Apache Hadoop Extras .. SKIPPED
> [INFO] Apache Hadoop Pipes ... SKIPPED
> [INFO] Apache Hadoop Tools Dist .. SKIPPED
> [INFO] Apache Hadoop Tools ... SKIPPED
> [INFO] Apache Hadoop Distribution  SKIPPED
> [INFO] Apache Hadoop Client .. SKIPPED
> [INFO] Apache Hadoop Mini-

[jira] [Moved] (HADOOP-9974) Trunk Build Failure at HDFS Sub-project

2013-09-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal moved HDFS-5218 to HADOOP-9974:
-

Key: HADOOP-9974  (was: HDFS-5218)
Project: Hadoop Common  (was: Hadoop HDFS)

> Trunk Build Failure at HDFS Sub-project
> ---
>
> Key: HADOOP-9974
> URL: https://issues.apache.org/jira/browse/HADOOP-9974
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Mac OS X
>Reporter: Zhijie Shen
>
> Recently Hadoop upgraded to Protobuf 2.5.0. To build trunk, I updated my 
> installed Protobuf to 2.5.0. With this upgrade, I no longer hit the build 
> failure due to protoc, but the build failed in the HDFS sub-project. Below is 
> the failure message. I'm using Mac OS X.
> {code}
> INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main  SUCCESS [1.075s]
> [INFO] Apache Hadoop Project POM . SUCCESS [0.805s]
> [INFO] Apache Hadoop Annotations . SUCCESS [2.283s]
> [INFO] Apache Hadoop Assemblies .. SUCCESS [0.343s]
> [INFO] Apache Hadoop Project Dist POM  SUCCESS [1.913s]
> [INFO] Apache Hadoop Maven Plugins ... SUCCESS [2.390s]
> [INFO] Apache Hadoop Auth  SUCCESS [2.597s]
> [INFO] Apache Hadoop Auth Examples ... SUCCESS [1.868s]
> [INFO] Apache Hadoop Common .. SUCCESS [55.798s]
> [INFO] Apache Hadoop NFS . SUCCESS [3.549s]
> [INFO] Apache Hadoop MiniKDC . SUCCESS [1.788s]
> [INFO] Apache Hadoop Common Project .. SUCCESS [0.044s]
> [INFO] Apache Hadoop HDFS  FAILURE [25.219s]
> [INFO] Apache Hadoop HttpFS .. SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
> [INFO] Apache Hadoop HDFS-NFS  SKIPPED
> [INFO] Apache Hadoop HDFS Project  SKIPPED
> [INFO] hadoop-yarn ... SKIPPED
> [INFO] hadoop-yarn-api ... SKIPPED
> [INFO] hadoop-yarn-common  SKIPPED
> [INFO] hadoop-yarn-server  SKIPPED
> [INFO] hadoop-yarn-server-common . SKIPPED
> [INFO] hadoop-yarn-server-nodemanager  SKIPPED
> [INFO] hadoop-yarn-server-web-proxy .. SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager  SKIPPED
> [INFO] hadoop-yarn-server-tests .. SKIPPED
> [INFO] hadoop-yarn-client  SKIPPED
> [INFO] hadoop-yarn-applications .. SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell . SKIPPED
> [INFO] hadoop-mapreduce-client ... SKIPPED
> [INFO] hadoop-mapreduce-client-core .. SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher  SKIPPED
> [INFO] hadoop-yarn-site .. SKIPPED
> [INFO] hadoop-yarn-project ... SKIPPED
> [INFO] hadoop-mapreduce-client-common  SKIPPED
> [INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
> [INFO] hadoop-mapreduce-client-app ... SKIPPED
> [INFO] hadoop-mapreduce-client-hs  SKIPPED
> [INFO] hadoop-mapreduce-client-jobclient . SKIPPED
> [INFO] hadoop-mapreduce-client-hs-plugins  SKIPPED
> [INFO] Apache Hadoop MapReduce Examples .. SKIPPED
> [INFO] hadoop-mapreduce .. SKIPPED
> [INFO] Apache Hadoop MapReduce Streaming . SKIPPED
> [INFO] Apache Hadoop Distributed Copy  SKIPPED
> [INFO] Apache Hadoop Archives  SKIPPED
> [INFO] Apache Hadoop Rumen ... SKIPPED
> [INFO] Apache Hadoop Gridmix . SKIPPED
> [INFO] Apache Hadoop Data Join ... SKIPPED
> [INFO] Apache Hadoop Extras .. SKIPPED
> [INFO] Apache Hadoop Pipes ... SKIPPED
> [INFO] Apache Hadoop Tools Dist .. SKIPPED
> [INFO] Apache Hadoop Tools ... SKIPPED
> [INFO] Apache Hadoop Distribution  SKIPPED
> [INFO] Apache Hadoop Client .. SKIPPED
> [INFO] Apache Hadoop Mini-Cluster  SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 

[jira] [Updated] (HADOOP-9966) Refactor XDR code into XDRReader and XDRWriter

2013-09-17 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-9966:
---

Component/s: nfs

> Refactor XDR code into XDRReader and XDRWriter
> --
>
> Key: HADOOP-9966
> URL: https://issues.apache.org/jira/browse/HADOOP-9966
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> Several methods in the current XDR class have ambiguous semantics. For 
> example, Size() returns the actual size of the internal byte array, but the 
> actual size of the current buffer is also affected by read requests, which 
> pull data out of the buffer.
> These ambiguous semantics make removing redundant copies on the NFS paths 
> difficult.
> This JIRA proposes to decompose the responsibilities of XDR into two separate 
> classes: XDRReader and XDRWriter. The overall design should closely follow 
> Java's *Reader / *Writer classes.
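
A minimal sketch of such a split, with illustrative method names only (the 
real classes would cover the full XDR type set):
{code}
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// The writer owns an append-only buffer, so its size has one meaning.
class XDRWriter {
  private final ByteArrayOutputStream out = new ByteArrayOutputStream(256);

  void writeInt(int v) {            // XDR ints are 4-byte big-endian
    out.write((v >>> 24) & 0xff);
    out.write((v >>> 16) & 0xff);
    out.write((v >>> 8) & 0xff);
    out.write(v & 0xff);
  }

  byte[] toByteArray() { return out.toByteArray(); }
}

// The reader only consumes; reads never change a writer-side size.
class XDRReader {
  private final ByteBuffer in;

  XDRReader(byte[] data) { this.in = ByteBuffer.wrap(data); }

  int readInt() { return in.getInt(); }  // ByteBuffer is big-endian by default
  int remaining() { return in.remaining(); }
}
{code}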

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9639) truly shared cache for jars (jobjar/libjar)

2013-09-17 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-9639:


Attachment: shared_cache_design_v3.pdf

small updates (added that the checksum should be cryptographically strong)

> truly shared cache for jars (jobjar/libjar)
> ---
>
> Key: HADOOP-9639
> URL: https://issues.apache.org/jira/browse/HADOOP-9639
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: filecache
>Affects Versions: 2.0.4-alpha
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: shared_cache_design.pdf, shared_cache_design_v2.pdf, 
> shared_cache_design_v3.pdf
>
>
> Currently there is the distributed cache that enables you to cache jars and 
> files so that attempts from the same job can reuse them. However, sharing is 
> limited with the distributed cache because it is normally on a per-job basis. 
> On a large cluster, sometimes copying of jobjars and libjars becomes so 
> prevalent that it consumes a large portion of the network bandwidth, not to 
> speak of defeating the purpose of "bringing compute to where data is". This 
> is wasteful because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared 
> cache so that multiple jobs from multiple users can share and cache jars. 
> This JIRA is to open the discussion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9967) Zero copies in hdfs-nfs

2013-09-17 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-9967:
---

Component/s: nfs

> Zero copies in hdfs-nfs
> ---
>
> Key: HADOOP-9967
> URL: https://issues.apache.org/jira/browse/HADOOP-9967
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Haohui Mai
>
> There are multiple copies in the NFS request / response paths. For example, 
> the RPCFrameDecoder class always copies the data. Currently these copies are 
> mandatory, due to the inflexibilities of several internal APIs.
> Using the ChannelBuffer class in the APIs should eliminate these excessive 
> copies in the NFS path.
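
For illustration, the copy vs. no-copy distinction in the Netty 3 
ChannelBuffer API (a sketch of the mechanism, not the actual patch):
{code}
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;

public class WrapVsCopy {
  public static void main(String[] args) {
    byte[] payload = new byte[1024];

    // copiedBuffer allocates a fresh array and copies the payload in.
    ChannelBuffer copied = ChannelBuffers.copiedBuffer(payload);

    // wrappedBuffer shares the caller's array: zero copy.
    ChannelBuffer wrapped = ChannelBuffers.wrappedBuffer(payload);

    System.out.println(copied.readableBytes() + " vs " + wrapped.readableBytes());
  }
}
{code}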

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9669) There are multiple array creations and array copies for a single nfs rpc reply

2013-09-17 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769818#comment-13769818
 ] 

Brandon Li commented on HADOOP-9669:


Thanks, Haohui. Some comments:
1. please try to keep the original javadoc for the same-named methods
2. can you make "State state" final?
3. please fix the javadoc /** check if the rest of data has more than <len> 
bytes */
"len" is not visible in the generated javadoc
4. readFixedOpaque still has a copy
not sure if it's possible to generate a read-only bytebuffer from another 
bytebuffer
5. it would be nice to remove the extra copy for writeFixedOpaque
For 4 and 5, I am OK if you think it's out of scope of this JIRA.
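
Regarding 4, for what it's worth, java.nio does offer copy-free read-only 
views; whether that fits the XDR code path here is a separate question:
{code}
import java.nio.ByteBuffer;

public class ReadOnlyView {
  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.wrap(new byte[] {1, 2, 3, 4});

    // Shares the same content with 'buf', no copy; writes through 'ro'
    // throw ReadOnlyBufferException.
    ByteBuffer ro = buf.asReadOnlyBuffer();

    System.out.println(ro.isReadOnly() + " " + ro.get(0));
  }
}
{code}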


> There are multiple array creations and array copies for a single nfs rpc reply
> --
>
> Key: HADOOP-9669
> URL: https://issues.apache.org/jira/browse/HADOOP-9669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Haohui Mai
> Attachments: HADOOP-9669.001.patch, HADOOP-9669.patch
>
>
> XDR.writeXxx(..) methods ultimately use the static XDR.append(..) for writing 
> each data type.  The static append creates a new array and copies the data.  
> Therefore, for a single reply such as RpcAcceptedReply.voidReply(..), there 
> are multiple array creations and array copies.  For example, there are at 
> least 6 array creations and array copies for RpcAcceptedReply.voidReply(..).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9968) ProxyUsers does not work with NetGroups

2013-09-17 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-9968:
-

Attachment: hadoop-9968-1.2.patch

Attaching patch for Hadoop-1

> ProxyUsers does not work with NetGroups
> ---
>
> Key: HADOOP-9968
> URL: https://issues.apache.org/jira/browse/HADOOP-9968
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: hadoop-9968-1.2.patch, HADOOP-9968.patch
>
>
> It is possible to use NetGroups for ACLs. This requires specifying  the 
> config property hadoop.security.group.mapping as  
> org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping or 
> org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping.
> The authorization to proxy a user by another user is specified as a list of 
> groups hadoop.proxyuser.<user>.groups. The group resolution does not 
> work if we are using NetGroups.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9973) wrong dependencies

2013-09-17 Thread Nicolas Liochon (JIRA)
Nicolas Liochon created HADOOP-9973:
---

 Summary: wrong dependencies
 Key: HADOOP-9973
 URL: https://issues.apache.org/jira/browse/HADOOP-9973
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta, 2.1.1-beta
Reporter: Nicolas Liochon
Priority: Minor


See HBASE-9557 for the impact: for some of these dependencies, it seems the 
poms push them onto client applications even though they are not used.

mvn dependency:analyze -pl hadoop-common
[WARNING] Used undeclared dependencies found:
[WARNING]    com.google.code.findbugs:jsr305:jar:1.3.9:compile
[WARNING]    commons-collections:commons-collections:jar:3.2.1:compile
[WARNING] Unused declared dependencies found:
[WARNING]    com.sun.jersey:jersey-json:jar:1.9:compile
[WARNING]    tomcat:jasper-compiler:jar:5.5.23:runtime
[WARNING]    tomcat:jasper-runtime:jar:5.5.23:runtime
[WARNING]    javax.servlet.jsp:jsp-api:jar:2.1:runtime
[WARNING]    commons-el:commons-el:jar:1.0:runtime
[WARNING]    org.slf4j:slf4j-log4j12:jar:1.7.5:runtime


mvn dependency:analyze -pl hadoop-yarn-client
[WARNING] Used undeclared dependencies found:
[WARNING]    org.mortbay.jetty:jetty-util:jar:6.1.26:provided
[WARNING]    log4j:log4j:jar:1.2.17:compile
[WARNING]    com.google.guava:guava:jar:11.0.2:provided
[WARNING]    commons-lang:commons-lang:jar:2.5:provided
[WARNING]    commons-logging:commons-logging:jar:1.1.1:provided
[WARNING]    commons-cli:commons-cli:jar:1.2:provided
[WARNING]    org.apache.hadoop:hadoop-yarn-server-common:jar:2.1.2-SNAPSHOT:test
[WARNING] Unused declared dependencies found:
[WARNING]    org.slf4j:slf4j-api:jar:1.7.5:compile
[WARNING]    org.slf4j:slf4j-log4j12:jar:1.7.5:compile
[WARNING]    com.google.inject.extensions:guice-servlet:jar:3.0:compile
[WARNING]    io.netty:netty:jar:3.6.2.Final:compile
[WARNING]    com.google.protobuf:protobuf-java:jar:2.5.0:compile
[WARNING]    commons-io:commons-io:jar:2.1:compile
[WARNING]    org.apache.hadoop:hadoop-hdfs:jar:2.1.2-SNAPSHOT:test
[WARNING]    com.google.inject:guice:jar:3.0:compile
[WARNING]    com.sun.jersey.jersey-test-framework:jersey-test-framework-core:jar:1.9:test
[WARNING]    com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:jar:1.9:compile
[WARNING]    com.sun.jersey:jersey-server:jar:1.9:compile
[WARNING]    com.sun.jersey:jersey-json:jar:1.9:compile
[WARNING]    com.sun.jersey.contribs:jersey-guice:jar:1.9:compile







--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9972) new APIs for listStatus and globStatus to deal with symlinks

2013-09-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769661#comment-13769661
 ] 

Colin Patrick McCabe commented on HADOOP-9972:
--

I guess I should add a few words about why {{PathErrorHandler}} is necessary.  
Basically, we want to give users of {{globStatus}} flexibility.

For example, let's say you have the following directories:
/a owned by superuser, mode 0700
/b owned by bob, mode 0777

Bob would like to be able to get back a result from {{globStatus(/\*/stuff)}}, 
not just an AccessControlException (which came out of trying to access 
/a/stuff).  But Bob also doesn't necessarily want to ignore the 
AccessControlException completely.  He wants something like the behavior of 
GNU ls, which will print an error message to stderr about paths it can't 
access, but still continue to list the remaining paths which it can.  
Currently, Bob can't get this -- he simply gets an IOException and *no* 
globStatus results.  Ignoring the error completely seems like the wrong thing 
to do as well, though.  Hence the {{PathErrorHandler}}, which allows more 
sophisticated error handling here.

Symlinks make this more important, since you have errors like 
{{UnresolvedPathException}}, which anyone can cause simply by creating a 
dangling symlink.  We don't want directories with dangling symlinks to become 
un-globbable.  Obviously, the default error handlers will provide the existing 
behavior for {{listStatus}} and {{globStatus}}.
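
For illustration, an ls-like handler might look like the following; the 
{{PathErrorHandler}} interface shape is assumed from this proposal, not 
existing code:
{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;

// Assumed interface shape, per the proposal in this thread.
interface PathErrorHandler {
  void handleError(Path path, IOException e);
}

// GNU-ls-like policy: report to stderr and keep globbing.
class LsLikeErrorHandler implements PathErrorHandler {
  public void handleError(Path path, IOException e) {
    System.err.println("globStatus: cannot access " + path
        + ": " + e.getMessage());
    // Returning normally means: skip this path, continue with the rest.
  }
}
{code}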

> new APIs for listStatus and globStatus to deal with symlinks
> 
>
> Key: HADOOP-9972
> URL: https://issues.apache.org/jira/browse/HADOOP-9972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.1.1-beta
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>
> Based on the discussion in HADOOP-9912, we need new APIs for FileSystem to 
> deal with symlinks.  The issue is that code has been written which is 
> incompatible with the existence of things which are not files or directories. 
>  For example,
> there is a lot of code out there that looks at FileStatus#isFile, and
> if it returns false, assumes that what it is looking at is a
> directory.  In the case of a symlink, this assumption is incorrect.
> It seems reasonable to make the default behavior of {{FileSystem#listStatus}} 
> and {{FileSystem#globStatus}} be fully resolving symlinks, and ignoring 
> dangling ones.  This will prevent incompatibility with existing MR jobs and 
> other HDFS users.  We should also add new versions of listStatus and 
> globStatus that allow new, symlink-aware code to deal with symlinks as 
> symlinks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9971) Pack hadoop compress native libs and upload it to maven for other projects to depend on

2013-09-17 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769732#comment-13769732
 ] 

Chris Nauroth commented on HADOOP-9971:
---

I'm curious how this change would interact with Apache release management.  I'm 
not a release manager though, so I'm not fully qualified to make a decision.  
Can a current release manager please comment?  [~mattf], maybe you could take a 
look?

My concern is that the patch seems to imply a need for an Apache release 
artifact per supported platform architecture.  Do we have a formally defined 
list of those supported platforms?  In practice, we've seen various *nixes: 
multiple Linux distros, BSD, Solaris, AIX.  There is also hadoop.dll and 
winutils.exe for Windows.

This also would seem to drive additional complexity into the mvn build.  How 
would we handle uploading multiple release artifacts for multiple platforms 
under the same version number, given that the mvn build is executing on one 
machine with one specific architecture?  Would Maven let us incrementally 
upload different platform-specific artifacts under the same version number?  
Would we need to scatter builds to multiple platform slaves and then gather 
them for the upload?  Either way implies Apache infrastructure work to 
guarantee we have the right mix of machines.

commons-daemon has the same challenge of mixing Java and native deployment 
artifacts, and I believe they no longer publish the native artifacts and only 
release source.  [~liushaohui], you mentioned hadoop-snappy.  Do you know how 
that project handles releases?

Thank you for the patch.  It would be convenient if we can work it out, but I 
think release managers are the most qualified to review.


> Pack hadoop compress native libs and upload it to maven for other projects to 
> depend on
> ---
>
> Key: HADOOP-9971
> URL: https://issues.apache.org/jira/browse/HADOOP-9971
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Liu Shaohui
>Priority: Minor
> Attachments: HADOOP-9971-trunk-v1.diff
>
>
> Currently, if other projects like HBase want to use the Hadoop Common native 
> libs, they must copy the native libs into their distribution, which is not 
> agile. Following the idea of 
> hadoop-snappy (http://code.google.com/p/hadoop-snappy), we can pack the 
> Hadoop Common native libs and upload them to a Maven repository for other 
> projects to depend on.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9973) wrong dependencies

2013-09-17 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769668#comment-13769668
 ] 

Nicolas Liochon commented on HADOOP-9973:
-

looking at mvn dependency:analyze -pl hadoop-yarn-common
[WARNING] Used undeclared dependencies found:
[WARNING]    javax.xml.bind:jaxb-api:jar:2.2.2:compile
[WARNING]    commons-logging:commons-logging:jar:1.1.1:provided
[WARNING]    org.apache.commons:commons-compress:jar:1.4.1:provided
[WARNING]    javax.servlet:servlet-api:jar:2.5:provided
[WARNING]    commons-codec:commons-codec:jar:1.4:provided
[WARNING]    com.sun.jersey:jersey-core:jar:1.9:compile
[WARNING]    org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8:compile
[WARNING]    com.google.guava:guava:jar:11.0.2:provided
[WARNING]    commons-lang:commons-lang:jar:2.5:provided
[WARNING]    commons-cli:commons-cli:jar:1.2:provided
[WARNING]    org.mortbay.jetty:jetty:jar:6.1.26:provided
[WARNING] Unused declared dependencies found:
[WARNING]    org.slf4j:slf4j-log4j12:jar:1.7.5:compile
[WARNING]    io.netty:netty:jar:3.6.2.Final:compile
[WARNING]    org.apache.hadoop:hadoop-hdfs:jar:2.1.2-SNAPSHOT:test
[WARNING]    com.sun.jersey.jersey-test-framework:jersey-test-framework-core:jar:1.9:test
[WARNING]    com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:jar:1.9:compile


It seems that the parent pom in YARN contains most of the dependencies, but 
for a yarn-client it should be possible to avoid the dependency on 
com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:jar:1.9:compile
 


> wrong dependencies
> --
>
> Key: HADOOP-9973
> URL: https://issues.apache.org/jira/browse/HADOOP-9973
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta, 2.1.1-beta
>Reporter: Nicolas Liochon
>Priority: Minor
>
> See HBASE-9557 for the impact: for some of these dependencies, it seems the 
> poms push them onto client applications even though they are not used.
> mvn dependency:analyze -pl hadoop-common
> [WARNING] Used undeclared dependencies found:
> [WARNING]    com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [WARNING]    commons-collections:commons-collections:jar:3.2.1:compile
> [WARNING] Unused declared dependencies found:
> [WARNING]    com.sun.jersey:jersey-json:jar:1.9:compile
> [WARNING]    tomcat:jasper-compiler:jar:5.5.23:runtime
> [WARNING]    tomcat:jasper-runtime:jar:5.5.23:runtime
> [WARNING]    javax.servlet.jsp:jsp-api:jar:2.1:runtime
> [WARNING]    commons-el:commons-el:jar:1.0:runtime
> [WARNING]    org.slf4j:slf4j-log4j12:jar:1.7.5:runtime
> mvn dependency:analyze -pl hadoop-yarn-client
> [WARNING] Used undeclared dependencies found:
> [WARNING]    org.mortbay.jetty:jetty-util:jar:6.1.26:provided
> [WARNING]    log4j:log4j:jar:1.2.17:compile
> [WARNING]    com.google.guava:guava:jar:11.0.2:provided
> [WARNING]    commons-lang:commons-lang:jar:2.5:provided
> [WARNING]    commons-logging:commons-logging:jar:1.1.1:provided
> [WARNING]    commons-cli:commons-cli:jar:1.2:provided
> [WARNING]    org.apache.hadoop:hadoop-yarn-server-common:jar:2.1.2-SNAPSHOT:test
> [WARNING] Unused declared dependencies found:
> [WARNING]    org.slf4j:slf4j-api:jar:1.7.5:compile
> [WARNING]    org.slf4j:slf4j-log4j12:jar:1.7.5:compile
> [WARNING]    com.google.inject.extensions:guice-servlet:jar:3.0:compile
> [WARNING]    io.netty:netty:jar:3.6.2.Final:compile
> [WARNING]    com.google.protobuf:protobuf-java:jar:2.5.0:compile
> [WARNING]    commons-io:commons-io:jar:2.1:compile
> [WARNING]    org.apache.hadoop:hadoop-hdfs:jar:2.1.2-SNAPSHOT:test
> [WARNING]    com.google.inject:guice:jar:3.0:compile
> [WARNING]    com.sun.jersey.jersey-test-framework:jersey-test-framework-core:jar:1.9:test
> [WARNING]    com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:jar:1.9:compile
> [WARNING]    com.sun.jersey:jersey-server:jar:1.9:compile
> [WARNING]    com.sun.jersey:jersey-json:jar:1.9:compile
> [WARNING]    com.sun.jersey.contribs:jersey-guice:jar:1.9:compile

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9972) new APIs for listStatus and globStatus to deal with symlinks

2013-09-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769653#comment-13769653
 ] 

Colin Patrick McCabe commented on HADOOP-9972:
--

Proposed new APIs (in FileSystem and FileContext):
{code}
FileStatus[] listStatus(Path path, PathOptions options) throws IOException;
FileStatus[] globStatus(Path path, PathOptions options) throws IOException;
{code}

The {{PathOptions}} class will contain three fields:
{code}
  private PathFilter pathFilter;
  private PathErrorHandler errorHandler;
  private Boolean resolveSymlinks;
{code}

{{PathFilter}} serves the same purpose that it currently does-- filtering out 
paths from the results.

{{PathErrorHandler}} has a {{handleError}} function taking a {{Path}} and 
{{IOException}}.  This function gets invoked whenever there is an IOException.  
It can choose to rethrow the exception,  log the exception and continue, or 
simply ignore it completely.

{{resolveSymlinks}} determines whether we should fully resolve all symlinks 
that we come across.  If it is set, we will never get back a FileStatus for a 
symlink from either {{listStatus}} or {{globStatus}}.

We can add more fields to {{PathOptions}} later if it becomes necessary.
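
A hypothetical caller of the proposed overload; the setter names on 
{{PathOptions}} are illustrative, and LsLikeErrorHandler refers to the handler 
sketched elsewhere in this thread:
{code}
// Hypothetical usage; neither PathOptions nor the
// globStatus(Path, PathOptions) overload exists yet.
PathOptions opts = new PathOptions();
opts.setResolveSymlinks(false);                  // keep symlinks as symlinks
opts.setErrorHandler(new LsLikeErrorHandler());  // warn on stderr, continue
opts.setPathFilter(null);                        // no extra filtering

FileStatus[] matches = fs.globStatus(new Path("/*/stuff"), opts);
{code}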

> new APIs for listStatus and globStatus to deal with symlinks
> 
>
> Key: HADOOP-9972
> URL: https://issues.apache.org/jira/browse/HADOOP-9972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.1.1-beta
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>
> Based on the discussion in HADOOP-9912, we need new APIs for FileSystem to 
> deal with symlinks.  The issue is that code has been written which is 
> incompatible with the existence of things which are not files or directories. 
>  For example,
> there is a lot of code out there that looks at FileStatus#isFile, and
> if it returns false, assumes that what it is looking at is a
> directory.  In the case of a symlink, this assumption is incorrect.
> It seems reasonable to make the default behavior of {{FileSystem#listStatus}} 
> and {{FileSystem#globStatus}} be fully resolving symlinks, and ignoring 
> dangling ones.  This will prevent incompatibility with existing MR jobs and 
> other HDFS users.  We should also add new versions of listStatus and 
> globStatus that allow new, symlink-aware code to deal with symlinks as 
> symlinks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9972) new APIs for listStatus and globStatus to deal with symlinks

2013-09-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769646#comment-13769646
 ] 

Colin Patrick McCabe commented on HADOOP-9972:
--

I think we can probably let {{FileContext#listStatus}} and 
{{FileContext#Util#globStatus}} default to *not* fully resolving symlinks.  
This makes sense, since {{FileContext}} has had symlink support  for a long 
time, and doesn't have as much legacy code relying on it.

We also probably need some way of sensibly handling errors in globStatus.  
Right now, we really only have the choice between ignoring the error and 
throwing an exception that ends the whole globStatus.  We should add some options.

> new APIs for listStatus and globStatus to deal with symlinks
> 
>
> Key: HADOOP-9972
> URL: https://issues.apache.org/jira/browse/HADOOP-9972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.1.1-beta
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>
> Based on the discussion in HADOOP-9912, we need new APIs for FileSystem to 
> deal with symlinks.  The issue is that code has been written which is 
> incompatible with the existence of things which are not files or directories. 
>  For example,
> there is a lot of code out there that looks at FileStatus#isFile, and
> if it returns false, assumes that what it is looking at is a
> directory.  In the case of a symlink, this assumption is incorrect.
> It seems reasonable to make the default behavior of {{FileSystem#listStatus}} 
> and {{FileSystem#globStatus}} be fully resolving symlinks, and ignoring 
> dangling ones.  This will prevent incompatibility with existing MR jobs and 
> other HDFS users.  We should also add new versions of listStatus and 
> globStatus that allow new, symlink-aware code to deal with symlinks as 
> symlinks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9972) new APIs for listStatus and globStatus to deal with symlinks

2013-09-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9972:
-

  Component/s: fs
Affects Version/s: 2.1.1-beta

> new APIs for listStatus and globStatus to deal with symlinks
> 
>
> Key: HADOOP-9972
> URL: https://issues.apache.org/jira/browse/HADOOP-9972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.1.1-beta
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>
> Based on the discussion in HADOOP-9912, we need new APIs for FileSystem to 
> deal with symlinks.  The issue is that code has been written which is 
> incompatible with the existence of things which are not files or directories. 
>  For example,
> there is a lot of code out there that looks at FileStatus#isFile, and
> if it returns false, assumes that what it is looking at is a
> directory.  In the case of a symlink, this assumption is incorrect.
> It seems reasonable to make the default behavior of {{FileSystem#listStatus}} 
> and {{FileSystem#globStatus}} be fully resolving symlinks, and ignoring 
> dangling ones.  This will prevent incompatibility with existing MR jobs and 
> other HDFS users.  We should also add new versions of listStatus and 
> globStatus that allow new, symlink-aware code to deal with symlinks as 
> symlinks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9972) new APIs for listStatus and globStatus to deal with symlinks

2013-09-17 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9972:


 Summary: new APIs for listStatus and globStatus to deal with 
symlinks
 Key: HADOOP-9972
 URL: https://issues.apache.org/jira/browse/HADOOP-9972
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Based on the discussion in HADOOP-9912, we need new APIs for FileSystem to deal 
with symlinks.  The issue is that code has been written which is incompatible 
with the existence of things which are not files or directories.  For example,
there is a lot of code out there that looks at FileStatus#isFile, and
if it returns false, assumes that what it is looking at is a
directory.  In the case of a symlink, this assumption is incorrect.

It seems reasonable to make the default behavior of {{FileSystem#listStatus}} 
and {{FileSystem#globStatus}} be fully resolving symlinks, and ignoring 
dangling ones.  This will prevent incompatibility with existing MR jobs and 
other HDFS users.  We should also add new versions of listStatus and globStatus 
that allow new, symlink-aware code to deal with symlinks as symlinks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9968) ProxyUsers does not work with NetGroups

2013-09-17 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769631#comment-13769631
 ] 

Benoy Antony commented on HADOOP-9968:
--

The patch also includes a test case for the proxy user with netgroups.

> ProxyUsers does not work with NetGroups
> ---
>
> Key: HADOOP-9968
> URL: https://issues.apache.org/jira/browse/HADOOP-9968
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-9968.patch
>
>
> It is possible to use NetGroups for ACLs. This requires specifying  the 
> config property hadoop.security.group.mapping as  
> org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping or 
> org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping.
> The authorization to proxy a user by another user is specified as a list of 
> groups hadoop.proxyuser.<user>.groups. The group resolution does not 
> work if we are using NetGroups.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9968) ProxyUsers does not work with NetGroups

2013-09-17 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-9968:
-

Description: 
It is possible to use NetGroups for ACLs. This requires specifying  the config 
property hadoop.security.group.mapping as  
org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping or 
org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping.

The authorization to proxy a user by another user is specified as a list of 
groups hadoop.proxyuser.<user>.groups. The group resolution does not work 
if we are using NetGroups.

  was:
It is possible to use NetGroups for ACLs. This requires specifying  the config 
property org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping as  
org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping or 
org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping.

The authorization to proxy a user by another user is specified as a list of 
groups hadoop.proxyuser.<user>.groups. The group resolution does not work 
if we are using NetGroups.


> ProxyUsers does not work with NetGroups
> ---
>
> Key: HADOOP-9968
> URL: https://issues.apache.org/jira/browse/HADOOP-9968
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-9968.patch
>
>
> It is possible to use NetGroups for ACLs. This requires specifying  the 
> config property hadoop.security.group.mapping as  
> org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping or 
> org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping.
> The authorization to proxy a user by another user is specified as a list of 
> groups hadoop.proxyuser.<user>.groups. The group resolution does not 
> work if we are using NetGroups.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9969) TGT expiration doesn't trigger Kerberos relogin

2013-09-17 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769539#comment-13769539
 ] 

Daryn Sharp commented on HADOOP-9969:
-

HADOOP-9850 already records the auth method being attempted, so the SASL 
failure loop can tell whether Kerberos is being attempted.  We saw this issue 
internally, and 9850 did indeed fix it for us.

Would you please attach (please don't post inline) a log with client debugging 
enabled?

> TGT expiration doesn't trigger Kerberos relogin
> ---
>
> Key: HADOOP-9969
> URL: https://issues.apache.org/jira/browse/HADOOP-9969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.1.0-beta
>Reporter: Yu Gao
> Attachments: HADOOP-9969.patch
>
>
> In HADOOP-9698 & HADOOP-9850, the RPC client and SASL client were changed to 
> respect the auth method advertised by the server, instead of blindly 
> attempting the one configured at the client side. However, when the TGT has 
> expired, an exception is thrown from SaslRpcClient#createSaslClient(SaslAuth 
> authType); at this point authMethod still holds its initial value, SIMPLE, 
> and never gets a chance to be updated with the method requested by the 
> server, so Kerberos relogin will not happen.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9935) set junit dependency to test scope

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769510#comment-13769510
 ] 

Hudson commented on HADOOP-9935:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1551 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1551/])
HADOOP-9935. Revert from trunk. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523839)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


> set junit dependency to test scope
> --
>
> Key: HADOOP-9935
> URL: https://issues.apache.org/jira/browse/HADOOP-9935
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>Assignee: André Kelpe
> Fix For: 3.0.0, 2.1.1-beta
>
> Attachments: HADOOP-9935.patch, HADOOP-9935.patch
>
>
> junit should be set to test scope in hadoop-mapreduce-project and 
> hadoop-yarn-project. This patch fixes the problem that Hadoop always 
> pulls in its own version of junit and that junit is even included in the 
> tarballs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9350) Hadoop not building against Java7 on OSX

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769514#comment-13769514
 ] 

Hudson commented on HADOOP-9350:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1551 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1551/])
HADOOP-9350. Moving to appropriate section in CHANGES.txt (acmurthy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523891)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> Hadoop not building against Java7 on OSX 
> -
>
> Key: HADOOP-9350
> URL: https://issues.apache.org/jira/browse/HADOOP-9350
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: OSX, java version "1.7.0_15" -Oracle installation of JRE 
> and JDK
>Reporter: Steve Loughran
>Assignee: Robert Kanter
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: building.patch, HADOOP-9350.patch
>
>
> Maven stack-traces out in the {{jspc}} compilation as the JSPC plugin doesn't 
> work against the new JDK7 JAR layout. Needs a symlink set up to fix it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9944) RpcRequestHeaderProto defines callId as uint32 while ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (-3)

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769507#comment-13769507
 ] 

Hudson commented on HADOOP-9944:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1551 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1551/])
HADOOP-9944. Fix RpcRequestHeaderProto.callId to be sint32 rather than uint32 
since ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (i.e. -3). Contributed by 
Arun C. Murthy. (acmurthy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523885)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/ProtobufRpcEngine.proto
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/RpcHeader.proto


> RpcRequestHeaderProto defines callId as uint32 while 
> ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (-3)
> -
>
> Key: HADOOP-9944
> URL: https://issues.apache.org/jira/browse/HADOOP-9944
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
>Priority: Blocker
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9944.patch, HADOOP-9944.patch, HADOOP-9944.patch
>
>
> RpcRequestHeaderProto defines callId as uint32 while 
> ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (-3).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9971) Pack hadoop compress native libs and upload it to maven for other projects to depend on

2013-09-17 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HADOOP-9971:


Attachment: HADOOP-9971-trunk-v1.diff

> Pack hadoop compress native libs and upload it to maven for other projects to 
> depend on
> ---
>
> Key: HADOOP-9971
> URL: https://issues.apache.org/jira/browse/HADOOP-9971
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Liu Shaohui
>Priority: Minor
> Attachments: HADOOP-9971-trunk-v1.diff
>
>
> Currently, if other projects like HBase want to use the Hadoop Common native 
> libs, they must copy the native libs into their distribution, which is not 
> agile. Following the idea of 
> hadoop-snappy (http://code.google.com/p/hadoop-snappy), we can pack the 
> Hadoop Common native libs and upload them to a Maven repository for other 
> projects to depend on.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9971) Pack hadoop compress native libs and upload it to maven for other projects to depend on

2013-09-17 Thread Liu Shaohui (JIRA)
Liu Shaohui created HADOOP-9971:
---

 Summary: Pack hadoop compress native libs and upload it to maven 
for other projects to depend on
 Key: HADOOP-9971
 URL: https://issues.apache.org/jira/browse/HADOOP-9971
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Liu Shaohui
Priority: Minor
 Attachments: HADOOP-9971-trunk-v1.diff

Currently, if other projects like hbase want to use the hadoop common native libs, 
they must copy the native libs into their distribution, which is not agile. 
Following the idea of hadoop-snappy (http://code.google.com/p/hadoop-snappy), we 
can pack the hadoop common native libs and upload them to a maven repository for 
other projects to depend on.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9935) set junit dependency to test scope

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769447#comment-13769447
 ] 

Hudson commented on HADOOP-9935:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1525 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1525/])
HADOOP-9935. Revert from trunk. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523839)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


> set junit dependency to test scope
> --
>
> Key: HADOOP-9935
> URL: https://issues.apache.org/jira/browse/HADOOP-9935
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>Assignee: André Kelpe
> Fix For: 3.0.0, 2.1.1-beta
>
> Attachments: HADOOP-9935.patch, HADOOP-9935.patch
>
>
> junit should be set to scope test in hadoop-mapreduce-project and 
> hadoop-yarn-project. This patch fixes the problem that hadoop always pulls 
> in its own version of junit and that junit is even included in the tarballs.
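
For reference, scoping the dependency in the relevant pom.xml is a one-line change; a minimal sketch (the version shown is illustrative):
{code}
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.11</version>
  <scope>test</scope> <!-- keeps junit off compile/runtime classpaths and out of tarballs -->
</dependency>
{code}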

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9350) Hadoop not building against Java7 on OSX

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769451#comment-13769451
 ] 

Hudson commented on HADOOP-9350:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1525 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1525/])
HADOOP-9350. Moving to appropriate section in CHANGES.txt (acmurthy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523891)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> Hadoop not building against Java7 on OSX 
> -
>
> Key: HADOOP-9350
> URL: https://issues.apache.org/jira/browse/HADOOP-9350
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: OSX, java version "1.7.0_15" - Oracle installation of JRE 
> and JDK
>Reporter: Steve Loughran
>Assignee: Robert Kanter
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: building.patch, HADOOP-9350.patch
>
>
> Maven fails with a stack trace during the {{jspc}} compilation because the JSPC 
> plugin doesn't work against the new JDK7 JAR layout. A symlink needs to be set up to fix this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9962) in order to avoid dependency divergence within Hadoop itself lets enable DependencyConvergence

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769441#comment-13769441
 ] 

Hudson commented on HADOOP-9962:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1525 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1525/])
HADOOP-9962. in order to avoid dependency divergence within Hadoop itself lets 
enable DependencyConvergence. (rvs via tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523599)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


> in order to avoid dependency divergence within Hadoop itself lets enable 
> DependencyConvergence
> --
>
> Key: HADOOP-9962
> URL: https://issues.apache.org/jira/browse/HADOOP-9962
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9962.patch.txt
>
>
> In order to avoid the likes of HADOOP-9961 it may be useful for us to enable 
> the DependencyConvergence check in maven-enforcer-plugin.
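
A minimal sketch of enabling the rule in the root pom ({{dependencyConvergence}} is a standard maven-enforcer rule; the execution id is our choice):
{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>depcheck</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <!-- fail the build when two paths resolve different versions of a dependency -->
          <dependencyConvergence/>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}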

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9944) RpcRequestHeaderProto defines callId as uint32 while ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (-3)

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769442#comment-13769442
 ] 

Hudson commented on HADOOP-9944:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1525 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1525/])
HADOOP-9944. Fix RpcRequestHeaderProto.callId to be sint32 rather than uint32 
since ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (i.e. -3). Contributed by 
Arun C. Murthy. (acmurthy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523885)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/ProtobufRpcEngine.proto
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/RpcHeader.proto


> RpcRequestHeaderProto defines callId as uint32 while 
> ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (-3)
> -
>
> Key: HADOOP-9944
> URL: https://issues.apache.org/jira/browse/HADOOP-9944
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
>Priority: Blocker
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9944.patch, HADOOP-9944.patch, HADOOP-9944.patch
>
>
> RpcRequestHeaderProto defines callId as uint32 while 
> ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (-3).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9961) versions of a few transitive dependencies diverged between hadoop subprojects

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769444#comment-13769444
 ] 

Hudson commented on HADOOP-9961:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1525 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1525/])
HADOOP-9961. versions of a few transitive dependencies diverged between hadoop 
subprojects. (rvs via tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523596)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-nfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/pom.xml


> versions of a few transitive dependencies diverged between hadoop subprojects
> -
>
> Key: HADOOP-9961
> URL: https://issues.apache.org/jira/browse/HADOOP-9961
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
>Priority: Minor
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9961.patch.txt
>
>
> I've noticed a few divergences between secondary dependencies of the various 
> hadoop subprojects. For example:
> {noformat}
> [ERROR]
> Dependency convergence error for org.apache.commons:commons-compress:1.4.1 
> paths to dependency are:
> +-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
> +-org.apache.avro:avro:1.7.4
>   +-org.apache.commons:commons-compress:1.4.1
> and
> +-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
> +-org.apache.commons:commons-compress:1.4
> {noformat}
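
One conventional way to resolve such a divergence is to pin the transitive version once in the parent pom's dependencyManagement, so every path converges on it; a sketch for the commons-compress case above:
{code}
<dependencyManagement>
  <dependencies>
    <!-- forces both the avro path and the direct path onto 1.4.1 -->
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-compress</artifactId>
      <version>1.4.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}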

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9935) set junit dependency to test scope

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769426#comment-13769426
 ] 

Hudson commented on HADOOP-9935:


SUCCESS: Integrated in Hadoop-Yarn-trunk #335 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/335/])
HADOOP-9935. Revert from trunk. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523839)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


> set junit dependency to test scope
> --
>
> Key: HADOOP-9935
> URL: https://issues.apache.org/jira/browse/HADOOP-9935
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>Assignee: André Kelpe
> Fix For: 3.0.0, 2.1.1-beta
>
> Attachments: HADOOP-9935.patch, HADOOP-9935.patch
>
>
> junit should be set to scope test in hadoop-mapreduce-project and 
> hadoop-yarn-project. This patch fixes the problem that hadoop always pulls 
> in its own version of junit and that junit is even included in the tarballs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9961) versions of a few transitive dependencies diverged between hadoop subprojects

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769425#comment-13769425
 ] 

Hudson commented on HADOOP-9961:


SUCCESS: Integrated in Hadoop-Yarn-trunk #335 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/335/])
HADOOP-9961. versions of a few transitive dependencies diverged between hadoop 
subprojects. (rvs via tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523596)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-nfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/pom.xml


> versions of a few transitive dependencies diverged between hadoop subprojects
> -
>
> Key: HADOOP-9961
> URL: https://issues.apache.org/jira/browse/HADOOP-9961
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
>Priority: Minor
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9961.patch.txt
>
>
> I've noticed a few divergences between secondary dependencies of the various 
> hadoop subprojects. For example:
> {noformat}
> [ERROR]
> Dependency convergence error for org.apache.commons:commons-compress:1.4.1 
> paths to dependency are:
> +-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
> +-org.apache.avro:avro:1.7.4
>   +-org.apache.commons:commons-compress:1.4.1
> and
> +-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
> +-org.apache.commons:commons-compress:1.4
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9350) Hadoop not building against Java7 on OSX

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769431#comment-13769431
 ] 

Hudson commented on HADOOP-9350:


SUCCESS: Integrated in Hadoop-Yarn-trunk #335 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/335/])
HADOOP-9350. Moving to appropriate section in CHANGES.txt (acmurthy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523891)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> Hadoop not building against Java7 on OSX 
> -
>
> Key: HADOOP-9350
> URL: https://issues.apache.org/jira/browse/HADOOP-9350
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
> Environment: OSX, java version "1.7.0_15" - Oracle installation of JRE 
> and JDK
>Reporter: Steve Loughran
>Assignee: Robert Kanter
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: building.patch, HADOOP-9350.patch
>
>
> Maven fails with a stack trace during the {{jspc}} compilation because the JSPC 
> plugin doesn't work against the new JDK7 JAR layout. A symlink needs to be set up to fix this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9944) RpcRequestHeaderProto defines callId as uint32 while ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (-3)

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769422#comment-13769422
 ] 

Hudson commented on HADOOP-9944:


SUCCESS: Integrated in Hadoop-Yarn-trunk #335 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/335/])
HADOOP-9944. Fix RpcRequestHeaderProto.callId to be sint32 rather than uint32 
since ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (i.e. -3). Contributed by 
Arun C. Murthy. (acmurthy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523885)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/ProtobufRpcEngine.proto
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/RpcHeader.proto


> RpcRequestHeaderProto defines callId as uint32 while 
> ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (-3)
> -
>
> Key: HADOOP-9944
> URL: https://issues.apache.org/jira/browse/HADOOP-9944
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
>Priority: Blocker
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9944.patch, HADOOP-9944.patch, HADOOP-9944.patch
>
>
> RpcRequestHeaderProto defines callId as uint32 while 
> ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (-3).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9962) in order to avoid dependency divergence within Hadoop itself lets enable DependencyConvergence

2013-09-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769421#comment-13769421
 ] 

Hudson commented on HADOOP-9962:


SUCCESS: Integrated in Hadoop-Yarn-trunk #335 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/335/])
HADOOP-9962. in order to avoid dependency divergence within Hadoop itself lets 
enable DependencyConvergence. (rvs via tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523599)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


> in order to avoid dependency divergence within Hadoop itself lets enable 
> DependencyConvergence
> --
>
> Key: HADOOP-9962
> URL: https://issues.apache.org/jira/browse/HADOOP-9962
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9962.patch.txt
>
>
> In order to avoid the likes of HADOOP-9961 it may be useful for us to enable 
> the DependencyConvergence check in maven-enforcer-plugin.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9626) Add an interface for any exception to serve up an Exit code

2013-09-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769374#comment-13769374
 ] 

Steve Loughran commented on HADOOP-9626:


# It's complex with an interface: either we deal with the special case that 
someone wants to terminate with something that isn't an exception, or we take a 
throwable and just look for that interface.
# I'm currently passing down ExitExceptions as they are what carry the exit 
code. If we add a new interface then yes, I could throw new exceptions. The 
nice thing about the current approach is that it doesn't change any public 
interfaces.
# -1 to an enum, as there are 255 exit codes and different bits of code can 
generate different codes with different meanings.

As an example, YARN uses exit codes 1 and 2 for different meanings; #1 is 
"client initiated shutdown *without failures*": 
[https://github.com/hortonworks/hoya/blob/master/src/main/java/org/apache/hadoop/yarn/service/launcher/LauncherExitCodes.java].
I have to translate a "1" coming off HBase or Accumulo into a failure code, 
which I do by allocating 64 and above to an AM-specific set of failure codes: 
[https://github.com/hortonworks/hoya/blob/master/src/main/java/org/apache/hadoop/hoya/HoyaExitCodes.java]

> Add an interface for any exception to serve up an Exit code
> ---
>
> Key: HADOOP-9626
> URL: https://issues.apache.org/jira/browse/HADOOP-9626
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.1.0-beta
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-9626-001.patch
>
>
> Various exceptions include exit codes, specifically 
> {{Shell.ExitCodeException}} and {{ExitUtils.ExitException()}}.
> If all exceptions that wanted to pass an exit code up to the main method 
> implemented an interface with the method {{int getExitCode()}}, it'd be 
> easier to extract exit codes from them in a unified way, generating the 
> desired exit codes in the application itself.
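
A minimal sketch of the proposed shape (the interface and class names here are ours for illustration, not committed API):
{code}
/** Anything that can supply a process exit code. */
public interface ExitCodeProvider {
  int getExitCode();
}

/** An exception carrying an exit code, in the style of {{ExitUtils.ExitException}}. */
class CodedException extends RuntimeException implements ExitCodeProvider {
  private final int exitCode;

  CodedException(int exitCode, String message) {
    super(message);
    this.exitCode = exitCode;
  }

  @Override
  public int getExitCode() {
    return exitCode;
  }
}

class Launcher {
  /** Unified extraction: use the interface when present, else a generic failure code. */
  static int exitCodeOf(Throwable t) {
    return (t instanceof ExitCodeProvider)
        ? ((ExitCodeProvider) t).getExitCode()
        : 1; // fallback policy is an assumption, not part of the proposal
  }
}
{code}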

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2013-09-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769348#comment-13769348
 ] 

Steve Loughran commented on HADOOP-8545:


Suresh, the findbugs warnings are false alarms: they warn that a stream 
isn't closed in the method where it is opened. This is the input stream that is 
kept open across methods in the Swift input stream; it is always closed in the 
close() method.
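
To make the pattern concrete, a minimal self-contained sketch (hypothetical class, with a byte-array stand-in for the remote HTTP stream) of the shape findbugs misreads:
{code}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

class LazyStream extends InputStream {
  private InputStream in; // held open across method calls

  private void ensureOpen() throws IOException {
    if (in == null) {
      in = new ByteArrayInputStream(new byte[]{1, 2, 3}); // stand-in for the HTTP stream
    }
  }

  @Override
  public int read() throws IOException {
    ensureOpen();   // findbugs flags ensureOpen(): stream "not closed" there
    return in.read();
  }

  @Override
  public void close() throws IOException {
    if (in != null) {
      in.close();   // the stream is, in fact, always closed here
      in = null;
    }
  }
}
{code}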

> Filesystem Implementation for OpenStack Swift
> -
>
> Key: HADOOP-8545
> URL: https://issues.apache.org/jira/browse/HADOOP-8545
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 1.2.0, 2.0.3-alpha
>Reporter: Tim Miller
>Assignee: Dmitry Mezhensky
>  Labels: hadoop, patch
> Attachments: HADOOP-8545-026.patch, HADOOP-8545-027.patch, 
> HADOOP-8545-028.patch, HADOOP-8545-029.patch, HADOOP-8545-030.patch, 
> HADOOP-8545-031.patch, HADOOP-8545-032.patch, HADOOP-8545-033.patch, 
> HADOOP-8545-034.patch, HADOOP-8545-035.patch, HADOOP-8545-035.patch, 
> HADOOP-8545-10.patch, HADOOP-8545-11.patch, HADOOP-8545-12.patch, 
> HADOOP-8545-13.patch, HADOOP-8545-14.patch, HADOOP-8545-15.patch, 
> HADOOP-8545-16.patch, HADOOP-8545-17.patch, HADOOP-8545-18.patch, 
> HADOOP-8545-19.patch, HADOOP-8545-1.patch, HADOOP-8545-20.patch, 
> HADOOP-8545-21.patch, HADOOP-8545-22.patch, HADOOP-8545-23.patch, 
> HADOOP-8545-24.patch, HADOOP-8545-25.patch, HADOOP-8545-2.patch, 
> HADOOP-8545-3.patch, HADOOP-8545-4.patch, HADOOP-8545-5.patch, 
> HADOOP-8545-6.patch, HADOOP-8545-7.patch, HADOOP-8545-8.patch, 
> HADOOP-8545-9.patch, HADOOP-8545-javaclouds-2.patch, HADOOP-8545.patch, 
> HADOOP-8545.patch, HADOOP-8545.suresh.patch
>
>
> Add a filesystem implementation for the OpenStack Swift object store, similar 
> to the one that exists today for S3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira