[jira] [Updated] (HADOOP-11130) NFS updateMaps OS check is reversed

2014-09-26 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-11130:

Component/s: nfs

> NFS updateMaps OS check is reversed
> ---
>
> Key: HADOOP-11130
> URL: https://issues.apache.org/jira/browse/HADOOP-11130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>
> getent is fairly standard, dscl is not.  Yet the code logic prefers dscl for 
> non-Linux platforms. This code should check for OS X and use dscl and, if 
> not, then use getent.  See comments.
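
For reference, a minimal sketch of the direction the check should take (the 
class, method, and command arguments below are illustrative only, not the 
actual hadoop-nfs id-mapping code):

{noformat}
import java.util.Arrays;

// Illustrative sketch only -- not the actual hadoop-nfs id-mapping code.
// Intended logic: check for OS X and use dscl; otherwise fall back to getent,
// which is available on Linux and most other Unix platforms.
public class OsCheckSketch {
  static boolean isMacOs() {
    return System.getProperty("os.name").startsWith("Mac");
  }

  static String[] listUsersCommand() {
    return isMacOs()
        ? new String[] {"dscl", ".", "-list", "/Users", "UniqueID"} // OS X only
        : new String[] {"getent", "passwd"};
  }

  public static void main(String[] args) {
    System.out.println(Arrays.toString(listUsersCommand()));
  }
}
{noformat}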



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11130) NFS updateMaps OS check is reversed

2014-09-26 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-11130:

Affects Version/s: 2.2.0

> NFS updateMaps OS check is reversed
> ---
>
> Key: HADOOP-11130
> URL: https://issues.apache.org/jira/browse/HADOOP-11130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>
> getent is fairly standard, dscl is not.  Yet the code logic prefers dscl for 
> non-Linux platforms. This code should check for OS X and use dscl and, if 
> not, then use getent.  See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11140) hadoop-aws only need test-scoped dependency on hadoop-common's tests jar

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149049#comment-14149049
 ] 

Hudson commented on HADOOP-11140:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #692 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/692/])
HADOOP-11140. hadoop-aws only need test-scoped dependency on hadoop-common's 
tests jar. Contributed by Juan Yu. (wang: rev 
4ea77efa3ab4160498fec54d4824321921c15124)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-aws/pom.xml


> hadoop-aws only need test-scoped dependency on hadoop-common's tests jar
> 
>
> Key: HADOOP-11140
> URL: https://issues.apache.org/jira/browse/HADOOP-11140
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Juan Yu
>Assignee: Juan Yu
> Fix For: 2.6.0
>
> Attachments: HDFS-7149.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11142) Remove hdfs dfs reference from file system shell documentation

2014-09-26 Thread Jonathan Allen (JIRA)
Jonathan Allen created HADOOP-11142:
---

 Summary: Remove hdfs dfs reference from file system shell 
documentation
 Key: HADOOP-11142
 URL: https://issues.apache.org/jira/browse/HADOOP-11142
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jonathan Allen
Priority: Minor


The File System Shell documentation references {{hdfs dfs}} in all of the 
examples. The FS shell is not specific to the underlying file system and so 
shouldn't reference HDFS. The correct usage should be {{hadoop fs}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature

2014-09-26 Thread Jonathan Allen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149072#comment-14149072
 ] 

Jonathan Allen commented on HADOOP-8989:


HADOOP-11142 created to change the FS docs to refer to {{hadoop fs}} rather 
than {{hdfs dfs}}.

> hadoop dfs -find feature
> 
>
> Key: HADOOP-8989
> URL: https://issues.apache.org/jira/browse/HADOOP-8989
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Marco Nicosia
>Assignee: Jonathan Allen
> Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch
>
>
> Both sysadmins and users make frequent use of the unix 'find' command, but 
> Hadoop has no correlate. Without this, users are writing scripts which make 
> heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs 
> -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
> client side. Possibly an in-NameNode find operation would be only a bit more 
> taxing on the NameNode, but significantly faster from the client's point of 
> view?
> The minimum set of options I can think of which would make a Hadoop find 
> command generally useful is (in priority order):
> * -type (file or directory, for now)
> * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
> * -print0 (for piping to xargs -0)
> * -depth
> * -owner/-group (and -nouser/-nogroup)
> * -name (allowing for shell pattern, or even regex?)
> * -perm
> * -size
> One possible special case, but could possibly be really cool if it ran from 
> within the NameNode:
> * -delete
> The "hadoop dfs -lsr | hadoop dfs -rm" cycle is really, really slow.
> Lower priority, some people do use operators, mostly to execute -or searches 
> such as:
> * find / \(-nouser -or -nogroup\)
> Finally, I thought I'd include a link to the [Posix spec for 
> find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]
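
For context, the client-side one-offs mentioned above look roughly like the 
following when written against the public FileSystem API (an illustrative 
sketch, not the proposed implementation). Every directory level is a separate 
listing call against the NameNode, which is why the pattern is slow:

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch of a client-side "find by name" built on listStatus().
public class ClientSideFind {
  static void find(FileSystem fs, Path dir, String name) throws Exception {
    // One RPC to the NameNode per directory visited.
    for (FileStatus status : fs.listStatus(dir)) {
      if (status.getPath().getName().equals(name)) {
        System.out.println(status.getPath());
      }
      if (status.isDirectory()) {
        find(fs, status.getPath(), name);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    find(fs, new Path(args[0]), args[1]);
  }
}
{noformat}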



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11140) hadoop-aws only need test-scoped dependency on hadoop-common's tests jar

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149190#comment-14149190
 ] 

Hudson commented on HADOOP-11140:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1883 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1883/])
HADOOP-11140. hadoop-aws only need test-scoped dependency on hadoop-common's 
tests jar. Contributed by Juan Yu. (wang: rev 
4ea77efa3ab4160498fec54d4824321921c15124)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-aws/pom.xml


> hadoop-aws only need test-scoped dependency on hadoop-common's tests jar
> 
>
> Key: HADOOP-11140
> URL: https://issues.apache.org/jira/browse/HADOOP-11140
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Juan Yu
>Assignee: Juan Yu
> Fix For: 2.6.0
>
> Attachments: HDFS-7149.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11140) hadoop-aws only need test-scoped dependency on hadoop-common's tests jar

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149269#comment-14149269
 ] 

Hudson commented on HADOOP-11140:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1908 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1908/])
HADOOP-11140. hadoop-aws only need test-scoped dependency on hadoop-common's 
tests jar. Contributed by Juan Yu. (wang: rev 
4ea77efa3ab4160498fec54d4824321921c15124)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-aws/pom.xml


> hadoop-aws only need test-scoped dependency on hadoop-common's tests jar
> 
>
> Key: HADOOP-11140
> URL: https://issues.apache.org/jira/browse/HADOOP-11140
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Juan Yu
>Assignee: Juan Yu
> Fix For: 2.6.0
>
> Attachments: HDFS-7149.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.

2014-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11127:
---
Attachment: HADOOP-11064.003.patch

To keep the discussion fully documented in this issue, I'm re-uploading the 
patch Colin had attached to HADOOP-11064 for versioning in the library's file 
name.  This can be considered a work-in-progress implementation of idea #2 in 
my first comment.  (I say work-in-progress, because we haven't addressed 
winutils.exe yet in this patch.)

> Improve versioning and compatibility support in native library for downstream 
> hadoop-common users.
> --
>
> Key: HADOOP-11127
> URL: https://issues.apache.org/jira/browse/HADOOP-11127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Chris Nauroth
> Attachments: HADOOP-11064.003.patch
>
>
> There is no compatibility policy enforced on the JNI function signatures 
> implemented in the native library.  This library typically is deployed to all 
> nodes in a cluster, built from a specific source code version.  However, 
> downstream applications that want to run in that cluster might choose to 
> bundle a hadoop-common jar at a different version.  Since there is no 
> compatibility policy, this can cause link errors at runtime when the native 
> function signatures expected by hadoop-common.jar do not exist in 
> libhadoop.so/hadoop.dll.
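
As a hedged illustration of the failure mode (not actual Hadoop code): a 
Java-side {{native}} declaration binds to a JNI symbol name derived from the 
class and method, so a hadoop-common.jar that expects a native method the 
deployed library was not built with fails when the method is called.

{noformat}
// Illustrative only -- not the actual Hadoop native glue code.
// The JVM resolves this declaration to a C symbol named Java_NativeProbe_probe
// inside the loaded library. If the jar on the classpath expects a symbol the
// deployed libhadoop.so/hadoop.dll does not export, the call fails with
// UnsatisfiedLinkError at run time.
public class NativeProbe {
  static {
    System.loadLibrary("hadoop"); // loads libhadoop.so / hadoop.dll
  }

  public static native boolean probe();

  public static void main(String[] args) {
    System.out.println(probe()); // UnsatisfiedLinkError if the symbol is missing
  }
}
{noformat}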



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11049) javax package system class default is too broad

2014-09-26 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149615#comment-14149615
 ] 

Jason Lowe commented on HADOOP-11049:
-

Thanks for the patch, Sangjin!

I think "application-classloader.properties" might be too generic.  Maybe we 
should add the org.apache.hadoop. prefix or otherwise make it more unique?

Having one giant property line is not great for maintenance, as every patch 
against it will be a big blob of text changing to another big blob of text.  We 
should split the value on multiple lines so it's easier to read and easier to 
maintain.



> javax package system class default is too broad
> ---
>
> Key: HADOOP-11049
> URL: https://issues.apache.org/jira/browse/HADOOP-11049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11049.patch, HADOOP-11049.patch
>
>
> The system class default defined in ApplicationClassLoader has "javax.". This 
> is too broad. The intent of the system classes is to exempt classes that are 
> provided by the JDK along with hadoop and minimally necessary dependencies 
> that are guaranteed to be on the system classpath. "javax." is too broad for 
> that.
> For example, JSR-330 which is part of JavaEE (not JavaSE) has "javax.inject". 
> Packages like them should not be declared as system classes, as they will 
> result in ClassNotFoundException if they are needed and present on the user 
> classpath.
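
To make that concrete, here is a simplified sketch of the kind of prefix 
matching the system-class list implies (not the actual ApplicationClassLoader 
code): with a blanket "javax." entry, {{javax.inject.Inject}} is treated as a 
system class and is never resolved from the user classpath, even though the 
JDK does not provide it.

{noformat}
import java.util.Arrays;
import java.util.List;

// Simplified illustration of prefix-based system-class matching
// (not the actual ApplicationClassLoader implementation).
public class SystemClassSketch {
  static boolean isSystemClass(String name, List<String> systemClasses) {
    for (String prefix : systemClasses) {
      if (name.startsWith(prefix)) {
        return true;  // delegated to the parent classloader only
      }
    }
    return false;     // eligible to load from the user classpath
  }

  public static void main(String[] args) {
    List<String> tooBroad = Arrays.asList("java.", "javax.");
    // javax.inject is JSR-330 (JavaEE), not part of the JDK, yet it matches
    // the "javax." prefix and is never looked up in the user's bundled jars:
    System.out.println(isSystemClass("javax.inject.Inject", tooBroad)); // true
  }
}
{noformat}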



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11143) NetUtils.wrapException loses inner stack trace on BindException

2014-09-26 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11143:
---

 Summary: NetUtils.wrapException loses inner stack trace on 
BindException
 Key: HADOOP-11143
 URL: https://issues.apache.org/jira/browse/HADOOP-11143
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.5.1
 Environment: machine that doesn't bind
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


{{NetUtils.wrapException}} is designed to aid debugging by including exception 
diagnostics in the wrapped & relayed exception.

When a BindException is caught, we build the new exception but don't include 
the original as the inner cause.

This means it doesn't get logged, and while the host:port problem may be 
identifiable, the bit of the code playing up is now harder to track down.
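
A minimal sketch of the difference (illustrative, not the actual 
NetUtils.wrapException code): the wrapped exception should carry the original 
as its cause, so the inner stack trace survives into the logs.

{noformat}
import java.net.BindException;

// Illustrative only -- not the actual NetUtils.wrapException code.
public class WrapSketch {
  // Loses the inner stack trace: callers only ever see the new exception.
  static BindException wrapLossy(String host, int port, BindException e) {
    return new BindException(
        "Problem binding to " + host + ":" + port + ": " + e.getMessage());
  }

  // Keeps the original as the nested cause, so it is logged with the wrapper.
  static BindException wrapWithCause(String host, int port, BindException e) {
    BindException wrapped = new BindException(
        "Problem binding to " + host + ":" + port + ": " + e.getMessage());
    wrapped.initCause(e);
    return wrapped;
  }

  public static void main(String[] args) {
    BindException original = new BindException("Address already in use");
    wrapWithCause("0.0.0.0", 8020, original).printStackTrace();
  }
}
{noformat}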




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11143) NetUtils.wrapException loses inner stack trace on BindException

2014-09-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11143:

Attachment: HADOOP-11143-001.patch

Patch which has the BindException handling mimic the rest of the methods; 
includes the nested stack trace.

> NetUtils.wrapException loses inner stack trace on BindException
> ---
>
> Key: HADOOP-11143
> URL: https://issues.apache.org/jira/browse/HADOOP-11143
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.5.1
> Environment: machine that doesn't bind
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11143-001.patch
>
>
> {{NetUtils.wrapException}} is designed to aid debugging by including 
> exception diagnostics in the wrapped & relayed exception.
> When a BindException is caught, we build the new exception but don't include 
> the original as the inner cause.
> This means it doesn't get logged, and while the host:port problem may be 
> identifiable, the bit of the code playing up is now harder to track down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11143) NetUtils.wrapException loses inner stack trace on BindException

2014-09-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11143:

Status: Patch Available  (was: Open)

> NetUtils.wrapException loses inner stack trace on BindException
> ---
>
> Key: HADOOP-11143
> URL: https://issues.apache.org/jira/browse/HADOOP-11143
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.5.1
> Environment: machine that doesn't bind
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11143-001.patch
>
>
> {{NetUtils.wrapException}} is designed to aid debugging by including 
> exception diagnostics in the wrapped & relayed exception.
> When a BindException is caught, we build the new exception but don't include 
> the original as the inner cause.
> This means it doesn't get logged, and while the host:port problem may be 
> identifiable, the bit of the code playing up is now harder to track down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8808) Update FsShell documentation to mention deprecation of some of the commands, and mention alternatives

2014-09-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149741#comment-14149741
 ] 

Allen Wittenauer commented on HADOOP-8808:
--

+1 lgtm.

Will commit to branch-2 and trunk!

Thanks!

> Update FsShell documentation to mention deprecation of some of the commands, 
> and mention alternatives
> -
>
> Key: HADOOP-8808
> URL: https://issues.apache.org/jira/browse/HADOOP-8808
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs
>Affects Versions: 2.2.0
>Reporter: Hemanth Yamijala
>Assignee: Akira AJISAKA
> Attachments: HADOOP-8808.2.patch, HADOOP-8808.3.patch, 
> HADOOP-8808.patch
>
>
> In HADOOP-7286, we deprecated the following 3 commands dus, lsr and rmr, in 
> favour of du -s, ls -r and rm -r respectively. The FsShell documentation 
> should be updated to mention these, so that users can start switching. Also, 
> there are places where we refer to the deprecated commands as alternatives. 
> This can be changed as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8808) Update FsShell documentation to mention deprecation of some of the commands, and mention alternatives

2014-09-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8808:
-
   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

> Update FsShell documentation to mention deprecation of some of the commands, 
> and mention alternatives
> -
>
> Key: HADOOP-8808
> URL: https://issues.apache.org/jira/browse/HADOOP-8808
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs
>Affects Versions: 2.2.0
>Reporter: Hemanth Yamijala
>Assignee: Akira AJISAKA
> Fix For: 2.6.0
>
> Attachments: HADOOP-8808.2.patch, HADOOP-8808.3.patch, 
> HADOOP-8808.patch
>
>
> In HADOOP-7286, we deprecated the following 3 commands dus, lsr and rmr, in 
> favour of du -s, ls -r and rm -r respectively. The FsShell documentation 
> should be updated to mention these, so that users can start switching. Also, 
> there are places where we refer to the deprecated commands as alternatives. 
> This can be changed as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8808) Update FsShell documentation to mention deprecation of some of the commands, and mention alternatives

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149763#comment-14149763
 ] 

Hudson commented on HADOOP-8808:


FAILURE: Integrated in Hadoop-trunk-Commit #6122 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6122/])
HADOOP-8808. Update FsShell documentation to mention deprecation of some of the 
commands, and mention alternatives (Akira AJISAKA via aw) (aw: rev 
df5fed5c0e5ef1e850bc6db7beb1beffd269e1ab)
* hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
* hadoop-common-project/hadoop-common/CHANGES.txt


> Update FsShell documentation to mention deprecation of some of the commands, 
> and mention alternatives
> -
>
> Key: HADOOP-8808
> URL: https://issues.apache.org/jira/browse/HADOOP-8808
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs
>Affects Versions: 2.2.0
>Reporter: Hemanth Yamijala
>Assignee: Akira AJISAKA
> Fix For: 2.6.0
>
> Attachments: HADOOP-8808.2.patch, HADOOP-8808.3.patch, 
> HADOOP-8808.patch
>
>
> In HADOOP-7286, we deprecated the following 3 commands dus, lsr and rmr, in 
> favour of du -s, ls -r and rm -r respectively. The FsShell documentation 
> should be updated to mention these, so that users can start switching. Also, 
> there are places where we refer to the deprecated commands as alternatives. 
> This can be changed as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11049) javax package system class default is too broad

2014-09-26 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149765#comment-14149765
 ] 

Sangjin Lee commented on HADOOP-11049:
--

Hi Jason, thanks for the review as always!

Those are both good suggestions. I'll make those changes, and submit a new 
patch soon.

> javax package system class default is too broad
> ---
>
> Key: HADOOP-11049
> URL: https://issues.apache.org/jira/browse/HADOOP-11049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11049.patch, HADOOP-11049.patch
>
>
> The system class default defined in ApplicationClassLoader has "javax.". This 
> is too broad. The intent of the system classes is to exempt classes that are 
> provided by the JDK along with hadoop and minimally necessary dependencies 
> that are guaranteed to be on the system classpath. "javax." is too broad for 
> that.
> For example, JSR-330 which is part of JavaEE (not JavaSE) has "javax.inject". 
> Packages like them should not be declared as system classes, as they will 
> result in ClassNotFoundException if they are needed and present on the user 
> classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10104) Update jackson to 1.9.13

2014-09-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149773#comment-14149773
 ] 

Andrew Wang commented on HADOOP-10104:
--

As an update, this apparently broke something in HBase: HBASE-12099. Our 
internal integration testing also shows issues with Hive and Crunch; not sure 
if there are JIRAs for those yet.

I think this can be worked around, but this just reminds us that we need to be 
*very* careful about updating our dependencies. Honestly I'm wary of any 
classpath updates at all until we fix classpath isolation.

> Update jackson to 1.9.13
> 
>
> Key: HADOOP-10104
> URL: https://issues.apache.org/jira/browse/HADOOP-10104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.2.0, 2.3.0, 2.4.0
>Reporter: Steve Loughran
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.5.0
>
> Attachments: HADOOP-10104-003.patch, HADOOP-10104.2.patch, 
> HADOOP-10104.4.patch, HADOOP-10104.patch
>
>
> Jackson is now at 1.9.13, 
> [apparently|http://mvnrepository.com/artifact/org.codehaus.jackson/jackson-core-asl],
>  hadoop 2.2 at 1.8.8.
> Jackson isn't used that much in the code, so risk from an update *should* be 
> low.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11143) NetUtils.wrapException loses inner stack trace on BindException

2014-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149781#comment-14149781
 ] 

Hadoop QA commented on HADOOP-11143:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12671495/HADOOP-11143-001.patch
  against trunk revision 55302cc.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.crypto.random.TestOsSecureRandom

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4811//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4811//console

This message is automatically generated.

> NetUtils.wrapException loses inner stack trace on BindException
> ---
>
> Key: HADOOP-11143
> URL: https://issues.apache.org/jira/browse/HADOOP-11143
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.5.1
> Environment: machine that doesn't bind
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11143-001.patch
>
>
> {{NetUtils.wrapException}} is designed to aid debugging by including 
> exception diagnostics in the wrapped & relayed exception.
> When a BindException is caught, we build the new exception but don't include 
> the original as the inner cause.
> This means it doesn't get logged, and while the host:port problem may be 
> identifiable, the bit of the code playing up is now harder to track down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11143) NetUtils.wrapException loses inner stack trace on BindException

2014-09-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149797#comment-14149797
 ] 

Allen Wittenauer commented on HADOOP-11143:
---

+1 lgtm. :)

> NetUtils.wrapException loses inner stack trace on BindException
> ---
>
> Key: HADOOP-11143
> URL: https://issues.apache.org/jira/browse/HADOOP-11143
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.5.1
> Environment: machine that doesn't bind
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11143-001.patch
>
>
> {{NetUtils.wrapException}} is designed to aid debugging by including 
> exception diagnostics in the wrapped & relayed exception.
> When a BindException is caught, we build the new exception but don't include 
> the original as the inner cause.
> This means it doesn't get logged, and while the host:port problem may be 
> identifiable, the bit of the code playing up is now harder to track down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11008) Remove duplicated description about proxy-user in site documents

2014-09-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11008:
--
Status: Patch Available  (was: Open)

> Remove duplicated description about proxy-user in site documents
> 
>
> Key: HADOOP-11008
> URL: https://issues.apache.org/jira/browse/HADOOP-11008
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-11008-0.patch
>
>
> One of them should just be a pointer to the other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11008) Remove duplicated description about proxy-user in site documents

2014-09-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11008:
--
Status: Open  (was: Patch Available)

> Remove duplicated description about proxy-user in site documents
> 
>
> Key: HADOOP-11008
> URL: https://issues.apache.org/jira/browse/HADOOP-11008
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-11008-0.patch
>
>
> One of them should just be a pointer to the other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10954) Adding site documents of hadoop-tools

2014-09-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149804#comment-14149804
 ] 

Allen Wittenauer commented on HADOOP-10954:
---

That sounds like a fine plan.

+1. Will commit to branch-2 and trunk.

Thanks!

> Adding site documents of hadoop-tools
> -
>
> Key: HADOOP-10954
> URL: https://issues.apache.org/jira/browse/HADOOP-10954
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.5.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-10954-0.patch
>
>
> There are no pages for hadoop-tools in the site documents of branch-2 or 
> later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10954) Adding site documents of hadoop-tools

2014-09-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10954:
--
   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

> Adding site documents of hadoop-tools
> -
>
> Key: HADOOP-10954
> URL: https://issues.apache.org/jira/browse/HADOOP-10954
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.5.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-10954-0.patch
>
>
> There are no pages for hadoop-tools in the site documents of branch-2 or 
> later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10954) Adding site documents of hadoop-tools

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149815#comment-14149815
 ] 

Hudson commented on HADOOP-10954:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #6123 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6123/])
HADOOP-10954. Adding site documents of hadoop-tools (Masatake Iwasaki via aw) 
(aw: rev 83264cf45764976165b0fab6b5d070bee94d6793)
* hadoop-tools/hadoop-rumen/src/site/markdown/Rumen.md.vm
* hadoop-tools/hadoop-gridmix/src/site/markdown/GridMix.md.vm
* hadoop-common-project/hadoop-common/CHANGES.txt


> Adding site documents of hadoop-tools
> -
>
> Key: HADOOP-10954
> URL: https://issues.apache.org/jira/browse/HADOOP-10954
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.5.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-10954-0.patch
>
>
> There are no pages for hadoop-tools in the site documents of branch-2 or 
> later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11049) javax package system class default is too broad

2014-09-26 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-11049:
-
Attachment: HADOOP-11049.patch

Updated the patch (v.3). For your convenience, you can look at the GitHub fork 
to see the diffs more easily: https://github.com/apache/hadoop/pull/5

> javax package system class default is too broad
> ---
>
> Key: HADOOP-11049
> URL: https://issues.apache.org/jira/browse/HADOOP-11049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11049.patch, HADOOP-11049.patch, 
> HADOOP-11049.patch
>
>
> The system class default defined in ApplicationClassLoader has "javax.". This 
> is too broad. The intent of the system classes is to exempt classes that are 
> provided by the JDK along with hadoop and minimally necessary dependencies 
> that are guaranteed to be on the system classpath. "javax." is too broad for 
> that.
> For example, JSR-330 which is part of JavaEE (not JavaSE) has "javax.inject". 
> Packages like them should not be declared as system classes, as they will 
> result in ClassNotFoundException if they are needed and present on the user 
> classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10976) moving the source code of hadoop-tools docs to the directory under hadoop-tools

2014-09-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10976:
--
Status: Patch Available  (was: Open)

> moving the source code of hadoop-tools docs to the directory under hadoop-tools
> --
>
> Key: HADOOP-10976
> URL: https://issues.apache.org/jira/browse/HADOOP-10976
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-10976-0.patch
>
>
> Some of the doc files of hadoop-tools are placed in the mapreduce project. 
> They should be moved for ease of maintenance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10976) moving the source code of hadoop-tools docs to the directory under hadoop-tools

2014-09-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10976:
--
Status: Open  (was: Patch Available)

> moving the source code of hadoop-tools docs to the directory under hadoop-tools
> --
>
> Key: HADOOP-10976
> URL: https://issues.apache.org/jira/browse/HADOOP-10976
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-10976-0.patch
>
>
> Some of the doc files of hadoop-tools are placed in the mapreduce project. 
> They should be moved for ease of maintenance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10976) moving the source code of hadoop-tools docs to the directory under hadoop-tools

2014-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149837#comment-14149837
 ] 

Hadoop QA commented on HADOOP-10976:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12662638/HADOOP-10976-0.patch
  against trunk revision 3a1f981.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4812//console

This message is automatically generated.

> moving the source code of hadoop-tools docs to the directory under hadoop-tools
> --
>
> Key: HADOOP-10976
> URL: https://issues.apache.org/jira/browse/HADOOP-10976
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-10976-0.patch
>
>
> Some of the doc files of hadoop-tools are placed in the mapreduce project. 
> They should be moved for ease of maintenance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11048) user/custom LogManager fails to load if the client classloader is enabled

2014-09-26 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149873#comment-14149873
 ] 

Jason Lowe commented on HADOOP-11048:
-

+1 lgtm.  Committing this.

> user/custom LogManager fails to load if the client classloader is enabled
> -
>
> Key: HADOOP-11048
> URL: https://issues.apache.org/jira/browse/HADOOP-11048
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11048.patch, HADOOP-11048.patch
>
>
> If the client classloader is enabled (HADOOP-10893) and you happen to use a 
> user-provided log manager via -Djava.util.logging.manager, it fails to load 
> the custom log manager:
> {noformat}
> Could not load Logmanager "org.foo.LogManager"
> java.lang.ClassNotFoundException: org.foo.LogManager
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.util.logging.LogManager$1.run(LogManager.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.util.logging.LogManager.(LogManager.java:181)
> at java.util.logging.Logger.demandLogger(Logger.java:339)
> at java.util.logging.Logger.getLogger(Logger.java:393)
> at 
> com.google.common.collect.MapMakerInternalMap.(MapMakerInternalMap.java:136)
> at com.google.common.collect.MapMaker.makeCustomMap(MapMaker.java:602)
> at 
> com.google.common.collect.Interners$CustomInterner.(Interners.java:59)
> at com.google.common.collect.Interners.newWeakInterner(Interners.java:103)
> at org.apache.hadoop.util.StringInterner.(StringInterner.java:49)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2293)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2185)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2102)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:851)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:179)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {noformat}
> This is caused because Configuration.loadResources() is invoked before the 
> client classloader is created and made available.
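
A rough, hypothetical sketch of the ordering involved (simplified; the actual 
change is in RunJar): the classloader that can see the user's jars has to 
exist and be installed before anything triggers java.util.logging 
initialization, otherwise {{-Djava.util.logging.manager}} is resolved against 
a classloader that cannot see {{org.foo.LogManager}}.

{noformat}
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical, simplified sketch of the ordering issue -- not the RunJar fix.
// Anything that touches java.util.logging before the job classloader exists
// forces -Djava.util.logging.manager to be resolved against a classloader
// that cannot see the user's jars.
public class OrderingSketch {
  public static void main(String[] args) throws Exception {
    // Broken order: e.g. reading a Configuration here indirectly initializes
    // LogManager while only the system classpath is visible, so a custom
    // org.foo.LogManager cannot be found.

    // Intended order: build and install the classloader that can see the
    // user's jars first, then let Configuration (and logging) initialize.
    URLClassLoader jobLoader = new URLClassLoader(
        new URL[] { new URL("file:/path/to/user-job.jar") }, // placeholder path
        OrderingSketch.class.getClassLoader());
    Thread.currentThread().setContextClassLoader(jobLoader);
    // ... only now load and invoke the user's main class via jobLoader ...
  }
}
{noformat}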



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11048) user/custom LogManager fails to load if the client classloader is enabled

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149882#comment-14149882
 ] 

Hudson commented on HADOOP-11048:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6125 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6125/])
HADOOP-11048. user/custom LogManager fails to load if the client classloader is 
enabled. Contributed by Sangjin Lee (jlowe: rev 
f154ebe8c44e41edc443198a14e0491604cc613f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> user/custom LogManager fails to load if the client classloader is enabled
> -
>
> Key: HADOOP-11048
> URL: https://issues.apache.org/jira/browse/HADOOP-11048
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11048.patch, HADOOP-11048.patch
>
>
> If the client classloader is enabled (HADOOP-10893) and you happen to use a 
> user-provided log manager via -Djava.util.logging.manager, it fails to load 
> the custom log manager:
> {noformat}
> Could not load Logmanager "org.foo.LogManager"
> java.lang.ClassNotFoundException: org.foo.LogManager
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.util.logging.LogManager$1.run(LogManager.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.util.logging.LogManager.(LogManager.java:181)
> at java.util.logging.Logger.demandLogger(Logger.java:339)
> at java.util.logging.Logger.getLogger(Logger.java:393)
> at 
> com.google.common.collect.MapMakerInternalMap.(MapMakerInternalMap.java:136)
> at com.google.common.collect.MapMaker.makeCustomMap(MapMaker.java:602)
> at 
> com.google.common.collect.Interners$CustomInterner.(Interners.java:59)
> at com.google.common.collect.Interners.newWeakInterner(Interners.java:103)
> at org.apache.hadoop.util.StringInterner.(StringInterner.java:49)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2293)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2185)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2102)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:851)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:179)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {noformat}
> This is caused because Configuration.loadResources() is invoked before the 
> client classloader is created and made available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11048) user/custom LogManager fails to load if the client classloader is enabled

2014-09-26 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-11048:

   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks, Sangjin!  I committed this to trunk and branch-2.

> user/custom LogManager fails to load if the client classloader is enabled
> -
>
> Key: HADOOP-11048
> URL: https://issues.apache.org/jira/browse/HADOOP-11048
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-11048.patch, HADOOP-11048.patch
>
>
> If the client classloader is enabled (HADOOP-10893) and you happen to use a 
> user-provided log manager via -Djava.util.logging.manager, it fails to load 
> the custom log manager:
> {noformat}
> Could not load Logmanager "org.foo.LogManager"
> java.lang.ClassNotFoundException: org.foo.LogManager
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.util.logging.LogManager$1.run(LogManager.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.util.logging.LogManager.(LogManager.java:181)
> at java.util.logging.Logger.demandLogger(Logger.java:339)
> at java.util.logging.Logger.getLogger(Logger.java:393)
> at 
> com.google.common.collect.MapMakerInternalMap.(MapMakerInternalMap.java:136)
> at com.google.common.collect.MapMaker.makeCustomMap(MapMaker.java:602)
> at 
> com.google.common.collect.Interners$CustomInterner.(Interners.java:59)
> at com.google.common.collect.Interners.newWeakInterner(Interners.java:103)
> at org.apache.hadoop.util.StringInterner.(StringInterner.java:49)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2293)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2185)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2102)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:851)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:179)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {noformat}
> This is caused because Configuration.loadResources() is invoked before the 
> client classloader is created and made available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11048) user/custom LogManager fails to load if the client classloader is enabled

2014-09-26 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149899#comment-14149899
 ] 

Sangjin Lee commented on HADOOP-11048:
--

Thanks Jason!

> user/custom LogManager fails to load if the client classloader is enabled
> -
>
> Key: HADOOP-11048
> URL: https://issues.apache.org/jira/browse/HADOOP-11048
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-11048.patch, HADOOP-11048.patch
>
>
> If the client classloader is enabled (HADOOP-10893) and you happen to use a 
> user-provided log manager via -Djava.util.logging.manager, it fails to load 
> the custom log manager:
> {noformat}
> Could not load Logmanager "org.foo.LogManager"
> java.lang.ClassNotFoundException: org.foo.LogManager
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.util.logging.LogManager$1.run(LogManager.java:191)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.util.logging.LogManager.(LogManager.java:181)
> at java.util.logging.Logger.demandLogger(Logger.java:339)
> at java.util.logging.Logger.getLogger(Logger.java:393)
> at 
> com.google.common.collect.MapMakerInternalMap.(MapMakerInternalMap.java:136)
> at com.google.common.collect.MapMaker.makeCustomMap(MapMaker.java:602)
> at 
> com.google.common.collect.Interners$CustomInterner.(Interners.java:59)
> at com.google.common.collect.Interners.newWeakInterner(Interners.java:103)
> at org.apache.hadoop.util.StringInterner.(StringInterner.java:49)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2293)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2185)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2102)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:851)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:179)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {noformat}
> This is caused because Configuration.loadResources() is invoked before the 
> client classloader is created and made available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8808) Update FsShell documentation to mention deprecation of some of the commands, and mention alternatives

2014-09-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149935#comment-14149935
 ] 

Akira AJISAKA commented on HADOOP-8808:
---

Thank you, Allen!

> Update FsShell documentation to mention deprecation of some of the commands, 
> and mention alternatives
> -
>
> Key: HADOOP-8808
> URL: https://issues.apache.org/jira/browse/HADOOP-8808
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs
>Affects Versions: 2.2.0
>Reporter: Hemanth Yamijala
>Assignee: Akira AJISAKA
> Fix For: 2.6.0
>
> Attachments: HADOOP-8808.2.patch, HADOOP-8808.3.patch, 
> HADOOP-8808.patch
>
>
> In HADOOP-7286, we deprecated the following 3 commands dus, lsr and rmr, in 
> favour of du -s, ls -r and rm -r respectively. The FsShell documentation 
> should be updated to mention these, so that users can start switching. Also, 
> there are places where we refer to the deprecated commands as alternatives. 
> This can be changed as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11090) [Umbrella] Issues with Java 8 in Hadoop

2014-09-26 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150019#comment-14150019
 ] 

Mohammad Kamrul Islam commented on HADOOP-11090:


Yes.

We are running everything with Java 8. As it is running, we want to find out 
the issues or better configurations.
We will post the findings as we go.

In short, things are OK except that heap and VM usage is a little higher in 
some instances. Our team is working to get a handle on this.


> [Umbrella] Issues with Java 8 in Hadoop
> ---
>
> Key: HADOOP-11090
> URL: https://issues.apache.org/jira/browse/HADOOP-11090
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>
> Java 8 is coming quickly to various clusters. Making sure Hadoop seamlessly 
> works with Java 8 is important for the Apache community.
>   
> This JIRA is to track the issues/experiences encountered during Java 8 
> migration. If you find a potential bug, please create a separate JIRA either 
> as a sub-task or linked to this JIRA.
> If you find a Hadoop or JVM configuration tuning, you can create a JIRA as 
> well, or you can add a comment here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11130) NFS updateMaps OS check is reversed

2014-09-26 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-11130:

Assignee: Brandon Li
  Status: Patch Available  (was: Open)

> NFS updateMaps OS check is reversed
> ---
>
> Key: HADOOP-11130
> URL: https://issues.apache.org/jira/browse/HADOOP-11130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Brandon Li
> Attachments: HADOOP-11130.001.patch
>
>
> getent is fairly standard, dscl is not.  Yet the code logic prefers dscl for 
> non-Linux platforms. This code should check for OS X and use dscl and, if 
> not, then use getent.  See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11130) NFS updateMaps OS check is reversed

2014-09-26 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-11130:

Attachment: HADOOP-11130.001.patch

> NFS updateMaps OS check is reversed
> ---
>
> Key: HADOOP-11130
> URL: https://issues.apache.org/jira/browse/HADOOP-11130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11130.001.patch
>
>
> getent is fairly standard, dscl is not.  Yet the code logic prefers dscl for 
> non-Linux platforms. This code should check for OS X and use dscl and, if 
> not, then use getent.  See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11139) Allow user to choose JVM for container execution

2014-09-26 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150024#comment-14150024
 ] 

Mohammad Kamrul Islam commented on HADOOP-11139:


Good find [~aw] !

I will post a comment in YARN-2481 to make sure that JIRA has the exact same 
goal. After that, one can be closed in favor of the other.

> Allow user to choose JVM for container execution
> 
>
> Key: HADOOP-11139
> URL: https://issues.apache.org/jira/browse/HADOOP-11139
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>
> Hadoop currently supports one JVM defined through JAVA_HOME. 
> Since multiple JVMs (Java 6, 7, 8, 9) are active, it would be helpful if there 
> were a user configuration to choose a custom but supported JVM for a job.
> In other words, the user would be able to choose the expected JVM only for 
> container execution, while Hadoop services may be running on a different JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10731) Remove @date JavaDoc comment in ProgramDriver class

2014-09-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10731:
--
   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

+1 lgtm.

Committing to branch-2 and trunk.

Thanks!

> Remove @date JavaDoc comment in ProgramDriver class
> ---
>
> Key: HADOOP-10731
> URL: https://issues.apache.org/jira/browse/HADOOP-10731
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Trivial
> Fix For: 2.6.0
>
> Attachments: HADOOP-10731.patch
>
>
> Remove JavaDoc @date in the ProgramDriver class for consistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11049) javax package system class default is too broad

2014-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150038#comment-14150038
 ] 

Hadoop QA commented on HADOOP-11049:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12671514/HADOOP-11049.patch
  against trunk revision 3a1f981.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

  org.apache.hadoop.ha.TestZKFailoverControllerStress
  org.apache.hadoop.mapreduce.lib.input.TestMRCJCFileInputFormat

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4813//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4813//console

This message is automatically generated.

> javax package system class default is too broad
> ---
>
> Key: HADOOP-11049
> URL: https://issues.apache.org/jira/browse/HADOOP-11049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11049.patch, HADOOP-11049.patch, 
> HADOOP-11049.patch
>
>
> The system class default defined in ApplicationClassLoader has "javax.". This 
> is too broad. The intent of the system classes is to exempt classes that are 
> provided by the JDK along with hadoop and minimally necessary dependencies 
> that are guaranteed to be on the system classpath. "javax." is too broad for 
> that.
> For example, JSR-330 which is part of JavaEE (not JavaSE) has "javax.inject". 
> Packages like them should not be declared as system classes, as they will 
> result in ClassNotFoundException if they are needed and present on the user 
> classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11139) Allow user to choose JVM for container execution

2014-09-26 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150054#comment-14150054
 ] 

Arun C Murthy commented on HADOOP-11139:


I commented on YARN-2481, repeated here:

YARN already allows {{JAVA_HOME}} to be overridden... take a look at 
{{ApplicationConstants.Environment.JAVA_HOME}} and 
{{YarnConfiguration.DEFAULT_NM_ENV_WHITELIST}} for the code-path.
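
For example, something along these lines on the application side (a rough 
sketch, not a complete client): the container's environment can carry its own 
{{JAVA_HOME}}, and the NodeManager env whitelist controls which variables fall 
back to the NM's values when the application does not set them.

{noformat}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.yarn.api.ApplicationConstants;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.util.Records;

// Rough sketch only: set JAVA_HOME in the container's environment so the
// container runs on a different JVM than the Hadoop daemons.
public class JavaHomeOverrideSketch {
  public static ContainerLaunchContext launchContextWithJdk(String jdkPath) {
    Map<String, String> env = new HashMap<String, String>();
    env.put(ApplicationConstants.Environment.JAVA_HOME.name(), jdkPath);

    ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);
    ctx.setEnvironment(env);
    // commands, local resources, tokens, etc. are set up as usual
    return ctx;
  }
}
{noformat}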

> Allow user to choose JVM for container execution
> 
>
> Key: HADOOP-11139
> URL: https://issues.apache.org/jira/browse/HADOOP-11139
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>
> Hadoop currently supports one JVM, defined through JAVA_HOME. 
> Since multiple JVMs (Java 6, 7, 8, 9) are in active use, it would be helpful to 
> have a user configuration for choosing a custom but supported JVM for a job.
> In other words, a user would be able to choose her preferred JVM only for her 
> container execution while the Hadoop services may be running on a different JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10731) Remove @date JavaDoc comment in ProgramDriver class

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150058#comment-14150058
 ] 

Hudson commented on HADOOP-10731:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6128 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6128/])
HADOOP-10731. Remove @date JavaDoc comment in ProgramDriver class (Henry 
Saputra via aw) (aw: rev aa5d9256fb8d6403eef307c5114021be84538a85)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProgramDriver.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Remove @date JavaDoc comment in ProgramDriver class
> ---
>
> Key: HADOOP-10731
> URL: https://issues.apache.org/jira/browse/HADOOP-10731
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Trivial
> Fix For: 2.6.0
>
> Attachments: HADOOP-10731.patch
>
>
> Remove JavaDoc @date in the ProgramDriver class for consistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10731) Remove @date JavaDoc comment in ProgramDriver class

2014-09-26 Thread Henry Saputra (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150063#comment-14150063
 ] 

Henry Saputra commented on HADOOP-10731:


Thx [~aw] ! =)

> Remove @date JavaDoc comment in ProgramDriver class
> ---
>
> Key: HADOOP-10731
> URL: https://issues.apache.org/jira/browse/HADOOP-10731
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Trivial
> Fix For: 2.6.0
>
> Attachments: HADOOP-10731.patch
>
>
> Remove JavaDoc @date in the ProgramDriver class for consistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10552) Fix usage and example at FileSystemShell.apt.vm

2014-09-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150064#comment-14150064
 ] 

Allen Wittenauer commented on HADOOP-10552:
---

hadoop fs is definitely not deprecated.  So that's an error as well.

But that's a JIRA of a different sort.  As for this one, +1, lgtm.  I'll commit 
to branch-2 and trunk.

Thanks!

> Fix usage and example at FileSystemShell.apt.vm
> ---
>
> Key: HADOOP-10552
> URL: https://issues.apache.org/jira/browse/HADOOP-10552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: Kenji Kikushima
>Priority: Trivial
> Attachments: HADOOP-10552.patch
>
>
> Usage at moveFromLocal needs "hdfs" command, and example for touchz should 
> use "hdfs dfs".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10552) Fix usage and example at FileSystemShell.apt.vm

2014-09-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10552:
--
Assignee: Kenji Kikushima

> Fix usage and example at FileSystemShell.apt.vm
> ---
>
> Key: HADOOP-10552
> URL: https://issues.apache.org/jira/browse/HADOOP-10552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: Kenji Kikushima
>Assignee: Kenji Kikushima
>Priority: Trivial
> Fix For: 2.6.0
>
> Attachments: HADOOP-10552.patch
>
>
> Usage at moveFromLocal needs "hdfs" command, and example for touchz should 
> use "hdfs dfs".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10552) Fix usage and example at FileSystemShell.apt.vm

2014-09-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10552:
--
   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

> Fix usage and example at FileSystemShell.apt.vm
> ---
>
> Key: HADOOP-10552
> URL: https://issues.apache.org/jira/browse/HADOOP-10552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: Kenji Kikushima
>Assignee: Kenji Kikushima
>Priority: Trivial
> Fix For: 2.6.0
>
> Attachments: HADOOP-10552.patch
>
>
> Usage at moveFromLocal needs "hdfs" command, and example for touchz should 
> use "hdfs dfs".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10552) Fix usage and example at FileSystemShell.apt.vm

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150072#comment-14150072
 ] 

Hudson commented on HADOOP-10552:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6129 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6129/])
HADOOP-10552. Fix usage and example at FileSystemShell.apt.vm (Kenji Kikushima 
via aw) (aw: rev 6b7673e3cd7d36a6b9f8882442f73670cd03c687)
* hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
* hadoop-common-project/hadoop-common/CHANGES.txt


> Fix usage and example at FileSystemShell.apt.vm
> ---
>
> Key: HADOOP-10552
> URL: https://issues.apache.org/jira/browse/HADOOP-10552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: Kenji Kikushima
>Assignee: Kenji Kikushima
>Priority: Trivial
> Fix For: 2.6.0
>
> Attachments: HADOOP-10552.patch
>
>
> Usage at moveFromLocal needs "hdfs" command, and example for touchz should 
> use "hdfs dfs".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11130) NFS updateMaps OS check is reversed

2014-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150077#comment-14150077
 ] 

Hadoop QA commented on HADOOP-11130:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12671554/HADOOP-11130.001.patch
  against trunk revision c7c8e38.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4814//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4814//console

This message is automatically generated.

> NFS updateMaps OS check is reversed
> ---
>
> Key: HADOOP-11130
> URL: https://issues.apache.org/jira/browse/HADOOP-11130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Brandon Li
> Attachments: HADOOP-11130.001.patch
>
>
> getent is fairly standard, dscl is not.  Yet the code logic prefers dscl for 
> non-Linux platforms. This code should check for OS X and use dscl and, if not, 
> then use getent.  See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2014-09-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150083#comment-14150083
 ] 

Allen Wittenauer commented on HADOOP-10050:
---

There are a lot of good changes in this patch, but with HADOOP-9902 committed, 
it's out of date. See HADOOP-10908 for the quick list I made.

> Update single node and cluster install instructions to work with latest bits
> 
>
> Key: HADOOP-10050
> URL: https://issues.apache.org/jira/browse/HADOOP-10050
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
>Priority: Minor
> Attachments: ClusterSetup.html, HADOOP-10050.patch, 
> HADOOP-10050.patch, SingleCluster.html, mapred-site.xml, yarn-site.xml
>
>
> A few things I noticed:
> 1. Changes to yarn.nodemanager.aux-services
> 2. Set the framework to yarn in mapred-site.xml
> 3. Start the history server
> Also noticed that no change to the capacity scheduler configs was needed.
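
For items 1 and 2 above, a hedged sketch of the settings the updated docs would need (expressed programmatically here; the same values belong in mapred-site.xml and yarn-site.xml, and should be re-verified against the current scripts):
{code}
import org.apache.hadoop.conf.Configuration;

public class MinimalYarnSetupSketch {
  // Hedged sketch of items 1 and 2 from the description above.
  public static Configuration sketch() {
    Configuration conf = new Configuration();
    conf.set("mapreduce.framework.name", "yarn");                   // mapred-site.xml
    conf.set("yarn.nodemanager.aux-services", "mapreduce_shuffle"); // yarn-site.xml
    return conf;
  }
}
{code}
Item 3 is a separate operational step (starting the JobHistory server via the sbin scripts) rather than a configuration change.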



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11130) NFS updateMaps OS check is reversed

2014-09-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150087#comment-14150087
 ] 

Jing Zhao commented on HADOOP-11130:


The patch looks good to me. But as Allen suggested, we may also want to rename 
"LINUX_GET_ALL_USERS_CMD" to a less Linux specific name (maybe just remove the 
"LINUX_" prefix). 

Other than this +1.
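
For readers following along, a minimal sketch of the corrected check described in the summary; the constant names and command strings here are illustrative and not copied from the attached patches.
{code}
// Hedged sketch of the intended platform logic: prefer dscl only on OS X and
// fall back to getent everywhere else. Names and commands are placeholders.
public class UserGroupCommandSketch {
  static final String GET_ALL_USERS_CMD = "getent passwd | cut -d: -f1,3";
  static final String MAC_GET_ALL_USERS_CMD = "dscl . -list /Users UniqueID";

  static String allUsersCommand() {
    boolean isMac = System.getProperty("os.name").startsWith("Mac");
    return isMac ? MAC_GET_ALL_USERS_CMD : GET_ALL_USERS_CMD;
  }
}
{code}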

> NFS updateMaps OS check is reversed
> ---
>
> Key: HADOOP-11130
> URL: https://issues.apache.org/jira/browse/HADOOP-11130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Brandon Li
> Attachments: HADOOP-11130.001.patch
>
>
> getent is fairly standard, dscl is not.  Yet the code logic prefers dscl for 
> non-Linux platforms. This code should check for OS X and use dscl and, if not, 
> then use getent.  See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2014-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150088#comment-14150088
 ] 

Hadoop QA commented on HADOOP-10050:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12608962/HADOOP-10050.patch
  against trunk revision 6b7673e.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4815//console

This message is automatically generated.

> Update single node and cluster install instructions to work with latest bits
> 
>
> Key: HADOOP-10050
> URL: https://issues.apache.org/jira/browse/HADOOP-10050
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
>Priority: Minor
> Attachments: ClusterSetup.html, HADOOP-10050.patch, 
> HADOOP-10050.patch, SingleCluster.html, mapred-site.xml, yarn-site.xml
>
>
> A few things I noticed:
> 1. Changes to yarn.nodemanager.aux-services
> 2. Set the framework to yarn in mapred-site.xml
> 3. Start the history server
> Also noticed that no change to the capacity scheduler configs was needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11144) Update website to reflect that we use git, not svn

2014-09-26 Thread Arun C Murthy (JIRA)
Arun C Murthy created HADOOP-11144:
--

 Summary: Update website to reflect that we use git, not svn
 Key: HADOOP-11144
 URL: https://issues.apache.org/jira/browse/HADOOP-11144
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arun C Murthy
Assignee: Arun C Murthy


We need to update http://hadoop.apache.org/version_control.html to reflect that 
we use git, not svn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11144) Update website to reflect that we use git, not svn

2014-09-26 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150110#comment-14150110
 ] 

Gopal V commented on HADOOP-11144:
--

Also retire broken mirrors - https://github.com/apache/hadoop-common

> Update website to reflect that we use git, not svn
> --
>
> Key: HADOOP-11144
> URL: https://issues.apache.org/jira/browse/HADOOP-11144
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
>
> We need to update http://hadoop.apache.org/version_control.html to reflect 
> that we use git, not svn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11130) NFS updateMaps OS check is reversed

2014-09-26 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-11130:

Attachment: HADOOP-11130.002.patch

> NFS updateMaps OS check is reversed
> ---
>
> Key: HADOOP-11130
> URL: https://issues.apache.org/jira/browse/HADOOP-11130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Brandon Li
> Attachments: HADOOP-11130.001.patch, HADOOP-11130.002.patch
>
>
> getent is fairly standard, dscl is not.  Yet the code logic prefers dscl for 
> non-Linux platforms. This code should check for OS X and use dscl and, if not, 
> then use getent.  See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11130) NFS updateMaps OS check is reversed

2014-09-26 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150122#comment-14150122
 ] 

Brandon Li commented on HADOOP-11130:
-

Thank you, Jing, for the review.
I've uploaded a new patch which removed the "LINUX_" prefix.

> NFS updateMaps OS check is reversed
> ---
>
> Key: HADOOP-11130
> URL: https://issues.apache.org/jira/browse/HADOOP-11130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Brandon Li
> Attachments: HADOOP-11130.001.patch, HADOOP-11130.002.patch
>
>
> getent is fairly standard, dscl is not.  Yet the code logic prefers dscl for 
> non-Linux platforms. This code should check for OS X and use dscl and, if not, 
> then use getent.  See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9597) FileSystem open() API is not clear if FileNotFoundException is thrown when the path does not exist

2014-09-26 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HADOOP-9597:
-
Summary: FileSystem open() API is not clear if FileNotFoundException is 
thrown when the path does not exist  (was: FileSystem open() API is not clear 
if FileNotFoundException is throw when the path does not exist)

> FileSystem open() API is not clear if FileNotFoundException is thrown when 
> the path does not exist
> --
>
> Key: HADOOP-9597
> URL: https://issues.apache.org/jira/browse/HADOOP-9597
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs
>Affects Versions: 2.0.4-alpha
>Reporter: Jerry He
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.5.0
>
>
> The current FileSystem open() method declares only a generic IOException in its 
> API specification.
> Some FileSystem implementations (DFS, RawLocalFileSystem ...) throw the more 
> specific FileNotFoundException if the path does not exist.  Others throw 
> IOException only (FTPFileSystem, HftpFileSystem ...). 
> If we have a new FileSystem implementation, what should we follow exactly for 
> open()?
> What should the application expect in this case?
>  
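
Until the contract is tightened, a defensive caller has to handle both behaviours. A hedged sketch (the path argument and class name are placeholders):
{code}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenMissingPathSketch {
  // Hedged sketch of defensive handling for the ambiguity described above.
  public static void openIfPresent(Configuration conf, String p) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    try {
      FSDataInputStream in = fs.open(new Path(p));
      in.close();
    } catch (FileNotFoundException fnfe) {
      // DFS, RawLocalFileSystem, ... report a missing path this way
    } catch (IOException ioe) {
      // FTPFileSystem, HftpFileSystem, ... may only surface a generic IOException,
      // so callers cannot rely on FileNotFoundException alone today
    }
  }
}
{code}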



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11113) Namenode not able to reconnect to KMS after KMS restart

2014-09-26 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HADOOP-11113:
--
Assignee: Arun Suresh  (was: Charles Lamb)

> Namenode not able to reconnect to KMS after KMS restart
> ---
>
> Key: HADOOP-11113
> URL: https://issues.apache.org/jira/browse/HADOOP-11113
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> It is observed that if the KMS is restarted without the Namenode being restarted, 
> the NN will not be able to reconnect with the KMS.
> It seems that the KMS auth cookie goes stale and does not get flushed, so 
> the KMSClient in the NN cannot reconnect with the new KMS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11130) NFS updateMaps OS check is reversed

2014-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150193#comment-14150193
 ] 

Hadoop QA commented on HADOOP-11130:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12671573/HADOOP-11130.002.patch
  against trunk revision 6b7673e.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4816//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4816//console

This message is automatically generated.

> NFS updateMaps OS check is reversed
> ---
>
> Key: HADOOP-11130
> URL: https://issues.apache.org/jira/browse/HADOOP-11130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Brandon Li
> Attachments: HADOOP-11130.001.patch, HADOOP-11130.002.patch
>
>
> getent is fairly standard, dscl is not.  Yet the code logic prefers dscl for 
> non-Linux platforms. This code should check for OS X and use dscl and, if not, 
> then use getent.  See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11049) javax package system class default is too broad

2014-09-26 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150206#comment-14150206
 ] 

Sangjin Lee commented on HADOOP-11049:
--

Test failures unrelated.

> javax package system class default is too broad
> ---
>
> Key: HADOOP-11049
> URL: https://issues.apache.org/jira/browse/HADOOP-11049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Attachments: HADOOP-11049.patch, HADOOP-11049.patch, 
> HADOOP-11049.patch
>
>
> The system class default defined in ApplicationClassLoader has "javax.". This 
> is too broad. The intent of the system classes is to exempt classes that are 
> provided by the JDK along with hadoop and minimally necessary dependencies 
> that are guaranteed to be on the system classpath. "javax." is too broad for 
> that.
> For example, JSR-330, which is part of Java EE (not Java SE), has "javax.inject". 
> Packages like these should not be declared as system classes, as they will 
> result in a ClassNotFoundException if they are needed and present only on the 
> user classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11110) JavaKeystoreProvider should not report a key as created if it was not flushed to the backing file

2014-09-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150262#comment-14150262
 ] 

Andrew Wang commented on HADOOP-11110:
--

Hi Arun, this looks great. I just have a few small comments:

- KeyShell, I notice that we print the success message before flushing in 
various places. Should these prints be moved down? I think we wouldn't see this 
when testing with the KMS since it always flushes implicitly, but we might when 
using JKS.
- FailureInjectingJKSP, could we make the "failjceks" string a public constant 
like "jceks" is in JKSP? We can also use JKSP#SCHEME_NAME rather than 
hardcoding "jceks" again.

Test:
- Some lines longer than 80 chars
- "faulre furing" is in two comments, typo ;)
- Rather than the wrapper that checks whether getClass() is FIJKSP, we could use 
KeyProviderFactory#get to explicitly get a failjceks provider. This is more of a sure 
thing, and also we'd definitely not skip the test if somehow what we get out is 
not a FIJKSP.
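
To make the first point concrete, a hedged sketch of the ordering I mean; KeyShell's real code paths differ, and the key name, message text, and option setup here are placeholders.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;

public class CreateThenFlushSketch {
  // Hedged sketch: flush to the backing store *before* reporting success, so a
  // JKS-backed provider that fails to persist does not print a phantom key.
  public static void createKey(KeyProvider provider, Configuration conf)
      throws Exception {
    KeyProvider.Options options = KeyProvider.options(conf);
    provider.createKey("testkey", options);  // placeholder key name
    provider.flush();                        // may throw; success not yet claimed
    System.out.println("testkey has been successfully created.");
  }
}
{code}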

> JavaKeystoreProvider should not report a key as created if it was not flushed 
> to the backing file
> -
>
> Key: HADOOP-11110
> URL: https://issues.apache.org/jira/browse/HADOOP-11110
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Andrew Wang
>Assignee: Arun Suresh
> Attachments: HADOOP-11110.1.patch
>
>
> Testing with the KMS backed by JKS reveals the following:
> {noformat}
> [root@dlo-4 ~]# hadoop key create testkey -provider 
> kms://http@localhost:16000/kms
> testkey has not been created. Mkdirs failed to create file:x
> 
> [root@dlo-4 ~]# hadoop key list -provider kms://http@localhost:16000/kms
> Listing keys for KeyProvider: 
> KMSClientProvider[http://localhost:16000/kms/v1/]
> testkey
> {noformat}
> The JKS still has the key in memory and serves it up, but the key will 
> disappear if the KMS is restarted, since it is not flushed to the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10809) hadoop-azure: page blob support

2014-09-26 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HADOOP-10809:
-
Attachment: HADOOP-10809.05.patch

Backing up work in progress. Finished merge of all files and did basic 
verification in eclipse to make sure all the names resolve. Still need to 
compile and run Azure filesystem unit tests.

> hadoop-azure: page blob support
> ---
>
> Key: HADOOP-10809
> URL: https://issues.apache.org/jira/browse/HADOOP-10809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Mike Liddell
>Assignee: Eric Hanson
> Attachments: HADOOP-10809.02.patch, HADOOP-10809.03.patch, 
> HADOOP-10809.04.patch, HADOOP-10809.05.patch, HADOOP-10809.1.patch
>
>
> Azure Blob Storage provides two flavors: block-blobs and page-blobs.  
> Block-blobs are the general purpose kind that support convenient APIs and are 
> the basis for the Azure Filesystem for Hadoop (see HADOOP-9629).
> Page-blobs use the same namespace as block-blobs but provide a different 
> low-level feature set.  Most importantly, page-blobs can cope with an 
> effectively infinite number of small accesses whereas block-blobs can only 
> tolerate 50K appends before relatively manual rewriting of the data is 
> necessary.  A simple analogy is that page-blobs are like a regular disk and 
> the basic API is like a low-level device driver.
> See http://msdn.microsoft.com/en-us/library/azure/ee691964.aspx for some 
> introductory material.
> The primary driving scenario for page-blob support is for HBase transaction 
> log files which require an access pattern of many small writes.  Additional 
> scenarios can also be supported.
> Configuration:
> The Hadoop Filesystem abstraction needs a mechanism so that file-create can 
> determine whether to create a block- or page-blob.  To permit scenarios where 
> application code doesn't know about the details of Azure storage, we would 
> like the configuration to be aspect-style, i.e. configured by the administrator 
> and transparent to the application. The current solution is to use hadoop 
> configuration to declare a list of page-blob folders -- Azure Filesystem for 
> Hadoop will create files in these folders using page-blob flavor.  The 
> configuration key is "fs.azure.page.blob.dir", and description can be found 
> in AzureNativeFileSystemStore.java.
> Code changes:
> - refactor of basic Azure Filesystem code to use a general BlobWrapper and 
> specialized BlockBlobWrapper vs PageBlobWrapper
> - introduction of PageBlob support (read, write, etc)
> - miscellaneous changes such as umask handling, implementation of 
> createNonRecursive(), flush/hflush/hsync.
> - new unit tests.
> Credit for the primary patch: Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, 
> Mike Liddell.
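
A hedged illustration of the configuration mechanism described above; only the {{fs.azure.page.blob.dir}} key comes from the patch, while the folder list is a made-up example.
{code}
import org.apache.hadoop.conf.Configuration;

public class PageBlobDirSketch {
  // Hedged sketch: files created under the listed folders are created as page
  // blobs; everything else stays a block blob. Folder names are placeholders.
  public static Configuration withPageBlobDirs() {
    Configuration conf = new Configuration();
    conf.set("fs.azure.page.blob.dir", "/hbase/WALs,/hbase/oldWALs");
    return conf;
  }
}
{code}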



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11125) TestOsSecureRandom sometimes fails in trunk

2014-09-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11125:
---
Labels: newbie  (was: )

> TestOsSecureRandom sometimes fails in trunk
> ---
>
> Key: HADOOP-11125
> URL: https://issues.apache.org/jira/browse/HADOOP-11125
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>  Labels: newbie
>
> From https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1897/console :
> {code}
> Running org.apache.hadoop.crypto.random.TestOsSecureRandom
> Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 120.516 sec 
> <<< FAILURE! - in org.apache.hadoop.crypto.random.TestOsSecureRandom
> testOsSecureRandomSetConf(org.apache.hadoop.crypto.random.TestOsSecureRandom) 
>  Time elapsed: 120.013 sec  <<< ERROR!
> java.lang.Exception: test timed out after 120000 milliseconds
>   at java.io.FileInputStream.readBytes(Native Method)
>   at java.io.FileInputStream.read(FileInputStream.java:220)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>   at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
>   at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
>   at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
>   at java.io.InputStreamReader.read(InputStreamReader.java:167)
>   at java.io.BufferedReader.fill(BufferedReader.java:136)
>   at java.io.BufferedReader.read1(BufferedReader.java:187)
>   at java.io.BufferedReader.read(BufferedReader.java:261)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:715)
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:524)
>   at org.apache.hadoop.util.Shell.run(Shell.java:455)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
>   at 
> org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf(TestOsSecureRandom.java:149)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11125) TestOsSecureRandom sometimes fails in trunk

2014-09-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150426#comment-14150426
 ] 

Akira AJISAKA commented on HADOOP-11125:


+1 (non-binding) to remove this test.

> TestOsSecureRandom sometimes fails in trunk
> ---
>
> Key: HADOOP-11125
> URL: https://issues.apache.org/jira/browse/HADOOP-11125
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>  Labels: newbie
>
> From https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1897/console :
> {code}
> Running org.apache.hadoop.crypto.random.TestOsSecureRandom
> Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 120.516 sec 
> <<< FAILURE! - in org.apache.hadoop.crypto.random.TestOsSecureRandom
> testOsSecureRandomSetConf(org.apache.hadoop.crypto.random.TestOsSecureRandom) 
>  Time elapsed: 120.013 sec  <<< ERROR!
> java.lang.Exception: test timed out after 120000 milliseconds
>   at java.io.FileInputStream.readBytes(Native Method)
>   at java.io.FileInputStream.read(FileInputStream.java:220)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>   at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
>   at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
>   at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
>   at java.io.InputStreamReader.read(InputStreamReader.java:167)
>   at java.io.BufferedReader.fill(BufferedReader.java:136)
>   at java.io.BufferedReader.read1(BufferedReader.java:187)
>   at java.io.BufferedReader.read(BufferedReader.java:261)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:715)
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:524)
>   at org.apache.hadoop.util.Shell.run(Shell.java:455)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
>   at 
> org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf(TestOsSecureRandom.java:149)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)