[jira] [Commented] (HADOOP-8328) Duplicate FileSystem Statistics object for 'file' scheme

2012-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270411#comment-13270411
 ] 

Hudson commented on HADOOP-8328:


Integrated in Hadoop-Hdfs-0.23-Build #251 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/251/])
HADOOP-8328. Duplicate FileSystem Statistics object for 'file' scheme. 
(Revision 1335127)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335127
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java


> Duplicate FileSystem Statistics object for 'file' scheme
> 
>
> Key: HADOOP-8328
> URL: https://issues.apache.org/jira/browse/HADOOP-8328
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 0.23.3, 2.0.0
>
> Attachments: HADOOP-8328.patch
>
>
> Because of a change in HADOOP-8013, there are duplicate Statistics objects in 
> FileSystem's statistics table: one for LocalFileSystem and one for 
> RawLocalFileSystem. This causes MapReduce local file system counters to be 
> incorrect some of the time. 
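
The duplication described above can be pictured with a small stand-alone sketch (hypothetical class and method names; this is not the Hadoop source): if a statistics table is keyed by FileSystem class rather than by URI scheme, two classes that both serve the "file" scheme each get their own entry.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch, not the actual Hadoop code: a statistics table keyed
// by FileSystem class produces one entry per class, so two classes serving
// the same "file" scheme yield duplicate entries for that scheme.
public class DuplicateStatsSketch {
    static final Map<Class<?>, String> STATS_BY_CLASS = new HashMap<>();

    static void register(Class<?> fsClass, String scheme) {
        STATS_BY_CLASS.put(fsClass, scheme); // one entry per class, not per scheme
    }

    static long entriesForScheme(String scheme) {
        return STATS_BY_CLASS.values().stream().filter(scheme::equals).count();
    }

    public static void main(String[] args) {
        class LocalFileSystem {}      // stand-ins for the real Hadoop classes
        class RawLocalFileSystem {}
        register(LocalFileSystem.class, "file");
        register(RawLocalFileSystem.class, "file");
        System.out.println(entriesForScheme("file")); // prints 2: the duplicate
    }
}
```

Counters aggregated per scheme would then double-count, which matches the symptom of intermittently wrong MapReduce local file system counters.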

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8327) distcpv2 and distcpv1 jars should not coexist

2012-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270412#comment-13270412
 ] 

Hudson commented on HADOOP-8327:


Integrated in Hadoop-Hdfs-0.23-Build #251 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/251/])
svn merge -c 1335075 FIXES: HADOOP-8327. distcpv2 and distcpv1 jars should 
not coexist (Dave Thompson via bobby) (Revision 1335079)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335079
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCpV1.java
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/Logalyzer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestCopyFiles.java


> distcpv2 and distcpv1 jars should not coexist
> -
>
> Key: HADOOP-8327
> URL: https://issues.apache.org/jira/browse/HADOOP-8327
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.2
>Reporter: Dave Thompson
>Assignee: Dave Thompson
> Fix For: 0.23.3, 2.0.0, 3.0.0
>
> Attachments: HADOOP-8327-branch-0.23.2.patch, HADOOP-8327.patch, 
> HADOOP-8327.patch
>
>
> Distcp v2 (hadoop-tools/hadoop-distcp/...) and Distcp v1 
> (hadoop-tools/hadoop-extras/...) are currently both built, and the resulting 
> hadoop-distcp-x.jar and hadoop-extras-x.jar end up in the same class path 
> directory. This causes nondeterministic problems: v1 may be launched when v2 
> is intended, or v2 may be launched but later fail on various nodes because of 
> a mismatch with v1.
> According to
> http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html 
> ("Understanding class path wildcards")
> "The order in which the JAR files in a directory are enumerated in the 
> expanded class path is not specified and may vary from platform to platform 
> and even from moment to moment on the same machine."
> Suggest distcpv1 be deprecated at this point, possibly by discontinuing build 
> of distcpv1.
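
The enumeration-order hazard the quoted Oracle documentation describes can be illustrated with a small sketch (the JAR file names below are hypothetical): when two JARs both provide org.apache.hadoop.tools.DistCp, whichever one the wildcard expansion happens to enumerate first "wins", and only an explicit sort makes the effective order reproducible.

```java
import java.util.Arrays;

// Sketch of the classpath-wildcard hazard: wildcard expansion (like a raw
// directory listing) enumerates JARs in no specified order. Sorting the
// names is the only way to make the effective class path order stable.
public class JarOrderSketch {
    // Returns the names in a deterministic (sorted) order, regardless of
    // the order the filesystem happened to hand them back in.
    static String[] deterministicOrder(String[] jarNames) {
        String[] copy = jarNames.clone();
        Arrays.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        // Two plausible enumeration orders of the same directory:
        String[] run1 = {"hadoop-extras-0.23.3.jar", "hadoop-distcp-0.23.3.jar"};
        String[] run2 = {"hadoop-distcp-0.23.3.jar", "hadoop-extras-0.23.3.jar"};
        // Unsorted, the "first" JAR differs between runs; sorted, it does not.
        System.out.println(Arrays.toString(deterministicOrder(run1)));
        System.out.println(Arrays.toString(deterministicOrder(run2)));
    }
}
```

The real fix chosen in the issue avoids the race entirely by not shipping both implementations of the same class on one class path.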





[jira] [Commented] (HADOOP-8359) Clear up javadoc warnings in hadoop-common-project

2012-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270432#comment-13270432
 ] 

Hudson commented on HADOOP-8359:


Integrated in Hadoop-Hdfs-trunk #1038 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1038/])
HADOOP-8359. Fix javadoc warnings in Configuration.  Contributed by Anupam 
Seth (Revision 1335258)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335258
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java


> Clear up javadoc warnings in hadoop-common-project
> --
>
> Key: HADOOP-8359
> URL: https://issues.apache.org/jira/browse/HADOOP-8359
> Project: Hadoop Common
>  Issue Type: Task
>  Components: conf
>Affects Versions: 2.0.0
>Reporter: Harsh J
>Assignee: Anupam Seth
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HADOOP-8359-branch-2.patch
>
>
> Javadocs added in HADOOP-8172 have introduced two new javadoc warnings. These 
> should be easy to fix (just missing #s for method refs).
> {code}
> [WARNING] Javadoc Warnings
> [WARNING] 
> /Users/harshchouraria/Work/code/apache/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java:334:
>  warning - Tag @link: missing '#': "addDeprecation(String key, String newKey)"
> [WARNING] 
> /Users/harshchouraria/Work/code/apache/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java:285:
>  warning - Tag @link: missing '#': "addDeprecation(String key, String newKey,
> [WARNING] String customMessage)"
> {code}
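
The fix the warnings above call for is a one-character change in each {@link} tag: a reference to a member of the same class needs a leading '#', or javadoc treats the target as a class name and warns. A minimal stand-in (hypothetical class mirroring the addDeprecation signature, not Hadoop's Configuration) shows both forms:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not Hadoop's Configuration: only the javadoc on
// addDeprecation matters here. The "Warns" line reproduces the broken
// form from the build output; the "Resolves" line is the fixed form.
public class DeprecationDocSketch {
    private final Map<String, String> deprecations = new HashMap<>();

    /**
     * Registers {@code key} as deprecated in favor of {@code newKey}.
     *
     * Warns:    {@link addDeprecation(String, String)}  (missing '#')
     * Resolves: {@link #addDeprecation(String, String)}
     */
    public void addDeprecation(String key, String newKey) {
        deprecations.put(key, newKey);
    }

    public String replacementFor(String key) {
        return deprecations.get(key);
    }
}
```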





[jira] [Commented] (HADOOP-8328) Duplicate FileSystem Statistics object for 'file' scheme

2012-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270433#comment-13270433
 ] 

Hudson commented on HADOOP-8328:


Integrated in Hadoop-Hdfs-trunk #1038 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1038/])
HADOOP-8328. Duplicate FileSystem Statistics object for 'file' scheme. 
(Revision 1335085)

 Result = FAILURE
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335085
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java


> Duplicate FileSystem Statistics object for 'file' scheme
> 
>
> Key: HADOOP-8328
> URL: https://issues.apache.org/jira/browse/HADOOP-8328
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 0.23.3, 2.0.0
>
> Attachments: HADOOP-8328.patch
>
>
> Because of a change in HADOOP-8013, there are duplicate Statistics objects in 
> FileSystem's statistics table: one for LocalFileSystem and one for 
> RawLocalFileSystem. This causes MapReduce local file system counters to be 
> incorrect some of the time. 





[jira] [Commented] (HADOOP-8327) distcpv2 and distcpv1 jars should not coexist

2012-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270434#comment-13270434
 ] 

Hudson commented on HADOOP-8327:


Integrated in Hadoop-Hdfs-trunk #1038 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1038/])
HADOOP-8327. distcpv2 and distcpv1 jars should not coexist (Dave Thompson 
via bobby) (Revision 1335075)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335075
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCpV1.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/Logalyzer.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestCopyFiles.java


> distcpv2 and distcpv1 jars should not coexist
> -
>
> Key: HADOOP-8327
> URL: https://issues.apache.org/jira/browse/HADOOP-8327
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.2
>Reporter: Dave Thompson
>Assignee: Dave Thompson
> Fix For: 0.23.3, 2.0.0, 3.0.0
>
> Attachments: HADOOP-8327-branch-0.23.2.patch, HADOOP-8327.patch, 
> HADOOP-8327.patch
>
>
> Distcp v2 (hadoop-tools/hadoop-distcp/...) and Distcp v1 
> (hadoop-tools/hadoop-extras/...) are currently both built, and the resulting 
> hadoop-distcp-x.jar and hadoop-extras-x.jar end up in the same class path 
> directory. This causes nondeterministic problems: v1 may be launched when v2 
> is intended, or v2 may be launched but later fail on various nodes because of 
> a mismatch with v1.
> According to
> http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html 
> ("Understanding class path wildcards")
> "The order in which the JAR files in a directory are enumerated in the 
> expanded class path is not specified and may vary from platform to platform 
> and even from moment to moment on the same machine."
> Suggest distcpv1 be deprecated at this point, possibly by discontinuing build 
> of distcpv1.





[jira] [Commented] (HADOOP-7868) Hadoop native fails to compile when default linker option is -Wl,--as-needed

2012-05-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270446#comment-13270446
 ] 

Daryn Sharp commented on HADOOP-7868:
-

+1 Looks good!  I'm not sure if the objdump check is still necessary, but it 
probably doesn't hurt.

> Hadoop native fails to compile when default linker option is -Wl,--as-needed
> 
>
> Key: HADOOP-7868
> URL: https://issues.apache.org/jira/browse/HADOOP-7868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 0.20.205.0, 1.0.0, 0.23.0
> Environment: Ubuntu Precise, Ubuntu Oneiric, Debian Unstable
>Reporter: James Page
> Attachments: HADOOP-7868-portable.patch, HADOOP-7868.patch
>
>
> Recent releases of Ubuntu and Debian have switched to using --as-needed as 
> default when linking binaries.
> As a result, the AC_COMPUTE_NEEDED_DSO macro fails to find the required DSO 
> names during execution of configure, resulting in a build failure.
> Explicitly using "-Wl,--no-as-needed" in this macro when required resolves 
> this issue.
> See http://wiki.debian.org/ToolChain/DSOLinking for a few more details





[jira] [Updated] (HADOOP-8341) Fix or filter findbugs issues in hadoop-tools

2012-05-08 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8341:


   Resolution: Fixed
Fix Version/s: 3.0.0
   2.0.0
   0.23.3
   Status: Resolved  (was: Patch Available)

> Fix or filter findbugs issues in hadoop-tools
> -
>
> Key: HADOOP-8341
> URL: https://issues.apache.org/jira/browse/HADOOP-8341
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Fix For: 0.23.3, 2.0.0, 3.0.0
>
> Attachments: HADOOP-8341.txt, HADOOP-8341.txt
>
>
> Now that the precommit build can test hadoop-tools we need to fix or filter 
> the many findbugs warnings that are popping up in there.





[jira] [Commented] (HADOOP-8341) Fix or filter findbugs issues in hadoop-tools

2012-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270456#comment-13270456
 ] 

Hudson commented on HADOOP-8341:


Integrated in Hadoop-Hdfs-trunk-Commit #2279 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2279/])
HADOOP-8341. Fix or filter findbugs issues in hadoop-tools (bobby) 
(Revision 1335505)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335505
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/Logalyzer.java
* /hadoop/common/trunk/hadoop-tools/hadoop-rumen/dev-support
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/dev-support/findbugs-exclude.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-rumen/pom.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/DeskewedJobTraceReader.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobConfPropertyNames.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedNetworkTopology.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/TraceBuilder.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/anonymization/WordListAnonymizerUtility.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/NodeName.java
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/dev-support
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/dev-support/findbugs-exclude.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/pom.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java


> Fix or filter findbugs issues in hadoop-tools
> -
>
> Key: HADOOP-8341
> URL: https://issues.apache.org/jira/browse/HADOOP-8341
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Fix For: 0.23.3, 2.0.0, 3.0.0
>
> Attachments: HADOOP-8341.txt, HADOOP-8341.txt
>
>
> Now that the precommit build can test hadoop-tools we need to fix or filter 
> the many findbugs warnings that are popping up in there.





[jira] [Commented] (HADOOP-8341) Fix or filter findbugs issues in hadoop-tools

2012-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270457#comment-13270457
 ] 

Hudson commented on HADOOP-8341:


Integrated in Hadoop-Common-trunk-Commit #2204 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2204/])
HADOOP-8341. Fix or filter findbugs issues in hadoop-tools (bobby) 
(Revision 1335505)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335505
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/Logalyzer.java
* /hadoop/common/trunk/hadoop-tools/hadoop-rumen/dev-support
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/dev-support/findbugs-exclude.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-rumen/pom.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/DeskewedJobTraceReader.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobConfPropertyNames.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedNetworkTopology.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/TraceBuilder.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/anonymization/WordListAnonymizerUtility.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/NodeName.java
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/dev-support
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/dev-support/findbugs-exclude.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/pom.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java


> Fix or filter findbugs issues in hadoop-tools
> -
>
> Key: HADOOP-8341
> URL: https://issues.apache.org/jira/browse/HADOOP-8341
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Fix For: 0.23.3, 2.0.0, 3.0.0
>
> Attachments: HADOOP-8341.txt, HADOOP-8341.txt
>
>
> Now that the precommit build can test hadoop-tools we need to fix or filter 
> the many findbugs warnings that are popping up in there.





[jira] [Commented] (HADOOP-8359) Clear up javadoc warnings in hadoop-common-project

2012-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270466#comment-13270466
 ] 

Hudson commented on HADOOP-8359:


Integrated in Hadoop-Mapreduce-trunk #1073 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1073/])
HADOOP-8359. Fix javadoc warnings in Configuration.  Contributed by Anupam 
Seth (Revision 1335258)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335258
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java


> Clear up javadoc warnings in hadoop-common-project
> --
>
> Key: HADOOP-8359
> URL: https://issues.apache.org/jira/browse/HADOOP-8359
> Project: Hadoop Common
>  Issue Type: Task
>  Components: conf
>Affects Versions: 2.0.0
>Reporter: Harsh J
>Assignee: Anupam Seth
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HADOOP-8359-branch-2.patch
>
>
> Javadocs added in HADOOP-8172 have introduced two new javadoc warnings. These 
> should be easy to fix (just missing #s for method refs).
> {code}
> [WARNING] Javadoc Warnings
> [WARNING] 
> /Users/harshchouraria/Work/code/apache/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java:334:
>  warning - Tag @link: missing '#': "addDeprecation(String key, String newKey)"
> [WARNING] 
> /Users/harshchouraria/Work/code/apache/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java:285:
>  warning - Tag @link: missing '#': "addDeprecation(String key, String newKey,
> [WARNING] String customMessage)"
> {code}





[jira] [Commented] (HADOOP-8328) Duplicate FileSystem Statistics object for 'file' scheme

2012-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270467#comment-13270467
 ] 

Hudson commented on HADOOP-8328:


Integrated in Hadoop-Mapreduce-trunk #1073 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1073/])
HADOOP-8328. Duplicate FileSystem Statistics object for 'file' scheme. 
(Revision 1335085)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335085
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java


> Duplicate FileSystem Statistics object for 'file' scheme
> 
>
> Key: HADOOP-8328
> URL: https://issues.apache.org/jira/browse/HADOOP-8328
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 0.23.3, 2.0.0
>
> Attachments: HADOOP-8328.patch
>
>
> Because of a change in HADOOP-8013, there are duplicate Statistics objects in 
> FileSystem's statistics table: one for LocalFileSystem and one for 
> RawLocalFileSystem. This causes MapReduce local file system counters to be 
> incorrect some of the time. 





[jira] [Commented] (HADOOP-8327) distcpv2 and distcpv1 jars should not coexist

2012-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270468#comment-13270468
 ] 

Hudson commented on HADOOP-8327:


Integrated in Hadoop-Mapreduce-trunk #1073 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1073/])
HADOOP-8327. distcpv2 and distcpv1 jars should not coexist (Dave Thompson 
via bobby) (Revision 1335075)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335075
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCpV1.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/Logalyzer.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestCopyFiles.java


> distcpv2 and distcpv1 jars should not coexist
> -
>
> Key: HADOOP-8327
> URL: https://issues.apache.org/jira/browse/HADOOP-8327
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.2
>Reporter: Dave Thompson
>Assignee: Dave Thompson
> Fix For: 0.23.3, 2.0.0, 3.0.0
>
> Attachments: HADOOP-8327-branch-0.23.2.patch, HADOOP-8327.patch, 
> HADOOP-8327.patch
>
>
> Distcp v2 (hadoop-tools/hadoop-distcp/...) and Distcp v1 
> (hadoop-tools/hadoop-extras/...) are currently both built, and the resulting 
> hadoop-distcp-x.jar and hadoop-extras-x.jar end up in the same class path 
> directory. This causes nondeterministic problems: v1 may be launched when v2 
> is intended, or v2 may be launched but later fail on various nodes because of 
> a mismatch with v1.
> According to
> http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html 
> ("Understanding class path wildcards")
> "The order in which the JAR files in a directory are enumerated in the 
> expanded class path is not specified and may vary from platform to platform 
> and even from moment to moment on the same machine."
> Suggest distcpv1 be deprecated at this point, possibly by discontinuing build 
> of distcpv1.





[jira] [Commented] (HADOOP-8341) Fix or filter findbugs issues in hadoop-tools

2012-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270481#comment-13270481
 ] 

Hudson commented on HADOOP-8341:


Integrated in Hadoop-Mapreduce-trunk-Commit #2221 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2221/])
HADOOP-8341. Fix or filter findbugs issues in hadoop-tools (bobby) 
(Revision 1335505)

 Result = ABORTED
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335505
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/Logalyzer.java
* /hadoop/common/trunk/hadoop-tools/hadoop-rumen/dev-support
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/dev-support/findbugs-exclude.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-rumen/pom.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/DeskewedJobTraceReader.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobConfPropertyNames.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedNetworkTopology.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/TraceBuilder.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/anonymization/WordListAnonymizerUtility.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/NodeName.java
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/dev-support
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/dev-support/findbugs-exclude.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/pom.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java


> Fix or filter findbugs issues in hadoop-tools
> -
>
> Key: HADOOP-8341
> URL: https://issues.apache.org/jira/browse/HADOOP-8341
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Fix For: 0.23.3, 2.0.0, 3.0.0
>
> Attachments: HADOOP-8341.txt, HADOOP-8341.txt
>
>
> Now that the precommit build can test hadoop-tools we need to fix or filter 
> the many findbugs warnings that are popping up in there.





[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-08 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8368:
---

 Target Version/s: 2.0.0
Affects Version/s: 2.0.0

> Use CMake rather than autotools to build native code
> 
>
> Key: HADOOP-8368
> URL: https://issues.apache.org/jira/browse/HADOOP-8368
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> It would be good to use cmake rather than autotools to build the native 
> (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on 
> different operating systems.  It would be extremely difficult, and perhaps 
> impossible, to use autotools under Windows.  Even if it were possible, it 
> might require horrible workarounds like installing cygwin.  Even on Linux 
> variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
> the Dash shell, rather than the Bash shell as it is in other Linux versions.  
> It is currently impossible to build the native code under Ubuntu 12.04 
> because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use 
> shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: 
> cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
> "path" via package "Autom4te..." are common error messages.  In order to even 
> start debugging automake problems you need to learn shell, m4, sed, and a 
> bunch of other things.  With CMake, all you have to learn is the syntax of 
> CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required 
> libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For 
> example, the version installed under openSUSE defaults to putting libraries 
> in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
> to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
> build is currently broken when using OpenSUSE.)  This is another source of 
> build failures and complexity.  If things go wrong, you will often get an 
> error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify the minimum_required_version of CMake that a 
> particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
> backwards compatibility between different versions.  This prevents build bugs 
> due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
> build time.
> For all these reasons, I think we should switch to CMake for compiling native 
> (C/C++) code in Hadoop.
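
To make point 1 concrete, here is a minimal sketch of what a CMakeLists.txt for a native library could look like. The project name, source path, and zlib dependency are illustrative assumptions, not the actual Hadoop build files:

```cmake
# Hypothetical sketch, not the actual Hadoop CMakeLists.txt.
cmake_minimum_required(VERSION 2.6)
project(hadoop-native C)

# Fail fast with a readable message if a required library is missing;
# this plays the role of AC_CHECK_LIB in the autotools build.
find_library(ZLIB_LIBRARY NAMES z)
if(NOT ZLIB_LIBRARY)
  message(FATAL_ERROR "zlib not found; install the zlib development package")
endif()

add_library(hadoop SHARED src/main/native/hadoop.c)
target_link_libraries(hadoop ${ZLIB_LIBRARY})
```

The cmake_minimum_required line is the strict-version mechanism mentioned in point 3: CMake refuses to configure with an older version instead of failing later with an obscure error.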





[jira] [Commented] (HADOOP-7967) Need generalized multi-token filesystem support

2012-05-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270515#comment-13270515
 ] 

Daryn Sharp commented on HADOOP-7967:
-

@Sanjay: I'd like to make a suggestion to accelerate fixing these token bugs.  
The mutually agreed upon solution in MAPREDUCE-3825 is more extensive and 
breaks API compatibility, so would it be reasonable to commit this 
backward-compatible change, with an immediate follow-up JIRA for the new APIs?

> Need generalized multi-token filesystem support
> ---
>
> Key: HADOOP-7967
> URL: https://issues.apache.org/jira/browse/HADOOP-7967
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, security
>Affects Versions: 0.23.1, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
> HADOOP-7967-4.patch, HADOOP-7967.patch
>
>
> Multi-token filesystem support and its interaction with the MR 
> {{TokenCache}} is problematic.  The {{TokenCache}} assumes it can tell 
> whether the tokens for a filesystem are available, which it can't possibly 
> know for multi-token filesystems.  Filtered filesystems are also 
> problematic, such as har on viewfs.  When mergeFs is implemented, it too 
> will become a problem with the current implementation.  Currently 
> {{FileSystem}} will leak tokens even when some tokens are already present.
> The decision for token acquisition, and which tokens, should be pushed all 
> the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
> ignorant and simply request tokens from each {{FileSystem}}.
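
The design proposed above can be sketched as follows. All class and method names here are illustrative assumptions, not the actual HADOOP-7967 patch; strings stand in for delegation tokens:

```java
// Hypothetical sketch of pushing token acquisition down into the
// filesystem, as proposed in this issue.
import java.util.ArrayList;
import java.util.List;

abstract class FileSystemSketch {
    // Each filesystem reports the delegation tokens it needs, adding
    // only those not already present in the credentials list.
    abstract void collectDelegationTokens(List<String> credentials);
}

class SimpleFs extends FileSystemSketch {
    private final String token;
    SimpleFs(String token) { this.token = token; }
    void collectDelegationTokens(List<String> credentials) {
        if (!credentials.contains(token)) {
            credentials.add(token);   // no leak when already present
        }
    }
}

// A filtered filesystem (e.g. har over viewfs) just delegates to the
// filesystems it wraps, so a TokenCache-style caller needs no special
// knowledge of how many tokens lie underneath.
class FilteredFs extends FileSystemSketch {
    private final List<FileSystemSketch> children;
    FilteredFs(List<FileSystemSketch> children) { this.children = children; }
    void collectDelegationTokens(List<String> credentials) {
        for (FileSystemSketch fs : children) {
            fs.collectDelegationTokens(credentials);
        }
    }
}
```

With this shape the caller simply asks every filesystem for its tokens; deduplication and recursion into wrapped filesystems happen inside the filesystem layer.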





[jira] [Commented] (HADOOP-8304) DNSToSwitchMapping should add interface to resolve individual host besides a list of host

2012-05-08 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270586#comment-13270586
 ] 

Eli Collins commented on HADOOP-8304:
-

bq. Do you see any scenario to resolve a list of host? (not counting the unit 
test)

DatanodeManager does that today with a list obtained from the hosts files.

bq. I don't understand the question of last comment there as I just want to fix 
the interface here

You mentioned earlier "identify a potential bug that a hostname start with 
number may not been resolved properly" - doesn't that issue still need to be 
addressed?

> DNSToSwitchMapping should add interface to resolve individual host besides a 
> list of host
> -
>
> Key: HADOOP-8304
> URL: https://issues.apache.org/jira/browse/HADOOP-8304
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 2.0.0
>
> Attachments: HADOOP-8304-V2.patch, HADOOP-8304-V2.patch, 
> HADOOP-8304.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> DNSToSwitchMapping now has only one API to resolve a host list: public 
> List<String> resolve(List<String> names). But the two major callers, 
> RackResolver.resolve() and DatanodeManager.resolveNetworkLocation(), 
> take a single host name and have to wrap it in a single-entry ArrayList. 
> This is not necessary, especially when the host has been cached before.
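
A minimal sketch of what the proposed single-host method could look like. The names are assumptions, not the actual HADOOP-8304 patch, and a Java 8 default method is used for brevity:

```java
// Hypothetical sketch of a single-host resolve added alongside the
// existing list API of DNSToSwitchMapping.
import java.util.Collections;
import java.util.List;

interface DNSToSwitchMapping {
    List<String> resolve(List<String> names);

    // Proposed convenience method: resolve one host without the
    // single-entry ArrayList wrapping the callers do today.
    default String resolve(String name) {
        return resolve(Collections.singletonList(name)).get(0);
    }
}

class StaticMapping implements DNSToSwitchMapping {
    public List<String> resolve(List<String> names) {
        // Toy implementation: place every host in the default rack.
        return Collections.nCopies(names.size(), "/default-rack");
    }
}
```

An implementation with a host cache could override the single-host method to hit the cache directly and skip the list allocation entirely.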





[jira] [Commented] (HADOOP-8304) DNSToSwitchMapping should add interface to resolve individual host besides a list of host

2012-05-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270607#comment-13270607
 ] 

Junping Du commented on HADOOP-8304:


That's for caching only. If you think the list interface here is good, I agree 
it may be better to leave it in place, which avoids introducing an 
incompatibility. 
About the bug that wrongly resolves a hostname starting with a number: yes, I 
will go ahead and file a JIRA; it should be easy to fix.


> DNSToSwitchMapping should add interface to resolve individual host besides a 
> list of host
> -
>
> Key: HADOOP-8304
> URL: https://issues.apache.org/jira/browse/HADOOP-8304
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 2.0.0
>
> Attachments: HADOOP-8304-V2.patch, HADOOP-8304-V2.patch, 
> HADOOP-8304.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> DNSToSwitchMapping now has only one API to resolve a host list: public 
> List<String> resolve(List<String> names). But the two major callers, 
> RackResolver.resolve() and DatanodeManager.resolveNetworkLocation(), 
> take a single host name and have to wrap it in a single-entry ArrayList. 
> This is not necessary, especially when the host has been cached before.





[jira] [Commented] (HADOOP-7868) Hadoop native fails to compile when default linker option is -Wl,--as-needed

2012-05-08 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270622#comment-13270622
 ] 

Trevor Robinson commented on HADOOP-7868:
-

I was surprised that supporting three different tools was necessary, but I 
wasn't bold enough to assume it was safe to remove any. ;-)

As a bit of context for someone thinking about committing this patch (please 
do!): it, along with HADOOP-8370 and HDFS-3383, enables building on Ubuntu 12.04 
ARM Server.

> Hadoop native fails to compile when default linker option is -Wl,--as-needed
> 
>
> Key: HADOOP-7868
> URL: https://issues.apache.org/jira/browse/HADOOP-7868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 0.20.205.0, 1.0.0, 0.23.0
> Environment: Ubuntu Precise, Ubuntu Oneiric, Debian Unstable
>Reporter: James Page
> Attachments: HADOOP-7868-portable.patch, HADOOP-7868.patch
>
>
> Recent releases of Ubuntu and Debian have switched to using --as-needed as 
> default when linking binaries.
> As a result the AC_COMPUTE_NEEDED_DSO fails to find the required DSO names 
> during execution of configure resulting in a build failure.
> Explicitly using "-Wl,--no-as-needed" in this macro when required resolves 
> this issue.
> See http://wiki.debian.org/ToolChain/DSOLinking for a few more details





[jira] [Updated] (HADOOP-8366) Use ProtoBuf for RpcResponseHeader

2012-05-08 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-8366:
-

 Target Version/s: 2.0.0
Affects Version/s: (was: 0.3.0)
   (was: 0.2.0)
   2.0.0

> Use ProtoBuf for RpcResponseHeader
> --
>
> Key: HADOOP-8366
> URL: https://issues.apache.org/jira/browse/HADOOP-8366
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Blocker
> Attachments: hadoop-8366-1.patch, hadoop-8366-2.patch
>
>






[jira] [Commented] (HADOOP-7868) Hadoop native fails to compile when default linker option is -Wl,--as-needed

2012-05-08 Thread James Page (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270625#comment-13270625
 ] 

James Page commented on HADOOP-7868:


+1 Works for me

(and +1 to including the other two patches that Trevor identified for building 
on ARM server).

> Hadoop native fails to compile when default linker option is -Wl,--as-needed
> 
>
> Key: HADOOP-7868
> URL: https://issues.apache.org/jira/browse/HADOOP-7868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 0.20.205.0, 1.0.0, 0.23.0
> Environment: Ubuntu Precise, Ubuntu Oneiric, Debian Unstable
>Reporter: James Page
> Attachments: HADOOP-7868-portable.patch, HADOOP-7868.patch
>
>
> Recent releases of Ubuntu and Debian have switched to using --as-needed as 
> default when linking binaries.
> As a result the AC_COMPUTE_NEEDED_DSO fails to find the required DSO names 
> during execution of configure resulting in a build failure.
> Explicitly using "-Wl,--no-as-needed" in this macro when required resolves 
> this issue.
> See http://wiki.debian.org/ToolChain/DSOLinking for a few more details





[jira] [Updated] (HADOOP-8366) Use ProtoBuf for RpcResponseHeader

2012-05-08 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-8366:
-

Attachment: hadoop-8366-3.patch

> Use ProtoBuf for RpcResponseHeader
> --
>
> Key: HADOOP-8366
> URL: https://issues.apache.org/jira/browse/HADOOP-8366
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Blocker
> Attachments: hadoop-8366-1.patch, hadoop-8366-2.patch, 
> hadoop-8366-3.patch
>
>






[jira] [Assigned] (HADOOP-8369) Failing tests in branch-2

2012-05-08 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins reassigned HADOOP-8369:
---

Assignee: Eli Collins

> Failing tests in branch-2
> -
>
> Key: HADOOP-8369
> URL: https://issues.apache.org/jira/browse/HADOOP-8369
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Arun C Murthy
>Assignee: Eli Collins
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Running org.apache.hadoop.io.compress.TestCodec
> Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.789 sec 
> <<< FAILURE!
> --
> Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
> Tests run: 98, Failures: 0, Errors: 98, Skipped: 0, Time elapsed: 1.633 sec 
> <<< FAILURE!
> --
> Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.658 sec <<< 
> FAILURE!
> 
> TestCodec failed since I didn't pass -Pnative; the test could be improved to 
> ensure the snappy tests are skipped if native Hadoop isn't present.
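
In practice, skipping the snappy tests when native Hadoop isn't present would use JUnit's Assume together with Hadoop's NativeCodeLoader. Below is a self-contained sketch where a plain System.loadLibrary probe stands in for that check; the class and method names are illustrative assumptions:

```java
// Sketch only: in the real TestCodec this would be
// Assume.assumeTrue(NativeCodeLoader.isNativeCodeLoaded()).
class NativeGuard {
    // Probe for libhadoop (built by -Pnative); never throws, just
    // reports whether the native library is available.
    static boolean nativeHadoopLoaded() {
        try {
            System.loadLibrary("hadoop");
            return true;
        } catch (UnsatisfiedLinkError e) {
            return false;
        }
    }
}
```

A test guarded by a JUnit assumption on such a probe is reported as skipped, rather than failed, when the build lacks the native profile.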





[jira] [Commented] (HADOOP-8366) Use ProtoBuf for RpcResponseHeader

2012-05-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270646#comment-13270646
 ] 

Hadoop QA commented on HADOOP-8366:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12526013/hadoop-8366-3.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/959//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/959//console

This message is automatically generated.

> Use ProtoBuf for RpcResponseHeader
> --
>
> Key: HADOOP-8366
> URL: https://issues.apache.org/jira/browse/HADOOP-8366
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Blocker
> Attachments: hadoop-8366-1.patch, hadoop-8366-2.patch, 
> hadoop-8366-3.patch
>
>






[jira] [Commented] (HADOOP-8369) Failing tests in branch-2

2012-05-08 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270655#comment-13270655
 ] 

Eli Collins commented on HADOOP-8369:
-

Hey Arun

TestFSMainOperationsLocalFileSystem and TestCodec (w/o -Pnative) both pass for 
me at the top of branch-2. At which change did you run these, and which specific 
tests failed?

TestViewFsTrash fails sporadically; this is HADOOP-8110. I'll take a look now, 
but I don't think this flaky test is a blocker for an alpha.

Thanks,
Eli

> Failing tests in branch-2
> -
>
> Key: HADOOP-8369
> URL: https://issues.apache.org/jira/browse/HADOOP-8369
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Arun C Murthy
>Assignee: Eli Collins
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Running org.apache.hadoop.io.compress.TestCodec
> Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.789 sec 
> <<< FAILURE!
> --
> Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
> Tests run: 98, Failures: 0, Errors: 98, Skipped: 0, Time elapsed: 1.633 sec 
> <<< FAILURE!
> --
> Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.658 sec <<< 
> FAILURE!
> 
> TestCodec failed since I didn't pass -Pnative; the test could be improved to 
> ensure the snappy tests are skipped if native Hadoop isn't present.





[jira] [Issue Comment Edited] (HADOOP-8369) Failing tests in branch-2

2012-05-08 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270655#comment-13270655
 ] 

Eli Collins edited comment on HADOOP-8369 at 5/8/12 6:01 PM:
-

Hey Arun

TestFSMainOperationsLocalFileSystem and TestCodec (w/o -Pnative) both pass for 
me at the top of branch-2. At which change did you run these, and which specific 
tests failed?

TestViewFsTrash fails sporadically; this is HADOOP-8110. I'll take a look now, 
but I don't think this flaky test is a blocker for an alpha.

Thanks,
Eli

  was (Author: eli2):
Hey Arun

TestFSMainOperationsLocalFileSystem and TestCodec (w/p -Pnative) both pass for 
me on the top of branch-2. What change did you run these at, and what specific 
tests failed?

TestViewFsTrash fails sporadically, this is HADOOP-8110, I'll take a look now, 
but don't think this flaky test a blocker for an alpha.

Thanks,
Eli
  
> Failing tests in branch-2
> -
>
> Key: HADOOP-8369
> URL: https://issues.apache.org/jira/browse/HADOOP-8369
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Arun C Murthy
>Assignee: Eli Collins
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Running org.apache.hadoop.io.compress.TestCodec
> Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.789 sec 
> <<< FAILURE!
> --
> Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
> Tests run: 98, Failures: 0, Errors: 98, Skipped: 0, Time elapsed: 1.633 sec 
> <<< FAILURE!
> --
> Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.658 sec <<< 
> FAILURE!
> 
> TestCodec failed since I didn't pass -Pnative; the test could be improved to 
> ensure the snappy tests are skipped if native Hadoop isn't present.





[jira] [Created] (HADOOP-8371) Hadoop 1.0.1 release - DFS rollback issues

2012-05-08 Thread Giri (JIRA)
Giri created HADOOP-8371:


 Summary: Hadoop 1.0.1 release - DFS rollback issues
 Key: HADOOP-8371
 URL: https://issues.apache.org/jira/browse/HADOOP-8371
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.1
 Environment: All tests were done on a single-node cluster that runs the 
namenode, secondarynamenode, and datanode all on one machine, running Ubuntu 12.04
Reporter: Giri
Priority: Minor


h1.Test Setup
All tests were done on a single-node cluster that runs the namenode, 
secondarynamenode, and datanode all on one machine, running Ubuntu 
12.04.
/usr/local/hadoop/ is a soft link to /usr/local/hadoop-0.20.203.0/
/usr/local/hadoop-1.0.1 contains the upgrade version.
h1.Version - 0.20.203.0
* Formatted name node.
* Contents of {dfs.name.dir}/current/VERSION
{quote}
Tue May 08 08:08:57 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Contents of {dfs.name.dir}/previous.checkpoint/VERSION
{quote}
Tue May 08 08:03:35 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Copied a few test files into HDFS.
* Output from "fs -lsr /" command
{quote}
hduser@ruff790:/usr/local/hadoop/bin$ ./hadoop dfs -lsr /
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /test
-rw-r--r-- 1 hduser supergroup 27574849 2012-05-08 08:04 
/test/rr_archive_1655003175_1660003165.gz
-rw-r--r-- 1 hduser supergroup 18065179 2012-05-08 08:04 
/test/twonkyportal.log.2011-12-03.rr.gz
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user/hduser
{quote}
* Executed "hadoop dfsadmin -finalizeUpgrade" (I do not think this is required, 
but I do not think it should matter either).
* Stopped DFS by executing "stop-dfs.sh"

h1. Version - 1.0.1
h2. Upgrade
* Tried starting DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh
* As expected, the namenode failed to start due to a version mismatch.
{quote}
2012-05-08 08:22:38,166 ERROR 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
java.io.IOException:
File system image contains an old layout version -31.
An upgrade to version -32 is required.
Please restart NameNode with -upgrade option.
{quote}
* Ran /usr/local/hadoop-1.0.1/bin/stop-dfs.sh to stop datanode and 
secondarynamenode.
* Started DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh -upgrade
* Checked upgrade status by calling /usr/local/hadoop-1.0.1/bin/hadoop dfsadmin 
-upgradeProgress status
{quote}
Upgrade for version -32 has been completed.
Upgrade is not finalized.
{quote}
* Contents of {dfs.name.dir}/current/VERSION
{quote}
#Tue May 08 08:25:51 EDT 2012
namespaceID=350250898
cTime=1336479951669
storageType=NAME_NODE
layoutVersion=-32
{quote}
* Contents of {dfs.name.dir}/previous.checkpoint/VERSION
{quote}
Tue May 08 08:03:35 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Contents of {dfs.name.dir}/previous/VERSION
{quote}
#Tue May 08 08:08:57 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Checked to make sure I can list the contents of DFS
* Stop DFS.

h2.Rollback
* Started DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh -rollback
* As per contents of "hadoop-hduser-namenode-ruff790.log", rollback seems to 
have succeeded.
{quote}
2012-05-08 08:37:41,799 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Rolling back storage
directory /usr/local/app/hadoop/tmp/dfs/name.
new LV = -31; new CTime = 0
2012-05-08 08:37:41,801 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Rollback of
/usr/local/app/hadoop/tmp/dfs/name is complete.
{quote}
* Contents of {dfs.name.dir}/current/VERSION
{quote}
Tue May 08 08:37:42 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Contents of {dfs.name.dir}/previous.checkpoint/VERSION
{quote}
#Tue May 08 08:08:57 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Checked to make sure I can list the contents of DFS
{quote}
hduser@ruff790:/usr/local/hadoop-1.0.1/bin$ ./hadoop dfs -lsr /
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /test
-rw-r--r-- 1 hduser supergroup 27574849 2012-05-08 08:04 
/test/rr_archive_1655003175_1660003165.gz
-rw-r--r-- 1 hduser supergroup 18065179 2012-05-08 08:04 
/test/twonkyportal.log.2011-12-03.rr.gz
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user/hduser
{quote}
* However, at this point I could not browse the file system from the WebUI. Then I 
realized that the data node is not actually running. From the data 
node log file, it seems it had shut down during the rollback process.
{quote}
2012-05-08 08:37:57,953 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
DataNode is shutting
down: org.apache.hadoop.ipc.RemoteExceptio

[jira] [Updated] (HADOOP-8353) hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop

2012-05-08 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8353:
-

Attachment: HADOOP-8353.patch.txt

> hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop
> -
>
> Key: HADOOP-8353
> URL: https://issues.apache.org/jira/browse/HADOOP-8353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8353.patch.txt
>
>
> The stop action is implemented as a simple SIGTERM sent to the JVM. 
> There's a time delay between when the action is called and when the process 
> actually exits. This can be misleading to the callers of the *-daemon.sh 
> scripts, since they expect the stop action to return when the process has 
> actually stopped.
> I suggest we augment the stop action with a time-delayed check of the 
> process status and a SIGKILL once the delay has expired.
> I understand that sending SIGKILL is a measure of last resort and is 
> generally frowned upon among init.d script writers, but the excuse we have 
> for Hadoop is that it is engineered to be a fault-tolerant system, and thus 
> there's no danger of putting the system into an inconsistent state with a 
> violent SIGKILL. Of course, the time delay will be long enough to make the 
> SIGKILL event a rare condition.
> Finally, there's always the option of an exponential back-off type of 
> solution if we decide the SIGKILL timeout is too short.
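
The proposed SIGTERM-then-SIGKILL stop could be sketched as below. The function name and the timeout default are assumptions, not the actual *-daemon.sh code:

```shell
# Hypothetical sketch of the proposed stop behaviour.
stop_daemon() {
  pid=$1
  timeout=${2:-30}
  kill -TERM "$pid" 2>/dev/null || return 0   # process already gone
  i=0
  while [ "$i" -lt "$timeout" ]; do
    if ! kill -0 "$pid" 2>/dev/null; then
      return 0                                # exited cleanly on SIGTERM
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "daemon $pid did not exit after ${timeout}s; sending SIGKILL" >&2
  kill -KILL "$pid" 2>/dev/null
}
```

Because the function only returns once the process is gone (or has been SIGKILLed), callers of the stop action can safely assume the daemon is down when the script exits.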





[jira] [Updated] (HADOOP-8353) hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop

2012-05-08 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8353:
-

Status: Patch Available  (was: Open)

> hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop
> -
>
> Key: HADOOP-8353
> URL: https://issues.apache.org/jira/browse/HADOOP-8353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8353.patch.txt
>
>
> The stop action is implemented as a simple SIGTERM sent to the JVM. 
> There's a time delay between when the action is called and when the process 
> actually exits. This can be misleading to the callers of the *-daemon.sh 
> scripts, since they expect the stop action to return when the process has 
> actually stopped.
> I suggest we augment the stop action with a time-delayed check of the 
> process status and a SIGKILL once the delay has expired.
> I understand that sending SIGKILL is a measure of last resort and is 
> generally frowned upon among init.d script writers, but the excuse we have 
> for Hadoop is that it is engineered to be a fault-tolerant system, and thus 
> there's no danger of putting the system into an inconsistent state with a 
> violent SIGKILL. Of course, the time delay will be long enough to make the 
> SIGKILL event a rare condition.
> Finally, there's always the option of an exponential back-off type of 
> solution if we decide the SIGKILL timeout is too short.





[jira] [Updated] (HADOOP-8371) Hadoop 1.0.1 release - DFS rollback issues

2012-05-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8371:


Description: See the next comment for details.  (was: h1.Test Setup
All tests were done on a single node cluster, that runs namenode, 
secondarynamenode, datanode, all on one machine, running Ubuntu
12.04.
/usr/local/hadoop/ is a soft link to /usr/local/hadoop-0.20.203.0/
/usr/local/hadoop-1.0.1 contains the upgrade version.
h1.Version - 0.20.203.0
* Formatted name node.
* Contents of {dfs.name.dir}/current/VERSION
{quote}
Tue May 08 08:08:57 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Contents of {dfs.name.dir}/previous.checkpoint/VERSION
{quote}
Tue May 08 08:03:35 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Copied a few test files into HDFS.
* Output from "fs -lsr /" command
{quote}
hduser@ruff790:/usr/local/hadoop/bin$ ./hadoop dfs -lsr /
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /test
-rw-r--r-- 1 hduser supergroup 27574849 2012-05-08 08:04 
/test/rr_archive_1655003175_1660003165.gz
-rw-r--r-- 1 hduser supergroup 18065179 2012-05-08 08:04 
/test/twonkyportal.log.2011-12-03.rr.gz
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user/hduser
{quote}
* Executed "hadoop dfsadmin -finalizeUpgrade" (I do not think this is required, 
but i do not think it should matter either).
* Stopped DFS by executing "stop-dfs.sh"

h1. Version - 1.0.1
h2. Upgrade
* Tried starting DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh
* As expected the name node start failed due to a version mismatch.
{quote}
2012-05-08 08:22:38,166 ERROR 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
java.io.IOException:
File system image contains an old layout version -31.
An upgrade to version -32 is required.
Please restart NameNode with -upgrade option.
{quote}
* Ran /usr/local/hadoop-1.0.1/bin/stop-dfs.sh to stop datanode and 
secondarynamenode.
* Started DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh -upgrade
* Checked upgrade status by calling /usr/local/hadoop-1.0.1/bin/hadoop dfsadmin 
-upgradeProgress status
{quote}
Upgrade for version -32 has been completed.
Upgrade is not finalized.
{quote}
* Contents of {dfs.name.dir}/current/VERSION
{quote}
#Tue May 08 08:25:51 EDT 2012
namespaceID=350250898
cTime=1336479951669
storageType=NAME_NODE
layoutVersion=-32
{quote}
* Contents of {dfs.name.dir}/previous.checkpoint/VERSION
{quote}
Tue May 08 08:03:35 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Contents of {dfs.name.dir}/previous/VERSION
{quote}
#Tue May 08 08:08:57 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Checked to make sure i can list the contents of DFS
* Stop DFS.

h2.Rollback
* Started DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh -rollback
* As per contents of "hadoop-hduser-namenode-ruff790.log", rollback seems to 
have succeeded.
{quote}
2012-05-08 08:37:41,799 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Rolling back storage
directory /usr/local/app/hadoop/tmp/dfs/name.
new LV = -31; new CTime = 0
2012-05-08 08:37:41,801 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Rollback of
/usr/local/app/hadoop/tmp/dfs/name is complete.
{quote}
* Contents of {dfs.name.dir}/current/VERSION
{quote}
#Tue May 08 08:37:42 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Contents of {dfs.name.dir}/previous.checkpoint/VERSION
{quote}
#Tue May 08 08:08:57 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Checked to make sure I can list the contents of DFS
{quote}
hduser@ruff790:/usr/local/hadoop-1.0.1/bin$ ./hadoop dfs -lsr /
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /test
-rw-r--r-- 1 hduser supergroup 27574849 2012-05-08 08:04 
/test/rr_archive_1655003175_1660003165.gz
-rw-r--r-- 1 hduser supergroup 18065179 2012-05-08 08:04 
/test/twonkyportal.log.2011-12-03.rr.gz
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user/hduser
{quote}
* However, at this point I could not browse the file system from the WebUI. 
Then I realized that the datanode was not actually running. From the datanode 
log file it appears to have shut down during the rollback process.
{quote}
2012-05-08 08:37:57,953 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
DataNode is shutting
down: org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Unregistered 
data node:
127.0.0.1:50010
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.verifyRequest(NameNode.java:1077)
{quote}
* So I ran "stop-dfs.sh" to shut down the namenode and secondarynamenode.
* The next "start-dfs.sh" failed.

[jira] [Commented] (HADOOP-8371) Hadoop 1.0.1 release - DFS rollback issues

2012-05-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270706#comment-13270706
 ] 

Suresh Srinivas commented on HADOOP-8371:
-

bq. Started DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh -rollback

When you upgrade from v1 to v2, you do it by running start-dfs.sh -upgrade on 
v2. After the upgrade, to roll back, you have to run start-dfs.sh -rollback on 
the *v1* version of the software, not *v2* as you have done here. That is why 
you are seeing the problem.

We should still log a bug on why rollback was allowed from 1.0.1, which rolled 
back to the namenode state from 0.20.203.

> Hadoop 1.0.1 release - DFS rollback issues
> --
>
> Key: HADOOP-8371
> URL: https://issues.apache.org/jira/browse/HADOOP-8371
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.0.1
> Environment: All tests were done on a single node cluster, that runs 
> namenode, secondarynamenode, datanode, all on one machine, running Ubuntu 
> 12.04
>Reporter: Giri
>Priority: Minor
>  Labels: hdfs
>
> See the next comment for details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6546) BloomMapFile can return false negatives

2012-05-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270731#comment-13270731
 ] 

Suresh Srinivas commented on HADOOP-6546:
-

I committed this patch to branch-1. It should be available in release 1.1.

> BloomMapFile can return false negatives
> ---
>
> Key: HADOOP-6546
> URL: https://issues.apache.org/jira/browse/HADOOP-6546
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.20.1
>Reporter: Clark Jefcoat
>Assignee: Clark Jefcoat
> Fix For: 0.21.0
>
> Attachments: HADOOP-6546.patch
>
>
> BloomMapFile can return false negatives when using keys of varying sizes.  If 
> the amount of data written by the write() method of your key class differs 
> between instances of your key, your BloomMapFile may return false negatives.





[jira] [Commented] (HADOOP-8353) hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop

2012-05-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270733#comment-13270733
 ] 

Hadoop QA commented on HADOOP-8353:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12526026/HADOOP-8353.patch.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/960//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/960//console

This message is automatically generated.

> hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop
> -
>
> Key: HADOOP-8353
> URL: https://issues.apache.org/jira/browse/HADOOP-8353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8353.patch.txt
>
>
> The way the stop action is implemented is a simple SIGTERM sent to the JVM. 
> There is a time delay between when the action is called and when the process 
> actually exits. This can be misleading to the callers of the *-daemon.sh 
> scripts, since they expect the stop action to return when the process has 
> actually stopped.
> I suggest we augment the stop action with a time-delay check for the process 
> status and a SIGKILL once the delay has expired.
> I understand that sending SIGKILL is a measure of last resort and is 
> generally frowned upon among init.d script writers, but the excuse we have 
> for Hadoop is that it is engineered to be a fault-tolerant system, and thus 
> there is no danger of putting the system into an inconsistent state with a 
> violent SIGKILL. Of course, the time delay will be long enough to make a 
> SIGKILL a rare event.
> Finally, there is always the option of an exponential back-off type of 
> solution if we decide that the SIGKILL timeout is too short.





[jira] [Updated] (HADOOP-8319) FileContext does not support setWriteChecksum

2012-05-08 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8319:


Attachment: HADOOP-8319.patch

Attaching a patch with the ability to cache AFS states so that states are not 
lost, as pointed out by Daryn.

> FileContext does not support setWriteChecksum
> -
>
> Key: HADOOP-8319
> URL: https://issues.apache.org/jira/browse/HADOOP-8319
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: John George
>Assignee: John George
> Attachments: HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch, 
> HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch
>
>
> FileContext does not support setWriteChecksum, and hence users trying
> to use this functionality fail.





[jira] [Updated] (HADOOP-8319) FileContext does not support setWriteChecksum

2012-05-08 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8319:


Target Version/s: 2.0.0, 3.0.0  (was: 3.0.0, 2.0.0)
  Status: Open  (was: Patch Available)

> FileContext does not support setWriteChecksum
> -
>
> Key: HADOOP-8319
> URL: https://issues.apache.org/jira/browse/HADOOP-8319
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: John George
>Assignee: John George
> Attachments: HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch, 
> HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch
>
>
> FileContext does not support setWriteChecksum, and hence users trying
> to use this functionality fail.





[jira] [Updated] (HADOOP-8319) FileContext does not support setWriteChecksum

2012-05-08 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8319:


Target Version/s: 2.0.0, 3.0.0  (was: 3.0.0, 2.0.0)
  Status: Patch Available  (was: Open)

> FileContext does not support setWriteChecksum
> -
>
> Key: HADOOP-8319
> URL: https://issues.apache.org/jira/browse/HADOOP-8319
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: John George
>Assignee: John George
> Attachments: HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch, 
> HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch
>
>
> FileContext does not support setWriteChecksum, and hence users trying
> to use this functionality fail.





[jira] [Commented] (HADOOP-8319) FileContext does not support setWriteChecksum

2012-05-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270773#comment-13270773
 ] 

Hadoop QA commented on HADOOP-8319:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12526037/HADOOP-8319.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified test 
files.

-1 javadoc.  The javadoc tool appears to have generated -6 warning messages.

-1 javac.  The patch appears to cause tar ant target to fail.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to cause Findbugs (version 1.3.9) to fail.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/961//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/961//console

This message is automatically generated.

> FileContext does not support setWriteChecksum
> -
>
> Key: HADOOP-8319
> URL: https://issues.apache.org/jira/browse/HADOOP-8319
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: John George
>Assignee: John George
> Attachments: HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch, 
> HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch
>
>
> FileContext does not support setWriteChecksum, and hence users trying
> to use this functionality fail.





[jira] [Updated] (HADOOP-6546) BloomMapFile can return false negatives

2012-05-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-6546:


Fix Version/s: 1.1.0

> BloomMapFile can return false negatives
> ---
>
> Key: HADOOP-6546
> URL: https://issues.apache.org/jira/browse/HADOOP-6546
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.20.1
>Reporter: Clark Jefcoat
>Assignee: Clark Jefcoat
> Fix For: 1.1.0, 0.21.0
>
> Attachments: HADOOP-6546.patch
>
>
> BloomMapFile can return false negatives when using keys of varying sizes.  If 
> the amount of data written by the write() method of your key class differs 
> between instances of your key, your BloomMapFile may return false negatives.





[jira] [Commented] (HADOOP-8354) test-patch findbugs may fail if a dependent module is changed

2012-05-08 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270779#comment-13270779
 ] 

Robert Joseph Evans commented on HADOOP-8354:
-

I just saw a very strange issue while running some of the tests.

https://issues.apache.org/jira/browse/MAPREDUCE-4233?focusedCommentId=13270757&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13270757

I am putting it here because I think it is for the same reason.  The Jars were 
not installed first so the tests were run against an older version of the code, 
not the patched version.

> test-patch findbugs may fail if a dependent module is changed
> -
>
> Key: HADOOP-8354
> URL: https://issues.apache.org/jira/browse/HADOOP-8354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Tom White
>
> This can happen when code in a dependent module is changed, but the change 
> isn't picked up. E.g. 
> https://issues.apache.org/jira/browse/MAPREDUCE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266867#comment-13266867
> We can fix this by running 'mvn install -DskipTests 
> -Dmaven.javadoc.skip=true' first.





[jira] [Assigned] (HADOOP-8354) test-patch findbugs may fail if a dependent module is changed

2012-05-08 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans reassigned HADOOP-8354:
---

Assignee: Robert Joseph Evans

> test-patch findbugs may fail if a dependent module is changed
> -
>
> Key: HADOOP-8354
> URL: https://issues.apache.org/jira/browse/HADOOP-8354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Tom White
>Assignee: Robert Joseph Evans
>
> This can happen when code in a dependent module is changed, but the change 
> isn't picked up. E.g. 
> https://issues.apache.org/jira/browse/MAPREDUCE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266867#comment-13266867
> We can fix this by running 'mvn install -DskipTests 
> -Dmaven.javadoc.skip=true' first.





[jira] [Commented] (HADOOP-8354) test-patch findbugs may fail if a dependent module is changed

2012-05-08 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270815#comment-13270815
 ] 

Robert Joseph Evans commented on HADOOP-8354:
-

I just verified that this fixed both the findbugs and the junit issues. I will 
be uploading a patch shortly.

> test-patch findbugs may fail if a dependent module is changed
> -
>
> Key: HADOOP-8354
> URL: https://issues.apache.org/jira/browse/HADOOP-8354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Tom White
>Assignee: Robert Joseph Evans
> Attachments: HADOOP-8354.txt
>
>
> This can happen when code in a dependent module is changed, but the change 
> isn't picked up. E.g. 
> https://issues.apache.org/jira/browse/MAPREDUCE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266867#comment-13266867
> We can fix this by running 'mvn install -DskipTests 
> -Dmaven.javadoc.skip=true' first.





[jira] [Updated] (HADOOP-8354) test-patch findbugs may fail if a dependent module is changed

2012-05-08 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8354:


Status: Patch Available  (was: Open)

> test-patch findbugs may fail if a dependent module is changed
> -
>
> Key: HADOOP-8354
> URL: https://issues.apache.org/jira/browse/HADOOP-8354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Tom White
>Assignee: Robert Joseph Evans
> Attachments: HADOOP-8354.txt
>
>
> This can happen when code in a dependent module is changed, but the change 
> isn't picked up. E.g. 
> https://issues.apache.org/jira/browse/MAPREDUCE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266867#comment-13266867
> We can fix this by running 'mvn install -DskipTests 
> -Dmaven.javadoc.skip=true' first.





[jira] [Updated] (HADOOP-8354) test-patch findbugs may fail if a dependent module is changed

2012-05-08 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8354:


Attachment: HADOOP-8354.txt

> test-patch findbugs may fail if a dependent module is changed
> -
>
> Key: HADOOP-8354
> URL: https://issues.apache.org/jira/browse/HADOOP-8354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Tom White
>Assignee: Robert Joseph Evans
> Attachments: HADOOP-8354.txt
>
>
> This can happen when code in a dependent module is changed, but the change 
> isn't picked up. E.g. 
> https://issues.apache.org/jira/browse/MAPREDUCE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266867#comment-13266867
> We can fix this by running 'mvn install -DskipTests 
> -Dmaven.javadoc.skip=true' first.





[jira] [Commented] (HADOOP-8354) test-patch findbugs may fail if a dependent module is changed

2012-05-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270833#comment-13270833
 ] 

Hadoop QA commented on HADOOP-8354:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12526041/HADOOP-8354.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/962//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/962//console

This message is automatically generated.

> test-patch findbugs may fail if a dependent module is changed
> -
>
> Key: HADOOP-8354
> URL: https://issues.apache.org/jira/browse/HADOOP-8354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Tom White
>Assignee: Robert Joseph Evans
> Attachments: HADOOP-8354.txt
>
>
> This can happen when code in a dependent module is changed, but the change 
> isn't picked up. E.g. 
> https://issues.apache.org/jira/browse/MAPREDUCE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266867#comment-13266867
> We can fix this by running 'mvn install -DskipTests 
> -Dmaven.javadoc.skip=true' first.





[jira] [Updated] (HADOOP-8304) DNSToSwitchMapping should add interface to resolve individual host besides a list of host

2012-05-08 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8304:


Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

Cool, closing this one as won't fix. Thanks for filing a separate jira; please 
link it here.

> DNSToSwitchMapping should add interface to resolve individual host besides a 
> list of host
> -
>
> Key: HADOOP-8304
> URL: https://issues.apache.org/jira/browse/HADOOP-8304
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 2.0.0
>
> Attachments: HADOOP-8304-V2.patch, HADOOP-8304-V2.patch, 
> HADOOP-8304.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> DNSToSwitchMapping now has only one API to resolve a host list: public 
> List<String> resolve(List<String> names). But the two major callers, 
> RackResolver.resolve() and DatanodeManager.resolveNetworkLocation(), take a 
> single host name but have to wrap it in a single-entry ArrayList. This is 
> unnecessary, especially when the host has been cached before.
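A single-host convenience on top of the existing list API can be a thin wrapper. A hypothetical sketch (the real DNSToSwitchMapping is a Hadoop interface; the names and the table-backed mapping here are illustrative):

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

public class SingleHostResolverDemo {
    public interface Mapping {
        // The existing batch API.
        List<String> resolve(List<String> names);

        // Proposed convenience: resolve one host without the caller
        // building a single-entry list.
        default String resolve(String name) {
            return resolve(Collections.singletonList(name)).get(0);
        }
    }

    public static void main(String[] args) {
        Map<String, String> table = new ConcurrentHashMap<>();
        table.put("host1", "/rack1");
        Mapping m = names -> names.stream()
                .map(n -> table.getOrDefault(n, "/default-rack"))
                .collect(Collectors.toList());
        System.out.println(m.resolve("host1"));   // prints /rack1
        System.out.println(m.resolve("unknown")); // prints /default-rack
    }
}
```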





[jira] [Updated] (HADOOP-8369) Failing tests in branch-2

2012-05-08 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8369:
--

Priority: Major  (was: Blocker)

Thanks for taking a look, Eli; downgrading from 'blocker'.

> Failing tests in branch-2
> -
>
> Key: HADOOP-8369
> URL: https://issues.apache.org/jira/browse/HADOOP-8369
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Arun C Murthy
>Assignee: Eli Collins
> Fix For: 2.0.0
>
>
> Running org.apache.hadoop.io.compress.TestCodec
> Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.789 sec 
> <<< FAILURE!
> --
> Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
> Tests run: 98, Failures: 0, Errors: 98, Skipped: 0, Time elapsed: 1.633 sec 
> <<< FAILURE!
> --
> Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.658 sec <<< 
> FAILURE!
> 
> TestCodec failed since I didn't pass -Pnative; the test could be improved to 
> ensure snappy tests are skipped if native hadoop isn't present.





[jira] [Assigned] (HADOOP-8292) TableMapping does not refresh when topology is updated

2012-05-08 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur reassigned HADOOP-8292:
--

Assignee: Alejandro Abdelnur

> TableMapping does not refresh when topology is updated
> --
>
> Key: HADOOP-8292
> URL: https://issues.apache.org/jira/browse/HADOOP-8292
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
>
> HADOOP-7030 introduced TableMapping, an implementation of DNSToSwitchMapping 
> which uses a file to map from IPs/hosts to their racks.  It's intended to 
> replace ScriptBasedMapping for cases where the latter was just a complicated 
> way of looking up the rack in a file.
> Though there was discussion of it on the JIRA, the TableMapping 
> implementation is not 'refreshable'.  i.e., if you want to add a host to your 
> cluster, and that host wasn't in the topology file to begin with, it will 
> never be added.
> TableMapping should refresh, either based on a command that can be executed, 
> or, perhaps, if the file on disk changes.
> I'll also point out that TableMapping extends CachedDNSToSwitchMapping, but, 
> since it does no refreshing, I don't see what the caching gets you: I think 
> the cache ends up being a second copy of the underlying map, always.





[jira] [Updated] (HADOOP-8292) TableMapping does not refresh when topology is updated

2012-05-08 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8292:
---

Status: Patch Available  (was: Open)

> TableMapping does not refresh when topology is updated
> --
>
> Key: HADOOP-8292
> URL: https://issues.apache.org/jira/browse/HADOOP-8292
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8292.patch
>
>
> HADOOP-7030 introduced TableMapping, an implementation of DNSToSwitchMapping 
> which uses a file to map from IPs/hosts to their racks.  It's intended to 
> replace ScriptBasedMapping for cases where the latter was just a complicated 
> way of looking up the rack in a file.
> Though there was discussion of it on the JIRA, the TableMapping 
> implementation is not 'refreshable'.  i.e., if you want to add a host to your 
> cluster, and that host wasn't in the topology file to begin with, it will 
> never be added.
> TableMapping should refresh, either based on a command that can be executed, 
> or, perhaps, if the file on disk changes.
> I'll also point out that TableMapping extends CachedDNSToSwitchMapping, but, 
> since it does no refreshing, I don't see what the caching gets you: I think 
> the cache ends up being a second copy of the underlying map, always.





[jira] [Updated] (HADOOP-8292) TableMapping does not refresh when topology is updated

2012-05-08 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8292:
---

Attachment: HADOOP-8292.patch

Adding a lazy reload check in resolve(): the reload check looks for changes to 
the mapping file, subject to a configurable minimum interval between checks 
(default value set to 10 seconds).
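The lazy-reload idea described above can be sketched without Hadoop: track the last check time and the file's last-modified time, and only re-read the file when the minimum interval has elapsed and the mtime has changed. This is a hypothetical illustration of the approach, not the actual TableMapping patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class LazyReloadDemo {
    private final Path file;
    private final long minIntervalMillis;   // patch default would be 10s
    private long lastCheck;
    private long lastModified;
    private Map<String, String> table = new HashMap<>();

    public LazyReloadDemo(Path file, long minIntervalMillis) throws IOException {
        this.file = file;
        this.minIntervalMillis = minIntervalMillis;
        load();
    }

    // Re-read "host rack" lines and remember the file's mtime.
    private void load() throws IOException {
        Map<String, String> t = new HashMap<>();
        for (String line : Files.readAllLines(file)) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length == 2) t.put(parts[0], parts[1]);
        }
        lastModified = Files.getLastModifiedTime(file).toMillis();
        table = t;
    }

    public String resolve(String host) throws IOException {
        long now = System.currentTimeMillis();
        if (now - lastCheck >= minIntervalMillis) {     // throttle mtime checks
            lastCheck = now;
            if (Files.getLastModifiedTime(file).toMillis() != lastModified) {
                load();                                 // file changed: reload
            }
        }
        return table.getOrDefault(host, "/default-rack");
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("topology", ".txt");
        Files.write(f, "host1 /rack1\n".getBytes());
        LazyReloadDemo m = new LazyReloadDemo(f, 0);    // interval 0 for the demo
        System.out.println(m.resolve("host1"));         // prints /rack1
        Files.write(f, "host1 /rack2\n".getBytes());
        Files.setLastModifiedTime(f,
                java.nio.file.attribute.FileTime.fromMillis(
                        System.currentTimeMillis() + 1000)); // force mtime change
        System.out.println(m.resolve("host1"));         // prints /rack2
    }
}
```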

> TableMapping does not refresh when topology is updated
> --
>
> Key: HADOOP-8292
> URL: https://issues.apache.org/jira/browse/HADOOP-8292
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8292.patch
>
>
> HADOOP-7030 introduced TableMapping, an implementation of DNSToSwitchMapping 
> which uses a file to map from IPs/hosts to their racks.  It's intended to 
> replace ScriptBasedMapping for cases where the latter was just a complicated 
> way of looking up the rack in a file.
> Though there was discussion of it on the JIRA, the TableMapping 
> implementation is not 'refreshable'.  i.e., if you want to add a host to your 
> cluster, and that host wasn't in the topology file to begin with, it will 
> never be added.
> TableMapping should refresh, either based on a command that can be executed, 
> or, perhaps, if the file on disk changes.
> I'll also point out that TableMapping extends CachedDNSToSwitchMapping, but, 
> since it does no refreshing, I don't see what the caching gets you: I think 
> the cache ends up being a second copy of the underlying map, always.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8352) We should always generate a new configure script for the c++ code

2012-05-08 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8352:
---

Release Note: 
If you are compiling C++, the configure script will now be regenerated 
automatically, as it should be.
This requires autoconf version 2.61 or greater.

> We should always generate a new configure script for the c++ code
> -
>
> Key: HADOOP-8352
> URL: https://issues.apache.org/jira/browse/HADOOP-8352
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 1.0.3, 1.1.0
>
> Attachments: gen-c++.lst, git-ignore.patch, hadoop-8352.patch
>
>
> If you are compiling c++, you should always generate a configure script.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8366) Use ProtoBuf for RpcResponseHeader

2012-05-08 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270991#comment-13270991
 ] 

Eli Collins commented on HADOOP-8366:
-

Isn't the status field non-optional in RpcResponseHeaderProto?  Otherwise looks 
great.

> Use ProtoBuf for RpcResponseHeader
> --
>
> Key: HADOOP-8366
> URL: https://issues.apache.org/jira/browse/HADOOP-8366
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Blocker
> Attachments: hadoop-8366-1.patch, hadoop-8366-2.patch, 
> hadoop-8366-3.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8292) TableMapping does not refresh when topology is updated

2012-05-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270998#comment-13270998
 ] 

Hadoop QA commented on HADOOP-8292:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12526085/HADOOP-8292.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/963//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/963//console

This message is automatically generated.

> TableMapping does not refresh when topology is updated
> --
>
> Key: HADOOP-8292
> URL: https://issues.apache.org/jira/browse/HADOOP-8292
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8292.patch
>
>
> HADOOP-7030 introduced TableMapping, an implementation of DNSToSwitchMapping 
> which uses a file to map from IPs/hosts to their racks.  It's intended to 
> replace ScriptBasedMapping for cases where the latter was just a complicated 
> way of looking up the rack in a file.
> Though there was discussion of it on the JIRA, the TableMapping 
> implementation is not 'refreshable'.  i.e., if you want to add a host to your 
> cluster, and that host wasn't in the topology file to begin with, it will 
> never be added.
> TableMapping should refresh, either based on a command that can be executed, 
> or, perhaps, if the file on disk changes.
> I'll also point out that TableMapping extends CachedDNSToSwitchMapping, but, 
> since it does no refreshing, I don't see what the caching gets you: I think 
> the cache ends up being a second copy of the underlying map, always.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8292) TableMapping does not refresh when topology is updated

2012-05-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271000#comment-13271000
 ] 

Todd Lipcon commented on HADOOP-8292:
-

A few quick notes (I haven't looked in serious detail yet):
- Is it OK to be doing file system access while holding this lock, and in the 
"hot path" of resolve()? I worry that this might slow down client requests, for 
example.
- I think we should avoid reading the file if its modification time is within 
the last couple of seconds -- with some editors and config management systems, 
updating a file may temporarily leave it in an empty state before it is 
re-filled with the new data. Well-behaved systems won't do that, but I think 
it's better for us to be resilient to it than to end up loading an empty 
topology mapping.
- clearCache() is called from resolve() without the lock held. That could let 
multiple threads call clear() on the map concurrently, which may throw an 
exception or corrupt the map.

How does this interact with the HDFS topology code which needs to check when a 
cluster changes from single-rack to multi-rack? When a node's topology changes, 
don't we need to re-check replication policies for all the blocks, etc? Maybe 
this isn't a new issue, but it's certainly strange.
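The "avoid reading a freshly modified file" suggestion above can be sketched as a settle-time guard (hypothetical names; not code from the patch):

```java
// Hypothetical sketch of a settle-time guard: treat a file whose
// modification time is within the last couple of seconds as still being
// written, so a tool that truncates and rewrites the topology file is
// never read mid-update.
class SettleTimeGuard {
    private final long settleMs;        // e.g. 2_000 ms

    SettleTimeGuard(long settleMs) {
        this.settleMs = settleMs;
    }

    /** Returns true only once the file has been stable for settleMs. */
    boolean safeToRead(long fileMtimeMs, long nowMs) {
        return nowMs - fileMtimeMs >= settleMs;
    }
}
```

The trade-off is that a legitimate update is picked up a couple of seconds late, which is harmless next to loading an empty mapping.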

> TableMapping does not refresh when topology is updated
> --
>
> Key: HADOOP-8292
> URL: https://issues.apache.org/jira/browse/HADOOP-8292
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8292.patch
>
>
> HADOOP-7030 introduced TableMapping, an implementation of DNSToSwitchMapping 
> which uses a file to map from IPs/hosts to their racks.  It's intended to 
> replace ScriptBasedMapping for cases where the latter was just a complicated 
> way of looking up the rack in a file.
> Though there was discussion of it on the JIRA, the TableMapping 
> implementation is not 'refreshable'.  i.e., if you want to add a host to your 
> cluster, and that host wasn't in the topology file to begin with, it will 
> never be added.
> TableMapping should refresh, either based on a command that can be executed, 
> or, perhaps, if the file on disk changes.
> I'll also point out that TableMapping extends CachedDNSToSwitchMapping, but, 
> since it does no refreshing, I don't see what the caching gets you: I think 
> the cache ends up being a second copy of the underlying map, always.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7489) Hadoop logs errors upon startup on OS X 10.7

2012-05-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271034#comment-13271034
 ] 

Allen Wittenauer commented on HADOOP-7489:
--

Java has its own internal version of Kerberos.  That version is very, very 
stupid in 1.6 and earlier when it comes to using naming services for 
auto-discovery of the realm and KDC information.  You'll see similar weirdness 
even on non-OS X boxes when the krb5.conf doesn't explicitly list the realm 
information.  The same configuration fix mentioned here applies there as well.  
This has been fixed in JRE 1.7.  Allegedly.
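The configuration fix referred to above is not spelled out in this message; the commonly cited workaround for the SCDynamicStore warning is to give Java's built-in Kerberos explicit realm/KDC system properties so it skips naming-service auto-discovery, e.g. in hadoop-env.sh:

```shell
# Commonly cited workaround for "Unable to load realm info from
# SCDynamicStore" on OS X: pin the realm and KDC (here, empty values)
# so the JRE's Kerberos does not attempt auto-discovery.
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
```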

> Hadoop logs errors upon startup on OS X 10.7
> 
>
> Key: HADOOP-7489
> URL: https://issues.apache.org/jira/browse/HADOOP-7489
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Mac OS X 10.7, Java 1.6.0_26
>Reporter: Bryan Keller
>Priority: Minor
>
> When starting Hadoop on OS X 10.7 ("Lion") using start-all.sh, Hadoop logs 
> the following errors:
> 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
> SCDynamicStore
> Hadoop does seem to function properly despite this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-7489) Hadoop logs errors upon startup on OS X 10.7

2012-05-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7489.
--

Resolution: Won't Fix

I'm closing this as Won't Fix since this is a known JRE bug and not 
particularly anything we can do about it in Hadoop.

> Hadoop logs errors upon startup on OS X 10.7
> 
>
> Key: HADOOP-7489
> URL: https://issues.apache.org/jira/browse/HADOOP-7489
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Mac OS X 10.7, Java 1.6.0_26
>Reporter: Bryan Keller
>Priority: Minor
>
> When starting Hadoop on OS X 10.7 ("Lion") using start-all.sh, Hadoop logs 
> the following errors:
> 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
> SCDynamicStore
> Hadoop does seem to function properly despite this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8308) Support cross-project Jenkins builds

2012-05-08 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White resolved HADOOP-8308.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

> Support cross-project Jenkins builds
> 
>
> Key: HADOOP-8308
> URL: https://issues.apache.org/jira/browse/HADOOP-8308
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-8308.patch
>
>
> This issue is to change test-patch to run only the tests for modules that 
> have changed and then run from the top-level. See discussion at 
> http://mail-archives.aurora.apache.org/mod_mbox/hadoop-common-dev/201204.mbox/%3ccaf-wd4tvkwypuuq9ibxv4uz8b2behxnpfkb5mq3d-pwvksh...@mail.gmail.com%3E.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8354) test-patch findbugs may fail if a dependent module is changed

2012-05-08 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271052#comment-13271052
 ] 

Tom White commented on HADOOP-8354:
---

+1. Thanks for fixing this, Robert.

> test-patch findbugs may fail if a dependent module is changed
> -
>
> Key: HADOOP-8354
> URL: https://issues.apache.org/jira/browse/HADOOP-8354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Tom White
>Assignee: Robert Joseph Evans
> Attachments: HADOOP-8354.txt
>
>
> This can happen when code in a dependent module is changed, but the change 
> isn't picked up. E.g. 
> https://issues.apache.org/jira/browse/MAPREDUCE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266867#comment-13266867
> We can fix this by running 'mvn install -DskipTests 
> -Dmaven.javadoc.skip=true' first.
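The proposed fix amounts to a pre-build step along these lines (a sketch of the test-patch change, not the exact script):

```shell
# Install all modules into the local Maven repo first, so the per-module
# findbugs run analyzes freshly built dependencies instead of stale
# artifacts from a previous build.
mvn install -DskipTests -Dmaven.javadoc.skip=true
```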

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira