[jira] [Commented] (HADOOP-9260) Hadoop version may be not correct when starting name node or data node

2013-01-29 Thread Jerry Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566241#comment-13566241
 ] 

Jerry Chen commented on HADOOP-9260:


Please note that I changed the Hadoop version manually to 2.0.2-alpha 
for my own testing purposes; that is not related to the problem.

> Hadoop version may be not correct when starting name node or data node
> --
>
> Key: HADOOP-9260
> URL: https://issues.apache.org/jira/browse/HADOOP-9260
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: trunk-win
>Reporter: Jerry Chen
>Priority: Critical
>
> 1. Check out the trunk from 
> http://svn.apache.org/repos/asf/hadoop/common/trunk/ -r 1439752
> 2. Compile package
>m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
> 3. Hadoop version of compiled dist shows the following:
> Hadoop ${pom.version}
> Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
> Compiled by ${user.name} on ${version-info.build.time}
> From source with checksum ${version-info.source.md5}
> And in a real cluster, the log in name node shows:
> 2013-01-29 15:23:42,738 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> STARTUP_MSG: 
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = bdpe01.sh.intel.com/10.239.47.101
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = ${pom.version}
> STARTUP_MSG:   classpath = ...
> STARTUP_MSG:   build = ${version-info.scm.uri} -r ${version-info.scm.commit}; 
> compiled by '${user.name}' on ${version-info.build.time}
> STARTUP_MSG:   java = 1.6.0_33
> While some data nodes with the same binary show the correct version 
> information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9260) Hadoop version may be not correct when starting name node or data node

2013-01-29 Thread Jerry Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566239#comment-13566239
 ] 

Jerry Chen commented on HADOOP-9260:


The latest trunk -r 1440286 has the same problem. 

On one machine:
[hadoop@bdpe01 tools]$ hadoop version
Hadoop ${pom.version}
Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
Compiled by ${user.name} on ${version-info.build.time}
From source with checksum ${version-info.source.md5}

While on another machine with the same binary:
[hadoop@bdpe02 ~]$ hadoop version
Hadoop 2.0.2-alpha
Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1440286
Compiled by haifeng on 2013-01-30T18:41Z
From source with checksum 5fab855acaace993f9bbf06ebda1fa9e




[jira] [Commented] (HADOOP-9176) RawLocalFileSystem.delete unexpected behavior on Windows while running Mapreduce tests with Open JDK 7

2013-01-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566149#comment-13566149
 ] 

Steve Loughran commented on HADOOP-9176:


@Arpit: I'm always right.

> RawLocalFileSystem.delete unexpected behavior on Windows while running 
> Mapreduce tests with Open JDK 7
> --
>
> Key: HADOOP-9176
> URL: https://issues.apache.org/jira/browse/HADOOP-9176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.2.0, 1-win
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 1.2.0
>
> Attachments: HADOOP-9176.patch
>
>
> RawLocalFileSystem.delete fails on Windows even when the files are not 
> expected to be in use. It does not reproduce with Sun JDK 6.



[jira] [Created] (HADOOP-9262) Allow jobs to override the input Key/Value read from a sequence file's headers

2013-01-29 Thread David Parks (JIRA)
David Parks created HADOOP-9262:
---

 Summary: Allow jobs to override the input Key/Value read from a 
sequence file's headers
 Key: HADOOP-9262
 URL: https://issues.apache.org/jira/browse/HADOOP-9262
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 1.0.3
Reporter: David Parks
Priority: Minor


There's no clean way to upgrade a sequence file when the model objects in an 
existing sequence file change in the development process.

If we could override the Key/Value class types read from the sequence file 
headers, we could write jobs that read in the old version of a model object 
under a different name (MyModel_old, for example), make the necessary updates, 
and write out the new version of the object (MyModel, for example).

The problem we experience now is that we have to hack up the code to match the 
Key/Value class types written to the sequence file, or manually change the 
headers of each sequence file.

Versioning model objects every time they change isn't a good approach to 
development because it introduces the likelihood of less-maintained code using 
an incorrect, old version of the model object.
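A minimal model of the proposed hook (names here are illustrative, not an actual Hadoop API): a reader that normally instantiates the key class recorded in the file header, plus a job-level override that substitutes the job's own class.

```java
// Sketch of the HADOOP-9262 proposal, with no Hadoop dependency.
// "headerFactory" stands in for the key class recorded in a sequence
// file's header; the override lets a job substitute its own class.
import java.util.function.Supplier;

public class OverridableReader<K> {
    private final Supplier<K> headerFactory;   // from the file header
    private Supplier<K> override;              // job-supplied, optional

    OverridableReader(Supplier<K> headerFactory) {
        this.headerFactory = headerFactory;
    }

    /** Job-level hook: trust the job's declared class, not the header's. */
    void setKeyClassOverride(Supplier<K> factory) {
        this.override = factory;
    }

    /** Creates a key instance from the override if present, else the header. */
    K newKey() {
        return (override != null ? override : headerFactory).get();
    }

    public static void main(String[] args) {
        OverridableReader<CharSequence> r =
            new OverridableReader<>(() -> "MyModel_old"); // header says old model
        System.out.println(r.newKey());
        r.setKeyClassOverride(() -> "MyModel");           // job overrides
        System.out.println(r.newKey());
    }
}
```

With this shape, an upgrade job would read records as MyModel_old, transform them, and write them back out as MyModel, without touching the file headers.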



[jira] [Commented] (HADOOP-9221) Convert remaining xdocs to APT

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566081#comment-13566081
 ] 

Hudson commented on HADOOP-9221:


Integrated in Hadoop-trunk-Commit #3298 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3298/])
Move HADOOP-9221 to correct section of CHANGES.txt. (Revision 1440248)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1440248
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> Convert remaining xdocs to APT
> --
>
> Key: HADOOP-9221
> URL: https://issues.apache.org/jira/browse/HADOOP-9221
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Fix For: 2.0.3-alpha
>
> Attachments: hadoop9221-1.txt, hadoop9221-2.txt, hadoop9221.txt
>
>
> The following Forrest XML documents are still present in trunk:
> {noformat}
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/Superusers.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/deployment_layout.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/native_libraries.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/single_node_setup.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/SLG_user_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/faultinject_framework.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_editsviewer.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_imageviewer.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hftp.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/libhdfs.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
> {noformat}
> Several of them are leftover cruft, and all of them are out of date to one 
> degree or another, but it's easiest to simply convert them all to APT and 
> move forward with editing thereafter.



[jira] [Commented] (HADOOP-9260) Hadoop version may be not correct when starting name node or data node

2013-01-29 Thread Jerry Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566078#comment-13566078
 ] 

Jerry Chen commented on HADOOP-9260:


Hi Chris,
The trunk revision I checked out is -r 1439752, which already includes 
HADOOP-9246 (-r 1439620).
This is a different problem. Before HADOOP-9246, the 
common-version-info.properties in hadoop-common-3.0.0-SNAPSHOT.jar was NOT 
correctly generated. In trunk -r 1439752 (which includes HADOOP-9246) that file 
is correctly generated, but another common-version-info.properties with 
incorrect information was packaged into hadoop-common-3.0.0-SNAPSHOT-sources.jar, 
so the runtime may load the incorrect file, since both files are on the 
classpath.

Jerry
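The classpath ambiguity described above can be reproduced outside Hadoop. The sketch below (hypothetical file contents; not the actual Hadoop VersionInfo code) builds two jars that both carry a common-version-info.properties and shows that the classloader silently serves whichever copy comes first on the classpath:

```java
// Demonstrates why duplicate common-version-info.properties files are
// dangerous: getResource() returns only the first match in classpath order.
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

public class ResourceAmbiguity {
    /** Writes a tiny jar containing one version-info resource. */
    static Path makeJar(String name, String version) throws IOException {
        Path jar = Files.createTempFile(name, ".jar");
        try (JarOutputStream out =
                 new JarOutputStream(Files.newOutputStream(jar))) {
            out.putNextEntry(new JarEntry("common-version-info.properties"));
            out.write(("version=" + version + "\n").getBytes("UTF-8"));
            out.closeEntry();
        }
        return jar;
    }

    public static void main(String[] args) throws IOException {
        URL[] cp = {
            makeJar("sources", "${pom.version}").toUri().toURL(), // unfiltered copy
            makeJar("binary", "3.0.0-SNAPSHOT").toUri().toURL(),  // correct copy
        };
        try (URLClassLoader loader = new URLClassLoader(cp, null)) {
            List<URL> all = Collections.list(
                loader.getResources("common-version-info.properties"));
            System.out.println("copies on classpath: " + all.size());
            // Only the first jar's copy wins, so the bad value leaks out:
            try (InputStream in =
                     loader.getResourceAsStream("common-version-info.properties")) {
                System.out.print(new String(in.readAllBytes(), "UTF-8"));
            }
        }
    }
}
```

This matches the reported symptom: whether a node prints `${pom.version}` or the real version depends only on which jar the classloader consults first.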



[jira] [Commented] (HADOOP-9221) Convert remaining xdocs to APT

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566075#comment-13566075
 ] 

Hudson commented on HADOOP-9221:


Integrated in Hadoop-trunk-Commit #3297 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3297/])
HADOOP-9221. Convert remaining xdocs to APT. Contributed by Andy Isaacson. 
(Revision 1440245)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1440245
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/Superusers.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/deployment_layout.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/native_libraries.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/single_node_setup.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/ServiceLevelAuth.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/Superusers.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/SLG_user_guide.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/faultinject_framework.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_editsviewer.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_imageviewer.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hftp.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/libhdfs.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/FaultInjectFramework.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsEditsViewer.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsImageViewer.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsPermissionsGuide.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsQuotaAdminGuide.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsUserGuide.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/Hftp.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/LibHdfs.apt.vm
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/SLGUserGuide.apt.vm



[jira] [Updated] (HADOOP-9221) Convert remaining xdocs to APT

2013-01-29 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-9221:
---

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2.

Thanks a lot for the contribution, Andy.



[jira] [Commented] (HADOOP-9221) Convert remaining xdocs to APT

2013-01-29 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566038#comment-13566038
 ] 

Aaron T. Myers commented on HADOOP-9221:


+1, patch looks good to me. I'm going to commit this momentarily. What I'm in 
fact going to do is apply the patch and then do an `svn rm' on the following 
files:

{noformat}
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/libhdfs.xml
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_editsviewer.xml
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/SLG_user_guide.xml
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hftp.xml
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/faultinject_framework.xml
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_imageviewer.xml
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/deployment_layout.xml
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/Superusers.xml
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/native_libraries.xml
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/single_node_setup.xml
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
{noformat}

and an `svn add' on the following files:

{noformat}
hadoop-hdfs-project/hadoop-hdfs/src/site/apt/Hftp.apt.vm
hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsUserGuide.apt.vm
hadoop-hdfs-project/hadoop-hdfs/src/site/apt/SLGUserGuide.apt.vm
hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsQuotaAdminGuide.apt.vm
hadoop-hdfs-project/hadoop-hdfs/src/site/apt/FaultInjectFramework.apt.vm
hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsImageViewer.apt.vm
hadoop-hdfs-project/hadoop-hdfs/src/site/apt/LibHdfs.apt.vm
hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsPermissionsGuide.apt.vm
hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsEditsViewer.apt.vm
hadoop-common-project/hadoop-common/src/site/apt/ServiceLevelAuth.apt.vm
hadoop-common-project/hadoop-common/src/site/apt/Superusers.apt.vm
hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm
{noformat}


[jira] [Updated] (HADOOP-9176) RawLocalFileSystem.delete unexpected behavior on Windows while running Mapreduce tests with Open JDK 7

2013-01-29 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9176:
--

Attachment: HADOOP-9176.patch

[~ste...@apache.org] You are right and this is Windows being fussier about 
closing in-use files. Happily this instance turns out to be a test bug. An 
input stream was not closed on exit.

Attached a fix and removed the workaround I put in place earlier with 
MAPREDUCE-4099.



[jira] [Updated] (HADOOP-9176) RawLocalFileSystem.delete unexpected behavior on Windows while running Mapreduce tests with Open JDK 7

2013-01-29 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9176:
--

Affects Version/s: 1.2.0

> RawLocalFileSystem.delete unexpected behavior on Windows while running 
> Mapreduce tests with Open JDK 7
> --
>
> Key: HADOOP-9176
> URL: https://issues.apache.org/jira/browse/HADOOP-9176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.2.0, 1-win
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-9176.patch
>
>
> RawLocalFileSystem.delete fails on Windows even when the files are not 
> expected to be in use. It does not reproduce with Sun JDK 6.



[jira] [Updated] (HADOOP-9176) RawLocalFileSystem.delete unexpected behavior on Windows while running Mapreduce tests with Open JDK 7

2013-01-29 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9176:
--

Fix Version/s: 1.2.0

> RawLocalFileSystem.delete unexpected behavior on Windows while running 
> Mapreduce tests with Open JDK 7
> --
>
> Key: HADOOP-9176
> URL: https://issues.apache.org/jira/browse/HADOOP-9176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.2.0, 1-win
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 1.2.0
>
> Attachments: HADOOP-9176.patch
>
>
> RawLocalFileSystem.delete fails on Windows even when the files are not 
> expected to be in use. It does not reproduce with Sun JDK 6.



[jira] [Updated] (HADOOP-9176) RawLocalFileSystem.delete unexpected behavior on Windows while running Mapreduce tests with Open JDK 7

2013-01-29 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9176:
--

Component/s: test

> RawLocalFileSystem.delete unexpected behavior on Windows while running 
> Mapreduce tests with Open JDK 7
> --
>
> Key: HADOOP-9176
> URL: https://issues.apache.org/jira/browse/HADOOP-9176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> RawLocalFileSystem.delete fails on Windows even when the files are not 
> expected to be in use. It does not reproduce with Sun JDK 6.



[jira] [Updated] (HADOOP-9176) RawLocalFileSystem.delete unexpected behavior on Windows while running Mapreduce tests with Open JDK 7

2013-01-29 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9176:
--

Target Version/s:   (was: 1-win)



[jira] [Created] (HADOOP-9261) S3 and S3 filesystems can move a directory under itself -and so lose data

2013-01-29 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-9261:
--

 Summary: S3 and S3 filesystems can move a directory under itself 
-and so lose data
 Key: HADOOP-9261
 URL: https://issues.apache.org/jira/browse/HADOOP-9261
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.0.2-alpha, 1.1.1
 Environment: Testing against S3 bucket stored on US West (Read after 
Write consistency; eventual for read-after-delete or write-after-write)
Reporter: Steve Loughran


In the S3 filesystem clients, {{rename()}} doesn't make sure that the 
destination directory is not a child or other descendant of the source 
directory. The files are copied to the new destination, then the source 
directory is recursively deleted, so losing data.
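The missing guard can be sketched with plain path logic (illustrative only; not the actual FileSystem API): reject the rename when the destination is the source itself or one of its descendants, before any copy or delete happens.

```java
// Guard against a rename that would place the destination under the
// source, which would otherwise copy the files and then recursively
// delete the source (destroying the copies along with it).
import java.nio.file.Path;
import java.nio.file.Paths;

public class RenameGuard {
    /**
     * Safe only when dst is neither src itself nor a descendant of src.
     * Path.startsWith compares whole components, so /bucket/database is
     * correctly NOT treated as being under /bucket/data.
     */
    static boolean isRenameSafe(Path src, Path dst) {
        return !dst.normalize().startsWith(src.normalize());
    }

    public static void main(String[] args) {
        Path src = Paths.get("/bucket/data");
        System.out.println(isRenameSafe(src, Paths.get("/bucket/data/sub")));
        System.out.println(isRenameSafe(src, Paths.get("/bucket/archive")));
    }
}
```

Running this prints false for the self-descendant case and true for the sibling case; a rename() implementing such a check would fail fast instead of losing data.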



[jira] [Commented] (HADOOP-9249) hadoop-maven-plugins version-info goal causes build failure when running with Clover

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565924#comment-13565924
 ] 

Hudson commented on HADOOP-9249:


Integrated in Hadoop-trunk-Commit #3296 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3296/])
HADOOP-9249. hadoop-maven-plugins version-info goal causes build failure 
when running with Clover. Contributed by Chris Nauroth. (Revision 1440200)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1440200
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-maven-plugins/pom.xml


> hadoop-maven-plugins version-info goal causes build failure when running with 
> Clover
> 
>
> Key: HADOOP-9249
> URL: https://issues.apache.org/jira/browse/HADOOP-9249
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HADOOP-9249.1.patch
>
>
> Running Maven with the -Pclover option for code coverage causes the build to 
> fail because of not finding a Clover class while running hadoop-maven-plugins 
> version-info.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9249) hadoop-maven-plugins version-info goal causes build failure when running with Clover

2013-01-29 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9249:


   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk.

Thank you Chris!

> hadoop-maven-plugins version-info goal causes build failure when running with 
> Clover
> 
>
> Key: HADOOP-9249
> URL: https://issues.apache.org/jira/browse/HADOOP-9249
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HADOOP-9249.1.patch
>
>
> Running Maven with the -Pclover option for code coverage causes the build to 
> fail because of not finding a Clover class while running hadoop-maven-plugins 
> version-info.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9249) hadoop-maven-plugins version-info goal causes build failure when running with Clover

2013-01-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565912#comment-13565912
 ] 

Suresh Srinivas commented on HADOOP-9249:
-

+1 for the patch. I verified that this patch indeed fixes the problem.

> hadoop-maven-plugins version-info goal causes build failure when running with 
> Clover
> 
>
> Key: HADOOP-9249
> URL: https://issues.apache.org/jira/browse/HADOOP-9249
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-9249.1.patch
>
>
> Running Maven with the -Pclover option for code coverage causes the build to 
> fail because of not finding a Clover class while running hadoop-maven-plugins 
> version-info.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565900#comment-13565900
 ] 

Suresh Srinivas commented on HADOOP-9241:
-

+1 for the patch. Thanks for reworking the patch.

> DU refresh interval is not configurable
> ---
>
> Key: HADOOP-9241
> URL: https://issues.apache.org/jira/browse/HADOOP-9241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Attachments: HADOOP-9241.patch, HADOOP-9241.patch
>
>
> While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
> isn't. We should ensure both be configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8562) Enhancements to Hadoop for Windows Server and Windows Azure development and runtime environments

2013-01-29 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8562:


Attachment: branch-trunk-win.patch

Latest merge patch.

> Enhancements to Hadoop for Windows Server and Windows Azure development and 
> runtime environments
> 
>
> Key: HADOOP-8562
> URL: https://issues.apache.org/jira/browse/HADOOP-8562
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Bikas Saha
>Assignee: Bikas Saha
> Attachments: branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, branch-trunk-win.patch, 
> branch-trunk-win.patch, branch-trunk-win.patch, test-untar.tar, test-untar.tgz
>
>
> This JIRA tracks the work that needs to be done on trunk to enable Hadoop to 
> run on Windows Server and Azure environments. This incorporates porting 
> relevant work from the similar effort on branch 1 tracked via HADOOP-8079.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9221) Convert remaining xdocs to APT

2013-01-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565694#comment-13565694
 ] 

Hadoop QA commented on HADOOP-9221:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12567027/hadoop9221-2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2114//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2114//console

This message is automatically generated.

> Convert remaining xdocs to APT
> --
>
> Key: HADOOP-9221
> URL: https://issues.apache.org/jira/browse/HADOOP-9221
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Attachments: hadoop9221-1.txt, hadoop9221-2.txt, hadoop9221.txt
>
>
> The following Forrest XML documents are still present in trunk:
> {noformat}
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/Superusers.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/deployment_layout.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/native_libraries.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/single_node_setup.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/SLG_user_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/faultinject_framework.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_editsviewer.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_imageviewer.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hftp.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/libhdfs.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
> {noformat}
> Several of them are leftover cruft, and all of them are out of date to one 
> degree or another, but it's easiest to simply convert them all to APT and 
> move forward with editing thereafter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9221) Convert remaining xdocs to APT

2013-01-29 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson updated HADOOP-9221:
--

Attachment: hadoop9221-2.txt

Update patch after HADOOP-9190.

> Convert remaining xdocs to APT
> --
>
> Key: HADOOP-9221
> URL: https://issues.apache.org/jira/browse/HADOOP-9221
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Attachments: hadoop9221-1.txt, hadoop9221-2.txt, hadoop9221.txt
>
>
> The following Forrest XML documents are still present in trunk:
> {noformat}
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/Superusers.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/deployment_layout.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/native_libraries.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/single_node_setup.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/SLG_user_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/faultinject_framework.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_editsviewer.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_imageviewer.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hftp.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/libhdfs.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
> {noformat}
> Several of them are leftover cruft, and all of them are out of date to one 
> degree or another, but it's easiest to simply convert them all to APT and 
> move forward with editing thereafter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9260) Hadoop version may be not correct when starting name node or data node

2013-01-29 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565653#comment-13565653
 ] 

Chris Nauroth commented on HADOOP-9260:
---

Hi, Jerry.

I expected this to be resolved by the patch on HADOOP-9246, which was committed 
on 1/28.  Can you try again with the latest trunk code, and let me know if you 
still see a problem?  You can be sure that you have the HADOOP-9246 fix if the 
pom.xml files for hadoop-common and hadoop-yarn-common show that the {{version-info}} 
goal is bound to {{generate-resources}}.

Thanks!
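
For reference, a binding of that shape in a pom.xml might look roughly like the following -this is an illustrative sketch of the structure, not the actual Hadoop pom:

```xml
<!-- Hedged sketch: binding the version-info goal to generate-resources.
     Coordinates and layout here are illustrative. -->
<plugin>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-maven-plugins</artifactId>
  <executions>
    <execution>
      <id>version-info</id>
      <phase>generate-resources</phase>
      <goals>
        <goal>version-info</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Binding to {{generate-resources}} ensures the version properties are produced before resource filtering runs, so the ${...} placeholders get substituted.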


> Hadoop version may be not correct when starting name node or data node
> --
>
> Key: HADOOP-9260
> URL: https://issues.apache.org/jira/browse/HADOOP-9260
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: trunk-win
>Reporter: Jerry Chen
>Priority: Critical
>
> 1. Check out the trunk from 
> http://svn.apache.org/repos/asf/hadoop/common/trunk/ -r 1439752
> 2. Compile package
>m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
> 3. Hadoop version of compiled dist shows the following:
> Hadoop ${pom.version}
> Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
> Compiled by ${user.name} on ${version-info.build.time}
> From source with checksum ${version-info.source.md5}
> And in a real cluster, the log in name node shows:
> 2013-01-29 15:23:42,738 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> STARTUP_MSG: 
> /
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = bdpe01.sh.intel.com/10.239.47.101
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = ${pom.version}
> STARTUP_MSG:   classpath = ...
> STARTUP_MSG:   build = ${version-info.scm.uri} -r ${version-info.scm.commit}; 
> compiled by '${user.name}' on ${version-info.build.time}
> STARTUP_MSG:   java = 1.6.0_33
> While some data nodes with the same binary show the correct version 
> information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-01-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565623#comment-13565623
 ] 

Steve Loughran commented on HADOOP-9258:


Tom, the S3 failures are bugs there; the rename one is more serious, as it 
will lose data (after the rename, the parent dir is deleted).

I don't know enough about the code there to fix it fast -can you do it? FWIW, 
the swift:// test for child-ness was trivial:

{code}
  public static boolean isChildOf(SwiftObjectPath parent,
  SwiftObjectPath possibleChild) {
return possibleChild.getObject().startsWith(parent.getObject() + "/");
  }
{code}

I'm not going to do the Local FS test with this; once this is in, I plan to 
write some JUnit 4 tests alongside this contract to be even more rigorous.

> Add stricter tests to FileSystemContractTestBase
> 
>
> Key: HADOOP-9258
> URL: https://issues.apache.org/jira/browse/HADOOP-9258
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.1.1, 2.0.3-alpha
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9528-2.patch, HADOOP-9528.patch
>
>
> The File System Contract contains implicit assumptions that aren't checked in 
> the contract test base. Add more tests to define the contract's assumptions 
> more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-6688) FileSystem.delete(...) implementations should not throw FileNotFoundException

2013-01-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565602#comment-13565602
 ] 

Steve Loughran commented on HADOOP-6688:


I agree with Danny -it should be a skip. If a file/subdir has gone away during 
a delete, that is not a failure, as the outcome of the delete operation "the 
subtree is deleted" is still the same.

The lack of atomicity of recursive delete on blobstores is more dangerous if a 
recursive delete commences while a file is being written to a path underneath 
it. It may be that the new file is created, while the delete removes the 
entries that represent the directories above it.
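
The "skip" behaviour being argued for could be sketched like this; the {{Store}} interface and all names here are hypothetical, not the actual FileSystem API:

```java
// Hedged sketch: during a recursive delete, an entry that has already
// vanished is treated as success, since the goal "the subtree is gone"
// still holds. Store is a stand-in for the real filesystem client.
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.List;

public class DeleteSketch {
    public interface Store {
        List<String> list(String dir) throws IOException;
        void remove(String path) throws IOException;
    }

    public static void deleteAll(Store store, String dir) throws IOException {
        for (String path : store.list(dir)) {
            try {
                store.remove(path);
            } catch (FileNotFoundException gone) {
                // Another actor deleted it first; the outcome is the
                // same, so swallow the exception and continue.
            }
        }
    }
}
```

Other IOExceptions still propagate; only the already-deleted case is downgraded to a no-op.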

> FileSystem.delete(...) implementations should not throw FileNotFoundException
> -
>
> Key: HADOOP-6688
> URL: https://issues.apache.org/jira/browse/HADOOP-6688
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3
>Affects Versions: 0.20.2
> Environment: Amazon EC2/S3
>Reporter: Danny Leshem
>Priority: Minor
>
> S3FileSystem.delete(Path path, boolean recursive) may fail and throw a 
> FileNotFoundException if a directory is being deleted while at the same time 
> some of its files are deleted in the background.
> This is definitely not the expected behavior of a delete method. If one of 
> the to-be-deleted files is found missing, the method should not fail and 
> simply continue. This is true for the general contract of FileSystem.delete, 
> and also for its various implementations: RawLocalFileSystem (and 
> specifically FileUtil.fullyDelete) exhibits the same problem.
> The fix is to silently catch and ignore FileNotFoundExceptions in delete 
> loops. This can very easily be unit-tested, at least for RawLocalFileSystem.
> The reason this issue bothers me is that the cleanup part of a long (Mahout) 
> MR job inconsistently fails for me, and I think this is the root problem. The 
> log shows:
> {code}
> java.io.FileNotFoundException: 
> s3://S3-BUCKET/tmp/0008E25BF7554CA9/2521362836721872/DistributedMatrix.times.outputVector/_temporary/_attempt_201004061215_0092_r_02_0/part-2:
>  No such file or directory.
>   at 
> org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:334)
>   at 
> org.apache.hadoop.fs.s3.S3FileSystem.listStatus(S3FileSystem.java:193)
>   at org.apache.hadoop.fs.s3.S3FileSystem.delete(S3FileSystem.java:303)
>   at org.apache.hadoop.fs.s3.S3FileSystem.delete(S3FileSystem.java:312)
>   at 
> org.apache.hadoop.mapred.FileOutputCommitter.cleanupJob(FileOutputCommitter.java:64)
>   at 
> org.apache.hadoop.mapred.OutputCommitter.cleanupJob(OutputCommitter.java:135)
>   at org.apache.hadoop.mapred.Task.runJobCleanupTask(Task.java:826)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:292)
>   at org.apache.hadoop.mapred.Child.main(Child.java:170)
> {code}
> (similar errors are displayed for ReduceTask.run)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-6208) Block loss in S3FS due to S3 inconsistency on file rename

2013-01-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565597#comment-13565597
 ] 

Steve Loughran commented on HADOOP-6208:


# what's the status of this?
# which s3 endpoint was being used to observe the problem -and to test that the 
changes worked? US default or something else?

> Block loss in S3FS due to S3 inconsistency on file rename
> -
>
> Key: HADOOP-6208
> URL: https://issues.apache.org/jira/browse/HADOOP-6208
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.20.0, 0.20.1
> Environment: Ubuntu Linux 8.04 on EC2, Mac OS X 10.5, likely to 
> affect any Hadoop environment
>Reporter: Bradley Buda
>Assignee: Bradley Buda
> Attachments: HADOOP-6208.patch, S3FSConsistencyPollingTest.java, 
> S3FSConsistencyTest.java
>
>
> Under certain S3 consistency scenarios, Hadoop's S3FileSystem can 'truncate' 
> files, especially when writing reduce outputs.  We've noticed this at 
> tracksimple where we use the S3FS as the direct input and output of our 
> MapReduce jobs.  The symptom of this problem is a file in the filesystem that 
> is an exact multiple of the FS block size - exactly 32MB, 64MB, 96MB, etc. in 
> length.
> The issue appears to be caused by renaming a file that has recently been 
> written, and getting a stale INode read from S3.  When a reducer is writing 
> job output to the S3FS, the normal series of S3 key writes for a 3-block file 
> looks something like this:
> Task Output:
> 1) Write the first block (block_99)
> 2) Write an INode 
> (/myjob/_temporary/_attempt_200907142159_0306_r_000133_0/part-00133.gz) 
> containing [block_99]
> 3) Write the second block (block_81)
> 4) Rewrite the INode with new contents [block_99, block_81]
> 5) Write the last block (block_-101)
> 6) Rewrite the INode with the final contents [block_99, block_81, block_-101]
> Copy Output to Final Location (ReduceTask#copyOutput):
> 1) Read the INode contents from 
> /myjob/_temporary/_attempt_200907142159_0306_r_000133_0/part-00133.gz, which 
> gives [block_99, block_81, block_-101]
> 2) Write the data from #1 to the final location, /myjob/part-00133.gz
> 3) Delete the old INode 
> The output file is truncated if S3 serves a stale copy of the temporary 
> INode.  In copyOutput, step 1 above, it is possible for S3 to return a 
> version of the temporary INode that contains just [block_99, block_81].  In 
> this case, we write this new data to the final output location, and 'lose' 
> block_-101 in the process.  Since we then delete the temporary INode, we've 
> lost all references to the final block of this file and it's orphaned in the 
> S3 bucket.
> This type of consistency error is infrequent but not impossible. We've 
> observed these failures about once a week for one of our large jobs which 
> runs daily and has 200 reduce outputs; so we're seeing an error rate of 
> something like 0.07% per reduce.
> These kind of errors are generally difficult to handle in a system like S3.  
> We have a few ideas about how to fix this:
> 1) HACK! Sleep during S3OutputStream#close or #flush to wait for S3 to catch 
> up and make these less likely.
> 2) Poll for updated MD5 or INode data in Jets3tFileSystemStore#storeINode 
> until S3 says the INode contents are the same as our local copy.  This could 
> be a config option - "fs.s3.verifyInodeWrites" or something like that.
> 3) Cache INode contents in-process, so we don't have to go back to S3 to ask 
> for the current version of an INode.
> 4) Only write INodes once, when the output stream is closed.  This would 
> basically make S3OutputStream#flush() a no-op.
> 5) Modify the S3FS to somehow version INodes (unclear how we would do this, 
> need some design work).
> 6) Avoid using the S3FS for temporary task attempt files.
> 7) Avoid using the S3FS completely.
> We wanted to get some guidance from the community before we went down any of 
> these paths.  Has anyone seen this issue?  Any other suggested workarounds?  
> We at tracksimple are willing to invest some time in fixing this and (of 
> course) contributing our fix back, but we wanted to get an 'ack' from others 
> before we try anything crazy :-).
> I've attached a test app if anyone wants to try and reproduce this 
> themselves.  It takes a while to run (depending on the 'weather' in S3 right 
> now), but should eventually detect a consistency 'error' that manifests 
> itself as a truncated file.
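> Idea (2) above -polling until the store returns the content just written- 
> could be sketched as follows; the {{Store}} interface and all names are 
> hypothetical, not the Jets3t classes:

```java
// Illustrative sketch of a verify-after-write poll: re-read the key
// until the store serves bytes matching the local copy, or give up
// after a bounded number of attempts.
import java.util.Arrays;

public class VerifyWriteSketch {
    public interface Store {
        byte[] read(String key);
    }

    public static boolean pollUntilConsistent(Store store, String key,
                                              byte[] expected,
                                              int maxAttempts,
                                              long sleepMillis)
            throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (Arrays.equals(expected, store.read(key))) {
                return true; // the store now serves the data we wrote
            }
            Thread.sleep(sleepMillis); // back off before re-reading
        }
        return false; // still serving a stale copy after all attempts
    }
}
```

> This only narrows the window rather than closing it, since the read that 
> observed fresh data does not guarantee the next read will.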

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9101) make s3n NativeFileSystemStore interface public instead of package-private

2013-01-29 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9101.


Resolution: Won't Fix

Wontfix -there's enough of a difference between Swift and S3 that I don't see 
that this would work.

> make s3n NativeFileSystemStore interface public instead of package-private
> --
>
> Key: HADOOP-9101
> URL: https://issues.apache.org/jira/browse/HADOOP-9101
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Trivial
>   Original Estimate: 0.25h
>  Remaining Estimate: 0.25h
>
> It would be easier to implement new blockstore filesystems if the 
> {{NativeFileSystemStore}} and dependent classes in the 
> {{org.apache.hadoop.fs.s3native}} package were public -currently you need to 
> put them into the s3 directory.
> They could be made public with the appropriate scope attribute. Internal?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9190) packaging docs is broken

2013-01-29 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9190:
--

Fix Version/s: 0.23.7
   2.0.3-alpha

I merged this to branch-2 and branch-0.23

> packaging docs is broken
> 
>
> Key: HADOOP-9190
> URL: https://issues.apache.org/jira/browse/HADOOP-9190
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Thomas Graves
>Assignee: Andy Isaacson
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.7
>
> Attachments: hadoop9190-1.txt, hadoop9190.txt
>
>
> It looks like after the docs got converted to apt format in HADOOP-8427, mvn 
> site package -Pdist,docs no longer works. If you run mvn site or mvn 
> site:stage by itself they work fine; it's when you go to package it that it 
> breaks.
> The error is with broken links, here is one of them:
> <broken-links>
>   <link 
> message="hadoop-common-project/hadoop-common/target/docs-src/src/documentation/content/xdocs/HttpAuthentication.xml
>  (No such file or directory)" uri="HttpAuthentication.html">
> 
> 
> 
> 
> 
> 
> 
>   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2013-01-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565421#comment-13565421
 ] 

Hadoop QA commented on HADOOP-9124:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566975/HADOOP-9124.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2113//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2113//console

This message is automatically generated.

> SortedMapWritable violates contract of Map interface for equals() and 
> hashCode()
> 
>
> Key: HADOOP-9124
> URL: https://issues.apache.org/jira/browse/HADOOP-9124
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.0.2-alpha
>Reporter: Patrick Hunt
>Priority: Minor
> Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
> HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
> HADOOP-9124.patch, HADOOP-9124.patch
>
>
> This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
> MRUNIT-158, specifically 
> https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
> --
> o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
> does not define an implementation of the equals() or hashCode() methods; 
> instead the default implementations in java.lang.Object are used.
> This violates the contract of the Map interface which defines different 
> behaviour for equals() and hashCode() than Object does. More information 
> here: 
> http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
> The practical consequence is that SortedMapWritables containing equal entries 
> cannot be compared properly. We were bitten by this when trying to write an 
> MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
> test the equality of the expected and actual MapWritable objects.
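> A fix of the kind described would implement equals() and hashCode() per the 
> java.util.Map contract by delegating to the backing map, as 
> java.util.AbstractMap does. A hedged, self-contained sketch (the class name 
> is illustrative, not the SortedMapWritable patch itself):

```java
// Sketch: Map-contract-compliant equals()/hashCode() via delegation
// to the backing TreeMap.
import java.util.TreeMap;

public class SortedMapSketch {
    private final TreeMap<String, String> instance = new TreeMap<>();

    public void put(String key, String value) { instance.put(key, value); }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof SortedMapSketch)) return false;
        // Map.equals is defined as equality of the two entry sets.
        return instance.equals(((SortedMapSketch) obj).instance);
    }

    @Override
    public int hashCode() {
        // Map.hashCode is defined as the sum of the entries' hash codes,
        // so equal maps are guaranteed equal hash codes.
        return instance.hashCode();
    }
}
```

> With this, two instances holding equal entries compare equal, which is what 
> a test driver comparing expected and actual outputs relies on.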

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2013-01-29 Thread Surenkumar Nihalani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surenkumar Nihalani updated HADOOP-9124:


Attachment: HADOOP-9124.patch

> SortedMapWritable violates contract of Map interface for equals() and 
> hashCode()
> 
>
> Key: HADOOP-9124
> URL: https://issues.apache.org/jira/browse/HADOOP-9124
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.0.2-alpha
>Reporter: Patrick Hunt
>Priority: Minor
> Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
> HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
> HADOOP-9124.patch, HADOOP-9124.patch
>
>



[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2013-01-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565398#comment-13565398
 ] 

Hadoop QA commented on HADOOP-9124:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566969/HADOOP-9124.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2112//console

This message is automatically generated.

> SortedMapWritable violates contract of Map interface for equals() and 
> hashCode()
> 
>
> Key: HADOOP-9124
> URL: https://issues.apache.org/jira/browse/HADOOP-9124
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.0.2-alpha
>Reporter: Patrick Hunt
>Priority: Minor
> Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
> HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
> HADOOP-9124.patch
>
>



[jira] [Updated] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2013-01-29 Thread Surenkumar Nihalani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surenkumar Nihalani updated HADOOP-9124:


Attachment: HADOOP-9124.patch

Sorry about that [~tomwhite]. Updated.

> SortedMapWritable violates contract of Map interface for equals() and 
> hashCode()
> 
>
> Key: HADOOP-9124
> URL: https://issues.apache.org/jira/browse/HADOOP-9124
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.0.2-alpha
>Reporter: Patrick Hunt
>Priority: Minor
> Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
> HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
> HADOOP-9124.patch
>
>



[jira] [Commented] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-01-29 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565394#comment-13565394
 ] 

Tom White commented on HADOOP-9258:
---

Steve, good to see more tests. The S3 failures sound like bugs to me, so it 
should be possible to fix them without breaking applications. Do you agree?

Are you planning on including the local FS too?

> Add stricter tests to FileSystemContractTestBase
> 
>
> Key: HADOOP-9258
> URL: https://issues.apache.org/jira/browse/HADOOP-9258
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.1.1, 2.0.3-alpha
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9528-2.patch, HADOOP-9528.patch
>
>
> The File System Contract contains implicit assumptions that aren't checked in 
> the contract test base. Add more tests to define the contract's assumptions 
> more rigorously for the filesystems covered by this suite (not Local, BTW)



[jira] [Commented] (HADOOP-9247) parametrize Clover "generateXxx" properties to make them re-definable via -D in mvn calls

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565382#comment-13565382
 ] 

Hudson commented on HADOOP-9247:


Integrated in Hadoop-Mapreduce-trunk #1328 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1328/])
Move HADOOP-9247 to release 0.23.7 section in CHANGES.txt (Revision 1439539)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439539
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> parametrize Clover "generateXxx" properties to make them re-definable via -D 
> in mvn calls
> -
>
> Key: HADOOP-9247
> URL: https://issues.apache.org/jira/browse/HADOOP-9247
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Fix For: 2.0.3-alpha, 0.23.7
>
> Attachments: HADOOP-9247-trunk.patch
>
>
> The suggested parametrization is needed so that these properties can be 
> re-defined with "-Dk=v" maven options.
> For some reason the expressions declared in clover 
> docs like "${maven.clover.generateHtml}" (see 
> http://docs.atlassian.com/maven-clover2-plugin/3.0.2/clover-mojo.html) do not 
> work in that way. 
> However, the parametrized properties are confirmed to work: e.g. when the pom 
> references ${cloverGenHtml}, passing -DcloverGenHtml=false switches off the 
> HTML generation.
> The default values provided here exactly correspond to Clover defaults, so
> the behavior is 100% backwards compatible.
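The pattern being described — routing a plugin setting through a user-overridable property — can be sketched as a hypothetical pom.xml fragment. The property name follows the comment above and the element names follow the Clover plugin docs; the actual patch may differ.

```xml
<properties>
  <!-- Default mirrors Clover's own default, so behavior is unchanged
       unless overridden on the command line, e.g. -DcloverGenHtml=false -->
  <cloverGenHtml>true</cloverGenHtml>
</properties>

<plugin>
  <groupId>com.atlassian.maven.plugins</groupId>
  <artifactId>maven-clover2-plugin</artifactId>
  <configuration>
    <!-- A property reference can be overridden via -D; a literal cannot. -->
    <generateHtml>${cloverGenHtml}</generateHtml>
  </configuration>
</plugin>
```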



[jira] [Commented] (HADOOP-9246) Execution phase for hadoop-maven-plugin should be process-resources

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565380#comment-13565380
 ] 

Hudson commented on HADOOP-9246:


Integrated in Hadoop-Mapreduce-trunk #1328 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1328/])
HADOOP-9246. Execution phase for hadoop-maven-plugin should be 
process-resources. Contributed by Karthik Kambatla and Chris Nauroth (Revision 
1439620)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439620
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo/VersionInfoMojo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml


> Execution phase for hadoop-maven-plugin should be process-resources
> ---
>
> Key: HADOOP-9246
> URL: https://issues.apache.org/jira/browse/HADOOP-9246
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, trunk-win
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 3.0.0
>
> Attachments: HADOOP-9246.2.patch, hadoop-9246.patch, hadoop-9246.patch
>
>
> Per discussion on HADOOP-9245, the execution phase of hadoop-maven-plugin 
> should be _process-resources_ and not _compile_.
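The change amounts to rebinding the plugin execution to an earlier lifecycle phase — roughly the following pom.xml fragment (a sketch; the execution id and goal name are taken from the file list above, but details may differ from the committed patch).

```xml
<plugin>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-maven-plugins</artifactId>
  <executions>
    <execution>
      <id>version-info</id>
      <!-- bind earlier in the lifecycle: was "compile" -->
      <phase>process-resources</phase>
      <goals>
        <goal>version-info</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Binding at process-resources ensures the generated version properties exist before resources are copied and compilation begins.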



[jira] [Commented] (HADOOP-9190) packaging docs is broken

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565381#comment-13565381
 ] 

Hudson commented on HADOOP-9190:


Integrated in Hadoop-Mapreduce-trunk #1328 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1328/])
HADOOP-9190. packaging docs is broken. Contributed by Andy Isaacson. 
(Revision 1439796)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439796
Files : 
* /hadoop/common/trunk/BUILDING.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/forrest.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/forrest.properties
* /hadoop/common/trunk/hadoop-project-dist/pom.xml


> packaging docs is broken
> 
>
> Key: HADOOP-9190
> URL: https://issues.apache.org/jira/browse/HADOOP-9190
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Thomas Graves
>Assignee: Andy Isaacson
> Fix For: 3.0.0
>
> Attachments: hadoop9190-1.txt, hadoop9190.txt
>
>
> It looks like after the docs got converted to apt format in HADOOP-8427, mvn 
> site package -Pdist,docs no longer works. If you run mvn site or mvn 
> site:stage by itself they work fine; it's when you go to package it that it 
> breaks.
> The error is with broken links, here is one of them:
> <broken-links>
>message="hadoop-common-project/hadoop-common/target/docs-src/src/documentation/content/xdocs/HttpAuthentication.xml
>  (No such file or directory)" uri="HttpAuthentication.html">



[jira] [Commented] (HADOOP-9255) relnotes.py missing last jira

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565379#comment-13565379
 ] 

Hudson commented on HADOOP-9255:


Integrated in Hadoop-Mapreduce-trunk #1328 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1328/])
HADOOP-9255. relnotes.py missing last jira (tgraves) (Revision 1439588)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439588
Files : 
* /hadoop/common/trunk/dev-support/relnotes.py
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> relnotes.py missing last jira
> -
>
> Key: HADOOP-9255
> URL: https://issues.apache.org/jira/browse/HADOOP-9255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.23.6
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.6, 0.23.7
>
> Attachments: HADOOP-9255.patch
>
>
> generating the release notes for 0.23.6 via "python 
> ./dev-support/relnotes.py -v 0.23.6" misses the last jira that was 
> committed. In this case it was YARN-354.



[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565377#comment-13565377
 ] 

Hudson commented on HADOOP-9241:


Integrated in Hadoop-Mapreduce-trunk #1328 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1328/])
Revert HADOOP-9241 properly this time. Left the core-default.xml in 
previous commit. (Revision 1439750)
Reverting HADOOP-9241. To be fixed and reviewed. (Revision 1439748)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439750
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439748
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java


> DU refresh interval is not configurable
> ---
>
> Key: HADOOP-9241
> URL: https://issues.apache.org/jira/browse/HADOOP-9241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Attachments: HADOOP-9241.patch, HADOOP-9241.patch
>
>
> While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
> isn't. We should make both configurable.
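Making such an interval configurable usually means reading it from configuration with a sane default. Below is a hedged, self-contained sketch using plain java.util.Properties rather than Hadoop's Configuration class; the key name "fs.du.interval" and the 10-minute default are assumptions for illustration, not necessarily what the (since-reverted) patch used.

```java
import java.util.Properties;

public class RefreshIntervalDemo {
    // Hypothetical key name for illustration only.
    static final String DU_INTERVAL_KEY = "fs.du.interval";
    static final long DEFAULT_INTERVAL_MS = 600_000L; // assumed default: 10 minutes

    /** Returns the configured refresh interval in ms, falling back to the default. */
    static long refreshInterval(Properties conf) {
        String v = conf.getProperty(DU_INTERVAL_KEY);
        return (v == null) ? DEFAULT_INTERVAL_MS : Long.parseLong(v);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(refreshInterval(conf));   // default applies
        conf.setProperty(DU_INTERVAL_KEY, "30000");
        System.out.println(refreshInterval(conf));   // override applies
    }
}
```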



[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2013-01-29 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565369#comment-13565369
 ] 

Tom White commented on HADOOP-9124:
---

What about my comment above about mirroring the change from HADOOP-7153?

> SortedMapWritable violates contract of Map interface for equals() and 
> hashCode()
> 
>
> Key: HADOOP-9124
> URL: https://issues.apache.org/jira/browse/HADOOP-9124
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.0.2-alpha
>Reporter: Patrick Hunt
>Priority: Minor
> Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
> HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch
>
>



[jira] [Commented] (HADOOP-9247) parametrize Clover "generateXxx" properties to make them re-definable via -D in mvn calls

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565338#comment-13565338
 ] 

Hudson commented on HADOOP-9247:


Integrated in Hadoop-Hdfs-trunk #1300 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1300/])
Move HADOOP-9247 to release 0.23.7 section in CHANGES.txt (Revision 1439539)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439539
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> parametrize Clover "generateXxx" properties to make them re-definable via -D 
> in mvn calls
> -
>
> Key: HADOOP-9247
> URL: https://issues.apache.org/jira/browse/HADOOP-9247
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Fix For: 2.0.3-alpha, 0.23.7
>
> Attachments: HADOOP-9247-trunk.patch
>
>



[jira] [Commented] (HADOOP-9246) Execution phase for hadoop-maven-plugin should be process-resources

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565336#comment-13565336
 ] 

Hudson commented on HADOOP-9246:


Integrated in Hadoop-Hdfs-trunk #1300 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1300/])
HADOOP-9246. Execution phase for hadoop-maven-plugin should be 
process-resources. Contributed by Karthik Kambatla and Chris Nauroth (Revision 
1439620)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439620
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo/VersionInfoMojo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml


> Execution phase for hadoop-maven-plugin should be process-resources
> ---
>
> Key: HADOOP-9246
> URL: https://issues.apache.org/jira/browse/HADOOP-9246
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, trunk-win
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 3.0.0
>
> Attachments: HADOOP-9246.2.patch, hadoop-9246.patch, hadoop-9246.patch
>
>



[jira] [Commented] (HADOOP-9190) packaging docs is broken

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565337#comment-13565337
 ] 

Hudson commented on HADOOP-9190:


Integrated in Hadoop-Hdfs-trunk #1300 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1300/])
HADOOP-9190. packaging docs is broken. Contributed by Andy Isaacson. 
(Revision 1439796)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439796
Files : 
* /hadoop/common/trunk/BUILDING.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/forrest.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/forrest.properties
* /hadoop/common/trunk/hadoop-project-dist/pom.xml


> packaging docs is broken
> 
>
> Key: HADOOP-9190
> URL: https://issues.apache.org/jira/browse/HADOOP-9190
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Thomas Graves
>Assignee: Andy Isaacson
> Fix For: 3.0.0
>
> Attachments: hadoop9190-1.txt, hadoop9190.txt
>
>



[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565333#comment-13565333
 ] 

Hudson commented on HADOOP-9241:


Integrated in Hadoop-Hdfs-trunk #1300 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1300/])
Revert HADOOP-9241 properly this time. Left the core-default.xml in 
previous commit. (Revision 1439750)
Reverting HADOOP-9241. To be fixed and reviewed. (Revision 1439748)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439750
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439748
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java


> DU refresh interval is not configurable
> ---
>
> Key: HADOOP-9241
> URL: https://issues.apache.org/jira/browse/HADOOP-9241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Attachments: HADOOP-9241.patch, HADOOP-9241.patch
>
>



[jira] [Commented] (HADOOP-9255) relnotes.py missing last jira

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565335#comment-13565335
 ] 

Hudson commented on HADOOP-9255:


Integrated in Hadoop-Hdfs-trunk #1300 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1300/])
HADOOP-9255. relnotes.py missing last jira (tgraves) (Revision 1439588)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439588
Files : 
* /hadoop/common/trunk/dev-support/relnotes.py
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> relnotes.py missing last jira
> -
>
> Key: HADOOP-9255
> URL: https://issues.apache.org/jira/browse/HADOOP-9255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.23.6
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.6, 0.23.7
>
> Attachments: HADOOP-9255.patch
>
>



[jira] [Commented] (HADOOP-9247) parametrize Clover "generateXxx" properties to make them re-definable via -D in mvn calls

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565319#comment-13565319
 ] 

Hudson commented on HADOOP-9247:


Integrated in Hadoop-Hdfs-0.23-Build #509 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/509/])
HADOOP-9247. Merge r1438698 from trunk (Revision 1439533)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439533
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/branches/branch-0.23/pom.xml


> parametrize Clover "generateXxx" properties to make them re-definable via -D 
> in mvn calls
> -
>
> Key: HADOOP-9247
> URL: https://issues.apache.org/jira/browse/HADOOP-9247
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Fix For: 2.0.3-alpha, 0.23.7
>
> Attachments: HADOOP-9247-trunk.patch
>
>



[jira] [Commented] (HADOOP-9255) relnotes.py missing last jira

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565318#comment-13565318
 ] 

Hudson commented on HADOOP-9255:


Integrated in Hadoop-Hdfs-0.23-Build #509 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/509/])
HADOOP-9255. relnotes.py missing last jira (tgraves) (Revision 1439604)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439604
Files : 
* /hadoop/common/branches/branch-0.23/dev-support/relnotes.py
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt


> relnotes.py missing last jira
> -
>
> Key: HADOOP-9255
> URL: https://issues.apache.org/jira/browse/HADOOP-9255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.23.6
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.6, 0.23.7
>
> Attachments: HADOOP-9255.patch
>
>



[jira] [Updated] (HADOOP-9235) Avoid Clover instrumentation of classes in module "hadoop-maven-plugins"

2013-01-29 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9235:
---

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Duplicate of HADOOP-9249, which suggests a better fix.

> Avoid Clover instrumentation of classes in module "hadoop-maven-plugins" 
> -
>
> Key: HADOOP-9235
> URL: https://issues.apache.org/jira/browse/HADOOP-9235
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9235-trunk.patch
>
>
> The module "hadoop-maven-plugins" was introduced by fix HADOOP-8924.
> After that fix the full build with Clover instrumentation fails because 
> clover instruments all the modules, including classes from 
> "hadoop-maven-plugins", which are executed by maven without having the clover 
> jar in the classpath.
> So, the following build sequence fails being executed in the root folder of 
> the source tree:
> mvn clean install -DskipTests
> mvn -e -X install -Pclover -DskipTests
> ...
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info 
> (version-info) on project hadoop-common: Execution version-info of goal 
> org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info failed: A 
> required class was missing while executing 
> org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info: 
> com_cenqua_clover/CoverageRecorder
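The failure above comes from Clover-instrumented classes in "hadoop-maven-plugins" running inside Maven without the Clover runtime on the classpath. As a sketch only (the preferred fix landed via HADOOP-9249), one way to exempt a single module is to disable Clover in that module's pom.xml:

```xml
<!-- Sketch only: skip Clover instrumentation for this module so its
     classes can run inside Maven without clover.jar on the classpath.
     HADOOP-9249 implements the preferred variant of this fix. -->
<properties>
  <maven.clover.skip>true</maven.clover.skip>
</properties>
```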



[jira] [Resolved] (HADOOP-9256) A number of Yarn and Mapreduce tests fail due to not substituted values in *-version-info.properties

2013-01-29 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky resolved HADOOP-9256.


Resolution: Duplicate

Duplicate of YARN-361.

> A number of Yarn and Mapreduce tests fail due to not substituted values in 
> *-version-info.properties
> 
>
> Key: HADOOP-9256
> URL: https://issues.apache.org/jira/browse/HADOOP-9256
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ivan A. Veselovsky
>
> Newly added plugin VersionInfoMojo should calculate properties (like time, 
> scm branch, etc.), and after that the resource plugin should make 
> replacements in the following files: 
> ./hadoop-common-project/hadoop-common/target/classes/common-version-info.properties
> ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/yarn-version-info.properties
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
> which are read later at test run-time. 
> But for some reason it does not do that.
> As a result, a bunch of tests are permanently failing because the code of 
> these tests is verifying the corresponding property files for correctness:
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHS
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSSlash
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSDefault
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSXML
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfo
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoSlash
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoDefault
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoXML
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNode
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeSlash
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeDefault
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfo
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoSlash
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoDefault
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testSingleNodesXML
> org.apache.hadoop.yarn.server.resourcemanager.security.TestApplicationTokens.testTokenExpiry
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoXML
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testCluster
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterSlash
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterDefault
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfo
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoSlash
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoDefault
> Some of these failures can be observed in Apache builds, e.g.: 
> https://builds.apache.org/view/Hadoop/job/PreCommit-YARN-Build/370/testReport/
> As far as I can see, the substitution does not happen because the 
> corresponding properties are set by the VersionInfoMojo plugin *after* the 
> resource plugin task is executed.
> Workaround: manually change files 
> ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
> and set arbitrary reasonable non-${} string parameters as the values.
> After that the tests pass.
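The failing tests essentially check that no literal ${...} placeholders survive in the generated properties; a minimal stand-alone check of that invariant (illustrative, not the actual Hadoop test code) could look like:

```python
# Illustrative check: a *-version-info.properties file is usable only
# if every value was substituted, i.e. no literal ${...} placeholders
# remain in the filtered output.
import re

def has_unsubstituted(text):
    # Matches any un-filtered Maven-style placeholder like ${pom.version}.
    return bool(re.search(r"\$\{[^}]+\}", text))

assert has_unsubstituted("version=${pom.version}")
assert not has_unsubstituted("version=3.0.0-SNAPSHOT")
```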



[jira] [Commented] (HADOOP-9247) parametrize Clover "generateXxx" properties to make them re-definable via -D in mvn calls

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565270#comment-13565270
 ] 

Hudson commented on HADOOP-9247:


Integrated in Hadoop-Yarn-trunk #111 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/111/])
Move HADOOP-9247 to release 0.23.7 section in CHANGES.txt (Revision 1439539)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439539
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> parametrize Clover "generateXxx" properties to make them re-definable via -D 
> in mvn calls
> -
>
> Key: HADOOP-9247
> URL: https://issues.apache.org/jira/browse/HADOOP-9247
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Fix For: 2.0.3-alpha, 0.23.7
>
> Attachments: HADOOP-9247-trunk.patch
>
>
> The suggested parametrization is needed so that these properties 
> can be re-defined with "-Dk=v" maven options.
> For some reason the expressions declared in clover 
> docs like "${maven.clover.generateHtml}" (see 
> http://docs.atlassian.com/maven-clover2-plugin/3.0.2/clover-mojo.html) do not 
> work in that way. 
> However, the parametrized properties are confirmed to work: e.g. 
> -DcloverGenHtml=false switches off the HTML generation when the property is 
> referenced as ${cloverGenHtml}.
> The default values provided here exactly correspond to Clover defaults, so
> the behavior is 100% backwards compatible.
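The description can be illustrated with a sketch of what such a parametrized property looks like in a POM (property name taken from the comment above; the exact wiring in the attached patch may differ):

```xml
<!-- Sketch: the property defaults to Clover's own default, so builds
     behave identically unless -DcloverGenHtml=false is passed. -->
<properties>
  <cloverGenHtml>true</cloverGenHtml>
</properties>
<!-- ...and the Clover plugin configuration references it: -->
<configuration>
  <generateHtml>${cloverGenHtml}</generateHtml>
</configuration>
```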



[jira] [Commented] (HADOOP-9190) packaging docs is broken

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565269#comment-13565269
 ] 

Hudson commented on HADOOP-9190:


Integrated in Hadoop-Yarn-trunk #111 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/111/])
HADOOP-9190. packaging docs is broken. Contributed by Andy Isaacson. 
(Revision 1439796)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439796
Files : 
* /hadoop/common/trunk/BUILDING.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/forrest.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/forrest.properties
* /hadoop/common/trunk/hadoop-project-dist/pom.xml


> packaging docs is broken
> 
>
> Key: HADOOP-9190
> URL: https://issues.apache.org/jira/browse/HADOOP-9190
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Thomas Graves
>Assignee: Andy Isaacson
> Fix For: 3.0.0
>
> Attachments: hadoop9190-1.txt, hadoop9190.txt
>
>
> It looks like after the docs got converted to APT format in HADOOP-8427, mvn 
> site package -Pdist,docs no longer works. If you run mvn site or mvn 
> site:stage by itself they work fine; it's when you go to package it that it 
> breaks.
> The error is with broken links, here is one of them:
> broken-links>
>message="hadoop-common-project/hadoop-common/target/docs-src/src/documentation/content/xdocs/HttpAuthentication.xml
>  (No such file or directory)" uri="HttpAuthentication.html">
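For context, the broken-links report quoted above is XML that can be scanned mechanically; a small illustrative scan (element and attribute names inferred from the quoted fragment) might look like:

```python
# Illustrative only: scan a Forrest-style broken-links report for the
# failing URIs. The element/attribute names follow the quoted fragment.
import xml.etree.ElementTree as ET

report = """<broken-links>
  <link message="HttpAuthentication.xml (No such file or directory)"
        uri="HttpAuthentication.html"/>
</broken-links>"""

uris = [link.get("uri") for link in ET.fromstring(report).iter("link")]
assert uris == ["HttpAuthentication.html"]
```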



[jira] [Commented] (HADOOP-9246) Execution phase for hadoop-maven-plugin should be process-resources

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565268#comment-13565268
 ] 

Hudson commented on HADOOP-9246:


Integrated in Hadoop-Yarn-trunk #111 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/111/])
HADOOP-9246. Execution phase for hadoop-maven-plugin should be 
process-resources. Contributed by Karthik Kambatla and Chris Nauroth (Revision 
1439620)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439620
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo/VersionInfoMojo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml


> Execution phase for hadoop-maven-plugin should be process-resources
> ---
>
> Key: HADOOP-9246
> URL: https://issues.apache.org/jira/browse/HADOOP-9246
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, trunk-win
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 3.0.0
>
> Attachments: HADOOP-9246.2.patch, hadoop-9246.patch, hadoop-9246.patch
>
>
> Per discussion on HADOOP-9245, the execution phase of hadoop-maven-plugin 
> should be _process-resources_ and not _compile_.
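As a sketch of the agreed change (goal and execution id appear elsewhere in this digest; the exact POM contents are in the attached patch), the plugin execution binds to process-resources rather than compile:

```xml
<execution>
  <id>version-info</id>
  <!-- Bind before resource filtering so the version properties
       already exist when *-version-info.properties is filtered. -->
  <phase>process-resources</phase>
  <goals>
    <goal>version-info</goal>
  </goals>
</execution>
```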



[jira] [Commented] (HADOOP-9255) relnotes.py missing last jira

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565267#comment-13565267
 ] 

Hudson commented on HADOOP-9255:


Integrated in Hadoop-Yarn-trunk #111 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/111/])
HADOOP-9255. relnotes.py missing last jira (tgraves) (Revision 1439588)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439588
Files : 
* /hadoop/common/trunk/dev-support/relnotes.py
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> relnotes.py missing last jira
> -
>
> Key: HADOOP-9255
> URL: https://issues.apache.org/jira/browse/HADOOP-9255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.23.6
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.6, 0.23.7
>
> Attachments: HADOOP-9255.patch
>
>
> Generating the release notes for 0.23.6 via " python 
> ./dev-support/relnotes.py -v 0.23.6 " misses the last jira that was 
> committed. In this case it was YARN-354.



[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565265#comment-13565265
 ] 

Hudson commented on HADOOP-9241:


Integrated in Hadoop-Yarn-trunk #111 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/111/])
Revert HADOOP-9241 properly this time. Left the core-default.xml in 
previous commit. (Revision 1439750)
Reverting HADOOP-9241. To be fixed and reviewed. (Revision 1439748)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439750
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439748
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java


> DU refresh interval is not configurable
> ---
>
> Key: HADOOP-9241
> URL: https://issues.apache.org/jira/browse/HADOOP-9241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Attachments: HADOOP-9241.patch, HADOOP-9241.patch
>
>
> While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
> isn't. We should make both configurable.
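The requested behavior mirrors the DF pattern: look up an interval key in the configuration and fall back to a default. A language-neutral sketch in Python (the key name and default here are made up, not Hadoop's actual configuration):

```python
# Illustrative sketch (not Hadoop code): read a refresh interval from a
# configuration mapping, falling back to a compiled-in default.
DEFAULT_INTERVAL_MS = 600000  # hypothetical default: 10 minutes

def refresh_interval(conf):
    # Both the key name "fs.du.interval.ms" and the default are made up.
    return int(conf.get("fs.du.interval.ms", DEFAULT_INTERVAL_MS))

assert refresh_interval({}) == 600000
assert refresh_interval({"fs.du.interval.ms": "30000"}) == 30000
```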



[jira] [Commented] (HADOOP-9190) packaging docs is broken

2013-01-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565219#comment-13565219
 ] 

Hudson commented on HADOOP-9190:


Integrated in Hadoop-trunk-Commit #3293 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3293/])
HADOOP-9190. packaging docs is broken. Contributed by Andy Isaacson. 
(Revision 1439796)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439796
Files : 
* /hadoop/common/trunk/BUILDING.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/forrest.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/forrest.properties
* /hadoop/common/trunk/hadoop-project-dist/pom.xml


> packaging docs is broken
> 
>
> Key: HADOOP-9190
> URL: https://issues.apache.org/jira/browse/HADOOP-9190
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Thomas Graves
>Assignee: Andy Isaacson
> Fix For: 3.0.0
>
> Attachments: hadoop9190-1.txt, hadoop9190.txt
>
>
> It looks like after the docs got converted to APT format in HADOOP-8427, mvn 
> site package -Pdist,docs no longer works. If you run mvn site or mvn 
> site:stage by itself they work fine; it's when you go to package it that it 
> breaks.
> The error is with broken links, here is one of them:
> broken-links>
>message="hadoop-common-project/hadoop-common/target/docs-src/src/documentation/content/xdocs/HttpAuthentication.xml
>  (No such file or directory)" uri="HttpAuthentication.html">



[jira] [Commented] (HADOOP-9221) Convert remaining xdocs to APT

2013-01-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565217#comment-13565217
 ] 

Hadoop QA commented on HADOOP-9221:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565572/hadoop9221-1.txt
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2111//console

This message is automatically generated.

> Convert remaining xdocs to APT
> --
>
> Key: HADOOP-9221
> URL: https://issues.apache.org/jira/browse/HADOOP-9221
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Attachments: hadoop9221-1.txt, hadoop9221.txt
>
>
> The following Forrest XML documents are still present in trunk:
> {noformat}
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/Superusers.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/deployment_layout.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/native_libraries.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/single_node_setup.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/SLG_user_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/faultinject_framework.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_editsviewer.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_imageviewer.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hftp.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/libhdfs.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
> {noformat}
> Several of them are leftover cruft, and all of them are out of date to one 
> degree or another, but it's easiest to simply convert them all to APT and 
> move forward with editing thereafter.



[jira] [Updated] (HADOOP-9221) Convert remaining xdocs to APT

2013-01-29 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-9221:
---

Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

Marking patch available so that test-patch runs.

> Convert remaining xdocs to APT
> --
>
> Key: HADOOP-9221
> URL: https://issues.apache.org/jira/browse/HADOOP-9221
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha, 3.0.0
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Attachments: hadoop9221-1.txt, hadoop9221.txt
>
>
> The following Forrest XML documents are still present in trunk:
> {noformat}
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/Superusers.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/deployment_layout.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/native_libraries.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/single_node_setup.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/SLG_user_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/faultinject_framework.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_editsviewer.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_imageviewer.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hftp.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/libhdfs.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
> {noformat}
> Several of them are leftover cruft, and all of them are out of date to one 
> degree or another, but it's easiest to simply convert them all to APT and 
> move forward with editing thereafter.



[jira] [Updated] (HADOOP-9190) packaging docs is broken

2013-01-29 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-9190:
---

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk.

Thanks a lot for the contribution, Andy.

> packaging docs is broken
> 
>
> Key: HADOOP-9190
> URL: https://issues.apache.org/jira/browse/HADOOP-9190
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Thomas Graves
>Assignee: Andy Isaacson
> Fix For: 3.0.0
>
> Attachments: hadoop9190-1.txt, hadoop9190.txt
>
>
> It looks like after the docs got converted to APT format in HADOOP-8427, mvn 
> site package -Pdist,docs no longer works. If you run mvn site or mvn 
> site:stage by itself they work fine; it's when you go to package it that it 
> breaks.
> The error is with broken links, here is one of them:
> broken-links>
>message="hadoop-common-project/hadoop-common/target/docs-src/src/documentation/content/xdocs/HttpAuthentication.xml
>  (No such file or directory)" uri="HttpAuthentication.html">



[jira] [Commented] (HADOOP-9190) packaging docs is broken

2013-01-29 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565212#comment-13565212
 ] 

Aaron T. Myers commented on HADOOP-9190:


+1, the patch looks good to me. I confirmed that the patch does indeed get the 
docs package build working again.

I'm going to commit this momentarily, and also perform the two `svn rm`s that 
Andy suggests.

> packaging docs is broken
> 
>
> Key: HADOOP-9190
> URL: https://issues.apache.org/jira/browse/HADOOP-9190
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Thomas Graves
>Assignee: Andy Isaacson
> Attachments: hadoop9190-1.txt, hadoop9190.txt
>
>
> It looks like after the docs got converted to APT format in HADOOP-8427, mvn 
> site package -Pdist,docs no longer works. If you run mvn site or mvn 
> site:stage by itself they work fine; it's when you go to package it that it 
> breaks.
> The error is with broken links, here is one of them:
> broken-links>
>message="hadoop-common-project/hadoop-common/target/docs-src/src/documentation/content/xdocs/HttpAuthentication.xml
>  (No such file or directory)" uri="HttpAuthentication.html">



[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565191#comment-13565191
 ] 

Hadoop QA commented on HADOOP-9241:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566924/HADOOP-9241.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2110//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2110//console

This message is automatically generated.

> DU refresh interval is not configurable
> ---
>
> Key: HADOOP-9241
> URL: https://issues.apache.org/jira/browse/HADOOP-9241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Attachments: HADOOP-9241.patch, HADOOP-9241.patch
>
>
> While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
> isn't. We should make both configurable.



[jira] [Updated] (HADOOP-9260) Hadoop version may be not correct when starting name node or data node

2013-01-29 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen updated HADOOP-9260:
---

Description: 
1. Check out the trunk from 
http://svn.apache.org/repos/asf/hadoop/common/trunk/ -r 1439752
2. Compile package
   m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
3. Hadoop version of compiled dist shows the following:

Hadoop ${pom.version}
Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
Compiled by ${user.name} on ${version-info.build.time}
From source with checksum ${version-info.source.md5}

And in a real cluster, the log in name node shows:

2013-01-29 15:23:42,738 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
STARTUP_MSG: 
/
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = bdpe01.sh.intel.com/10.239.47.101
STARTUP_MSG:   args = []
STARTUP_MSG:   version = ${pom.version}
STARTUP_MSG:   classpath = ...
STARTUP_MSG:   build = ${version-info.scm.uri} -r ${version-info.scm.commit}; 
compiled by '${user.name}' on ${version-info.build.time}
STARTUP_MSG:   java = 1.6.0_33

While some data nodes with the same binary show the correct version 
information.


  was:
1. Check out the trunk from 
http://svn.apache.org/repos/asf/hadoop/common/trunk/ 
2. Compile package
   m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
3. Hadoop version of compiled dist shows the following:

Hadoop ${pom.version}
Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
Compiled by ${user.name} on ${version-info.build.time}
From source with checksum ${version-info.source.md5}

And in a real cluster, the log in name node shows:

2013-01-29 15:23:42,738 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
STARTUP_MSG: 
/
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = bdpe01.sh.intel.com/10.239.47.101
STARTUP_MSG:   args = []
STARTUP_MSG:   version = ${pom.version}
STARTUP_MSG:   classpath = ...
STARTUP_MSG:   build = ${version-info.scm.uri} -r ${version-info.scm.commit}; 
compiled by '${user.name}' on ${version-info.build.time}
STARTUP_MSG:   java = 1.6.0_33

While some data nodes with the same binary show the correct version 
information.



> Hadoop version may be not correct when starting name node or data node
> --
>
> Key: HADOOP-9260
> URL: https://issues.apache.org/jira/browse/HADOOP-9260
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: trunk-win
>Reporter: Jerry Chen
>Priority: Critical
>
> 1. Check out the trunk from 
> http://svn.apache.org/repos/asf/hadoop/common/trunk/ -r 1439752
> 2. Compile package
>m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
> 3. Hadoop version of compiled dist shows the following:
> Hadoop ${pom.version}
> Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
> Compiled by ${user.name} on ${version-info.build.time}
> From source with checksum ${version-info.source.md5}
> And in a real cluster, the log in name node shows:
> 2013-01-29 15:23:42,738 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> STARTUP_MSG: 
> /
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = bdpe01.sh.intel.com/10.239.47.101
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = ${pom.version}
> STARTUP_MSG:   classpath = ...
> STARTUP_MSG:   build = ${version-info.scm.uri} -r ${version-info.scm.commit}; 
> compiled by '${user.name}' on ${version-info.build.time}
> STARTUP_MSG:   java = 1.6.0_33
> While some data nodes with the same binary show the correct version 
> information.



[jira] [Updated] (HADOOP-9260) Hadoop version may be not correct when starting name node or data node

2013-01-29 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen updated HADOOP-9260:
---

Summary: Hadoop version may be not correct when starting name node or data 
node  (was: Hadoop version commands show incorrect information on trunk)

> Hadoop version may be not correct when starting name node or data node
> --
>
> Key: HADOOP-9260
> URL: https://issues.apache.org/jira/browse/HADOOP-9260
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: trunk-win
>Reporter: Jerry Chen
>Priority: Critical
>
> 1. Check out the trunk from 
> http://svn.apache.org/repos/asf/hadoop/common/trunk/ 
> 2. Compile package
>m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
> 3. Hadoop version of compiled dist shows the following:
> Hadoop ${pom.version}
> Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
> Compiled by ${user.name} on ${version-info.build.time}
> From source with checksum ${version-info.source.md5}
> And in a real cluster, the log in name node shows:
> 2013-01-29 15:23:42,738 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> STARTUP_MSG: 
> /
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = bdpe01.sh.intel.com/10.239.47.101
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = ${pom.version}
> STARTUP_MSG:   classpath = ...
> STARTUP_MSG:   build = ${version-info.scm.uri} -r ${version-info.scm.commit}; 
> compiled by '${user.name}' on ${version-info.build.time}
> STARTUP_MSG:   java = 1.6.0_33
> While some data nodes with the same binary show the correct version
> information.



[jira] [Commented] (HADOOP-9260) Hadoop version commands show incorrect information on trunk

2013-01-29 Thread Jerry Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565176#comment-13565176
 ] 

Jerry Chen commented on HADOOP-9260:


I tried to find out why this random phenomenon happens even at runtime.

The problem lies in the class path used to load
common-version-info.properties. I verified that the
common-version-info.properties file in hadoop-common-3.0.0-SNAPSHOT.jar has
the right information, but the file was actually loaded from another copy in
hadoop-common-3.0.0-SNAPSHOT-sources.jar (when sources were packaged). The
values in the sources jar's common-version-info.properties are uninitialized
and therefore show placeholders such as ${pom.version}.

In some environments, hadoop-common-3.0.0-SNAPSHOT-sources.jar appears after
hadoop-common-3.0.0-SNAPSHOT.jar in the class path, but in others it may
appear before it, which causes the problem.

I checked that in a trunk build without this problem,
common-version-info.properties does not exist in
hadoop-common-3.0.0-SNAPSHOT-sources.jar, so that build is unaffected.
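The shadowing described above can be demonstrated with a small probe
(illustrative only; the class and method names are my own, not Hadoop code):
ClassLoader.getResources() lists every copy of a resource in class-path
order, and getResource()/getResourceAsStream() resolve to the first one, so
whichever jar comes first wins.

```java
import java.io.IOException;
import java.net.URL;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;

/** Illustrative probe (not Hadoop code): lists every class-path copy of a
 *  resource. The loader resolves getResource() to the FIRST entry, so if
 *  the sources jar precedes the binary jar, its uninitialized
 *  common-version-info.properties shadows the real one. */
public class VersionInfoProbe {
    public static List<URL> listCopies(String resourceName) throws IOException {
        List<URL> copies = new ArrayList<>();
        Enumeration<URL> urls = VersionInfoProbe.class.getClassLoader()
                .getResources(resourceName);
        while (urls.hasMoreElements()) {
            copies.add(urls.nextElement());
        }
        return copies;  // in class-path order; element 0 is what gets loaded
    }

    public static void main(String[] args) throws IOException {
        for (URL url : listCopies("common-version-info.properties")) {
            System.out.println(url);
        }
    }
}
```

Run against the deployed class path of an affected node, the first printed
URL would point into the sources jar rather than the binary jar.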

> Hadoop version commands show incorrect information on trunk
> ---
>
> Key: HADOOP-9260
> URL: https://issues.apache.org/jira/browse/HADOOP-9260
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: trunk-win
>Reporter: Jerry Chen
>Priority: Critical
>
> 1. Check out the trunk from 
> http://svn.apache.org/repos/asf/hadoop/common/trunk/ 
> 2. Compile package
>m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
> 3. Hadoop version of compiled dist shows the following:
> Hadoop ${pom.version}
> Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
> Compiled by ${user.name} on ${version-info.build.time}
> From source with checksum ${version-info.source.md5}
> And in a real cluster, the log in name node shows:
> 2013-01-29 15:23:42,738 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> STARTUP_MSG: 
> /
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = bdpe01.sh.intel.com/10.239.47.101
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = ${pom.version}
> STARTUP_MSG:   classpath = ...
> STARTUP_MSG:   build = ${version-info.scm.uri} -r ${version-info.scm.commit}; 
> compiled by '${user.name}' on ${version-info.build.time}
> STARTUP_MSG:   java = 1.6.0_33
> While some data nodes with the same binary show the correct version
> information.



[jira] [Updated] (HADOOP-9241) DU refresh interval is not configurable

2013-01-29 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-9241:


Status: Patch Available  (was: Reopened)

> DU refresh interval is not configurable
> ---
>
> Key: HADOOP-9241
> URL: https://issues.apache.org/jira/browse/HADOOP-9241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Attachments: HADOOP-9241.patch, HADOOP-9241.patch
>
>
> While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
> isn't. We should ensure both be configurable.



[jira] [Updated] (HADOOP-9241) DU refresh interval is not configurable

2013-01-29 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-9241:


Attachment: HADOOP-9241.patch

Patch with 10m default preserved.
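A minimal sketch of the pattern under discussion (illustrative only, not the
attached patch; the key name "fs.du.interval" and the class below are
assumptions for illustration): an expensive du-style measurement is cached
and refreshed only after a configurable interval, preserving the 10-minute
default mentioned above.

```java
import java.util.Properties;
import java.util.function.LongSupplier;

/** Illustrative sketch (not the actual Hadoop patch): a cached disk-usage
 *  value refreshed only after a configurable interval. The key name
 *  "fs.du.interval" and the 10-minute default are assumptions here. */
public class CachedUsage {
    private final long intervalMs;
    private final LongSupplier probe;  // expensive 'du'-style measurement
    private long cached;
    private long lastRefresh;
    private boolean initialized = false;

    public CachedUsage(Properties conf, LongSupplier probe) {
        // Read the refresh interval from configuration, default 10 minutes.
        this.intervalMs = Long.parseLong(
                conf.getProperty("fs.du.interval", "600000"));
        this.probe = probe;
    }

    public synchronized long get(long nowMs) {
        // Re-run the probe only when the configured interval has elapsed.
        if (!initialized || nowMs - lastRefresh >= intervalMs) {
            cached = probe.getAsLong();
            lastRefresh = nowMs;
            initialized = true;
        }
        return cached;
    }
}
```

The design point is simply that the interval comes from configuration rather
than a hard-coded constant, which is what the issue asks of DU to match DF.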

> DU refresh interval is not configurable
> ---
>
> Key: HADOOP-9241
> URL: https://issues.apache.org/jira/browse/HADOOP-9241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Attachments: HADOOP-9241.patch, HADOOP-9241.patch
>
>
> While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
> isn't. We should ensure both be configurable.



[jira] [Updated] (HADOOP-9241) DU refresh interval is not configurable

2013-01-29 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-9241:


Target Version/s: 2.0.3-alpha

> DU refresh interval is not configurable
> ---
>
> Key: HADOOP-9241
> URL: https://issues.apache.org/jira/browse/HADOOP-9241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Attachments: HADOOP-9241.patch, HADOOP-9241.patch
>
>
> While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
> isn't. We should ensure both be configurable.



[jira] [Updated] (HADOOP-9241) DU refresh interval is not configurable

2013-01-29 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-9241:


Fix Version/s: (was: 2.0.3-alpha)

> DU refresh interval is not configurable
> ---
>
> Key: HADOOP-9241
> URL: https://issues.apache.org/jira/browse/HADOOP-9241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Attachments: HADOOP-9241.patch, HADOOP-9241.patch
>
>
> While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
> isn't. We should ensure both be configurable.



[jira] [Reopened] (HADOOP-9241) DU refresh interval is not configurable

2013-01-29 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J reopened HADOOP-9241:
-


Thanks Nicholas; I have reverted HADOOP-9241 from trunk and branch-2. I will 
attach a proper patch now.

> DU refresh interval is not configurable
> ---
>
> Key: HADOOP-9241
> URL: https://issues.apache.org/jira/browse/HADOOP-9241
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9241.patch
>
>
> While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
> isn't. We should ensure both be configurable.
