[jira] [Commented] (HADOOP-9249) hadoop-maven-plugins version-info goal causes build failure when running with Clover

2013-01-28 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564158#comment-13564158
 ] 

Ivan A. Veselovsky commented on HADOOP-9249:


This seems to be a duplicate of HADOOP-9235.

 hadoop-maven-plugins version-info goal causes build failure when running with 
 Clover
 

 Key: HADOOP-9249
 URL: https://issues.apache.org/jira/browse/HADOOP-9249
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9249.1.patch


 Running Maven with the -Pclover option for code coverage causes the build to 
 fail because of not finding a Clover class while running hadoop-maven-plugins 
 version-info.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9247) parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls

2013-01-28 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564160#comment-13564160
 ] 

Ivan A. Veselovsky commented on HADOOP-9247:


Hi, Chris, 
the problem with com_cenqua_clover/CoverageRecorder that you mentioned above is addressed 
in HADOOP-9235. 

 parametrize Clover generateXxx properties to make them re-definable via -D 
 in mvn calls
 -

 Key: HADOOP-9247
 URL: https://issues.apache.org/jira/browse/HADOOP-9247
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-9247-trunk.patch


 The suggested parametrization is needed in order 
 to be able to re-define these properties with -Dk=v maven options.
 For some reason the expressions declared in clover 
 docs like ${maven.clover.generateHtml} (see 
 http://docs.atlassian.com/maven-clover2-plugin/3.0.2/clover-mojo.html) do not 
 work in that way. 
 However, the parametrized properties are confirmed to work: e.g. 
 -DcloverGenHtml=false switches off the Html generation, if 
 <generateHtml>${cloverGenHtml}</generateHtml> is defined.
 The default values provided here exactly correspond to Clover defaults, so
 the behavior is 100% backwards compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9254) Cover packages org.apache.hadoop.util.bloom, org.apache.hadoop.util.hash

2013-01-28 Thread Vadim Bondarev (JIRA)
Vadim Bondarev created HADOOP-9254:
--

 Summary: Cover packages org.apache.hadoop.util.bloom, 
org.apache.hadoop.util.hash
 Key: HADOOP-9254
 URL: https://issues.apache.org/jira/browse/HADOOP-9254
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9254) Cover packages org.apache.hadoop.util.bloom, org.apache.hadoop.util.hash

2013-01-28 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9254:
---

Status: Patch Available  (was: Open)

 Cover packages org.apache.hadoop.util.bloom, org.apache.hadoop.util.hash
 

 Key: HADOOP-9254
 URL: https://issues.apache.org/jira/browse/HADOOP-9254
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
 Attachments: YAHOO-9254-branch-0.23-a.patch, 
 YAHOO-9254-branch-2-a.patch, YAHOO-9254-trunk-a.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9254) Cover packages org.apache.hadoop.util.bloom, org.apache.hadoop.util.hash

2013-01-28 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9254:
---

Attachment: YAHOO-9254-trunk-a.patch
YAHOO-9254-branch-2-a.patch
YAHOO-9254-branch-0.23-a.patch

 Cover packages org.apache.hadoop.util.bloom, org.apache.hadoop.util.hash
 

 Key: HADOOP-9254
 URL: https://issues.apache.org/jira/browse/HADOOP-9254
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
 Attachments: YAHOO-9254-branch-0.23-a.patch, 
 YAHOO-9254-branch-2-a.patch, YAHOO-9254-trunk-a.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2013-01-28 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564177#comment-13564177
 ] 

Tom White commented on HADOOP-9124:
---

{quote}
So, different *MapWritables that have added Writables in different order won't lead 
to the same class-id mappings. Also, after a mapping is removed, there is no 
reference counting to remove the class-id mapping that is no longer needed. 
Hence, we should not check classToIdMap & idToClassMap.
{quote}

I agree. According to 
[http://docs.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)]

bq. two maps {{m1}} and {{m2}} represent the same mappings if 
{{m1.entrySet().equals(m2.entrySet())}}

So only the values of the entry set should be used as the basis for testing 
equality. 
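
For illustration only, a minimal sketch (not the attached patch; the class and method names are hypothetical) of equals()/hashCode() that delegate to the entry set, which is what the Map contract quoted above amounts to:

{code}
import java.util.TreeMap;

// Hypothetical sketch: a sorted-map wrapper whose equality follows the
// java.util.Map contract by delegating to the entry set, as AbstractMap does.
public class EntrySetEqualityExample {

  static final class WrappedMap<K extends Comparable<K>, V> {
    private final TreeMap<K, V> instance = new TreeMap<K, V>();

    void put(K key, V value) {
      instance.put(key, value);
    }

    @Override
    public boolean equals(Object obj) {
      if (this == obj) {
        return true;
      }
      if (!(obj instanceof WrappedMap)) {
        return false;
      }
      // Only the key/value mappings matter, not class-id bookkeeping state.
      return instance.entrySet().equals(((WrappedMap<?, ?>) obj).instance.entrySet());
    }

    @Override
    public int hashCode() {
      // Map.hashCode() is defined as the sum of the entry hash codes.
      return instance.entrySet().hashCode();
    }
  }

  public static void main(String[] args) {
    WrappedMap<String, Integer> a = new WrappedMap<String, Integer>();
    WrappedMap<String, Integer> b = new WrappedMap<String, Integer>();
    a.put("x", 1);
    b.put("x", 1);
    System.out.println(a.equals(b)); // true: same mappings, insertion history irrelevant
  }
}
{code}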

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564185#comment-13564185
 ] 

Hudson commented on HADOOP-9241:


Integrated in Hadoop-Yarn-trunk #110 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/110/])
HADOOP-9241. DU refresh interval is not configurable. Contributed by Harsh 
J. (harsh) (Revision 1439129)

 Result = FAILURE
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439129
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java


 DU refresh interval is not configurable
 ---

 Key: HADOOP-9241
 URL: https://issues.apache.org/jira/browse/HADOOP-9241
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9241.patch


 While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
 isn't. We should ensure both be configurable.
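
As a usage note, a minimal sketch of overriding the new interval programmatically, assuming the 'fs.du.interval' key discussed in this thread and the DU(File, Configuration) constructor plus start()/getUsed()/shutdown() methods of the 2.x DU class; the path and value are illustrative only:

{code}
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DU;

public class DuIntervalExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Assumption: "fs.du.interval" is the key added by this change; value in milliseconds.
    conf.setLong("fs.du.interval", 10 * 60 * 1000L); // refresh 'du' every 10 minutes

    DU du = new DU(new File("/tmp"), conf);
    du.start();                        // starts the background refresh thread
    System.out.println(du.getUsed());  // bytes used under /tmp
    du.shutdown();
  }
}
{code}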

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9254) Cover packages org.apache.hadoop.util.bloom, org.apache.hadoop.util.hash

2013-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564192#comment-13564192
 ] 

Hadoop QA commented on HADOOP-9254:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12566740/YAHOO-9254-trunk-a.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2100//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2100//console

This message is automatically generated.

 Cover packages org.apache.hadoop.util.bloom, org.apache.hadoop.util.hash
 

 Key: HADOOP-9254
 URL: https://issues.apache.org/jira/browse/HADOOP-9254
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
 Attachments: YAHOO-9254-branch-0.23-a.patch, 
 YAHOO-9254-branch-2-a.patch, YAHOO-9254-trunk-a.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7435) Make pre-commit checks run against the correct branch

2013-01-28 Thread Dennis Y (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Y updated HADOOP-7435:
-

Attachment: 
HADOOP-7435-branch-0.23-patch-from-[branch-0.23-gd]-to-[fb-HADOOP-7435-branch-0.23-gd]-N2-1.patch

updated patch for branch-0.23

 Make pre-commit checks run against the correct branch
 -

 Key: HADOOP-7435
 URL: https://issues.apache.org/jira/browse/HADOOP-7435
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 0.23.0
Reporter: Aaron T. Myers
Assignee: Matt Foley
 Attachments: 
 HADOOP-7435-branch-0.23-patch-from-[branch-0.23-gd]-to-[fb-HADOOP-7435-branch-0.23-gd]-N2-1.patch,
  HADOOP-7435-for-branch-0.23.patch, HADOOP-7435-for-branch-2.patch, 
 HADOOP-7435-for-trunk-do-not-apply-this.patch


 The Hudson pre-commit tests are presently only capable of testing a patch 
 against trunk. It'd be nice if this could be extended to automatically run 
 against the correct branch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7435) Make pre-commit checks run against the correct branch

2013-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564202#comment-13564202
 ] 

Hadoop QA commented on HADOOP-7435:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12566749/HADOOP-7435-branch-0.23-patch-from-%5Bbranch-0.23-gd%5D-to-%5Bfb-HADOOP-7435-branch-0.23-gd%5D-N2-1.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2101//console

This message is automatically generated.

 Make pre-commit checks run against the correct branch
 -

 Key: HADOOP-7435
 URL: https://issues.apache.org/jira/browse/HADOOP-7435
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 0.23.0
Reporter: Aaron T. Myers
Assignee: Matt Foley
 Attachments: 
 HADOOP-7435-branch-0.23-patch-from-[branch-0.23-gd]-to-[fb-HADOOP-7435-branch-0.23-gd]-N2-1.patch,
  HADOOP-7435-for-branch-0.23.patch, HADOOP-7435-for-branch-2.patch, 
 HADOOP-7435-for-trunk-do-not-apply-this.patch


 The Hudson pre-commit tests are presently only capable of testing a patch 
 against trunk. It'd be nice if this could be extended to automatically run 
 against the correct branch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564251#comment-13564251
 ] 

Hudson commented on HADOOP-9241:


Integrated in Hadoop-Hdfs-trunk #1299 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1299/])
HADOOP-9241. DU refresh interval is not configurable. Contributed by Harsh 
J. (harsh) (Revision 1439129)

 Result = FAILURE
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439129
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java


 DU refresh interval is not configurable
 ---

 Key: HADOOP-9241
 URL: https://issues.apache.org/jira/browse/HADOOP-9241
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9241.patch


 While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
 isn't. We should ensure both be configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9247) parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls

2013-01-28 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564265#comment-13564265
 ] 

Ivan A. Veselovsky commented on HADOOP-9247:


Hi, Suresh, 
can you please also commit this patch to branch-2 and branch-0.23?
thanks in advance.

 parametrize Clover generateXxx properties to make them re-definable via -D 
 in mvn calls
 -

 Key: HADOOP-9247
 URL: https://issues.apache.org/jira/browse/HADOOP-9247
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-9247-trunk.patch


 The suggested parametrization is needed in order 
 to be able to re-define these properties with -Dk=v maven options.
 For some reason the expressions declared in clover 
 docs like ${maven.clover.generateHtml} (see 
 http://docs.atlassian.com/maven-clover2-plugin/3.0.2/clover-mojo.html) do not 
 work in that way. 
 However, the parametrized properties are confirmed to work: e.g. 
 -DcloverGenHtml=false switches off the Html generation, if 
 <generateHtml>${cloverGenHtml}</generateHtml> is defined.
 The default values provided here exactly correspond to Clover defaults, so
 the behavior is 100% backwards compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564269#comment-13564269
 ] 

Hudson commented on HADOOP-9241:


Integrated in Hadoop-Mapreduce-trunk #1327 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1327/])
HADOOP-9241. DU refresh interval is not configurable. Contributed by Harsh 
J. (harsh) (Revision 1439129)

 Result = FAILURE
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439129
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java


 DU refresh interval is not configurable
 ---

 Key: HADOOP-9241
 URL: https://issues.apache.org/jira/browse/HADOOP-9241
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9241.patch


 While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
 isn't. We should ensure both be configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-6962) FileSystem.mkdirs(Path, FSPermission) should use the permission for all of the created directories

2013-01-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564339#comment-13564339
 ] 

Daryn Sharp commented on HADOOP-6962:
-

Allen, how did the patch fail to work as expected?  I don't have a 1.x cluster 
handy at the moment.

 FileSystem.mkdirs(Path, FSPermission) should use the permission for all of 
 the created directories
 --

 Key: HADOOP-6962
 URL: https://issues.apache.org/jira/browse/HADOOP-6962
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 1.0.4
Reporter: Owen O'Malley
Assignee: Daryn Sharp
Priority: Blocker
  Labels: security
 Attachments: HADOOP-6962.patch


 Currently, FileSystem.mkdirs only applies the permissions to the last level 
 if it was created. It should be applied to *all* levels that are created.
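
For context, a hypothetical sketch of the behaviour the description asks for (this is not the attached patch): walk up to the first existing ancestor, then create each missing level with the requested permission.

{code}
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class MkdirsAllLevelsExample {
  /**
   * Create 'dir' and any missing ancestors, applying 'perm' to every
   * directory this call actually creates (existing ancestors are untouched).
   */
  public static boolean mkdirsWithPermission(FileSystem fs, Path dir, FsPermission perm)
      throws IOException {
    Deque<Path> toCreate = new ArrayDeque<Path>();
    Path p = dir;
    while (p != null && !fs.exists(p)) {   // collect the missing suffix of the path
      toCreate.push(p);
      p = p.getParent();
    }
    boolean created = false;
    while (!toCreate.isEmpty()) {          // create top-down, each level with 'perm'
      Path next = toCreate.pop();
      created = fs.mkdirs(next, perm);
      fs.setPermission(next, perm);        // apply the exact permission, ignoring the umask
    }
    return created;
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    mkdirsWithPermission(fs, new Path("/tmp/a/b/c"), new FsPermission((short) 0750));
  }
}
{code}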

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2013-01-28 Thread Dmitry Mezhensky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Mezhensky updated HADOOP-8545:
-

Attachment: HADOOP-8545-5.patch

Patch contains:
  -multiple data centres/Swift installations support
  -improvements
  -unit tests
  -bug fixes

Patch merged with Steve Loughran's code from 
https://github.com/steveloughran/Hadoop-and-Swift-integration

Original repo with documentation 
https://github.com/DmitryMezhensky/Hadoop-and-Swift-integration 

 Filesystem Implementation for OpenStack Swift
 -

 Key: HADOOP-8545
 URL: https://issues.apache.org/jira/browse/HADOOP-8545
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.0.3-alpha, 1.1.2
Reporter: Tim Miller
Priority: Minor
 Attachments: HADOOP-8545-1.patch, HADOOP-8545-2.patch, 
 HADOOP-8545-3.patch, HADOOP-8545-4.patch, HADOOP-8545-5.patch, 
 HADOOP-8545-javaclouds-2.patch, HADOOP-8545.patch, HADOOP-8545.patch


 Add a filesystem implementation for OpenStack Swift object store, similar to 
 the one which exists today for S3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2013-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564373#comment-13564373
 ] 

Hadoop QA commented on HADOOP-8545:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566775/HADOOP-8545-5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 13 new 
or modified test files.

{color:red}-1 javac{color:red}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2102//console

This message is automatically generated.

 Filesystem Implementation for OpenStack Swift
 -

 Key: HADOOP-8545
 URL: https://issues.apache.org/jira/browse/HADOOP-8545
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.0.3-alpha, 1.1.2
Reporter: Tim Miller
Priority: Minor
 Attachments: HADOOP-8545-1.patch, HADOOP-8545-2.patch, 
 HADOOP-8545-3.patch, HADOOP-8545-4.patch, HADOOP-8545-5.patch, 
 HADOOP-8545-javaclouds-2.patch, HADOOP-8545.patch, HADOOP-8545.patch


 Add a filesystem implementation for OpenStack Swift object store, similar to 
 the one which exists today for S3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9249) hadoop-maven-plugins version-info goal causes build failure when running with Clover

2013-01-28 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564385#comment-13564385
 ] 

Chris Nauroth commented on HADOOP-9249:
---

Sorry, I had missed HADOOP-9235.  Thank you for pointing it out.

I think the patch shown here on HADOOP-9249 is preferable, because the skip 
declaration skips all Clover processing for the sub-module.  If we ever change 
the internal code structure of hadoop-maven-plugins (i.e. add code outside of 
org/apache/hadoop/maven/plugin), then we won't need to remember to update the 
exclude filter in the top-level pom.xml.  Also, if we ever decide we'd rather 
deploy hadoop-maven-plugins as a separate project instead of a sub-module of 
hadoop-common, then we won't need to remember to clean up the top-level pom.xml.

Ivan, what do you think?


 hadoop-maven-plugins version-info goal causes build failure when running with 
 Clover
 

 Key: HADOOP-9249
 URL: https://issues.apache.org/jira/browse/HADOOP-9249
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9249.1.patch


 Running Maven with the -Pclover option for code coverage causes the build to 
 fail because of not finding a Clover class while running hadoop-maven-plugins 
 version-info.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Thomas Graves (JIRA)
Thomas Graves created HADOOP-9255:
-

 Summary: relnotes.py missing last jira
 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical


generating the release notes for 0.23.6 via "python ./dev-support/relnotes.py 
-v 0.23.6" misses the last jira that was committed.  In this case it was 
YARN-354.




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564398#comment-13564398
 ] 

Thomas Graves commented on HADOOP-9255:
---

This might be due to the query line:
"project in (YARN) and fixVersion in ('"+"' , '".join(versions)+"') and 
resolution = Fixed", 'startAt':at+1, 'maxResults':count}

For some reason it starts at: at+1.  If I remove the +1 then YARN-354 shows up. 
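
To illustrate the suspected off-by-one in a language-neutral way (this is not the relnotes.py code; all names below are made up): with zero-based paging, each request should start at the number of results already fetched, so starting at at+1 drops one matching issue.

{code}
// Hypothetical sketch of zero-based result paging; not the relnotes.py code itself.
public class PagingExample {
  public static void main(String[] args) {
    int total = 5;      // pretend the query matches 5 issues
    int pageSize = 2;
    int at = 0;         // number of results fetched so far

    while (at < total) {
      // Correct: request startAt = at. Using startAt = at + 1 would skip
      // one matching issue (the symptom reported here: one jira goes missing).
      int startAt = at;
      int fetched = Math.min(pageSize, total - startAt);
      System.out.println("fetch startAt=" + startAt + " count=" + fetched);
      at += fetched;
    }
  }
}
{code}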

 relnotes.py missing last jira
 -

 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical

 generating the release notes for 0.23.6 via "python 
 ./dev-support/relnotes.py -v 0.23.6" misses the last jira that was 
 committed.  In this case it was YARN-354.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-28 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564402#comment-13564402
 ] 

Arpit Gupta commented on HADOOP-9253:
-

bq. Does this also work in context of a secure DN startup? Does the logged 
ulimit reflect the actual JVM's instead of the wrapper's?


Good point let me test this out and see what it will log.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2013-01-28 Thread Surenkumar Nihalani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surenkumar Nihalani updated HADOOP-9124:


Attachment: HADOOP-9124.patch

Patch ready for review.

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-28 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564434#comment-13564434
 ] 

Suresh Srinivas commented on HADOOP-9241:
-

[~qwertymaniac] Even for trivial jiras, I suggest getting the code review done 
before committing the code. Such changes are easy and quick to review.

In this patch, did DU interval become 1 minute instead of 10 minutes?
{code}
-    this(path, 600000L);
-    //10 minutes default refresh interval
+    this(path, conf.getLong(CommonConfigurationKeys.FS_DU_INTERVAL_KEY,
+        CommonConfigurationKeys.FS_DU_INTERVAL_DEFAULT));


+  /** See <a href="{@docRoot}/../core-default.html">core-default.xml</a> */
+  public static final String  FS_DU_INTERVAL_KEY = "fs.du.interval";
+  /** Default value for FS_DU_INTERVAL_KEY */
+  public static final long    FS_DU_INTERVAL_DEFAULT = 60000;
{code}
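
For reference, a small sketch of the millisecond arithmetic behind the question, taking the removed hard-coded default as 10 minutes and the new constant as 1 minute, as discussed above:

{code}
// Sketch of the interval arithmetic referenced above (values in milliseconds).
public class DuIntervalMath {
  public static void main(String[] args) {
    long oneMinuteMs = 60L * 1000L;         // 60000  -> one minute
    long tenMinutesMs = 10L * 60L * 1000L;  // 600000 -> ten minutes
    System.out.println(oneMinuteMs + " ms vs " + tenMinutesMs + " ms");
  }
}
{code}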

 DU refresh interval is not configurable
 ---

 Key: HADOOP-9241
 URL: https://issues.apache.org/jira/browse/HADOOP-9241
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9241.patch


 While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
 isn't. We should ensure both be configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9249) hadoop-maven-plugins version-info goal causes build failure when running with Clover

2013-01-28 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564445#comment-13564445
 ] 

Ivan A. Veselovsky commented on HADOOP-9249:


Chris, yes, I absolutely agree with you. This fix is much better.

 hadoop-maven-plugins version-info goal causes build failure when running with 
 Clover
 

 Key: HADOOP-9249
 URL: https://issues.apache.org/jira/browse/HADOOP-9249
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9249.1.patch


 Running Maven with the -Pclover option for code coverage causes the build to 
 fail because of not finding a Clover class while running hadoop-maven-plugins 
 version-info.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9247) parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls

2013-01-28 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9247:


Fix Version/s: (was: 3.0.0)
   0.23.7
   2.0.3-alpha

I merged the patch to branch-2 and 0.23.

 parametrize Clover generateXxx properties to make them re-definable via -D 
 in mvn calls
 -

 Key: HADOOP-9247
 URL: https://issues.apache.org/jira/browse/HADOOP-9247
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: HADOOP-9247-trunk.patch


 The suggested parametrization is needed in order 
 to be able to re-define these properties with -Dk=v maven options.
 For some reason the expressions declared in clover 
 docs like ${maven.clover.generateHtml} (see 
 http://docs.atlassian.com/maven-clover2-plugin/3.0.2/clover-mojo.html) do not 
 work in that way. 
 However, the parametrized properties are confirmed to work: e.g. 
 -DcloverGenHtml=false switches off the Html generation, if 
 <generateHtml>${cloverGenHtml}</generateHtml> is defined.
 The default values provided here exactly correspond to Clover defaults, so
 the behavior is 100% backwards compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8923) WEBUI shows an intermediatory page when the cookie expires.

2013-01-28 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564465#comment-13564465
 ] 

Benoy Antony commented on HADOOP-8923:
--

This patch applies on branch 1.1. Let me know if any changes are required.

 WEBUI shows an intermediatory page when the cookie expires.
 ---

 Key: HADOOP-8923
 URL: https://issues.apache.org/jira/browse/HADOOP-8923
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 1.1.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Attachments: HADOOP-8923.patch


 The WEBUI does Authentication (SPNEGO/Custom) and then drops a cookie. 
 Once the cookie expires, the webui displays a page saying that 
 authentication token expired. The user has to refresh the page to get 
 authenticated again. This page can be avoided and the user can be authenticated 
 without being shown such a page.
 Also, when the cookie expires, a warning is logged. But there is no need 
 to log this as it is not of any significance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9247) parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls

2013-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564471#comment-13564471
 ] 

Hudson commented on HADOOP-9247:


Integrated in Hadoop-trunk-Commit #3286 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3286/])
Move HADOOP-9247 to release 0.23.7 section in CHANGES.txt (Revision 1439539)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1439539
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 parametrize Clover generateXxx properties to make them re-definable via -D 
 in mvn calls
 -

 Key: HADOOP-9247
 URL: https://issues.apache.org/jira/browse/HADOOP-9247
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: HADOOP-9247-trunk.patch


 The suggested parametrization is needed in order 
 to be able to re-define these properties with -Dk=v maven options.
 For some reason the expressions declared in clover 
 docs like ${maven.clover.generateHtml} (see 
 http://docs.atlassian.com/maven-clover2-plugin/3.0.2/clover-mojo.html) do not 
 work in that way. 
 However, the parametrized properties are confirmed to work: e.g. 
 -DcloverGenHtml=false switches off the Html generation, if 
 <generateHtml>${cloverGenHtml}</generateHtml> is defined.
 The default values provided here exactly correspond to Clover defaults, so
 the behavior is 100% backwards compatible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2013-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564473#comment-13564473
 ] 

Hadoop QA commented on HADOOP-9124:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566787/HADOOP-9124.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2103//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2103//console

This message is automatically generated.

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2013-01-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564484#comment-13564484
 ] 

Karthik Kambatla commented on HADOOP-9124:
--

Thanks for the clarification, Tom. Suren - the patch looks good to me. Thanks.

+1

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9256) A number of Yarn and Mapreduce tests fail due to not substituted values in *-version-info.properties

2013-01-28 Thread Ivan A. Veselovsky (JIRA)
Ivan A. Veselovsky created HADOOP-9256:
--

 Summary: A number of Yarn and Mapreduce tests fail due to not 
substituted values in *-version-info.properties
 Key: HADOOP-9256
 URL: https://issues.apache.org/jira/browse/HADOOP-9256
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky


Newly added plugin VersionInfoMojo should calculate properties (like time, scm 
branch, etc.), and after that the resource plugin should make replacements in 
the following files: 
./hadoop-common-project/hadoop-common/target/classes/common-version-info.properties
./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/yarn-version-info.properties
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
These files are read later at test run time. 
But for some reason it does not do that.
As a result, a bunch of tests are permanently failing because the code of these 
tests is verifying the corresponding property files for correctness:
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHS
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSSlash
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSDefault
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSXML
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfo
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoSlash
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoDefault
org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoXML
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNode
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeSlash
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeDefault
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfo
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoSlash
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoDefault
org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testSingleNodesXML
org.apache.hadoop.yarn.server.resourcemanager.security.TestApplicationTokens.testTokenExpiry
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoXML
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testCluster
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterSlash
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterDefault
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfo
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoSlash
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoDefault

Some of these failures can be observed in Apache builds, e.g.: 
https://builds.apache.org/view/Hadoop/job/PreCommit-YARN-Build/370/testReport/

As far as I see the substitution does not happen because corresponding 
properties are set by the VersionInfoMojo plugin *after* the corresponding 
resource plugin task is executed.

Workaround: manually change files 
./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
and set arbitrary reasonable non-${} string parameters as the values.
After that the tests pass.
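
To make the failure mode concrete, a hypothetical sketch of the kind of check the listed tests effectively perform: load the generated properties file from the classpath and reject values that still contain unsubstituted ${...} placeholders. The resource name and assertion style below are assumptions for illustration, not the actual test code.

{code}
import java.io.InputStream;
import java.util.Properties;

// Hypothetical sketch; the real assertions live in the listed TestHsWebServices /
// TestNMWebServices / TestRMWebServices classes.
public class VersionInfoCheck {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    try (InputStream in = VersionInfoCheck.class.getClassLoader()
        .getResourceAsStream("common-version-info.properties")) {
      if (in == null) {
        throw new IllegalStateException("common-version-info.properties not on classpath");
      }
      props.load(in);
    }
    for (String name : props.stringPropertyNames()) {
      String value = props.getProperty(name);
      // An unfiltered file still contains literal ${...} expressions such as ${version}.
      if (value.contains("${")) {
        throw new AssertionError("Property '" + name + "' was not substituted: " + value);
      }
    }
    System.out.println("All version-info properties were substituted.");
  }
}
{code}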


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9245) mvn clean without running mvn install before fails

2013-01-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564506#comment-13564506
 ] 

Jason Lowe commented on HADOOP-9245:


Note that this change broke web service tests in YARN.  See YARN-361.  I'm 
guessing this is related to the issue Alejandro brought up earlier about the 
version-info plugin?  Was there any followup?

 mvn clean without running mvn install before fails
 --

 Key: HADOOP-9245
 URL: https://issues.apache.org/jira/browse/HADOOP-9245
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0, trunk-win
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 3.0.0

 Attachments: HADOOP-9245.patch


 HADOOP-8924 introduces plugin dependency on hadoop-maven-plugins in 
 hadoop-common and hadoop-yarn-common.
 Calling mvn clean on a fresh m2/repository (missing hadoop-maven-plugins) 
 fails due to this dependency.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9257) HADOOP-9241 changed DN's default DU interval to 1m instead of 10m accidentally

2013-01-28 Thread Harsh J (JIRA)
Harsh J created HADOOP-9257:
---

 Summary: HADOOP-9241 changed DN's default DU interval to 1m 
instead of 10m accidentally
 Key: HADOOP-9257
 URL: https://issues.apache.org/jira/browse/HADOOP-9257
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Harsh J


Suresh caught this on HADOOP-9241:

{quote}
Even for trivial jiras, I suggest getting the code review done before 
committing the code. Such changes are easy and quick to review.
In this patch, did DU interval become 1 minute instead of 10 minutes?
{code}
-    this(path, 600000L);
-    //10 minutes default refresh interval
+    this(path, conf.getLong(CommonConfigurationKeys.FS_DU_INTERVAL_KEY,
+        CommonConfigurationKeys.FS_DU_INTERVAL_DEFAULT));


+  /** See <a href="{@docRoot}/../core-default.html">core-default.xml</a> */
+  public static final String  FS_DU_INTERVAL_KEY = "fs.du.interval";
+  /** Default value for FS_DU_INTERVAL_KEY */
+  public static final long    FS_DU_INTERVAL_DEFAULT = 60000;
{code}
{quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-28 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564509#comment-13564509
 ] 

Harsh J commented on HADOOP-9241:
-

Sorry about that Suresh, will await a review for trivial ones in future as 
well. Indeed, I've made a big bad blooper that's not gonna help the DNs :) 
Fixing via HADOOP-9257 - apologies!

 DU refresh interval is not configurable
 ---

 Key: HADOOP-9241
 URL: https://issues.apache.org/jira/browse/HADOOP-9241
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9241.patch


 While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
 isn't. We should ensure both be configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9256) A number of Yarn and Mapreduce tests fail due to not substituted values in *-version-info.properties

2013-01-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13564510#comment-13564510
 ] 

Jason Lowe commented on HADOOP-9256:


This is likely caused by HADOOP-9245, see YARN-361.

 A number of Yarn and Mapreduce tests fail due to not substituted values in 
 *-version-info.properties
 

 Key: HADOOP-9256
 URL: https://issues.apache.org/jira/browse/HADOOP-9256
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky

 Newly added plugin VersionInfoMojo should calculate properties (like time, 
 scm branch, etc.), and after that the resource plugin should make 
 replacements in the following files: 
 ./hadoop-common-project/hadoop-common/target/classes/common-version-info.properties
 ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
 ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/yarn-version-info.properties
 ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
 These files are read later at test run time. 
 But for some reason it does not do that.
 As a result, a bunch of tests are permanently failing because the code of 
 these tests is verifying the corresponding property files for correctness:
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHS
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSSlash
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSDefault
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSXML
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfo
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoSlash
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoDefault
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoXML
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNode
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeSlash
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeDefault
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfo
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoSlash
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoDefault
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testSingleNodesXML
 org.apache.hadoop.yarn.server.resourcemanager.security.TestApplicationTokens.testTokenExpiry
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoXML
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testCluster
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterSlash
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterDefault
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfo
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoSlash
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoDefault
 Some of these failures can be observed in Apache builds, e.g.: 
 https://builds.apache.org/view/Hadoop/job/PreCommit-YARN-Build/370/testReport/
 As far as I see the substitution does not happen because corresponding 
 properties are set by the VersionInfoMojo plugin *after* the corresponding 
 resource plugin task is executed.
 Workaround: manually change files 
 ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
 and set arbitrary reasonable non-${} string parameters as the values.
 After that the tests pass.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9241) DU refresh interval is not configurable

2013-01-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-9241:


Release Note: The 'du' (disk usage command from Unix) script refresh 
monitor is now configurable in the same way as its 'df' counterpart, via the 
property 'fs.du.interval', the default of which is 10 minutes (in ms).  (was: 
The 'du' (disk usage command from Unix) script refresh monitor is now 
configurable in the same way as its 'df' counterpart, via the property 
'fs.du.interval', the default of which is 1 minute (in ms).)
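For illustration, a minimal usage sketch of the new property (the 
DU(File, Configuration) constructor is the one this change adds; the data path 
and the start()/getUsed() calls are illustrative assumptions, not part of the 
release note):
{code}
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DU;

public class DuIntervalExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setLong("fs.du.interval", 600000L);   // 10 minutes, in milliseconds
    // The refresh thread started below re-runs 'du' every fs.du.interval ms.
    DU du = new DU(new File("/data/dfs/data"), conf);
    du.start();
    System.out.println("bytes used: " + du.getUsed());
  }
}
{code}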

 DU refresh interval is not configurable
 ---

 Key: HADOOP-9241
 URL: https://issues.apache.org/jira/browse/HADOOP-9241
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9241.patch


 While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
 isn't. We should ensure both be configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-28 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564514#comment-13564514
 ] 

Suresh Srinivas commented on HADOOP-9241:
-

Is it better to revert this and fix it?

 DU refresh interval is not configurable
 ---

 Key: HADOOP-9241
 URL: https://issues.apache.org/jira/browse/HADOOP-9241
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9241.patch


 While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
 isn't. We should ensure both be configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9257) HADOOP-9241 changed DN's default DU interval to 1m instead of 10m accidentally

2013-01-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-9257:


Attachment: HADOOP-9257.patch

 HADOOP-9241 changed DN's default DU interval to 1m instead of 10m accidentally
 --

 Key: HADOOP-9257
 URL: https://issues.apache.org/jira/browse/HADOOP-9257
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-9257.patch


 Suresh caught this on HADOOP-9241:
 {quote}
 Even for trivial jiras, I suggest getting the code review done before 
 committing the code. Such changes are easy and quick to review.
 In this patch, did DU interval become 1 minute instead of 10 minutes?
 {code}
 -    this(path, 600000L);
 -    //10 minutes default refresh interval
 +    this(path, conf.getLong(CommonConfigurationKeys.FS_DU_INTERVAL_KEY,
 +        CommonConfigurationKeys.FS_DU_INTERVAL_DEFAULT));
 +  /** See <a href="{@docRoot}/../core-default.html">core-default.xml</a> */
 +  public static final String  FS_DU_INTERVAL_KEY = "fs.du.interval";
 +  /** Default value for FS_DU_INTERVAL_KEY */
 +  public static final long    FS_DU_INTERVAL_DEFAULT = 60000;
 {code}
 {quote}
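 In other words, the intended default is 10 minutes; a corrected constant would 
 look roughly like this (a sketch only, not the attached patch):
 {code}
// Sketch: the intended default, 10 minutes expressed in milliseconds.
public final class DuDefaultsSketch {
  public static final long FS_DU_INTERVAL_DEFAULT = 600000L;  // 10 * 60 * 1000

  public static void main(String[] args) {
    System.out.println(FS_DU_INTERVAL_DEFAULT / 60000L + " minutes");  // prints 10
  }
}
 {code}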

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9257) HADOOP-9241 changed DN's default DU interval to 1m instead of 10m accidentally

2013-01-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-9257:


Target Version/s: 2.0.3-alpha
  Status: Patch Available  (was: Open)

 HADOOP-9241 changed DN's default DU interval to 1m instead of 10m accidentally
 --

 Key: HADOOP-9257
 URL: https://issues.apache.org/jira/browse/HADOOP-9257
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-9257.patch


 Suresh caught this on HADOOP-9241:
 {quote}
 Even for trivial jiras, I suggest getting the code review done before 
 committing the code. Such changes are easy and quick to review.
 In this patch, did DU interval become 1 minute instead of 10 minutes?
 {code}
 -    this(path, 600000L);
 -    //10 minutes default refresh interval
 +    this(path, conf.getLong(CommonConfigurationKeys.FS_DU_INTERVAL_KEY,
 +        CommonConfigurationKeys.FS_DU_INTERVAL_DEFAULT));
 +  /** See <a href="{@docRoot}/../core-default.html">core-default.xml</a> */
 +  public static final String  FS_DU_INTERVAL_KEY = "fs.du.interval";
 +  /** Default value for FS_DU_INTERVAL_KEY */
 +  public static final long    FS_DU_INTERVAL_DEFAULT = 60000;
 {code}
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-28 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564520#comment-13564520
 ] 

Harsh J commented on HADOOP-9241:
-

I suppose that's okay too - but I just submitted a patch to HADOOP-9257. If 
you feel reverting is better, I'll get it done that way; let me know :)

 DU refresh interval is not configurable
 ---

 Key: HADOOP-9241
 URL: https://issues.apache.org/jira/browse/HADOOP-9241
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9241.patch


 While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
 isn't. We should ensure both be configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9245) mvn clean without running mvn install before fails

2013-01-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564523#comment-13564523
 ] 

Karthik Kambatla commented on HADOOP-9245:
--

Filed HADOOP-9246 - I believe it is ready to be reviewed/committed. Thanks, Jason.

 mvn clean without running mvn install before fails
 --

 Key: HADOOP-9245
 URL: https://issues.apache.org/jira/browse/HADOOP-9245
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0, trunk-win
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 3.0.0

 Attachments: HADOOP-9245.patch


 HADOOP-8924 introduces plugin dependency on hadoop-maven-plugins in 
 hadoop-common and hadoop-yarn-common.
 Calling mvn clean on a fresh m2/repository (missing hadoop-maven-plugins) 
 fails due to this dependency.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9255:
--

Attachment: HADOOP-9255.patch

 relnotes.py missing last jira
 -

 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Attachments: HADOOP-9255.patch


 generating the release notes for 0.23.6 via  python 
 ./dev-support/relnotes.py -v 0.23.6  misses the last jira that was 
 committed.  In this case it was YARN-354.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9255:
--

Status: Patch Available  (was: Open)

Fix by simply removing the +1 from at+1.  Now it gets entries 0 up to max 100, 
then 100 up to max 200, etc.

I tested by generating the notes for 0.23.6, which has fewer than 100 jiras, and 
then I generated them for 0.23.3, which has 266 jiras.  I compared the output to 
manually doing the query through JIRA.
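To spell out the arithmetic (relnotes.py is Python; this Java sketch only mirrors 
the paging logic, not the script's code, and the variable names are assumptions):
{code}
public class PagingSketch {
  public static void main(String[] args) {
    int total = 266;   // e.g. number of jiras in a release
    int max = 100;     // page size per query
    int at = 0;
    while (at < total) {
      int end = Math.min(at + max, total);
      System.out.println("fetch entries " + at + " .. " + (end - 1));
      at = end;        // fixed: the next page starts where this one ended
      // buggy version: at = end + 1;  -- skips an entry at each page boundary,
      // which is how the last jira could be dropped
    }
  }
}
{code}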

 relnotes.py missing last jira
 -

 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Attachments: HADOOP-9255.patch


 generating the release notes for 0.23.6 via  python 
 ./dev-support/relnotes.py -v 0.23.6  misses the last jira that was 
 committed.  In this case it was YARN-354.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9256) A number of Yarn and Mapreduce tests fail due to not substituted values in *-version-info.properties

2013-01-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564545#comment-13564545
 ] 

Karthik Kambatla commented on HADOOP-9256:
--

I believe HADOOP-9246 addresses this. If you think so, maybe we can close this 
as a duplicate? 

 A number of Yarn and Mapreduce tests fail due to not substituted values in 
 *-version-info.properties
 

 Key: HADOOP-9256
 URL: https://issues.apache.org/jira/browse/HADOOP-9256
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky

 Newly added plugin VersionInfoMojo should calculate properties (like time, 
 scm branch, etc.), and after that the resource plugin should make 
 replacements in the following files: 
 ./hadoop-common-project/hadoop-common/target/classes/common-version-info.properties
 ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
 ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/yarn-version-info.properties
 ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
 , which are read later at test run-time. 
 But for some reason it does not do that.
 As a result, a bunch of tests are permanently failing because the code of 
 these tests verifies the corresponding property files for correctness:
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHS
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSSlash
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSDefault
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testHSXML
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfo
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoSlash
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoDefault
 org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServices.testInfoXML
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNode
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeSlash
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeDefault
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfo
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoSlash
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testNodeInfoDefault
 org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices.testSingleNodesXML
 org.apache.hadoop.yarn.server.resourcemanager.security.TestApplicationTokens.testTokenExpiry
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoXML
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testCluster
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterSlash
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testClusterDefault
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfo
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoSlash
 org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testInfoDefault
 Some of these failures can be observed in Apache builds, e.g.: 
 https://builds.apache.org/view/Hadoop/job/PreCommit-YARN-Build/370/testReport/
 As far as I see the substitution does not happen because corresponding 
 properties are set by the VersionInfoMojo plugin *after* the corresponding 
 resource plugin task is executed.
 Workaround: manually change files 
 ./hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
 and set arbitrary reasonable non-${} string parameters as the values.
 After that the tests pass.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564554#comment-13564554
 ] 

Robert Joseph Evans commented on HADOOP-9255:
-

I ran it and verified that it is not dropping jiras.  Sorry about my off-by-one 
error; glad that you caught it.  +1.  Feel free to check it in.

 relnotes.py missing last jira
 -

 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Attachments: HADOOP-9255.patch


 generating the release notes for 0.23.6 via  python 
 ./dev-support/relnotes.py -v 0.23.6  misses the last jira that was 
 committed.  In this case it was YARN-354.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564562#comment-13564562
 ] 

Hadoop QA commented on HADOOP-9255:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566799/HADOOP-9255.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2105//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2105//console

This message is automatically generated.

 relnotes.py missing last jira
 -

 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Attachments: HADOOP-9255.patch


 generating the release notes for 0.23.6 via  python 
 ./dev-support/relnotes.py -v 0.23.6  misses the last jira that was 
 committed.  In this case it was YARN-354.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9252) StringUtils.limitDecimalTo2(..) is unnecessarily synchronized

2013-01-28 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-9252:
---

Attachment: c9252_20130128.patch

All new javac warnings are deprecation warnings.

c9252_20130128.patch: fixes the javadoc warning.

 StringUtils.limitDecimalTo2(..) is unnecessarily synchronized
 -

 Key: HADOOP-9252
 URL: https://issues.apache.org/jira/browse/HADOOP-9252
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: c9252_20130127.patch, c9252_20130128.patch


 limitDecimalTo2(double) currently uses decimalFormat, which is a static 
 field, so the method has to be synchronized.  Synchronization is unnecessary 
 since it can simply use String.format(..).
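 As a rough sketch of the proposed change (the field and method names follow the 
 issue text; the actual patch may differ in detail):
 {code}
import java.text.DecimalFormat;

public class LimitDecimalSketch {
  // Old approach: a shared static DecimalFormat is not thread-safe,
  // which is why the method had to be synchronized.
  private static final DecimalFormat decimalFormat = new DecimalFormat("#.##");

  public static synchronized String limitDecimalTo2Old(double d) {
    return decimalFormat.format(d);
  }

  // Proposed approach: String.format has no shared mutable state,
  // so no synchronization is needed.
  public static String limitDecimalTo2New(double d) {
    return String.format("%.2f", d);
  }

  public static void main(String[] args) {
    System.out.println(limitDecimalTo2Old(3.14159)); // 3.14
    System.out.println(limitDecimalTo2New(3.14159)); // 3.14
  }
}
 {code}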

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564577#comment-13564577
 ] 

Hudson commented on HADOOP-9255:


Integrated in Hadoop-trunk-Commit #3288 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3288/])
HADOOP-9255. relnotes.py missing last jira (tgraves) (Revision 1439588)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1439588
Files : 
* /hadoop/common/trunk/dev-support/relnotes.py
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 relnotes.py missing last jira
 -

 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Attachments: HADOOP-9255.patch


 generating the release notes for 0.23.6 via  python 
 ./dev-support/relnotes.py -v 0.23.6  misses the last jira that was 
 committed.  In this case it was YARN-354.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9257) HADOOP-9241 changed DN's default DU interval to 1m instead of 10m accidentally

2013-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564579#comment-13564579
 ] 

Hadoop QA commented on HADOOP-9257:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566798/HADOOP-9257.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2104//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2104//console

This message is automatically generated.

 HADOOP-9241 changed DN's default DU interval to 1m instead of 10m accidentally
 --

 Key: HADOOP-9257
 URL: https://issues.apache.org/jira/browse/HADOOP-9257
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-9257.patch


 Suresh caught this on HADOOP-9241:
 {quote}
 Even for trivial jiras, I suggest getting the code review done before 
 committing the code. Such changes are easy and quick to review.
 In this patch, did DU interval become 1 minute instead of 10 minutes?
 {code}
 -    this(path, 600000L);
 -    //10 minutes default refresh interval
 +    this(path, conf.getLong(CommonConfigurationKeys.FS_DU_INTERVAL_KEY,
 +        CommonConfigurationKeys.FS_DU_INTERVAL_DEFAULT));
 +  /** See <a href="{@docRoot}/../core-default.html">core-default.xml</a> */
 +  public static final String  FS_DU_INTERVAL_KEY = "fs.du.interval";
 +  /** Default value for FS_DU_INTERVAL_KEY */
 +  public static final long    FS_DU_INTERVAL_DEFAULT = 60000;
 {code}
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9232) JniBasedUnixGroupsMappingWithFallback fails on Windows with UnsatisfiedLinkError

2013-01-28 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564593#comment-13564593
 ] 

Chris Nauroth commented on HADOOP-9232:
---

Thanks, Ivan.  I applied the patch locally and tested a few HDFS operations and 
MapReduce jobs.  I didn't need to override the config to 
{{ShellBasedUnixGroupsMapping}}.  It worked great!  I also did a build with 
-Pnative in an Ubuntu VM to confirm that it didn't accidentally harm native 
compilation on Linux.

Here are a few questions:

1. Regarding {{throw_ioe}}, it looks like an almost-copy of the #ifdef WINDOWS 
path of the function in {{NativeIO.c}}.  Can we refactor and reuse the same 
{{throw_ioe}} everywhere, or is that too cumbersome?

2. Assuming that the answer to #1 is that we really need to keep a separate 
{{throw_ioe}} in here, then is it intentional that this version uses 
LPSTR/FormatMessageA, whereas the version in {{NativeIO.c}} uses 
LPWSTR/FormatMessageW?

{code}
  ...
  LPSTR buffer = NULL;
  const char* message = NULL;

  len = FormatMessageA(
  ...
{code}

3. Once again assuming that we need to keep the separate {{throw_ioe}}, I don't 
think we need to NULL out buffer before returning.  For the version in 
{{NativeIO.c}}, this was required to prevent a double-free later in the control 
flow, but this version only has one possible path to calling {{LocalFree}}.

{code}
   ...
  LocalFree(buffer);
  buffer = NULL;

  return;
}
{code}

4. Is the following code not thread-safe?

{code}
...
static jobjectArray emptyGroups = NULL;
...
  if (emptyGroups == NULL) {
    jobjectArray lEmptyGroups = (jobjectArray)(*env)->NewObjectArray(env, 0,
        (*env)->FindClass(env, "java/lang/String"), NULL);
    if (lEmptyGroups == NULL) {
      goto cleanup;
    }
    emptyGroups = (*env)->NewGlobalRef(env, lEmptyGroups);
    if (emptyGroups == NULL) {
      goto cleanup;
    }
  }
{code}

For example, assume 2 threads concurrently call {{getGroupForUser}}.  Thread 1 
executes the NULL check, enters the if body, and then gets suspended by the OS. 
Thread 2 executes, sees that {{emptyGroups}} is still NULL, and initializes it.  
Then the OS resumes thread 1, which proceeds inside the if body and calls 
NewObjectArray again.  Since the global reference created first never gets freed, 
I believe the net effect would be a small memory leak.  (A minimal Java analogue 
of this race is sketched after point 5 below.)

5.  On the {{THROW}} calls, can we add strings that describe the point of 
failure (e.g. "Couldn't allocate memory for user") instead of NULL for the 
third argument?

{code}
  user = (*env)->GetStringChars(env, juser, NULL);
  if (user == NULL) {
    THROW(env, "java/lang/OutOfMemoryError", NULL);
    goto cleanup;
  }
{code}
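(Referring back to point 4.)  A minimal Java analogue of that lazy-initialization 
race, just to illustrate why the check-then-initialize pattern leaks without some 
form of synchronization; the real fix would of course be in the JNI/C code:
{code}
public class LazyInitRaceSketch {
  private static Object emptyGroups = null;   // analogous to the static jobjectArray

  // Not thread-safe: two threads can both see null and both allocate.
  // In the JNI version, the global reference that loses the race is never
  // freed, which is the small leak described above.
  static Object getEmptyGroupsUnsafe() {
    if (emptyGroups == null) {
      emptyGroups = new Object[0];
    }
    return emptyGroups;
  }

  // One simple safe variant: serialize the check-and-initialize step.
  static synchronized Object getEmptyGroupsSafe() {
    if (emptyGroups == null) {
      emptyGroups = new Object[0];
    }
    return emptyGroups;
  }
}
{code}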

Thanks, again!


 JniBasedUnixGroupsMappingWithFallback fails on Windows with 
 UnsatisfiedLinkError
 

 Key: HADOOP-9232
 URL: https://issues.apache.org/jira/browse/HADOOP-9232
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, security
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Ivan Mitic
 Attachments: HADOOP-9232.branch-trunk-win.jnigroups.2.patch, 
 HADOOP-9232.branch-trunk-win.jnigroups.patch


 {{JniBasedUnixGroupsMapping}} calls native code which isn't implemented 
 properly for Windows, causing {{UnsatisfiedLinkError}}.  The fallback logic 
 in {{JniBasedUnixGroupsMappingWithFallback}} works by checking if the native 
 code is loaded during startup.  In this case, hadoop.dll is present and 
 loaded, but it doesn't contain the right code.  There will be no attempt to 
 fallback to {{ShellBasedUnixGroupsMapping}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9232) JniBasedUnixGroupsMappingWithFallback fails on Windows with UnsatisfiedLinkError

2013-01-28 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9232:
--

Status: Open  (was: Patch Available)

Clicking Cancel Patch, since Jenkins only knows how to apply patches for 
pre-commit builds going to trunk.

 JniBasedUnixGroupsMappingWithFallback fails on Windows with 
 UnsatisfiedLinkError
 

 Key: HADOOP-9232
 URL: https://issues.apache.org/jira/browse/HADOOP-9232
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, security
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Ivan Mitic
 Attachments: HADOOP-9232.branch-trunk-win.jnigroups.2.patch, 
 HADOOP-9232.branch-trunk-win.jnigroups.patch


 {{JniBasedUnixGroupsMapping}} calls native code which isn't implemented 
 properly for Windows, causing {{UnsatisfiedLinkError}}.  The fallback logic 
 in {{JniBasedUnixGroupsMappingWithFallback}} works by checking if the native 
 code is loaded during startup.  In this case, hadoop.dll is present and 
 loaded, but it doesn't contain the right code.  There will be no attempt to 
 fallback to {{ShellBasedUnixGroupsMapping}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9252) StringUtils.limitDecimalTo2(..) is unnecessarily synchronized

2013-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564607#comment-13564607
 ] 

Hadoop QA commented on HADOOP-9252:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566806/c9252_20130128.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 2054 javac 
compiler warnings (more than the trunk's current 2014 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2106//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2106//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2106//console

This message is automatically generated.

 StringUtils.limitDecimalTo2(..) is unnecessarily synchronized
 -

 Key: HADOOP-9252
 URL: https://issues.apache.org/jira/browse/HADOOP-9252
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: c9252_20130127.patch, c9252_20130128.patch


 limitDecimalTo2(double) currently uses decimalFormat, which is a static 
 field, so the method has to be synchronized.  Synchronization is unnecessary 
 since it can simply use String.format(..).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9255:
--

   Resolution: Fixed
Fix Version/s: 0.23.7
   0.23.6
   2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

 relnotes.py missing last jira
 -

 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.6, 0.23.7

 Attachments: HADOOP-9255.patch


 generating the release notes for 0.23.6 via  python 
 ./dev-support/relnotes.py -v 0.23.6  misses the last jira that was 
 committed.  In this case it was YARN-354.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-28 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564628#comment-13564628
 ] 

Andy Isaacson commented on HADOOP-9253:
---

It's pretty odd to append to {{$log}} using {{>>}} and then print only the 
beginning of {{$log}} using {{head}}.  This results in the output duplicating 
the previous stanza's leftover contents of {{$log}}.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9246) Execution phase for hadoop-maven-plugin should be process-resources

2013-01-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564667#comment-13564667
 ] 

Jason Lowe commented on HADOOP-9246:


+1

 Execution phase for hadoop-maven-plugin should be process-resources
 ---

 Key: HADOOP-9246
 URL: https://issues.apache.org/jira/browse/HADOOP-9246
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0, trunk-win
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-9246.2.patch, hadoop-9246.patch, hadoop-9246.patch


 Per discussion on HADOOP-9245, the execution phase of hadoop-maven-plugin 
 should be _process-resources_ and not _compile_.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9246) Execution phase for hadoop-maven-plugin should be process-resources

2013-01-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-9246:
---

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Karthik and Chris!  I committed this to trunk.

 Execution phase for hadoop-maven-plugin should be process-resources
 ---

 Key: HADOOP-9246
 URL: https://issues.apache.org/jira/browse/HADOOP-9246
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0, trunk-win
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 3.0.0

 Attachments: HADOOP-9246.2.patch, hadoop-9246.patch, hadoop-9246.patch


 Per discussion on HADOOP-9245, the execution phase of hadoop-maven-plugin 
 should be _process-resources_ and not _compile_.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9246) Execution phase for hadoop-maven-plugin should be process-resources

2013-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564677#comment-13564677
 ] 

Hudson commented on HADOOP-9246:


Integrated in Hadoop-trunk-Commit #3289 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3289/])
HADOOP-9246. Execution phase for hadoop-maven-plugin should be 
process-resources. Contributed by Karthik Kambatla and Chris Nauroth (Revision 
1439620)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1439620
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo/VersionInfoMojo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml


 Execution phase for hadoop-maven-plugin should be process-resources
 ---

 Key: HADOOP-9246
 URL: https://issues.apache.org/jira/browse/HADOOP-9246
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0, trunk-win
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 3.0.0

 Attachments: HADOOP-9246.2.patch, hadoop-9246.patch, hadoop-9246.patch


 Per discussion on HADOOP-9245, the execution phase of hadoop-maven-plugin 
 should be _process-resources_ and not _compile_.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-28 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564681#comment-13564681
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-9241:


Harsh, please revert this for two reasons, (1) there is a bug and, more 
importantly, (2) it has not yet been reviewed.

Also, please don't commit without review.  Everyone has to follow the bylaws.  
If you feel that the current bylaws are inappropriate, start a discussion to 
change them first.

 DU refresh interval is not configurable
 ---

 Key: HADOOP-9241
 URL: https://issues.apache.org/jira/browse/HADOOP-9241
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9241.patch


 While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
 isn't. We should ensure both be configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-01-28 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-9258:
--

 Summary: Add stricter tests to FileSystemContractTestBase
 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran


The File System Contract contains implicit assumptions that aren't checked in 
the contract test base. Add more tests to define the contract's assumptions 
more rigorously for those filesystems that are tested by this (not Local, BTW)
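 As a purely illustrative example (the assertions actually added by the patch may 
 be different), a stricter contract test might pin down behaviour that is only 
 implicit today, e.g. that create() with overwrite=false must not clobber an 
 existing file.  Written as a method inside (a subclass of) the JUnit-3 based 
 FileSystemContractBaseTest, using its {{path()}} and {{createFile()}} helpers 
 (assumed available):
 {code}
public void testCreateNoOverwriteFailsOnExistingFile() throws Exception {
  Path path = path("/test/hadoop/file");   // path() helper from the base class
  createFile(path);                        // base-class helper: writes some bytes
  try {
    fs.create(path, false).close();        // overwrite == false
    fail("create() over an existing file should fail when overwrite=false");
  } catch (IOException expected) {
    // this is the implicit contract point being made explicit
  }
}
 {code}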

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-9259) FileSystemContractBaseTest should be less brittle in teardown

2013-01-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved HDFS-4409 to HADOOP-9259:
--

  Component/s: (was: test)
   test
Affects Version/s: (was: 1.1.2)
   (was: 3.0.0)
   1.1.2
   3.0.0
  Key: HADOOP-9259  (was: HDFS-4409)
  Project: Hadoop Common  (was: Hadoop HDFS)

 FileSystemContractBaseTest should be less brittle in teardown
 -

 Key: HADOOP-9259
 URL: https://issues.apache.org/jira/browse/HADOOP-9259
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 3.0.0, 1.1.2
Reporter: Steve Loughran
Priority: Minor
 Attachments: HDFS-4409.patch


 the teardown code in FileSystemContractBaseTest assumes that {{fs!=null}} and 
 that it's OK to throw an exception if the delete operation fails. Better to 
 check the {{fs}} value first, and to catch any exception from the 
 {{fs.delete()}} operation and convert it to a {{LOG.error()}} instead.
 This will stop failures in teardown from becoming a distraction from the root 
 cause of the problem (that your FileSystem is broken).
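 A minimal sketch of the teardown shape being proposed (assuming the base class's 
 {{fs}} field, {{path()}} helper and a commons-logging {{LOG}}; the attached patch 
 may differ):
 {code}
@Override
protected void tearDown() throws Exception {
  try {
    if (fs != null) {
      fs.delete(path("/test"), true);
    }
  } catch (IOException e) {
    // Don't let cleanup failures hide the real problem: the FileSystem is broken.
    LOG.error("Error deleting /test in teardown: " + e, e);
  }
  super.tearDown();
}
 {code}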

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-01-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9258:
---

Attachment: HADOOP-9528.patch

Single integrated patch for stricter checks on filesystems. 

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-01-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9258:
---

Status: Patch Available  (was: Open)

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9154) SortedMapWritable#putAll() doesn't add key/value classes to the map

2013-01-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9154:
-

Attachment: hadoop-9154.patch

Uploading a patch that applies on top of the latest patch in HADOOP-9124. Will 
submit the patch after HADOOP-9124 gets submitted.

 SortedMapWritable#putAll() doesn't add key/value classes to the map
 ---

 Key: HADOOP-9154
 URL: https://issues.apache.org/jira/browse/HADOOP-9154
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-9124.patch, hadoop-9154-draft.patch, 
 hadoop-9154-draft.patch, hadoop-9154.patch, hadoop-9154.patch, 
 hadoop-9154.patch


 In the following code from {{SortedMapWritable}}, #putAll() doesn't add 
 key/value classes to the class-id maps.
  {code}
  @Override
  public Writable put(WritableComparable key, Writable value) {
    addToMap(key.getClass());
    addToMap(value.getClass());
    return instance.put(key, value);
  }

  @Override
  public void putAll(Map<? extends WritableComparable, ? extends Writable> t) {
    for (Map.Entry<? extends WritableComparable, ? extends Writable> e :
        t.entrySet()) {

      instance.put(e.getKey(), e.getValue());
    }
  }
  {code}
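 One straightforward way to fix it (a sketch; the attached patches may differ) is 
 to route putAll() through put(), so that the key/value classes get registered:
 {code}
  @Override
  public void putAll(Map<? extends WritableComparable, ? extends Writable> t) {
    for (Map.Entry<? extends WritableComparable, ? extends Writable> e :
        t.entrySet()) {
      // Delegating to put() calls addToMap() for both the key and value classes.
      put(e.getKey(), e.getValue());
    }
  }
 {code}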

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2013-01-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564726#comment-13564726
 ] 

Karthik Kambatla commented on HADOOP-9124:
--

[~snihalani], looks like this applies to branch-1 as well. 

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.
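 A minimal sketch of the kind of fix implied here (assuming the backing 
 {{instance}} map used elsewhere in the class; the attached patches may differ), 
 delegating to the backing SortedMap so the java.util.Map contract holds:
 {code}
  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (obj instanceof SortedMapWritable) {
      // Equal iff the two maps contain the same key/value mappings.
      return instance.equals(((SortedMapWritable) obj).instance);
    }
    return false;
  }

  @Override
  public int hashCode() {
    // Consistent with equals(): derived from the same backing entries.
    return instance.hashCode();
  }
 {code}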

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8446) make hadoop-core jar OSGi friendly

2013-01-28 Thread Zafar Khaydarov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564744#comment-13564744
 ] 

Zafar Khaydarov commented on HADOOP-8446:
-

It would also be very nice to connect to Hadoop from OSGi; awaiting it. Happy 
to help if you point me to docs and ways.


 make hadoop-core jar OSGi friendly
 --

 Key: HADOOP-8446
 URL: https://issues.apache.org/jira/browse/HADOOP-8446
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Freeman Fang

 hadoop-core isn't OSGi friendly, so those who want to use it in an OSGi 
 container must wrap it with a tool like bnd/maven-bundle-plugin. Apache 
 Servicemix always wraps 3rd party jars which aren't OSGi friendly; you can see 
 we've done it for lots of jars here[1], and more specifically for several 
 hadoop-core versions[2].  Though we may keep doing it this way, the problem 
 is that we need to do it for every newly released version of the 3rd party 
 jars, and more importantly we need to ensure other Apache project communities 
 are aware that we're doing it.
 In Servicemix we just wrap hadoop-core 1.0.3; the issue tracking it in 
 Servicemix is [3].
 We hope Apache Hadoop can offer OSGi friendly jars; in most cases this 
 should be straightforward, as it just needs OSGi metadata headers added to 
 MANIFEST.MF, which could be done easily with maven-bundle-plugin if built with 
 maven.  There are also some other practices that should be followed, like 
 different modules not sharing the same package (avoid split packages). 
 thanks
 [1]http://repo2.maven.org/maven2/org/apache/servicemix/bundles
 [2]http://repo2.maven.org/maven2/org/apache/servicemix/bundles/org.apache.servicemix.bundles.hadoop-core/
 [3]https://issues.apache.org/jira/browse/SMX4-1147

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9259) FileSystemContractBaseTest should be less brittle in teardown

2013-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564747#comment-13564747
 ] 

Hadoop QA commented on HADOOP-9259:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565109/HDFS-4409.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2107//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2107//console

This message is automatically generated.

 FileSystemContractBaseTest should be less brittle in teardown
 -

 Key: HADOOP-9259
 URL: https://issues.apache.org/jira/browse/HADOOP-9259
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 3.0.0, 1.1.2
Reporter: Steve Loughran
Priority: Minor
 Attachments: HDFS-4409.patch


 the teardown code in FileSystemContractBaseTest assumes that {{fs!=null}} and 
 that it's OK to throw an exception if the delete operation fails. Better to 
 check the {{fs}} value first, and to catch any exception from the 
 {{fs.delete()}} operation and convert it to a {{LOG.error()}} instead.
 This will stop failures in teardown from becoming a distraction from the root 
 cause of the problem (that your FileSystem is broken).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564776#comment-13564776
 ] 

Hadoop QA commented on HADOOP-9258:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566825/HADOOP-9528.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  
org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
  org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
  org.apache.hadoop.fs.TestTrash

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2108//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2108//console

This message is automatically generated.

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8418) Fix UGI for IBM JDK running on Windows

2013-01-28 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564788#comment-13564788
 ] 

Eric Yang commented on HADOOP-8418:
---

hi Matt,

The HDFS build failure occurred one build earlier than build 1248.  I recommend 
looking at failures caused by a prior commit between Nov 27, 2012 and Dec 07, 
2012.  The build breaks are related to this issue.

 Fix UGI for IBM JDK running on Windows
 --

 Key: HADOOP-8418
 URL: https://issues.apache.org/jira/browse/HADOOP-8418
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3
Reporter: Luke Lu
Assignee: Yu Gao
  Labels: ibm-jdk, windows
 Fix For: 1.1.2

 Attachments: hadoop-8414-branch-1.0.patch, 
 hadoop-8418-branch-2.patch, hadoop-8418.patch


 The login module and user principal classes are different for 32 and 64-bit 
 Windows in IBM J9 JDK 6 SR10. Hadoop 1.0.3 does not run on either because it 
 uses the 32 bit login module and the 64-bit user principal class.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8418) Fix UGI for IBM JDK running on Windows

2013-01-28 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564790#comment-13564790
 ] 

Eric Yang commented on HADOOP-8418:
---

I meant: not related to this issue.

 Fix UGI for IBM JDK running on Windows
 --

 Key: HADOOP-8418
 URL: https://issues.apache.org/jira/browse/HADOOP-8418
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3
Reporter: Luke Lu
Assignee: Yu Gao
  Labels: ibm-jdk, windows
 Fix For: 1.1.2

 Attachments: hadoop-8414-branch-1.0.patch, 
 hadoop-8418-branch-2.patch, hadoop-8418.patch


 The login module and user principal classes are different for 32 and 64-bit 
 Windows in IBM J9 JDK 6 SR10. Hadoop 1.0.3 does not run on either because it 
 uses the 32 bit login module and the 64-bit user principal class.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2013-01-28 Thread Surenkumar Nihalani (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564799#comment-13564799
 ] 

Surenkumar Nihalani commented on HADOOP-9124:
-

I didn't see anything about branches in any of the HowToContribute wiki pages. Where 
can I read up on them?

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2-alpha
Reporter: Patrick Hunt
Priority: Minor
 Attachments: HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface; however, it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.
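
For illustration, a minimal, self-contained sketch of the Map-contract semantics 
described above (equality via the entry set, hash code as the sum of entry hash codes); 
the class below is hypothetical and is not the actual SortedMapWritable code:

{code}
// Hypothetical example, not the actual SortedMapWritable code: a TreeMap-backed
// wrapper whose equals()/hashCode() follow the java.util.Map contract.
import java.util.TreeMap;

public class ContractFriendlyMap<K extends Comparable<K>, V> {
  private final TreeMap<K, V> instance = new TreeMap<K, V>();

  public void put(K key, V value) { instance.put(key, value); }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) return true;
    if (!(obj instanceof ContractFriendlyMap)) return false;
    // Map contract: two maps are equal iff their entry sets are equal.
    return instance.entrySet().equals(((ContractFriendlyMap<?, ?>) obj).instance.entrySet());
  }

  @Override
  public int hashCode() {
    // Map contract: hash code is the sum of the entries' hash codes,
    // which TreeMap.hashCode() already computes.
    return instance.hashCode();
  }

  public static void main(String[] args) {
    ContractFriendlyMap<String, Integer> a = new ContractFriendlyMap<String, Integer>();
    ContractFriendlyMap<String, Integer> b = new ContractFriendlyMap<String, Integer>();
    a.put("x", 1);
    b.put("x", 1);
    System.out.println("Maps with equal entries compare equal: " + a.equals(b)); // true
  }
}
{code}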

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8427) Convert Forrest docs to APT, incremental

2013-01-28 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564802#comment-13564802
 ] 

Andy Isaacson commented on HADOOP-8427:
---

bq. Is there a jira that tracks the remaining work? Noticed there's still an 
xdocs directory.

HADOOP-9190
HADOOP-9221

 Convert Forrest docs to APT, incremental
 

 Key: HADOOP-8427
 URL: https://issues.apache.org/jira/browse/HADOOP-8427
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Andy Isaacson
  Labels: newbie
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: hadoop8427-1.txt, hadoop8427-3.txt, hadoop8427-4.txt, 
 hadoop8427-5.txt, HADOOP-8427.sh, hadoop8427.txt


 Some of the forrest docs content in src/docs/src/documentation/content/xdocs 
 has not yet been converted to APT and moved to src/site/apt. Let's convert 
 the forrest docs that haven't been converted yet to new APT content in 
 hadoop-common/src/site/apt (and link the new content into 
 hadoop-project/src/site/apt/index.apt.vm) and remove all forrest dependencies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-01-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564825#comment-13564825
 ] 

Steve Loughran commented on HADOOP-9258:


{{TestTrash}} failing on mkdir
{code}
2013-01-28 22:25:42,002 WARN  fs.TrashPolicyDefault 
(TrashPolicyDefault.java:moveToTrash(138)) - Can't create trash directory: 
file:/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/test/data/testTrash/user/test/.Trash/Current/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/test/data/testTrash/test/mkdirs
{code}


 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-28 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9253:


Status: Open  (was: Patch Available)

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha, 1.1.1
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-28 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9253:


Attachment: HADOOP-9253.branch-1.patch

@Harsh
I have updated the patch to handle a secure datanode startup. I tested on a 
secure and a non-secure cluster and the appropriate info was captured. Let me know 
if the approach looks good and I will provide a similar patch for trunk.

@Andy
I am not quite sure I understand what you are referring to. The log file that 
is being printed to the console should never have any leftover contents, as the 
start command overwrites it:

{code}
nohup nice -n $HADOOP_NICENESS $HADOOP_PREFIX/bin/hadoop --config 
$HADOOP_CONF_DIR $command $@ > $log 2>&1 < /dev/null &
{code}

But if you think the problem still exists, feel free to open another jira for it.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-28 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564968#comment-13564968
 ] 

Andy Isaacson commented on HADOOP-9253:
---

bq. I am not quite sure i understand what you are referring to. The log file 
that is being printed to the console should never have any left over contents 
as start commands overwrites it.

Your patch has:
{noformat}
+++ hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
@@ -154,7 +154,11 @@ case $startStop in
   ;;
 esac
 echo $! > $pid
-sleep 1; head $log
+sleep 1
+# capture the ulimit output
+echo ulimit -a >> $log
+ulimit -a >> $log 2>&1
+head $log
{noformat}

The file {{$log}} might be empty, or it might have some content from the 
'nohup' command line a few lines up.  Regardless, your patch then adds two 
commands (echo, then ulimit) that {{>>}} append to {{$log}}. Together those 
will append 17 lines of output to {{$log}}.

Then you use {{head}} to print out the first 10 lines of {{$log}}.  These 10 
lines might include some errors or warning messages from nohup, and then a few 
lines of the 17 that were printed by ulimit.

So I have two feedback items: 1. it's unclear why to write {{ulimit}} to 
{{$log}} at all. Why not just write the ulimit output directly to the console? 2. If 
writing ulimit to $log, why use {{head}} to truncate the output?  At least 
change the {{head}} command to print the entire expected output, {{head -20}} 
or similar.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-28 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564972#comment-13564972
 ] 

Arpit Gupta commented on HADOOP-9253:
-

bq. it's unclear why to write ulimit to $log at all

This is being added so we can debug issues related to the limits set for the 
user. Capturing them in the log lets the user refer to them at a later time.

bq. 2. If writing ulimit to $log, why use head to truncate the output

{code}
head $log
{code}

Is something that existed before and hence I left it as is. I can certainly 
change it to -20, but as you mention, if there are errors from the nohup command they 
will be logged to this file as well, so printing 20 lines might not 
help in that case.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-01-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9258:
---

Status: Open  (was: Patch Available)

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-01-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9258:
---

Attachment: HADOOP-9528-2.patch

Updated patch; it adds another part of the contract: 
{{FileSystem.getFileStatus("/")}} must always return a valid entry. That is, 
there is always a root directory.

The S3 Native tests that are failing (both in memory and tested against s3) 
break this requirement.

Similarly, both S3 filesystems let you rename a directory into a child 
directory.
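
For reference, a standalone sketch of the root-directory assumption being added; it 
runs against the local filesystem here and is only an approximation of the test the 
patch adds to the contract base class:

{code}
// Standalone illustration of the "there is always a root directory" check;
// the actual patch adds an equivalent test to FileSystemContractBaseTest.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RootDirContractCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    // Contract assumption: getFileStatus("/") must return a valid entry.
    FileStatus status = fs.getFileStatus(new Path("/"));
    if (status == null || !status.isDirectory()) {
      throw new AssertionError("Filesystem has no root directory: " + status);
    }
    System.out.println("Root directory present: " + status.getPath());
  }
}
{code}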

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528-2.patch, HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-01-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9258:
---

Status: Patch Available  (was: Open)

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528-2.patch, HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-28 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564996#comment-13564996
 ] 

Andy Isaacson commented on HADOOP-9253:
---

bq. {{head $log}} Is something that existed before and hence I left it as is.

Previously it made sense, since {{$log}} was probably only a few lines long.  
Now that your change guarantees that {{$log}} will be more than 10 
lines long, please adjust the {{head}} command as appropriate.

The reason for using {{head}} here is that there may be a few lines of output in 
the log that would be helpful for debugging.  But it's also possible that the 
log has thousands of lines of errors, which would not be helpful.  With {{head}} you 
get the first few errors and avoid potentially dumping MBs of errors to the 
terminal.  Please preserve that behavior.  Since you're adding 17 lines of 
output, perhaps add 17 to the number of lines that {{head}} will print.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13565011#comment-13565011
 ] 

Hadoop QA commented on HADOOP-9258:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566884/HADOOP-9528-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  
org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
  org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2109//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2109//console

This message is automatically generated.

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528-2.patch, HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9257) HADOOP-9241 changed DN's default DU interval to 1m instead of 10m accidentally

2013-01-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-9257:


Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Reverted HADOOP-9241 instead of doing this incremental fix. Resolving as a duplicate of 
HADOOP-9241.

 HADOOP-9241 changed DN's default DU interval to 1m instead of 10m accidentally
 --

 Key: HADOOP-9257
 URL: https://issues.apache.org/jira/browse/HADOOP-9257
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-9257.patch


 Suresh caught this on HADOOP-9241:
 {quote}
 Even for trivial jiras, I suggest getting the code review done before 
 committing the code. Such changes are easy and quick to review.
 In this patch, did DU interval become 1 minute instead of 10 minutes?
 {code}
 -this(path, 600000L);
 -//10 minutes default refresh interval
 +this(path, conf.getLong(CommonConfigurationKeys.FS_DU_INTERVAL_KEY,
 +CommonConfigurationKeys.FS_DU_INTERVAL_DEFAULT));
 +  /** See <a href="{@docRoot}/../core-default.html">core-default.xml</a> */
 +  public static final String  FS_DU_INTERVAL_KEY = "fs.du.interval";
 +  /** Default value for FS_DU_INTERVAL_KEY */
 +  public static final long FS_DU_INTERVAL_DEFAULT = 60000;
 {code}
 {quote}
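
 For clarity, a small sketch of the intended behaviour (configurable interval, 
 10-minute default); the class below is illustrative and not the committed fix:

{code}
// Illustrative sketch, not the committed code: the DU refresh interval becomes
// configurable, while the default stays at 10 minutes (600000 ms).
import org.apache.hadoop.conf.Configuration;

public class DuIntervalDefaults {
  public static final String FS_DU_INTERVAL_KEY = "fs.du.interval";
  public static final long FS_DU_INTERVAL_DEFAULT = 600000L; // 10 minutes

  /** Resolve the refresh interval, falling back to the 10-minute default. */
  public static long refreshInterval(Configuration conf) {
    return conf.getLong(FS_DU_INTERVAL_KEY, FS_DU_INTERVAL_DEFAULT);
  }

  public static void main(String[] args) {
    System.out.println("DU refresh interval: " + refreshInterval(new Configuration()) + " ms");
  }
}
{code}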

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9260) Hadoop version commands show incorrect information on trunk

2013-01-28 Thread Jerry Chen (JIRA)
Jerry Chen created HADOOP-9260:
--

 Summary: Hadoop version commands show incorrect information on 
trunk
 Key: HADOOP-9260
 URL: https://issues.apache.org/jira/browse/HADOOP-9260
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: trunk-win
Reporter: Jerry Chen


1. Check out the trunk from 
http://svn.apache.org/repos/asf/hadoop/common/trunk/ 
2. Compile package
   m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
3. Hadoop version of compiled dist shows the following:

Hadoop 3.0.0-SNAPSHOT
Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
Compiled by haifeng on ${version-info.build.time}
From source with checksum ${version-info.source.md5}

While using the same workflow, trunk revision 1429810 didn't have the problem.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9241) DU refresh interval is not configurable

2013-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13565098#comment-13565098
 ] 

Hudson commented on HADOOP-9241:


Integrated in Hadoop-trunk-Commit #3292 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3292/])
Revert HADOOP-9241 properly this time. Left the core-default.xml in 
previous commit. (Revision 1439750)
Reverting HADOOP-9241. To be fixed and reviewed. (Revision 1439748)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1439750
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1439748
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DU.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java


 DU refresh interval is not configurable
 ---

 Key: HADOOP-9241
 URL: https://issues.apache.org/jira/browse/HADOOP-9241
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9241.patch


 While the {{DF}} class's refresh interval is configurable, the {{DU}}'s 
 isn't. We should ensure both be configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9260) Hadoop version commands show incorrect information on trunk

2013-01-28 Thread Jerry Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13565102#comment-13565102
 ] 

Jerry Chen commented on HADOOP-9260:


After updating to the latest trunk again, the problem is resolved.

 Hadoop version commands show incorrect information on trunk
 ---

 Key: HADOOP-9260
 URL: https://issues.apache.org/jira/browse/HADOOP-9260
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: trunk-win
Reporter: Jerry Chen

 1. Check out the trunk from 
 http://svn.apache.org/repos/asf/hadoop/common/trunk/ 
 2. Compile package
m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
 3. Hadoop version of compiled dist shows the following:
 Hadoop 3.0.0-SNAPSHOT
 Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
 Compiled by haifeng on ${version-info.build.time}
 From source with checksum ${version-info.source.md5}
 While using the same workflow, trunk revision 1429810 didn't have the problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9260) Hadoop version commands show incorrect information on trunk

2013-01-28 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen resolved HADOOP-9260.


Resolution: Invalid

Already fixed by the latest version.

 Hadoop version commands show incorrect information on trunk
 ---

 Key: HADOOP-9260
 URL: https://issues.apache.org/jira/browse/HADOOP-9260
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: trunk-win
Reporter: Jerry Chen

 1. Check out the trunk from 
 http://svn.apache.org/repos/asf/hadoop/common/trunk/ 
 2. Compile package
m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
 3. Hadoop version of compiled dist shows the following:
 Hadoop 3.0.0-SNAPSHOT
 Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
 Compiled by haifeng on ${version-info.build.time}
 From source with checksum ${version-info.source.md5}
 While using the same workflow, trunk revision 1429810 didn't have the problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9260) Hadoop version commands show incorrect information on trunk

2013-01-28 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen updated HADOOP-9260:
---

Priority: Critical  (was: Major)

 Hadoop version commands show incorrect information on trunk
 ---

 Key: HADOOP-9260
 URL: https://issues.apache.org/jira/browse/HADOOP-9260
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: trunk-win
Reporter: Jerry Chen
Priority: Critical

 1. Check out the trunk from 
 http://svn.apache.org/repos/asf/hadoop/common/trunk/ 
 2. Compile package
m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
 3. Hadoop version of compiled dist shows the following:
 Hadoop 3.0.0-SNAPSHOT
 Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
 Compiled by haifeng on ${version-info.build.time}
 From source with checksum ${version-info.source.md5}
 While using the same workflow, trunk revision 1429810 didn't have the problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HADOOP-9260) Hadoop version commands show incorrect information on trunk

2013-01-28 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen reopened HADOOP-9260:



The latest trunk only fixes the compile-time problem of generating the right 
version-info file. 
But using the latest trunk (-r 1439752), the runtime has a critical problem: 
the Hadoop name node and data node start up with incorrect version 
information. This causes problems for other systems.
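
For background, a hedged sketch of how a version string read from a build-time-filtered 
properties file surfaces the raw ${...} placeholders when Maven filtering did not run; 
the resource and key names here are assumptions for illustration only:

{code}
// Illustration only: version info is typically loaded from a properties file
// that Maven filters at build time. If filtering did not run, the unexpanded
// ${...} tokens are served verbatim, which is what the startup log shows.
// The resource name and key below are assumed for this example.
import java.io.InputStream;
import java.util.Properties;

public class VersionInfoCheck {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    InputStream in = VersionInfoCheck.class.getClassLoader()
        .getResourceAsStream("common-version-info.properties"); // assumed name
    if (in != null) {
      try {
        props.load(in);
      } finally {
        in.close();
      }
    }
    String version = props.getProperty("version", "Unknown");
    if (version.contains("${")) {
      System.err.println("Placeholders not expanded at build time: " + version);
    } else {
      System.out.println("Hadoop " + version);
    }
  }
}
{code}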


 Hadoop version commands show incorrect information on trunk
 ---

 Key: HADOOP-9260
 URL: https://issues.apache.org/jira/browse/HADOOP-9260
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: trunk-win
Reporter: Jerry Chen

 1. Check out the trunk from 
 http://svn.apache.org/repos/asf/hadoop/common/trunk/ 
 2. Compile package
m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
 3. Hadoop version of compiled dist shows the following:
 Hadoop 3.0.0-SNAPSHOT
 Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
 Compiled by haifeng on ${version-info.build.time}
 From source with checksum ${version-info.source.md5}
 While using the same workflow, trunk revision 1429810 didn't have the problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9260) Hadoop version commands show incorrect information on trunk

2013-01-28 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen updated HADOOP-9260:
---

Description: 
1. Check out the trunk from 
http://svn.apache.org/repos/asf/hadoop/common/trunk/ 
2. Compile package
   m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
3. Hadoop version of compiled dist shows the following:

Hadoop ${pom.version}
Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
Compiled by ${user.name} on ${version-info.build.time}
From source with checksum ${version-info.source.md5}

And in a real cluster, the log in name node shows:

2013-01-29 15:23:42,738 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
STARTUP_MSG: 
/
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = bdpe01.sh.intel.com/10.239.47.101
STARTUP_MSG:   args = []
STARTUP_MSG:   version = ${pom.version}
STARTUP_MSG:   classpath = ...
STARTUP_MSG:   build = ${version-info.scm.uri} -r ${version-info.scm.commit}; 
compiled by '${user.name}' on ${version-info.build.time}
STARTUP_MSG:   java = 1.6.0_33

While some data nodes with the same binary show the correct version 
information.


  was:
1. Check out the trunk from 
http://svn.apache.org/repos/asf/hadoop/common/trunk/ 
2. Compile package
   m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
3. Hadoop version of compiled dist shows the following:

Hadoop 3.0.0-SNAPSHOT
Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
Compiled by haifeng on ${version-info.build.time}
From source with checksum ${version-info.source.md5}

While using the same workflow, trunk revision 1429810 didn't have the problem.



 Hadoop version commands show incorrect information on trunk
 ---

 Key: HADOOP-9260
 URL: https://issues.apache.org/jira/browse/HADOOP-9260
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: trunk-win
Reporter: Jerry Chen
Priority: Critical

 1. Check out the trunk from 
 http://svn.apache.org/repos/asf/hadoop/common/trunk/ 
 2. Compile package
m2 package -Pdist -Psrc -Pnative -Dtar -DskipTests
 3. Hadoop version of compiled dist shows the following:
 Hadoop ${pom.version}
 Subversion ${version-info.scm.uri} -r ${version-info.scm.commit}
 Compiled by ${user.name} on ${version-info.build.time}
 From source with checksum ${version-info.source.md5}
 And in a real cluster, the log in name node shows:
 2013-01-29 15:23:42,738 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
 STARTUP_MSG: 
 /
 STARTUP_MSG: Starting NameNode
 STARTUP_MSG:   host = bdpe01.sh.intel.com/10.239.47.101
 STARTUP_MSG:   args = []
 STARTUP_MSG:   version = ${pom.version}
 STARTUP_MSG:   classpath = ...
 STARTUP_MSG:   build = ${version-info.scm.uri} -r ${version-info.scm.commit}; 
 compiled by '${user.name}' on ${version-info.build.time}
 STARTUP_MSG:   java = 1.6.0_33
 While some data nodes with the same binary show the correct version 
 information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira