[jira] [Commented] (HADOOP-10134) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2013-12-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851582#comment-13851582
 ] 

Steve Loughran commented on HADOOP-10134:
-

+1, ship it

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments 
 --

 Key: HADOOP-10134
 URL: https://issues.apache.org/jira/browse/HADOOP-10134
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.4.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Attachments: 10134-branch-2.patch, 10134-trunk.patch, 
 10134-trunk.patch


 Javadoc is more strict by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8, all of the 
 required changes are generic Javadoc cleanups.
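 For illustration, a minimal sketch (examples assumed here, not taken from the 
 attached patches) of doc-comment constructs that JDK8's stricter javadoc 
 rejects but JDK7 tolerated:
 {code}
 class DoclintExamples {
   /**
    * Unescaped angle brackets are now errors: write {@code List<String>}
    * rather than List<String> directly. Self-closing HTML such as <p/> is
    * also rejected, as is a tag for a nonexistent parameter:
    *
    * @param ignored no such parameter exists -- javadoc 8 flags this
    * @return the answer
    */
   int answer() { return 42; }
 }
 {code}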



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10168) fix javadoc of ReflectionUtils.copy

2013-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851583#comment-13851583
 ] 

Hudson commented on HADOOP-10168:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #425 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/425/])
HADOOP-10168. fix javadoc of ReflectionUtils#copy. Contributed by Thejas Nair. 
(suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1551646)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java


 fix javadoc of ReflectionUtils.copy 
 

 Key: HADOOP-10168
 URL: https://issues.apache.org/jira/browse/HADOOP-10168
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 2.4.0

 Attachments: HADOOP-10168.1.patch


 In the javadoc of ReflectionUtils.copy, the return value is not documented, 
 and the arguments are named incorrectly.
 {code}
   /**
    * Make a copy of the writable object using serialization to a buffer
    * @param dst the object to copy from
    * @param src the object to copy into, which is destroyed
    * @throws IOException
    */
   @SuppressWarnings("unchecked")
   public static <T> T copy(Configuration conf,
       T src, T dst) throws IOException {
 {code}
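 A corrected doc comment would read roughly as follows (an editor's sketch: 
 the src/dst swap and the added @return are inferred from the method 
 signature, not copied from the attached patch):
 {code}
   /**
    * Make a copy of the writable object using serialization to a buffer.
    * @param src the object to copy from
    * @param dst the object to copy into, which is destroyed
    * @return dst, the copy
    * @throws IOException if serialization fails
    */
 {code}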



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-9611) mvn-rpmbuild against google-guice > 3.0 yields missing cglib dependency

2013-12-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851604#comment-13851604
 ] 

Steve Loughran commented on HADOOP-9611:


+1, committed to branch-2+. Thanks!

 mvn-rpmbuild against google-guice > 3.0 yields missing cglib dependency
 ---

 Key: HADOOP-9611
 URL: https://issues.apache.org/jira/browse/HADOOP-9611
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Timothy St. Clair
  Labels: maven
 Fix For: 2.4.0

 Attachments: HADOOP-2.2.0-9611.patch, HADOOP-9611.patch


 Google guice 3.0 repackaged some external dependencies (cglib), which are 
 broken out and exposed when running a mvn-rpmbuild against a stock Fedora 18 
 machine (3.1.2-6).  Adding the explicit dependency fixes the error and has 
 no impact on normal mvn builds.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-9611) mvn-rpmbuild against google-guice > 3.0 yields missing cglib dependency

2013-12-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9611:
---

   Resolution: Fixed
Fix Version/s: 2.4.0
 Assignee: Timothy St. Clair
   Status: Resolved  (was: Patch Available)

 mvn-rpmbuild against google-guice > 3.0 yields missing cglib dependency
 ---

 Key: HADOOP-9611
 URL: https://issues.apache.org/jira/browse/HADOOP-9611
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Timothy St. Clair
Assignee: Timothy St. Clair
  Labels: maven
 Fix For: 2.4.0

 Attachments: HADOOP-2.2.0-9611.patch, HADOOP-9611.patch


 Google guice 3.0 repackaged some external dependencies (cglib), which are 
 broken out and exposed when running a mvn-rpmbuild against a stock Fedora 18 
 machine (3.1.2-6).  Adding the explicit dependency fixes the error and has 
 no impact on normal mvn builds.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-9611) mvn-rpmbuild against google-guice > 3.0 yields missing cglib dependency

2013-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851609#comment-13851609
 ] 

Hudson commented on HADOOP-9611:


SUCCESS: Integrated in Hadoop-trunk-Commit #4908 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4908/])
HADOOP-9611 mvn-rpmbuild against google-guice > 3.0 yields missing cglib 
dependency (stevel: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1551916)
* /hadoop/common/trunk
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-extras/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/pom.xml


 mvn-rpmbuild against google-guice > 3.0 yields missing cglib dependency
 ---

 Key: HADOOP-9611
 URL: https://issues.apache.org/jira/browse/HADOOP-9611
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Timothy St. Clair
Assignee: Timothy St. Clair
  Labels: maven
 Fix For: 2.4.0

 Attachments: HADOOP-2.2.0-9611.patch, HADOOP-9611.patch


 Google guice 3.0 repackaged some external dependencies (cglib), which are 
 broken out and exposed when running a mvn-rpmbuild against a stock Fedora 18 
 machine (3.1.2-6).  Adding the explicit dependency fixes the error and has 
 no impact on normal mvn builds.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10168) fix javadoc of ReflectionUtils.copy

2013-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851695#comment-13851695
 ] 

Hudson commented on HADOOP-10168:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1616 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1616/])
HADOOP-10168. fix javadoc of ReflectionUtils#copy. Contributed by Thejas Nair. 
(suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1551646)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java


 fix javadoc of ReflectionUtils.copy 
 

 Key: HADOOP-10168
 URL: https://issues.apache.org/jira/browse/HADOOP-10168
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 2.4.0

 Attachments: HADOOP-10168.1.patch


 In the javadoc of ReflectionUtils.copy, the return value is not documented, 
 and the arguments are named incorrectly.
 {code}
   /**
    * Make a copy of the writable object using serialization to a buffer
    * @param dst the object to copy from
    * @param src the object to copy into, which is destroyed
    * @throws IOException
    */
   @SuppressWarnings("unchecked")
   public static <T> T copy(Configuration conf,
       T src, T dst) throws IOException {
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-9611) mvn-rpmbuild against google-guice > 3.0 yields missing cglib dependency

2013-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851708#comment-13851708
 ] 

Hudson commented on HADOOP-9611:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1616 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1616/])
HADOOP-9611 mvn-rpmbuild against google-guice > 3.0 yields missing cglib 
dependency (stevel: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1551916)
* /hadoop/common/trunk
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-extras/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/pom.xml


 mvn-rpmbuild against google-guice > 3.0 yields missing cglib dependency
 ---

 Key: HADOOP-9611
 URL: https://issues.apache.org/jira/browse/HADOOP-9611
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Timothy St. Clair
Assignee: Timothy St. Clair
  Labels: maven
 Fix For: 2.4.0

 Attachments: HADOOP-2.2.0-9611.patch, HADOOP-9611.patch


 Google guice 3.0 repackaged some external dependencies (cglib), which are 
 broken out and exposed when running a mvn-rpmbuild against a stock Fedora 18 
 machine (3.1.2-6).  Adding the explicit dependency fixes the error and has 
 no impact on normal mvn builds.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Assigned] (HADOOP-9991) Fix up Hadoop Poms for enforced dependencies, roll up JARs to latest versions

2013-12-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-9991:
--

Assignee: Steve Loughran

 Fix up Hadoop Poms for enforced dependencies, roll up JARs to latest versions
 -

 Key: HADOOP-9991
 URL: https://issues.apache.org/jira/browse/HADOOP-9991
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.4.0, 2.1.1-beta
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: hadoop-9991-v1.txt


 If you try using Hadoop downstream with a classpath shared with HBase and 
 Accumulo, you soon discover how messy the dependencies are.
 Hadoop's side of this problem is
 # not being up to date with some of the external releases of common JARs
 # not locking down/excluding inconsistent versions of artifacts provided down 
 the dependency graph



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10101) Update guava dependency to the latest version 15.0

2013-12-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10101:


Status: Patch Available  (was: Open)

 Update guava dependency to the latest version 15.0
 --

 Key: HADOOP-10101
 URL: https://issues.apache.org/jira/browse/HADOOP-10101
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Rakesh R
Assignee: Vinay
 Attachments: HADOOP-10101-002.patch, HADOOP-10101.patch, 
 HADOOP-10101.patch


 The existing guava version is 11.0.2 which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10101) Update guava dependency to the latest version 15.0

2013-12-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10101:


Status: Open  (was: Patch Available)

 Update guava dependency to the latest version 15.0
 --

 Key: HADOOP-10101
 URL: https://issues.apache.org/jira/browse/HADOOP-10101
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Rakesh R
Assignee: Vinay
 Attachments: HADOOP-10101-002.patch, HADOOP-10101.patch, 
 HADOOP-10101.patch


 The existing guava version is 11.0.2 which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10101) Update guava dependency to the latest version 15.0

2013-12-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10101:


Attachment: HADOOP-10101-004.patch

This is the previous patch resynced with trunk and all trailing CRs stripped. 

Vinay, can you make sure that your editor/OS/SCM tool isn't adding the wrong 
line endings?

 Update guava dependency to the latest version 15.0
 --

 Key: HADOOP-10101
 URL: https://issues.apache.org/jira/browse/HADOOP-10101
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Rakesh R
Assignee: Vinay
 Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
 HADOOP-10101.patch, HADOOP-10101.patch


 The existing guava version is 11.0.2 which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10101) Update guava dependency to the latest version 15.0

2013-12-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10101:


Status: Patch Available  (was: Open)

 Update guava dependency to the latest version 15.0
 --

 Key: HADOOP-10101
 URL: https://issues.apache.org/jira/browse/HADOOP-10101
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Rakesh R
Assignee: Vinay
 Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
 HADOOP-10101.patch, HADOOP-10101.patch


 The existing guava version is 11.0.2 which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-9650) Update jetty dependencies

2013-12-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851744#comment-13851744
 ] 

Steve Loughran commented on HADOOP-9650:


Catching up with these: I'm going to propose that we move up to jetty-8, but 
I'd like anyone who can offer some load testing of terasort to give it a go, 
just so we can be happy that it's not going to have issues.

 Update jetty dependencies 
 --

 Key: HADOOP-9650
 URL: https://issues.apache.org/jira/browse/HADOOP-9650
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.1.0-beta
Reporter: Timothy St. Clair
  Labels: build, maven
 Attachments: HADOOP-9650.patch, HADOOP-trunk-9650.patch


 Update deprecated jetty 6 dependencies, moving forward to jetty 8.  This 
 enables mvn-rpmbuild on Fedora 18 platforms. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10147) Upgrade to commons-logging 1.1.3 to avoid potential deadlock in MiniDFSCluster

2013-12-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10147:


Status: Patch Available  (was: Open)

 Upgrade to commons-logging 1.1.3 to avoid potential deadlock in MiniDFSCluster
 --

 Key: HADOOP-10147
 URL: https://issues.apache.org/jira/browse/HADOOP-10147
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0
Reporter: Eric Sirianni
Priority: Minor
 Attachments: HADOOP-10147-001.patch


 There is a deadlock in commons-logging 1.1.1 (see LOGGING-119) that can 
 manifest itself while running {{MiniDFSCluster}} JUnit tests.
 This deadlock has been fixed in commons-logging 1.1.2.  The latest version 
 available is commons-logging 1.1.3, and Hadoop should upgrade to that in 
 order to address this deadlock.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10147) Upgrade to commons-logging 1.1.3 to avoid potential deadlock in MiniDFSCluster

2013-12-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10147:


Attachment: HADOOP-10147-001.patch

patch to update hadoop-project/pom.xml

 Upgrade to commons-logging 1.1.3 to avoid potential deadlock in MiniDFSCluster
 --

 Key: HADOOP-10147
 URL: https://issues.apache.org/jira/browse/HADOOP-10147
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0
Reporter: Eric Sirianni
Priority: Minor
 Attachments: HADOOP-10147-001.patch


 There is a deadlock in commons-logging 1.1.1 (see LOGGING-119) that can 
 manifest itself while running {{MiniDFSCluster}} JUnit tests.
 This deadlock has been fixed in commons-logging 1.1.2.  The latest version 
 available is commons-logging 1.1.3, and Hadoop should upgrade to that in 
 order to address this deadlock.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10147) Upgrade to commons-logging 1.1.3 to avoid potential deadlock in MiniDFSCluster

2013-12-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851755#comment-13851755
 ] 

Steve Loughran commented on HADOOP-10147:
-

Linking to HDFS-5678, which files this problem as an HDFS issue.

 Upgrade to commons-logging 1.1.3 to avoid potential deadlock in MiniDFSCluster
 --

 Key: HADOOP-10147
 URL: https://issues.apache.org/jira/browse/HADOOP-10147
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0
Reporter: Eric Sirianni
Priority: Minor
 Attachments: HADOOP-10147-001.patch


 There is a deadlock in commons-logging 1.1.1 (see LOGGING-119) that can 
 manifest itself while running {{MiniDFSCluster}} JUnit tests.
 This deadlock has been fixed in commons-logging 1.1.2.  The latest version 
 available is commons-logging 1.1.3, and Hadoop should upgrade to that in 
 order to address this deadlock.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10147) Upgrade to commons-logging 1.1.3 to avoid potential deadlock in MiniDFSCluster

2013-12-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851774#comment-13851774
 ] 

Hadoop QA commented on HADOOP-10147:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12619321/HADOOP-10147-001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3369//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3369//console

This message is automatically generated.

 Upgrade to commons-logging 1.1.3 to avoid potential deadlock in MiniDFSCluster
 --

 Key: HADOOP-10147
 URL: https://issues.apache.org/jira/browse/HADOOP-10147
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0
Reporter: Eric Sirianni
Priority: Minor
 Attachments: HADOOP-10147-001.patch


 There is a deadlock in commons-logging 1.1.1 (see LOGGING-119) that can 
 manifest itself while running {{MiniDFSCluster}} JUnit tests.
 This deadlock has been fixed in commons-logging 1.1.2.  The latest version 
 available is commons-logging 1.1.3, and Hadoop should upgrade to that in 
 order to address this deadlock.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-9611) mvn-rpmbuild against google-guice > 3.0 yields missing cglib dependency

2013-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851815#comment-13851815
 ] 

Hudson commented on HADOOP-9611:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1642 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1642/])
HADOOP-9611 mvn-rpmbuild against google-guice > 3.0 yields missing cglib 
dependency (stevel: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1551916)
* /hadoop/common/trunk
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-extras/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/pom.xml


 mvn-rpmbuild against google-guice > 3.0 yields missing cglib dependency
 ---

 Key: HADOOP-9611
 URL: https://issues.apache.org/jira/browse/HADOOP-9611
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Timothy St. Clair
Assignee: Timothy St. Clair
  Labels: maven
 Fix For: 2.4.0

 Attachments: HADOOP-2.2.0-9611.patch, HADOOP-9611.patch


 Google guice 3.0 repackaged some external dependencies (cglib), which are 
 broken out and exposed when running a mvn-rpmbuild against a stock Fedora 18 
 machine (3.1.2-6).  Adding the explicit dependency fixes the error and has 
 no impact on normal mvn builds.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10168) fix javadoc of ReflectionUtils.copy

2013-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851802#comment-13851802
 ] 

Hudson commented on HADOOP-10168:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1642 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1642/])
HADOOP-10168. fix javadoc of ReflectionUtils#copy. Contributed by Thejas Nair. 
(suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1551646)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java


 fix javadoc of ReflectionUtils.copy 
 

 Key: HADOOP-10168
 URL: https://issues.apache.org/jira/browse/HADOOP-10168
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 2.4.0

 Attachments: HADOOP-10168.1.patch


 In the javadoc of ReflectionUtils.copy, the return value is not documented, 
 and the arguments are named incorrectly.
 {code}
   /**
    * Make a copy of the writable object using serialization to a buffer
    * @param dst the object to copy from
    * @param src the object to copy into, which is destroyed
    * @throws IOException
    */
   @SuppressWarnings("unchecked")
   public static <T> T copy(Configuration conf,
       T src, T dst) throws IOException {
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10169) remove the unnecessary synchronized in JvmMetrics class

2013-12-18 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851875#comment-13851875
 ] 

Jason Lowe commented on HADOOP-10169:
-

Doh, in my haste I misread the code, so my pseudo-code isn't relevant.  We're 
not initializing the map just once overall; we're initializing it once per key. 
As you point out, we can be initializing a new key while we're busy accessing an 
old key, and something like a ConcurrentMap is more appropriate.

However, in general we cannot just replace HashMap with ConcurrentHashMap, 
remove the synchronized keywords, and expect it to work properly in all cases.  
There's now a race in getGcInfo where thread A comes along, sees there isn't an 
entry in the map for key K, and starts creating an empty MetricsInfo for it.  
Meanwhile thread B comes along, also sees there isn't an entry for key K, 
creates an empty MetricsInfo, puts it in the map, updates the MetricsInfo with 
new metrics, and continues on.  Thread A then wakes back up and pokes the empty 
MetricsInfo into the map for key K, causing data loss of the metrics computed 
by thread B.  The gcInfoCache.put needs to be gcInfoCache.putIfAbsent, and if 
putIfAbsent returns a value then we need to return that instead of the empty 
metrics info.

A couple of other nits on the patch: the com.google.common.collect.Maps import 
is no longer necessary, and the patch includes an unrelated whitespace change 
that pushes one of the modified lines over 80 columns.
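A minimal sketch of the putIfAbsent pattern described above (GcInfo is a 
stand-in type for illustration, not the MetricsInfo from the patch):
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class GcInfoCache {
  static final class GcInfo {
    final String name;
    GcInfo(String name) { this.name = name; }
  }

  private final ConcurrentMap<String, GcInfo> cache =
      new ConcurrentHashMap<String, GcInfo>();

  GcInfo getGcInfo(String key) {
    GcInfo info = cache.get(key);
    if (info == null) {
      GcInfo created = new GcInfo(key);
      // Atomic insert: if another thread won the race, keep its entry so
      // any metrics already written to it are not lost.
      GcInfo existing = cache.putIfAbsent(key, created);
      info = (existing != null) ? existing : created;
    }
    return info;
  }
}
{code}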

 remove the unnecessary  synchronized in JvmMetrics class
 

 Key: HADOOP-10169
 URL: https://issues.apache.org/jira/browse/HADOOP-10169
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 3.0.0, 2.2.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HADOOP-10169-v2.txt, HADOOP-10169.txt


 When I looked into an HBase JvmMetrics impl, I found this synchronized seems 
 not essential.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10101) Update guava dependency to the latest version 15.0

2013-12-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851879#comment-13851879
 ] 

Hadoop QA commented on HADOOP-10101:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617772/HADOOP-10101.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3367//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3367//console

This message is automatically generated.

 Update guava dependency to the latest version 15.0
 --

 Key: HADOOP-10101
 URL: https://issues.apache.org/jira/browse/HADOOP-10101
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Rakesh R
Assignee: Vinay
 Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
 HADOOP-10101.patch, HADOOP-10101.patch


 The existing guava version is 11.0.2 which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10101) Update guava dependency to the latest version 15.0

2013-12-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851896#comment-13851896
 ] 

Hadoop QA commented on HADOOP-10101:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12619314/HADOOP-10101-004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3368//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3368//console

This message is automatically generated.

 Update guava dependency to the latest version 15.0
 --

 Key: HADOOP-10101
 URL: https://issues.apache.org/jira/browse/HADOOP-10101
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Rakesh R
Assignee: Vinay
 Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
 HADOOP-10101.patch, HADOOP-10101.patch


 The existing guava version is 11.0.2 which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10171) TestRPC fails intermittently on jdk7

2013-12-18 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-10171:
-

Summary: TestRPC fails intermittently on jdk7  (was: TestRPC fails on 
Branch-2)

 TestRPC fails intermittently on jdk7
 

 Key: HADOOP-10171
 URL: https://issues.apache.org/jira/browse/HADOOP-10171
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Mit Desai
Assignee: Mit Desai
  Labels: java7
 Attachments: HADOOP-10171-branch-2.patch, HADOOP-10171-trunk.patch


 Branch-2 runs JDK7, which has a random test order, so we get an error in 
 TestRPC (testStopsAllThreads) failing on the assertEquals.
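 As a side note, JUnit 4.11 offers an explicit method sorter for tests that 
 cannot yet be made order-independent (illustrative; not necessarily what the 
 attached patches do):
 {code}
 import org.junit.FixMethodOrder;
 import org.junit.runners.MethodSorters;

 // Pins test methods to lexicographic order instead of JDK7's
 // reflection-dependent (effectively random) declaration order.
 @FixMethodOrder(MethodSorters.NAME_ASCENDING)
 public class OrderedTestExample {
   // ... test methods ...
 }
 {code}
 Making the assertions independent of execution order remains the more robust 
 fix.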



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10171) TestRPC fails intermittently on jdk7

2013-12-18 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-10171:
-

   Resolution: Fixed
Fix Version/s: 2.4.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks, Mit.

 TestRPC fails intermittently on jdk7
 

 Key: HADOOP-10171
 URL: https://issues.apache.org/jira/browse/HADOOP-10171
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Mit Desai
Assignee: Mit Desai
  Labels: java7
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10171-branch-2.patch, HADOOP-10171-trunk.patch


 Branch-2 runs JDK7, which has a random test order, so we get an error in 
 TestRPC (testStopsAllThreads) failing on the assertEquals.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10171) TestRPC fails intermittently on jdk7

2013-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851926#comment-13851926
 ] 

Hudson commented on HADOOP-10171:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #4909 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4909/])
HADOOP-10171. TestRPC fails intermittently on jdk7 (Mit Desai via jeagles) 
(jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1552024)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


 TestRPC fails intermittently on jdk7
 

 Key: HADOOP-10171
 URL: https://issues.apache.org/jira/browse/HADOOP-10171
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Mit Desai
Assignee: Mit Desai
  Labels: java7
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10171-branch-2.patch, HADOOP-10171-trunk.patch


 Branch-2 runs JDK7, which has a random test order, so we get an error in 
 TestRPC (testStopsAllThreads) failing on the assertEquals.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-9079) LocalDirAllocator throws ArithmeticException

2013-12-18 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851964#comment-13851964
 ] 

Jimmy Xiang commented on HADOOP-9079:
-

Let me think about it again and come up with a test.

 LocalDirAllocator throws ArithmeticException
 

 Key: HADOOP-9079
 URL: https://issues.apache.org/jira/browse/HADOOP-9079
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: hadoop-9079-v2.txt, trunk-9079.patch


 2012-11-19 22:07:41,709 WARN  [IPC Server handler 0 on 38671] 
 nodemanager.NMAuditLogger(150): USER=UnknownUser IP= 
 OPERATION=Stop Container Request TARGET=ContainerManagerImpl 
 RESULT=FAILURE  DESCRIPTION=Trying to stop unknown container!   
 APPID=application_1353391620476_0001
 CONTAINERID=container_1353391620476_0001_01_10
 java.lang.ArithmeticException: / by zero
   at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:368)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
   at 
 org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:263)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:849)
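 The division by zero indicates the round-robin dir selection ran with zero 
 healthy local dirs. A guarded version (an illustrative sketch, not the 
 Hadoop source) fails fast instead:
 {code}
 import java.io.IOException;
 import java.util.List;
 import java.util.Random;

 class DirPicker {
   private final Random rand = new Random();

   // Selecting by index modulo the list size throws "/ by zero" when every
   // configured dir has failed its health check; guard before dividing.
   String pickDir(List<String> usableDirs) throws IOException {
     if (usableDirs.isEmpty()) {
       throw new IOException("No usable local directories available");
     }
     return usableDirs.get(rand.nextInt(usableDirs.size()));
   }
 }
 {code}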



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Assigned] (HADOOP-9079) LocalDirAllocator throws ArithmeticException

2013-12-18 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang reassigned HADOOP-9079:
---

Assignee: Jimmy Xiang

 LocalDirAllocator throws ArithmeticException
 

 Key: HADOOP-9079
 URL: https://issues.apache.org/jira/browse/HADOOP-9079
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: hadoop-9079-v2.txt, trunk-9079.patch


 2012-11-19 22:07:41,709 WARN  [IPC Server handler 0 on 38671] 
 nodemanager.NMAuditLogger(150): USER=UnknownUser IP= 
 OPERATION=Stop Container Request TARGET=ContainerManagerImpl 
 RESULT=FAILURE  DESCRIPTION=Trying to stop unknown container!   
 APPID=application_1353391620476_0001
 CONTAINERID=container_1353391620476_0001_01_10
 java.lang.ArithmeticException: / by zero
   at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:368)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
   at 
 org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:263)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:849)



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10161) Add a method to change the default value of dmax in hadoop.properties

2013-12-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852081#comment-13852081
 ] 

Colin Patrick McCabe commented on HADOOP-10161:
---

[~stack], can you look at this when you get a chance?  I'm not really familiar 
with ganglia.

 Add a method to change the default value of dmax in hadoop.properties
 -

 Key: HADOOP-10161
 URL: https://issues.apache.org/jira/browse/HADOOP-10161
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 2.2.0
Reporter: Yang He
 Attachments: HADOOP-10161_0_20131211.patch, 
 HADOOP-10161_1_20131217.patch, HADOOP-10161_DESCRIPTION, 
 hadoop-metrics.properties, hadoop-metrics2.properties


 The dmax property in ganglia is a configurable lifetime for a metric: if no 
 value for the metric has been emitted to gmond for 'dmax' seconds, gmond 
 destroys the metric in memory. In the Hadoop metrics framework, the default 
 value of 'dmax' is 0, which means gmond will never destroy the metric even 
 after the metric has disappeared, and the gmetad daemon likewise never 
 deletes the rrdtool file. 
 We need to add a method to configure the default value of dmax for all 
 metrics in hadoop.properties.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10101) Update guava dependency to the latest version 15.0

2013-12-18 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10101:
--

Hadoop Flags: Incompatible change

 Update guava dependency to the latest version 15.0
 --

 Key: HADOOP-10101
 URL: https://issues.apache.org/jira/browse/HADOOP-10101
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Rakesh R
Assignee: Vinay
 Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
 HADOOP-10101.patch, HADOOP-10101.patch


 The existing guava version is 11.0.2 which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10164) Allow UGI to login with a known Subject

2013-12-18 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852147#comment-13852147
 ] 

Robert Joseph Evans commented on HADOOP-10164:
--

Great, I'll merge it in.

 Allow UGI to login with a known Subject
 ---

 Key: HADOOP-10164
 URL: https://issues.apache.org/jira/browse/HADOOP-10164
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: login-from-subject-branch-0.23.txt, 
 login-from-subject.txt


 For Storm I would love to let Hadoop initialize based on credentials that 
 were already populated in a Subject.  This is not currently possible because 
 logging in a user always creates a new blank Subject.  This is to allow a 
 user to be logged in based on a pre-existing Subject through a new method.
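 A usage sketch of the new entry point (the method name follows the attached 
 login-from-subject patches; treat the exact signature as an assumption):
 {code}
 import java.io.IOException;
 import javax.security.auth.Subject;
 import org.apache.hadoop.security.UserGroupInformation;

 class SubjectLoginExample {
   // Storm (or any embedding app) has already populated this Subject with
   // credentials; hand it to UGI instead of letting UGI build a blank one.
   static UserGroupInformation loginFrom(Subject subject) throws IOException {
     UserGroupInformation.loginUserFromSubject(subject);
     return UserGroupInformation.getLoginUser();
   }
 }
 {code}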



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10146) Workaround JDK7 Process fd close bug

2013-12-18 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852169#comment-13852169
 ] 

Daryn Sharp commented on HADOOP-10146:
--

Yes, we were losing ~10s of NMs/day because of OOMs caused by this bug.  After 
the patch, no OOMs.

The referenced openjdk bug is indeed the same problem.

Do I have a +1 to commit?

 Workaround JDK7 Process fd close bug
 

 Key: HADOOP-10146
 URL: https://issues.apache.org/jira/browse/HADOOP-10146
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-10129.branch-23.patch, HADOOP-10129.patch


 JDK7's {{Process}} output streams have an async fd-close race bug.  This 
 manifests as commands run via o.a.h.u.Shell causing threads to hang, OOM, or 
 exhibit other bizarre behavior.  The NM is likely to encounter the bug under 
 heavy load.
 Specifically, {{ProcessBuilder}}'s {{UNIXProcess}} starts a thread to reap 
 the process and drain stdout/stderr to avoid a lingering zombie process.  A 
 race occurs if the thread using the stream closes it and the underlying fd is 
 recycled/reopened while the reaper is still draining it.  
 {{ProcessPipeInputStream.drainInputStream}} will OOM allocating an array if 
 {{in.available()}} returns a huge number, or may wreak havoc by incorrectly 
 draining the fd.
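 One way to sidestep the race (an illustrative sketch, not the attached 
 HADOOP-10146 patch) is to drain the child's output to EOF and reap the 
 process before closing the stream, so its fd cannot be recycled while the 
 JDK7 reaper thread may still be draining it:
 {code}
 import java.io.BufferedReader;
 import java.io.IOException;
 import java.io.InputStreamReader;

 class SafeShellRun {
   static String run(String... cmd) throws IOException, InterruptedException {
     Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
     StringBuilder out = new StringBuilder();
     BufferedReader r =
         new BufferedReader(new InputStreamReader(p.getInputStream()));
     try {
       String line;
       while ((line = r.readLine()) != null) {
         out.append(line).append('\n'); // read to EOF: nothing left to drain
       }
       p.waitFor();                     // reap before releasing the fd
     } finally {
       r.close();
     }
     return out.toString();
   }
 }
 {code}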



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10164) Allow UGI to login with a known Subject

2013-12-18 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-10164:
-

  Resolution: Fixed
   Fix Version/s: 0.23.11
  2.4.0
  3.0.0
Target Version/s: 2.2.0, 0.23.10, 3.0.0  (was: 3.0.0, 0.23.10, 2.2.0)
  Status: Resolved  (was: Patch Available)

I checked this into trunk, branch-2, and branch-0.23

 Allow UGI to login with a known Subject
 ---

 Key: HADOOP-10164
 URL: https://issues.apache.org/jira/browse/HADOOP-10164
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 3.0.0, 2.4.0, 0.23.11

 Attachments: login-from-subject-branch-0.23.txt, 
 login-from-subject.txt


 For Storm I would love to let Hadoop initialize based on credentials that 
 were already populated in a Subject.  This is not currently possible because 
 logging in a user always creates a new blank Subject.  This is to allow a 
 user to be logged in based on a pre-existing Subject through a new method.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10164) Allow UGI to login with a known Subject

2013-12-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852190#comment-13852190
 ] 

Hudson commented on HADOOP-10164:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #4911 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4911/])
HADOOP-10164. Allow UGI to login with a known Subject (bobby) (bobby: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1552104)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


 Allow UGI to login with a known Subject
 ---

 Key: HADOOP-10164
 URL: https://issues.apache.org/jira/browse/HADOOP-10164
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 3.0.0, 2.4.0, 0.23.11

 Attachments: login-from-subject-branch-0.23.txt, 
 login-from-subject.txt


 For Storm I would love to let Hadoop initialize based on credentials that 
 were already populated in a Subject.  This is not currently possible because 
 logging in a user always creates a new blank Subject.  This is to allow a 
 user to be logged in based on a pre-existing Subject through a new method.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Resolved] (HADOOP-9079) LocalDirAllocator throws ArithmeticException

2013-12-18 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang resolved HADOOP-9079.
-

Resolution: Duplicate

 LocalDirAllocator throws ArithmeticException
 

 Key: HADOOP-9079
 URL: https://issues.apache.org/jira/browse/HADOOP-9079
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: hadoop-9079-v2.txt, trunk-9079.patch


 2012-11-19 22:07:41,709 WARN  [IPC Server handler 0 on 38671] 
 nodemanager.NMAuditLogger(150): USER=UnknownUser IP= 
 OPERATION=Stop Container Request TARGET=ContainerManagerImpl 
 RESULT=FAILURE  DESCRIPTION=Trying to stop unknown container!   
 APPID=application_1353391620476_0001
 CONTAINERID=container_1353391620476_0001_01_10
 java.lang.ArithmeticException: / by zero
   at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:368)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
   at 
 org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:263)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:849)



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10172) Cache SASL server factories

2013-12-18 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852265#comment-13852265
 ] 

Kihwal Lee commented on HADOOP-10172:
-

+1 The patch looks good to me. 

 Cache SASL server factories
 ---

 Key: HADOOP-10172
 URL: https://issues.apache.org/jira/browse/HADOOP-10172
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-10172.patch


 Performance for SASL server creation is _atrocious_.  
 {{Sasl.createSaslServer}} does not cache the provider resolution for the 
 factories.  Factory resolution and server instantiation have 3 major 
 contention points.  During bursts of connections, one reader accepting a 
 connection stalls other readers accepting connections, in turn stalling all 
 existing connections handled by those readers.
 I benched 5 threads at 187 instances/s - total, not per thread.  With this 
 and another change, I've boosted it to 33K instances/s.
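 A minimal sketch of the caching idea (names and structure are illustrative, 
 not the attached patch): resolve the factories for a mechanism once, then 
 reuse them for every connection.
 {code}
 import java.util.ArrayList;
 import java.util.Enumeration;
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import javax.security.auth.callback.CallbackHandler;
 import javax.security.sasl.Sasl;
 import javax.security.sasl.SaslException;
 import javax.security.sasl.SaslServer;
 import javax.security.sasl.SaslServerFactory;

 class CachedSaslServers {
   private static final ConcurrentMap<String, List<SaslServerFactory>> CACHE =
       new ConcurrentHashMap<String, List<SaslServerFactory>>();

   static SaslServer create(String mech, String protocol, String serverName,
       Map<String, ?> props, CallbackHandler cbh) throws SaslException {
     List<SaslServerFactory> factories = CACHE.get(mech);
     if (factories == null) {
       // Pay the provider scan once per mechanism instead of per connection.
       List<SaslServerFactory> found = new ArrayList<SaslServerFactory>();
       for (Enumeration<SaslServerFactory> e = Sasl.getSaslServerFactories();
            e.hasMoreElements();) {
         SaslServerFactory f = e.nextElement();
         for (String m : f.getMechanismNames(props)) {
           if (m.equals(mech)) { found.add(f); break; }
         }
       }
       List<SaslServerFactory> prev = CACHE.putIfAbsent(mech, found);
       factories = (prev != null) ? prev : found;
     }
     for (SaslServerFactory f : factories) {
       SaslServer s = f.createSaslServer(mech, protocol, serverName, props, cbh);
       if (s != null) return s;
     }
     return null;
   }
 }
 {code}
 Note the factories' mechanism support can depend on props, so a production 
 version would have to key the cache accordingly.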



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10169) remove the unnecessary synchronized in JvmMetrics class

2013-12-18 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HADOOP-10169:
---

Attachment: HADOOP-10169-v3.txt

 remove the unnecessary  synchronized in JvmMetrics class
 

 Key: HADOOP-10169
 URL: https://issues.apache.org/jira/browse/HADOOP-10169
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 3.0.0, 2.2.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HADOOP-10169-v2.txt, HADOOP-10169-v3.txt, 
 HADOOP-10169.txt


 When I looked into an HBase JvmMetrics impl, I found this synchronized seems 
 not essential.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10169) remove the unnecessary synchronized in JvmMetrics class

2013-12-18 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852501#comment-13852501
 ] 

Liang Xie commented on HADOOP-10169:


Thanks [~jlowe] for your nice reply; yes, you are correct!  Attached v3 with 
IDE formatting now :)

 remove the unnecessary  synchronized in JvmMetrics class
 

 Key: HADOOP-10169
 URL: https://issues.apache.org/jira/browse/HADOOP-10169
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 3.0.0, 2.2.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HADOOP-10169-v2.txt, HADOOP-10169-v3.txt, 
 HADOOP-10169.txt


 When i looked into a HBase JvmMetric impl, just found this synchronized seems 
 not essential.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10146) Workaround JDK7 Process fd close bug

2013-12-18 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852504#comment-13852504
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-10146:
--

Can you add a reference to this JIRA as well as a link to the corresponding 
(buggy) JVM version in the java comment for posterity? We can revisit and 
revert this when the time is apt.

 Workaround JDK7 Process fd close bug
 

 Key: HADOOP-10146
 URL: https://issues.apache.org/jira/browse/HADOOP-10146
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-10129.branch-23.patch, HADOOP-10129.patch


 JDK7's {{Process}} output streams have an async fd-close race bug.  This 
 manifests as commands run via o.a.h.u.Shell causing threads to hang, OOM, or 
 exhibit other bizarre behavior.  The NM is likely to encounter the bug under 
 heavy load.
 Specifically, {{ProcessBuilder}}'s {{UNIXProcess}} starts a thread to reap 
 the process and drain stdout/stderr to avoid a lingering zombie process.  A 
 race occurs if the thread using the stream closes it and the underlying fd is 
 recycled/reopened while the reaper is still draining it.  
 {{ProcessPipeInputStream.drainInputStream}} will OOM allocating an array if 
 {{in.available()}} returns a huge number, or may wreak havoc by incorrectly 
 draining the fd.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10146) Workaround JDK7 Process fd close bug

2013-12-18 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852505#comment-13852505
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-10146:
--

Yeah, +1 barring that point about code comment.

 Workaround JDK7 Process fd close bug
 

 Key: HADOOP-10146
 URL: https://issues.apache.org/jira/browse/HADOOP-10146
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-10129.branch-23.patch, HADOOP-10129.patch


 JDK7's {{Process}} output streams have an async fd-close race bug.  This 
 manifests as commands run via o.a.h.u.Shell causing threads to hang, OOM, or 
 exhibit other bizarre behavior.  The NM is likely to encounter the bug under 
 heavy load.
 Specifically, {{ProcessBuilder}}'s {{UNIXProcess}} starts a thread to reap 
 the process and drain stdout/stderr to avoid a lingering zombie process.  A 
 race occurs if the thread using the stream closes it and the underlying fd is 
 recycled/reopened while the reaper is still draining it.  
 {{ProcessPipeInputStream.drainInputStream}} will OOM allocating an array if 
 {{in.available()}} returns a huge number, or may wreak havoc by incorrectly 
 draining the fd.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Created] (HADOOP-10175) Har files system authority should preserve userinfo

2013-12-18 Thread Chuan Liu (JIRA)
Chuan Liu created HADOOP-10175:
--

 Summary: Har files system authority should preserve userinfo
 Key: HADOOP-10175
 URL: https://issues.apache.org/jira/browse/HADOOP-10175
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu


When Har file system parse the URI get the authority at initialization, the 
userinfo is not preserved. This may lead to failures if the underlying file 
system relies on the userinfo to work properly. E.g. 
har://file-user:passwd@localhost:80/test.har will be parsed to 
har://file-localhost:80/test.har, where user:passwd is lost in the processing.
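A small sketch of the underlying java.net.URI behavior (illustrative; the 
HarFileSystem internals are not shown): rebuilding an authority from host and 
port drops the userinfo, while getAuthority() preserves it.
{code}
import java.net.URI;
import java.net.URISyntaxException;

class UserInfoExample {
  public static void main(String[] args) throws URISyntaxException {
    URI uri = new URI("hdfs://user:passwd@localhost:80/test");
    // Rebuilding "host:port" silently loses the credentials:
    System.out.println(uri.getHost() + ":" + uri.getPort()); // localhost:80
    // getAuthority() keeps them intact:
    System.out.println(uri.getAuthority()); // user:passwd@localhost:80
  }
}
{code}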



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10175) Har files system authority should preserve userinfo

2013-12-18 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-10175:
---

Status: Patch Available  (was: Open)

 Har files system authority should preserve userinfo
 ---

 Key: HADOOP-10175
 URL: https://issues.apache.org/jira/browse/HADOOP-10175
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-10175.patch


 When Har file system parse the URI get the authority at initialization, the 
 userinfo is not preserved. This may lead to failures if the underlying file 
 system relies on the userinfo to work properly. E.g. 
 har://file-user:passwd@localhost:80/test.har will be parsed to 
 har://file-localhost:80/test.har, where user:passwd is lost in the processing.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10175) Har files system authority should preserve userinfo

2013-12-18 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-10175:
---

Attachment: HADOOP-10175.patch

Attaching a patch. A unit test is also added to cover the case.

 Har files system authority should preserve userinfo
 ---

 Key: HADOOP-10175
 URL: https://issues.apache.org/jira/browse/HADOOP-10175
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-10175.patch


 When Har file system parse the URI get the authority at initialization, the 
 userinfo is not preserved. This may lead to failures if the underlying file 
 system relies on the userinfo to work properly. E.g. 
 har://file-user:passwd@localhost:80/test.har will be parsed to 
 har://file-localhost:80/test.har, where user:passwd is lost in the processing.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10175) Har files system authority should preserve userinfo

2013-12-18 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-10175:
---

Description: When the Har file system parses the URI to get the authority at 
initialization, the userinfo is not preserved. This may lead to failures if the 
underlying file system relies on the userinfo to work properly. E.g. 
har://file-user:passwd@localhost:80/test.har will be parsed to 
har://file-localhost:80/test.har, where user:passwd is lost in the process.  
(was: When Har file system parse the URI get the authority at initialization, 
the userinfo is not preserved. This may lead to failures if the underlying file 
system relies on the userinfo to work properly. E.g. 
har://file-user:passwd@localhost:80/test.har will be parsed to 
har://file-localhost:80/test.har, where user:passwd is lost in the processing.)

 Har files system authority should preserve userinfo
 ---

 Key: HADOOP-10175
 URL: https://issues.apache.org/jira/browse/HADOOP-10175
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-10175.patch


 When the Har file system parses the URI to get the authority at initialization, 
 the userinfo is not preserved. This may lead to failures if the underlying 
 file system relies on the userinfo to work properly. E.g. 
 har://file-user:passwd@localhost:80/test.har will be parsed to 
 har://file-localhost:80/test.har, where user:passwd is lost in the process.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10169) remove the unnecessary synchronized in JvmMetrics class

2013-12-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852582#comment-13852582
 ] 

Hadoop QA commented on HADOOP-10169:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12619459/HADOOP-10169-v3.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3370//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3370//console

This message is automatically generated.

 remove the unnecessary synchronized in JvmMetrics class
 

 Key: HADOOP-10169
 URL: https://issues.apache.org/jira/browse/HADOOP-10169
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 3.0.0, 2.2.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HADOOP-10169-v2.txt, HADOOP-10169-v3.txt, 
 HADOOP-10169.txt


 When I looked into an HBase JvmMetrics impl, I found that this synchronized 
 seems not essential.
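
As an illustration of the general idea (a sketch only, assuming a lock that 
guards reads of an effectively-immutable reference; this is not the actual 
HADOOP-10169 patch):

{code}
// Sketch: creation stays synchronized, but the common read path no longer
// contends on the class lock thanks to double-checked locking on a
// volatile field.
public final class LazyRef {
  private static volatile LazyRef instance;

  public static LazyRef instance() {
    LazyRef result = instance;
    if (result == null) {
      synchronized (LazyRef.class) {
        result = instance;
        if (result == null) {
          instance = result = new LazyRef();
        }
      }
    }
    return result;
  }
}
{code}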



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10175) Har files system authority should preserve userinfo

2013-12-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852592#comment-13852592
 ] 

Hadoop QA commented on HADOOP-10175:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12619468/HADOOP-10175.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3371//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3371//console

This message is automatically generated.

 Har files system authority should preserve userinfo
 ---

 Key: HADOOP-10175
 URL: https://issues.apache.org/jira/browse/HADOOP-10175
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-10175.patch


 When the Har file system parses the URI to get the authority at initialization, 
 the userinfo is not preserved. This may lead to failures if the underlying 
 file system relies on the userinfo to work properly. E.g. 
 har://file-user:passwd@localhost:80/test.har will be parsed to 
 har://file-localhost:80/test.har, where user:passwd is lost in the process.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10161) Add a method to change the default value of dmax in hadoop.properties

2013-12-18 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852613#comment-13852613
 ] 

stack commented on HADOOP-10161:


Patch makes good sense excepting the nits below.  I do not have a ganglia 
instance to check this patch against.  Can you upload evidence that it works in 
your deployment?

Why did you add the hadoop1 and hadoop2 metrics files to this JIRA?  Are these 
supposed to be checked in along w/ the patch?  If so, why not just include the 
changes to these files in the patch itself?  (Mention of dmax in the 
.properties file would be good documentation that dmax can now be twiddled 
with.)

+ The unorthodox formatting.  See the code around your changes.  See how it 
has spaces between operators; it does not squash statements together without 
spacing as in this example: '+if ((offset+4)>message.length()){'.  Note also 
how the surrounding code has NO space just inside the parens, unlike this: 
'+  if ( binaryStr.indexOf(dmx_str) >= 0 ){'.  There is also this practice 
where you have a space before the semicolon: '"units_default" ;'.  This is all 
unorthodox; see the sketch after these comments for the conventional form.
+ Is there nothing in hadoop that will do the bytes-to-int for you in str_int, 
or do you have to do it here in the test?  Ditto on xdr_int.  Or, are we 
mocking the ganglia facility here?  If so, say so in a method comment, else 
subsequent readers will be scratching their heads asking the same questions I 
do above.  I am also not following why xdr_int is defined in tests but used in 
main code (unless this is a mock?); a sketch of what such a decoder amounts to 
follows these comments.
+ Why the check against 133 here: '+  if (133 != 
str_int(binaryStr,0)){' ?
+ Is there no tmax in the metrics2 context?  You seem to adjust it in metrics1 
but do nothing for metrics2 -- maybe that is fine... just asking.
+ These public strings could do with a bit of javadoc I'd say:

+  public static final String CONTEXT_DMAX_PROPERTY = "dmax_default" ;
+  public static final String CONTEXT_TMAX_PROPERTY = "tmax_default" ;
+  public static final String CONTEXT_SLOP_PROPERTY = "slop_default" ;
+  public static final String CONTEXT_UNITS_PROPERTY = "units_default" ;

While dmax/tmax probably don't need it, I'm not sure what SLOP is about, and 
what the units are for could be explained too.
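
Something along these lines would do -- the wording here is a guess at the 
intended semantics from standard Ganglia usage, not taken from the patch:

{code}
/** Config key for the default dmax: seconds of silence after which gmond
 *  may drop a metric; 0 means keep it forever. */
public static final String CONTEXT_DMAX_PROPERTY = "dmax_default";

/** Config key for the default tmax: the expected maximum interval, in
 *  seconds, between updates of a metric. */
public static final String CONTEXT_TMAX_PROPERTY = "tmax_default";

/** Config key for the default slope reported to Ganglia
 *  (zero/positive/negative/both). */
public static final String CONTEXT_SLOP_PROPERTY = "slop_default";

/** Config key for the default units string attached to emitted metrics. */
public static final String CONTEXT_UNITS_PROPERTY = "units_default";
{code}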

Do you mean to copy the SLOP config into a 'slopeString' -- extra 'e' after 
slop:

+String slopeString = conf.getString(CONTEXT_SLOP_PROPERTY);

Thanks.
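
For the two code points flagged above, a hedged illustration (my own sketch, 
not the patch author's code): the quoted conditions in the orthodox style, plus 
a hypothetical decoder of the kind str_int/xdr_int appear to hand-roll -- 
Ganglia speaks XDR on the wire, where a 32-bit int is four big-endian bytes.

{code}
class GangliaSketch {
  // The quoted checks in the conventional style: spaces around binary
  // operators, none just inside parens. The method shape is a guess.
  static boolean containsDmax(String binaryStr, String dmxStr, String message, int offset) {
    if ((offset + 4) > message.length()) {
      return false;
    }
    return binaryStr.indexOf(dmxStr) >= 0;
  }

  // Hypothetical equivalent of the test's str_int/xdr_int: decode an XDR
  // (big-endian) 32-bit integer from four bytes at the given offset.
  static int xdrInt(byte[] buf, int off) {
    return ((buf[off] & 0xff) << 24)
        | ((buf[off + 1] & 0xff) << 16)
        | ((buf[off + 2] & 0xff) << 8)
        | (buf[off + 3] & 0xff);
  }
}
{code}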





 Add a method to change the default value of dmax in hadoop.properties
 -

 Key: HADOOP-10161
 URL: https://issues.apache.org/jira/browse/HADOOP-10161
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 2.2.0
Reporter: Yang He
 Attachments: HADOOP-10161_0_20131211.patch, 
 HADOOP-10161_1_20131217.patch, HADOOP-10161_DESCRIPTION, 
 hadoop-metrics.properties, hadoop-metrics2.properties


 The dmax property in ganglia is a configurable time-to-live for metrics: if no 
 value of a metric is emitted to gmond for 'dmax' seconds, gmond destroys the 
 metric in memory. In the Hadoop metrics framework, the default value of 'dmax' 
 is 0, which means gmond will never destroy a metric even after it stops being 
 reported, and the gmetad daemon likewise never deletes the rrdtool file. 
 We need to add a method to configure the default value of dmax for all 
 metrics in hadoop.properties.
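
 The intended usage would presumably look something like this in 
 hadoop-metrics.properties (the dmax_default key name is a guess based on the 
 constants quoted in the review above, not taken from the patch):

{code}
# Hypothetical snippet: emit dfs metrics to Ganglia every 10 seconds and let
# gmond drop an idle metric after an hour instead of keeping it forever.
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
dfs.period=10
dfs.servers=gmond-host:8649
dfs.dmax_default=3600
{code}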



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10167) Mark hadoop-common source as UTF-8 in Maven pom files / refactoring

2013-12-18 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13852614#comment-13852614
 ] 

Mikhail Antonov commented on HADOOP-10167:
--

Please let me know if there is anything I should improve in the patch, or if 
there are any other comments.

 Mark hadoop-common source as UTF-8 in Maven pom files / refactoring
 ---

 Key: HADOOP-10167
 URL: https://issues.apache.org/jira/browse/HADOOP-10167
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.6-alpha
 Environment: Fedora 19 x86-64
Reporter: Mikhail Antonov
  Labels: build
 Attachments: HADOOP-10167-1.patch


 While looking at BIGTOP-831, it turned out that the way Bigtop invokes the 
 maven build / site:site generation causes errors like this:
 [ERROR] Exit code: 1 - 
 /home/user/jenkins/workspace/BigTop-RPM/label/centos-6-x86_64-HAD-1-buildbot/bigtop-repo/build/hadoop/rpm/BUILD/hadoop-2.0.2-alpha-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java:31:
  error: unmappable character for encoding ANSI_X3.4-1968
 [ERROR] JvmMetrics(JVM related metrics etc.), // record info??
 Making the whole of hadoop-common use UTF-8 fixes that and seems like a 
 generally good thing to me.
 Attaching a first version of the patch for review.
 The original issue was observed on openjdk 7 (x86-64).
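
 For reference, the standard Maven way to pin the encoding -- these are 
 well-known Maven properties, though not necessarily the exact contents of the 
 attached patch:

{code}
<!-- In the top-level pom.xml <properties> section: -->
<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
</properties>
{code}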



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)