[jira] [Commented] (HADOOP-12114) Make hadoop-tools/hadoop-pipes Native code -Wall-clean

2015-06-29 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605620#comment-14605620
 ] 

Alan Burlison commented on HADOOP-12114:


OK, I'll respin the patch to add logging on error.

 Make hadoop-tools/hadoop-pipes Native code -Wall-clean
 --

 Key: HADOOP-12114
 URL: https://issues.apache.org/jira/browse/HADOOP-12114
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.7.0
Reporter: Alan Burlison
Assignee: Alan Burlison
 Attachments: HADOOP-12114.001.patch


 As we specify -Wall as a default compilation flag, it would be helpful if the 
 native code were -Wall-clean.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12111) [Umbrella] Split test-patch off into its own TLP

2015-06-29 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605619#comment-14605619
 ] 

Chris Nauroth commented on HADOOP-12111:


+1 for trying releasedocmaker instead of CHANGES.txt on the feature branch.

 [Umbrella] Split test-patch off into its own TLP
 

 Key: HADOOP-12111
 URL: https://issues.apache.org/jira/browse/HADOOP-12111
 Project: Hadoop Common
  Issue Type: New Feature
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer

 Given test-patch's tendency to get forked into a variety of different 
 projects, it makes a lot of sense to make it an Apache TLP so that everyone 
 can benefit from a common code base.





[jira] [Commented] (HADOOP-12106) JceAesCtrCryptoCodec.java may have an issue with Cipher.update(ByteBuffer, ByteBuffer)

2015-06-29 Thread Tony Reix (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605373#comment-14605373
 ] 

Tony Reix commented on HADOOP-12106:


I have opened a defect against the IBM JVM. Their answer, for now, is that they 
see no issue within their code.
I need someone from Hadoop to help me with this complex issue.

 JceAesCtrCryptoCodec.java may have an issue with Cipher.update(ByteBuffer, 
 ByteBuffer)
 --

 Key: HADOOP-12106
 URL: https://issues.apache.org/jira/browse/HADOOP-12106
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0, 2.7.0
 Environment: Hadoop 2.6.0 and 2.7+
  - AIX/PowerPC/IBMJVM
  - Ubuntu/i386/IBMJVM
Reporter: Tony Reix
 Attachments: mvn.Test.TestCryptoStreamsForLocalFS.res20.AIX.Errors, 
 mvn.Test.TestCryptoStreamsForLocalFS.res20.Ubuntu-i386.IBMJVM.Errors, 
 mvn.Test.TestCryptoStreamsForLocalFS.res22.OpenJDK.Errors


 On AIX (where only the IBM JVM is available), many sub-tests of:
org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
 fail:
  Tests run: 13, Failures: 5, Errors: 1, Skipped: 
   - testCryptoIV
   - testSeek
   - testSkip
   - testAvailable
   - testPositionedRead
 When testing the same exact code on Ubuntu/i386:
   - with OpenJDK, all tests pass
   - with the IBM JVM, tests randomly fail.
 The issue may be in the IBM JVM, or in Hadoop code that does not perfectly 
 handle differences between JVM implementations.
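As an illustrative sketch (the class and method names below are hypothetical, not the Hadoop codec itself), the JCE overload in question can be exercised directly: with AES/CTR/NoPadding, an encrypt/decrypt round trip through Cipher.update(ByteBuffer, ByteBuffer) should restore the input on any conformant provider, giving a minimal reproducer to compare the IBM JVM against OpenJDK:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.util.Arrays;

// Hypothetical reproducer, not the Hadoop code: exercises the
// Cipher.update(ByteBuffer, ByteBuffer) overload that
// JceAesCtrCryptoCodec relies on. A failing round trip on one JVM
// but not another would point at the provider, not Hadoop.
public class CtrByteBufferRoundTrip {
    static byte[] roundTrip(byte[] input) throws Exception {
        byte[] key = new byte[16];   // all-zero key, demo only
        byte[] iv  = new byte[16];   // all-zero IV, demo only
        Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                 new IvParameterSpec(iv));
        ByteBuffer in = ByteBuffer.wrap(input);
        ByteBuffer ct = ByteBuffer.allocate(input.length);
        enc.update(in, ct);          // the ByteBuffer overload under suspicion
        ct.flip();
        Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
                 new IvParameterSpec(iv));
        ByteBuffer pt = ByteBuffer.allocate(input.length);
        dec.update(ct, pt);          // CTR is a stream cipher: same length out
        return pt.array();
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "stream data".getBytes("UTF-8");
        if (!Arrays.equals(roundTrip(data), data)) {
            throw new AssertionError("ByteBuffer round trip failed");
        }
        System.out.println("ByteBuffer round trip ok");
    }
}
```

Running this on both JVMs (and with direct vs. heap buffers) would narrow down whether the failures come from the provider or from surrounding Hadoop code.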





[jira] [Commented] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace

2015-06-29 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605392#comment-14605392
 ] 

Vinayakumar B commented on HADOOP-12119:


Latest patch looks good.
+1.
Will commit soon.

 hadoop fs -expunge does not work for federated namespace 
 -

 Key: HADOOP-12119
 URL: https://issues.apache.org/jira/browse/HADOOP-12119
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.5-alpha
Reporter: Vrushali C
Assignee: J.Andreina
 Attachments: HDFS-5277.1.patch, HDFS-5277.2.patch, HDFS-5277.3.patch


 We noticed that the hadoop fs -expunge command does not work across federated 
 namespaces. It seems to look only at /user/username/.Trash instead of 
 traversing all available namespaces and expunging from each one.
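The intended behaviour can be sketched in plain Java (expungeAll and the trash-path layout below are illustrative assumptions, not Hadoop APIs): instead of expunging only the default namespace's trash, iterate every mounted namespace and expunge that namespace's own trash directory:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hedged sketch of the idea behind the fix, with hypothetical names:
// one trash location per namespace, not just the default one.
public class FederatedExpungeSketch {
    // Returns the trash directory that would be expunged in each namespace.
    static List<String> expungeAll(List<String> namespaces, String user) {
        List<String> expunged = new ArrayList<>();
        for (String ns : namespaces) {
            // Each namespace carries its own /user/<name>/.Trash.
            expunged.add(ns + "/user/" + user + "/.Trash");
        }
        return expunged;
    }

    public static void main(String[] args) {
        List<String> result =
            expungeAll(Arrays.asList("hdfs://ns1", "hdfs://ns2"), "alice");
        if (result.size() != 2) {
            throw new AssertionError("expected one trash dir per namespace");
        }
        System.out.println(result);
    }
}
```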





[jira] [Updated] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace

2015-06-29 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-12119:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

Thanks [~vrushalic] for reporting the issue.
Thanks [~andreina] for the contribution.

 hadoop fs -expunge does not work for federated namespace 
 -

 Key: HADOOP-12119
 URL: https://issues.apache.org/jira/browse/HADOOP-12119
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.5-alpha
Reporter: Vrushali C
Assignee: J.Andreina
 Fix For: 2.8.0

 Attachments: HDFS-5277.1.patch, HDFS-5277.2.patch, HDFS-5277.3.patch


 We noticed that hadoop fs -expunge command does not work across federated 
 namespace. This seems to look at only /user/username/.Trash instead of 
 traversing all available namespace and expunging from individual namespace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605409#comment-14605409
 ] 

Hudson commented on HADOOP-12119:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8084 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8084/])
HADOOP-12119. hadoop fs -expunge does not work for federated namespace 
(Contributed by J.Andreina) (vinayakumarb: rev 
c815344e2e68d78f6587b65bc2db25e151aa4364)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java




[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605507#comment-14605507
 ] 

Hudson commented on HADOOP-12009:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #243 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/243/])
HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel) (stevel: rev 
3dfa8161f9412bcb040f3c29c471344f25f24337)
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


 Clarify FileSystem.listStatus() sorting order & fix 
 FileSystemContractBaseTest:testListStatus 
 --

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12009-003.patch, HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}  /**
   * List the statuses of the files/directories in the given path if the path
   * is a directory.
   *
   * @param f given path
   * @return the statuses of the files/directories in the given path
   * @throws FileNotFoundException when the path does not exist;
   *         IOException see specific implementation
   */
  public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException,
                                                         IOException;{code}
 However, FileSystemContractBaseTest expects the elements to come back sorted:
 {code}Path[] testDirs = { path("/test/hadoop/a"),
                    path("/test/hadoop/b"),
                    path("/test/hadoop/c/1") };

// ...
paths = fs.listStatus(path("/test/hadoop"));
assertEquals(3, paths.length);
assertEquals(path("/test/hadoop/a"), paths[0].getPath());
assertEquals(path("/test/hadoop/b"), paths[1].getPath());
assertEquals(path("/test/hadoop/c"), paths[2].getPath());{code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.
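One possible shape of the test-side fix, sketched with hypothetical names rather than the actual patch: sort the paths returned by listStatus() before asserting, so the contract test accepts any implementation ordering:

```java
import java.util.Arrays;

// Hypothetical sketch of the test-side fix: sort whatever order the
// FileSystem returned before comparing against the expected entries.
public class SortedListStatusCheck {
    // Returns a sorted copy, leaving the original array untouched.
    static String[] sorted(String[] returned) {
        String[] copy = Arrays.copyOf(returned, returned.length);
        Arrays.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        // Simulated listStatus() result in an arbitrary order.
        String[] returned = { "/test/hadoop/b", "/test/hadoop/c", "/test/hadoop/a" };
        String[] expected = { "/test/hadoop/a", "/test/hadoop/b", "/test/hadoop/c" };
        if (!Arrays.equals(sorted(returned), expected)) {
            throw new AssertionError("paths differ beyond ordering");
        }
        System.out.println("all paths present, ordering ignored");
    }
}
```

Sorting a copy keeps the assertion deterministic without constraining implementations to any particular return order.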



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605511#comment-14605511
 ] 

Hudson commented on HADOOP-12119:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #243 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/243/])
HADOOP-12119. hadoop fs -expunge does not work for federated namespace 
(Contributed by J.Andreina) (vinayakumarb: rev 
c815344e2e68d78f6587b65bc2db25e151aa4364)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java




[jira] [Commented] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605520#comment-14605520
 ] 

Hudson commented on HADOOP-12119:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #973 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/973/])
HADOOP-12119. hadoop fs -expunge does not work for federated namespace 
(Contributed by J.Andreina) (vinayakumarb: rev 
c815344e2e68d78f6587b65bc2db25e151aa4364)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java




[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605516#comment-14605516
 ] 

Hudson commented on HADOOP-12009:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #973 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/973/])
HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel) (stevel: rev 
3dfa8161f9412bcb040f3c29c471344f25f24337)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java




[jira] [Updated] (HADOOP-12106) JceAesCtrCryptoCodec.java may have an issue with Cipher.update(ByteBuffer, ByteBuffer)

2015-06-29 Thread Tony Reix (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tony Reix updated HADOOP-12106:
---
Component/s: security



[jira] [Commented] (HADOOP-12143) Add a style guide to the Hadoop documentation

2015-06-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605389#comment-14605389
 ] 

Steve Loughran commented on HADOOP-12143:
-

# That's {O(n)} complexity; JIRA appears to be adding emoticons into 
computational complexity notation.
# I [have a 
draft|https://github.com/steveloughran/formality/blob/master/styleguide/styleguide.md].
 I'm not attaching it as a patch as it doesn't need to go through Jenkins (yet), 
and I'd like people to review it via GitHub rendering & pull requests for now.

 Add a style guide to the Hadoop documentation
 -

 Key: HADOOP-12143
 URL: https://issues.apache.org/jira/browse/HADOOP-12143
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.7.0
Reporter: Steve Loughran
Assignee: Steve Loughran

 We don't have a documented style guide for the Hadoop source or its tests 
 other than "use the Java rules with two spaces". 
 That doesn't cover policy like
 # exception handling
 # logging
 # metrics
 # what makes a good test
 # why features that have O(n) or worse complexity, or that put extra memory 
 load on the NN & RM, are unwelcome
 # ... etc
 We have these in our heads, and we reject patches for not following them, but 
 as they aren't written down, how can we expect new submitters to follow them, 
 or back up our vetoes with a policy to point at?
 I propose having an up-to-date style guide which defines the best practices 
 we expect for new code. That can be stricter than the existing codebase: we 
 want things to improve.





[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605672#comment-14605672
 ] 

Hudson commented on HADOOP-12009:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2171 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2171/])
HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel) (stevel: rev 
3dfa8161f9412bcb040f3c29c471344f25f24337)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md




[jira] [Commented] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605676#comment-14605676
 ] 

Hudson commented on HADOOP-12119:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2171 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2171/])
HADOOP-12119. hadoop fs -expunge does not work for federated namespace 
(Contributed by J.Andreina) (vinayakumarb: rev 
c815344e2e68d78f6587b65bc2db25e151aa4364)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java




[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605703#comment-14605703
 ] 

Hudson commented on HADOOP-12009:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #232 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/232/])
HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel) (stevel: rev 
3dfa8161f9412bcb040f3c29c471344f25f24337)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt




[jira] [Commented] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605707#comment-14605707
 ] 

Hudson commented on HADOOP-12119:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #232 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/232/])
HADOOP-12119. hadoop fs -expunge does not work for federated namespace 
(Contributed by J.Andreina) (vinayakumarb: rev 
c815344e2e68d78f6587b65bc2db25e151aa4364)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* hadoop-common-project/hadoop-common/CHANGES.txt




[jira] [Commented] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605854#comment-14605854
 ] 

Hudson commented on HADOOP-12119:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2189 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2189/])
HADOOP-12119. hadoop fs -expunge does not work for federated namespace 
(Contributed by J.Andreina) (vinayakumarb: rev 
c815344e2e68d78f6587b65bc2db25e151aa4364)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
* hadoop-common-project/hadoop-common/CHANGES.txt




[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605850#comment-14605850
 ] 

Hudson commented on HADOOP-12009:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2189 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2189/])
HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel) (stevel: rev 
3dfa8161f9412bcb040f3c29c471344f25f24337)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md




[jira] [Commented] (HADOOP-12143) Add a style guide to the Hadoop documentation

2015-06-29 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605811#comment-14605811
 ] 

Kengo Seki commented on HADOOP-12143:
-

[~ste...@apache.org] I sent a trivial PR and raised an issue about the test 
directory. I'd be happy if you could check it.



[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605882#comment-14605882
 ] 

Hudson commented on HADOOP-12009:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #241 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/241/])
HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel) (stevel: rev 
3dfa8161f9412bcb040f3c29c471344f25f24337)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md


 Clarify FileSystem.listStatus() sorting order & fix 
 FileSystemContractBaseTest:testListStatus 
 --

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12009-003.patch, HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}
   /**
    * List the statuses of the files/directories in the given path if the path
    * is a directory.
    *
    * @param f given path
    * @return the statuses of the files/directories in the given patch
    * @throws FileNotFoundException when the path does not exist;
    *         IOException see specific implementation
    */
   public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException,
                                                          IOException;
 {code}
 However, FileSystemContractBaseTest, expects the elements to come back sorted:
 {code}
 Path[] testDirs = { path("/test/hadoop/a"),
                     path("/test/hadoop/b"),
                     path("/test/hadoop/c/1"), };

 // ...
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());
 {code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.
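 An order-independent assertion can simply sort the returned paths before 
 comparing. A minimal sketch in plain Java (using strings as a stand-in for 
 Hadoop's FileStatus paths; the class and method names are illustrative only):

 ```java
 import java.util.Arrays;

 public class ListStatusCheck {
     // Sort a copy of the returned paths and join them, so a test can
     // compare against one canonical expected listing without depending
     // on the order the FileSystem implementation happens to return.
     static String sortedJoined(String[] paths) {
         String[] copy = Arrays.copyOf(paths, paths.length);
         Arrays.sort(copy);
         return String.join(",", copy);
     }

     public static void main(String[] args) {
         String[] unordered = { "/test/hadoop/b", "/test/hadoop/c", "/test/hadoop/a" };
         // prints /test/hadoop/a,/test/hadoop/b,/test/hadoop/c
         System.out.println(sortedJoined(unordered));
     }
 }
 ```

 Asserting on the sorted form passes for any implementation that returns all 
 three paths, regardless of its native ordering.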



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14605886#comment-14605886
 ] 

Hudson commented on HADOOP-12119:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #241 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/241/])
HADOOP-12119. hadoop fs -expunge does not work for federated namespace 
(Contributed by J.Andreina) (vinayakumarb: rev 
c815344e2e68d78f6587b65bc2db25e151aa4364)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java


 hadoop fs -expunge does not work for federated namespace 
 -

 Key: HADOOP-12119
 URL: https://issues.apache.org/jira/browse/HADOOP-12119
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.5-alpha
Reporter: Vrushali C
Assignee: J.Andreina
 Fix For: 2.8.0

 Attachments: HDFS-5277.1.patch, HDFS-5277.2.patch, HDFS-5277.3.patch


 We noticed that the hadoop fs -expunge command does not work across federated 
 namespaces. It seems to look only at /user/username/.Trash instead of 
 traversing all available namespaces and expunging from each of them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12118) Validate xml configuration files with XML Schema

2015-06-29 Thread Christopher Tubbs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14605993#comment-14605993
 ] 

Christopher Tubbs commented on HADOOP-12118:


xmllint isn't needed. You can validate in Java:

{code:java}
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
SchemaFactory sf =
    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
dbf.setSchema(sf.newSchema(new File("path/to/hadoop-configuration.xsd")));
Document d = dbf.newDocumentBuilder().parse(
    new File("path/to/core-site.xml")); // throws an exception if it can't parse
... // additional checks, manual parsing, getting elements, etc. here
{code}


 Validate xml configuration files with XML Schema
 

 Key: HADOOP-12118
 URL: https://issues.apache.org/jira/browse/HADOOP-12118
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Christopher Tubbs
 Attachments: HADOOP-7947.branch-2.1.patch, hadoop-configuration.xsd


 I spent an embarrassingly long time today trying to figure out why the 
 following wouldn't work.
 {code}
 <property>
   <key>fs.defaultFS</key>
   <value>hdfs://localhost:9000</value>
 </property>
 {code}
 I just kept getting an error about no authority for {{fs.defaultFS}}, with a 
 value of {{file:///}}, which made no sense... because I knew it was there.
 The problem was that the {{core-site.xml}} was parsed entirely without any 
 validation. This seems incorrect. The very least that could be done is a 
 simple XML Schema validation against an XSD, before parsing. That way, users 
 will get immediate failures on common typos and other problems in the xml 
 configuration files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: (!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7066/console in case of 
problems.)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: HADOOP-12113.HADOOP-12111.00.patch)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: HADOOP-12121-HADOOP-12111.patch)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: \\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} HADOOP-12111 passed {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 1s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 10s 
{color} | {color:red} The applied patch generated 1 new shellcheck (v0.3.3) 
issues (total was 59, now 48). {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 19s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742235/HADOOP-12113.HADOOP-12111.00.patch
 |
| git revision | HADOOP-12111 / 214ac3e |
| Optional Tests | asflicense site shellcheck |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Default Java | 1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7066/artifact/patchprocess/diffpatchshellcheck.txt
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7066/console |


This message was automatically generated.)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: \\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} HADOOP-12111 passed {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 11s 
{color} | {color:red} The applied patch generated 1 new shellcheck (v0.3.3) 
issues (total was 59, now 48). {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 19s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742235/HADOOP-12113.HADOOP-12111.00.patch
 |
| git revision | HADOOP-12111 / 214ac3e |
| Optional Tests | asflicense site shellcheck |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Default Java | 1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7065/artifact/patchprocess/diffpatchshellcheck.txt
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7065/console |


This message was automatically generated.)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: HADOOP-12121-HADOOP-12111.patch

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: (!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7065/console in case of 
problems.)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: HADOOP-12121-HADOOP-12111.patch

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121-HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12146) dockermode should support custom maven repos

2015-06-29 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12146:
-

 Summary: dockermode should support custom maven repos
 Key: HADOOP-12146
 URL: https://issues.apache.org/jira/browse/HADOOP-12146
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer


On busy jenkins servers, it only takes one bad apple doing a 
dependency:purge-local-repository to wreak havoc on other projects. In order to 
protect against this, test-patch should have some way to overlay the .m2 
directory with something that is (minimally) per-project, per-branch and 
maximally per run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12045) Enable LocalFileSystem#setTimes to change atime

2015-06-29 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606655#comment-14606655
 ] 

Chris Nauroth commented on HADOOP-12045:


[~fjk], thank you for the update.

Unfortunately, the new symlink tests do not pass on Windows.  This is a 
consequence of the way symlink integration works on Windows right now.  On 
Windows, a call to {{RawLocalFileSystem#getFileLinkStatus}} returns a 
{{FileStatus}} populated with the symlink as the path, but the other attributes 
are populated from the symlink target.  This breaks the assertions that calling 
{{setTimes}} on the link doesn't alter the times reported by subsequent 
{{getFileLinkStatus}} calls.

This is a known limitation unrelated to the current patch, and we already have 
comments marking TODO's around native stat support for Windows.  I think it's 
appropriate to skip these tests on Windows for now.  You can do that by adding 
overrides in {{TestSymlinkLocalFS}} for each of the new test methods added to 
{{SymlinkBaseTest}}.  The override just checks if it's running on Windows, and 
then delegates up to the superclass to run the test.  Here is an existing 
example:

{code}
  @Override
  public void testCreateDanglingLink() throws IOException {
// Dangling symlinks are not supported on Windows local file system.
assumeTrue(!Path.WINDOWS);
super.testCreateDanglingLink();
  }
{code}

After that's done, I suspect it will be the final version of the patch.  Thanks 
for sticking with this!

 Enable LocalFileSystem#setTimes to change atime
 ---

 Key: HADOOP-12045
 URL: https://issues.apache.org/jira/browse/HADOOP-12045
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kazuho Fujii
Assignee: Kazuho Fujii
Priority: Minor
 Attachments: HADOOP-12045.001.patch, HADOOP-12045.002.patch, 
 HADOOP-12045.003.patch, HADOOP-12045.004-1.patch, HADOOP-12045.004-2.patch, 
 HADOOP-12045.005-1.patch, HADOOP-12045.005-2.patch


 LocalFileSystem#setTimes method can not change the last access time currently.
 With java.nio.file package in Java 7, we can implement the function easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12048) many undeclared used dependencies (and declared unused dependencies)

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606794#comment-14606794
 ] 

Hadoop QA commented on HADOOP-12048:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 16s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   0m 22s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742662/HADOOP-12048.3.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / d3797f9 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7089/console |


This message was automatically generated.

 many undeclared used dependencies (and declared unused dependencies)
 

 Key: HADOOP-12048
 URL: https://issues.apache.org/jira/browse/HADOOP-12048
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Sangjin Lee
Assignee: Gabor Liptak
 Attachments: HADOOP-12048.1.patch, HADOOP-12048.2.patch, 
 HADOOP-12048.3.patch, dependency-analysis.txt, hadoop-unused.txt


 Currently there are numerous undeclared used dependencies and declared unused 
 dependencies in the hadoop projects.
 Undeclared used dependencies are easier errors to correct, and correcting 
 them will lead to a better management of dependencies (enabling stricter 
 dependency frameworks down the road).
 Declared unused dependencies are harder to resolve, as many may be legitimate 
 runtime dependencies. But fixing them would lead to smaller profiles for 
 hadoop projects.
 We can do a one-time scan of dependency issues and fix them. However, in the 
 long run, it would be nice to be able to enforce those rules via maven 
 plug-in.
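 The long-term enforcement mentioned above could build on the standard Maven 
 dependency plugin, whose analyze goals report both used-undeclared and 
 unused-declared dependencies. A sketch of a pom fragment (the execution id is 
 arbitrary; exclusion tuning would still be needed for legitimate runtime 
 dependencies):

 ```xml
 <!-- Run manually with: mvn dependency:analyze-only -->
 <plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-dependency-plugin</artifactId>
   <executions>
     <execution>
       <id>enforce-dependency-hygiene</id>
       <goals>
         <goal>analyze-only</goal>
       </goals>
       <configuration>
         <!-- Turn dependency warnings into build failures -->
         <failOnWarning>true</failOnWarning>
       </configuration>
     </execution>
   </executions>
 </plugin>
 ```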



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12106) JceAesCtrCryptoCodec.java may have an issue with Cypher.update(ByteBuffer, ByteBuffer)

2015-06-29 Thread Tony Reix (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14605374#comment-14605374
 ] 

Tony Reix commented on HADOOP-12106:


Component: I've said "security". However, "encryption" would be better, I 
think, but it is not proposed as an option.

 JceAesCtrCryptoCodec.java may have an issue with Cypher.update(ByteBuffer, 
 ByteBuffer)
 --

 Key: HADOOP-12106
 URL: https://issues.apache.org/jira/browse/HADOOP-12106
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0, 2.7.0
 Environment: Hadoop 2.6.0 and 2.7+
  - AIX/PowerPC/IBMJVM
  - Ubuntu/i386/IBMJVM
Reporter: Tony Reix
 Attachments: mvn.Test.TestCryptoStreamsForLocalFS.res20.AIX.Errors, 
 mvn.Test.TestCryptoStreamsForLocalFS.res20.Ubuntu-i386.IBMJVM.Errors, 
 mvn.Test.TestCryptoStreamsForLocalFS.res22.OpenJDK.Errors


 On AIX (IBM JVM available only), many sub-tests of :
org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
 fail:
  Tests run: 13, Failures: 5, Errors: 1, Skipped: 
   - testCryptoIV
   - testSeek
   - testSkip
   - testAvailable
   - testPositionedRead
 When testing the SAME exact code on Ubuntu/i386:
   - with OpenJDK, all tests are OK
   - with the IBM JVM, tests randomly fail.
 The issue may be in the IBM JVM, or in some Hadoop code that does not 
 perfectly handle behavioral differences of the IBM JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12143) Add a style guide to the Hadoop documentation

2015-06-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606148#comment-14606148
 ] 

Steve Loughran commented on HADOOP-12143:
-

-thanks, merged it in

 Add a style guide to the Hadoop documentation
 -

 Key: HADOOP-12143
 URL: https://issues.apache.org/jira/browse/HADOOP-12143
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.7.0
Reporter: Steve Loughran
Assignee: Steve Loughran

 We don't have a documented style guide for the Hadoop source or its tests 
 other than "use the Java rules with two spaces". 
 That doesn't cover policy like
 # exception handling
 # logging
 # metrics
 # what makes a good test
 # why features that have O(n) or worse complexity, or that put extra memory 
 load on the NN & RM, are unwelcome
 # ... etc
 We have those in our heads, and we reject patches for not following them —but 
 as they aren't written down, how can we expect new submitters to follow them, 
 or back up our vetoes with a policy to point at?
 I propose having an up-to-date style guide which defines the best practices 
 we expect of new code. That can be stricter than the existing codebase: we 
 want things to improve.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11708) CryptoOutputStream synchronization differences from DFSOutputStream break HBase

2015-06-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606152#comment-14606152
 ] 

Colin Patrick McCabe commented on HADOOP-11708:
---

Thanks, [~busbey].  I see that we have a file 
{{hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md}}
 that discusses the concurrency guarantees of Hadoop input streams now.  
[~steve_l], do we have one for output streams as well?  Maybe I missed it?  If 
not, we should create something like that.

 CryptoOutputStream synchronization differences from DFSOutputStream break 
 HBase
 ---

 Key: HADOOP-11708
 URL: https://issues.apache.org/jira/browse/HADOOP-11708
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical

 For the write-ahead-log, HBase writes to DFS from a single thread and sends 
 sync/flush/hflush from a configurable number of other threads (default 5).
 FSDataOutputStream does not document anything about being thread safe, and it 
 is not thread safe for concurrent writes.
 However, DFSOutputStream is thread safe for concurrent writes + syncs. When 
 it is the stream FSDataOutputStream wraps, the combination is threadsafe for 
 1 writer and multiple syncs (the exact behavior HBase relies on).
 When HDFS Transparent Encryption is turned on, CryptoOutputStream is inserted 
 between FSDataOutputStream and DFSOutputStream. It is proactively labeled as 
 not thread safe, and this composition is not thread safe for any operations.
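 The locking discipline the 1-writer/N-syncers pattern needs can be sketched 
 abstractly. This is a toy stand-in (not the Hadoop classes): one lock 
 serializes appends against syncs, so every syncer observes a fully written 
 prefix, never a torn update.

 ```java
 public class SafeStream {
     private final StringBuilder data = new StringBuilder(); // stand-in for the wrapped stream
     private final Object lock = new Object();

     // The single writer thread appends; guarded so a concurrent sync
     // never sees a partially applied write.
     void write(String chunk) {
         synchronized (lock) { data.append(chunk); }
     }

     // Any number of threads may call sync; each gets the length of a
     // consistent, fully written prefix.
     int sync() {
         synchronized (lock) { return data.length(); }
     }
 }
 ```

 DFSOutputStream provides an equivalent guarantee internally; the bug here is 
 that CryptoOutputStream, inserted into the composition, does not.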



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12127) some personalities are still using releaseaudit instead of asflicense

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12127:
--
Attachment: HADOOP-12127.HADOOP-12111.patch

 some personalities are still using releaseaudit instead of asflicense
 -

 Key: HADOOP-12127
 URL: https://issues.apache.org/jira/browse/HADOOP-12127
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Trivial
 Attachments: HADOOP-12127.HADOOP-12111.patch


 Simple bug: releaseaudit test was renamed to be asflicense.  Some 
 personalities are still using the old name and therefore doing the wrong 
 thing.  Just need to rename them in the personality files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12127) some personalities are still using releaseaudit instead of asflicense

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12127:
--
Status: Patch Available  (was: Open)

 some personalities are still using releaseaudit instead of asflicense
 -

 Key: HADOOP-12127
 URL: https://issues.apache.org/jira/browse/HADOOP-12127
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Trivial
 Attachments: HADOOP-12127.HADOOP-12111.patch


 Simple bug: releaseaudit test was renamed to be asflicense.  Some 
 personalities are still using the old name and therefore doing the wrong 
 thing.  Just need to rename them in the personality files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12127) some personalities are still using releaseaudit instead of asflicense

2015-06-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606160#comment-14606160
 ] 

Allen Wittenauer commented on HADOOP-12127:
---

We should probably also git mv apache-rat to asflicense for consistency.

 some personalities are still using releaseaudit instead of asflicense
 -

 Key: HADOOP-12127
 URL: https://issues.apache.org/jira/browse/HADOOP-12127
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Trivial
 Attachments: HADOOP-12127.HADOOP-12111.patch


 Simple bug: releaseaudit test was renamed to be asflicense.  Some 
 personalities are still using the old name and therefore doing the wrong 
 thing.  Just need to rename them in the personality files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11644) Contribute CMX compression

2015-06-29 Thread Xabriel J Collazo Mojica (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606021#comment-14606021
 ] 

Xabriel J Collazo Mojica commented on HADOOP-11644:
---

It would be great if someone could review the patch. Let me know if you have 
questions.

 Contribute CMX compression
 --

 Key: HADOOP-11644
 URL: https://issues.apache.org/jira/browse/HADOOP-11644
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Reporter: Xabriel J Collazo Mojica
Assignee: Xabriel J Collazo Mojica
 Attachments: HADOOP-11644.001.patch

   Original Estimate: 336h
  Remaining Estimate: 336h

 Hadoop natively supports four main compression algorithms: BZIP2, LZ4, Snappy 
 and ZLIB.
 Each one of these algorithms fills a gap:
 bzip2 : Very high compression ratio, splittable
 LZ4 : Very fast, non splittable
 Snappy : Very fast, non splittable
 zLib : good balance of compression and speed.
 We think there is a gap for a compression algorithm that can perform fast 
 compression and decompression while also being splittable. This can help 
 significantly on jobs where the input file sizes are >= 1 GB.
 For this, IBM has developed CMX. CMX is a dictionary-based, block-oriented, 
 splittable, concatenable compression algorithm developed specifically for 
 Hadoop workloads. Many of our customers use CMX, and we would love to be able 
 to contribute it to hadoop-common. 
 CMX is block oriented : We typically use 64k blocks. Blocks are independently 
 decompressable.
 CMX is splittable : We implement the SplittableCompressionCodec interface. 
 All CMX files are a multiple of 64k, so the splittability is achieved in a 
 simple way with no need for external indexes.
 CMX is concatenable : Two independent CMX files can be concatenated together. 
 We have seen that some projects like Apache Flume require this feature.
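 The index-free splitting described above can be illustrated with a small 
 sketch: since every block is 64 KB and decompresses independently, a reader 
 can align any requested split offset down to a block boundary. This is a 
 hypothetical helper, not actual CMX code:

 ```java
 public class BlockAlign {
     static final long BLOCK_SIZE = 64 * 1024; // CMX block size per the description

     // Align an arbitrary split start down to the nearest 64 KB boundary.
     // Because blocks are independently decompressable, a reader can begin
     // at any such boundary without consulting an external index.
     static long alignDown(long offset) {
         return (offset / BLOCK_SIZE) * BLOCK_SIZE;
     }

     public static void main(String[] args) {
         System.out.println(alignDown(200_000)); // 200000 -> 196608 (3 * 65536)
     }
 }
 ```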



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12144) bundled docker image should symlink java versions

2015-06-29 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12144:
-

 Summary: bundled docker image should symlink java versions
 Key: HADOOP-12144
 URL: https://issues.apache.org/jira/browse/HADOOP-12144
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer


The default docker container should symlink the java versions installed to 
something generic (e.g., oracle8, openjdk7, whatever) so that using --multijdk 
will work without doing anything crazy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: HADOOP-12121-HADOOP-12111.patch)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121.HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606046#comment-14606046
 ] 

Hadoop QA commented on HADOOP-11820:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 12s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742542/HADOOP-12121.HADOOP-12111.patch
 |
| git revision | HADOOP-12111 / 8e657fb |
| Optional Tests | asflicense shellcheck |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Default Java | 1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7080/console |


This message was automatically generated.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121.HADOOP-12111.patch








[jira] [Updated] (HADOOP-12121) smarter branch detection

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12121:
--
Attachment: HADOOP-12121.HADOOP-12111.patch

-00:
* use git show-ref and cat-file to determine viability.  this means if git 
honors it, so do we (so, if git is case sensitive, so are we...)
* remove the branch listing code. no longer needed due to above
* short circuit empty URL or CLI path
* strip .txt, .diff, and .patch from the name
* support minor, micro, nano(?), pico(?) ... releases
* recurse down the periods. this means if someone creates a file with 
branch-3.0.0, it will go instead to branch-3 if branch-3.0.0 and branch-3.0 
do not exist.
* support for ISSUE-##.[##].branch format, used by some projects
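
The "recurse down the periods" step above can be sketched as follows. This is an illustration, not the attached patch: the branch_exists helper here is a hypothetical stand-in for the git show-ref / cat-file check the patch actually performs.

```shell
# Sketch of "recurse down the periods": given a candidate like
# branch-3.0.0, keep stripping the trailing ".N" component until a
# matching branch is found.
branch_exists() {
  # Hypothetical stand-in for: git show-ref --verify --quiet "refs/heads/$1"
  case " ${KNOWN_BRANCHES} " in
    *" $1 "*) return 0 ;;
    *) return 1 ;;
  esac
}

resolve_branch() {
  local candidate=$1
  while true; do
    if branch_exists "${candidate}"; then
      printf '%s\n' "${candidate}"
      return 0
    fi
    # nothing left to strip once there is no trailing ".N" component
    [[ ${candidate} == *.* ]] || break
    candidate=${candidate%.*}
  done
  return 1
}
```

With KNOWN_BRANCHES="trunk branch-2 branch-3", a patch named for branch-3.0.0 would resolve to branch-3, since neither branch-3.0.0 nor branch-3.0 exists.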


 smarter branch detection
 

 Key: HADOOP-12121
 URL: https://issues.apache.org/jira/browse/HADOOP-12121
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12121.HADOOP-12111.patch


 We should make branch detection smarter so that it works on micro versions.





[jira] [Updated] (HADOOP-12142) Test code modification is not detected if test directory is at the top level of the project

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12142:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

+1 committed

 Test code modification is not detected if test directory is at the top level 
 of the project
 ---

 Key: HADOOP-12142
 URL: https://issues.apache.org/jira/browse/HADOOP-12142
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Assignee: Kengo Seki
 Fix For: HADOOP-12111

 Attachments: HADOOP-12142.HADOOP-12111.01.patch


 On HADOOP-12134, a sample patch contains test code modification, but 
 test4tests failed to detect it.
 This is because the test directory is at the top level of the Pig project. 
 test-patch detects changed files as follows, so the pattern does not match 
 a top-level test/ path:
 {code}
   for i in ${CHANGED_FILES}; do
 if [[ ${i} =~ /test/ ]]; then
   ((testReferences=testReferences + 1))
 fi
   done
 {code}
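
One minimal way to make the quoted loop also match a top-level test directory is to allow the pattern to anchor at the start of the path. This is a sketch of the idea, not necessarily the exact fix committed for HADOOP-12142:

```shell
# Sketch: the (^|/) alternation accepts both "test/..." at the top
# level and ".../test/..." deeper in the tree, mirroring the loop
# quoted above.
count_test_references() {
  local testReferences=0 i
  for i in ${CHANGED_FILES}; do
    if [[ ${i} =~ (^|/)test/ ]]; then
      ((testReferences=testReferences+1))
    fi
  done
  echo "${testReferences}"
}
```

For CHANGED_FILES containing test/TestFoo.java and src/test/TestBar.java, both are now counted, where the original /test/ pattern would have missed the first.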





[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: \\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  2s | The patch command could not apply 
the patch. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742540/HADOOP-12121-HADOOP-12111.patch
 |
| Optional Tests | shellcheck |
| git revision | trunk / d3fed8e |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7079/console |


This message was automatically generated.)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121.HADOOP-12111.patch








[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: HADOOP-12121.HADOOP-12111.patch

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121.HADOOP-12111.patch








[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606028#comment-14606028
 ] 

Hadoop QA commented on HADOOP-11820:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  2s | The patch command could not apply 
the patch. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742540/HADOOP-12121-HADOOP-12111.patch
 |
| Optional Tests | shellcheck |
| git revision | trunk / d3fed8e |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7079/console |


This message was automatically generated.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121-HADOOP-12111.patch








[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606045#comment-14606045
 ] 

Hadoop QA commented on HADOOP-11820:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7080/console in case of 
problems.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12121.HADOOP-12111.patch








[jira] [Comment Edited] (HADOOP-12121) smarter branch detection

2015-06-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606061#comment-14606061
 ] 

Allen Wittenauer edited comment on HADOOP-12121 at 6/29/15 6:24 PM:


-00:
* use git show-ref and cat-file to determine viability.  this means if git 
honors it, so do we (so, if git is case sensitive, so are we...)
* remove the branch listing code. no longer needed due to above
* short circuit empty URL or CLI path
* strip .txt, .diff, and .patch from the name
* support minor, micro, nano(?), pico(?) ... releases
* recurse down the periods. this means if someone creates a file with 
branch-3.0.0, it will go instead to branch-3 if branch-3.0.0 and branch-3.0 
do not exist.
* support for ISSUE-##.[##].branch format, used by some projects
* using a git### ref will have PATCH_BRANCH set to  if it is viable


was (Author: aw):
-00:
* use git show-ref and cat-file to determine viability.  this means if git 
honors it, so do we (so, if git is case sensitive, so are we...)
* remove the branch listing code. no longer needed due to above
* short circuit empty URL or CLI path
* strip .txt, .diff, and .patch from the name
* support minor, micro, nano(?), pico(?) ... releases
* recurse down the periods. this means if someone creates a file with 
branch-3.0.0, it will go instead to branch-3 if branch-3.0.0 and branch-3.0 
does not exist.
* support for ISSUE-##.[##].branch format, used by some projects


 smarter branch detection
 

 Key: HADOOP-12121
 URL: https://issues.apache.org/jira/browse/HADOOP-12121
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12121.HADOOP-12111.patch


 We should make branch detection smarter so that it works on micro versions.





[jira] [Updated] (HADOOP-12134) Pig personality always fails at precheck_javac and check_patch_javac

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12134:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

+1 committed.

 Pig personality always fails at precheck_javac and check_patch_javac
 

 Key: HADOOP-12134
 URL: https://issues.apache.org/jira/browse/HADOOP-12134
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Assignee: Kengo Seki
 Fix For: HADOOP-12111

 Attachments: HADOOP-12134.HADOOP-12111.01.patch


 Currently, pig personality always fails at precheck_javac and 
 check_patch_javac by the following error:
 {code}
 forrest.check:
 BUILD FAILED
 /Users/sekikn/pig/build.xml:648: 'forrest.home' is not defined.   Please 
 pass -Dforrest.home=base of Apache Forrest installation to Ant on the 
 command-line.
 {code}
 This is because the tar target depends on docs via package. But publishing 
 documents isn't needed in the javac phase. Probably the piggybank target is 
 suitable for the purpose of this phase; it kicks the jar target as well.





[jira] [Commented] (HADOOP-12121) smarter branch detection

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606077#comment-14606077
 ] 

Hadoop QA commented on HADOOP-12121:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 13s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742549/HADOOP-12121.HADOOP-12111.patch
 |
| git revision | HADOOP-12111 / 2f801d6 |
| Optional Tests | asflicense shellcheck |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Default Java | 1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7081/console |


This message was automatically generated.

 smarter branch detection
 

 Key: HADOOP-12121
 URL: https://issues.apache.org/jira/browse/HADOOP-12121
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12121.HADOOP-12111.patch


 We should make branch detection smarter so that it works on micro versions.





[jira] [Commented] (HADOOP-12121) smarter branch detection

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606076#comment-14606076
 ] 

Hadoop QA commented on HADOOP-12121:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7081/console in case of 
problems.

 smarter branch detection
 

 Key: HADOOP-12121
 URL: https://issues.apache.org/jira/browse/HADOOP-12121
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12121.HADOOP-12111.patch


 We should make branch detection smarter so that it works on micro versions.





[jira] [Commented] (HADOOP-11914) test-patch.sh confused by certain patch formats

2015-06-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606096#comment-14606096
 ] 

Allen Wittenauer commented on HADOOP-11914:
---

Is the comment above this code still correct?

 test-patch.sh confused by certain patch formats
 ---

 Key: HADOOP-11914
 URL: https://issues.apache.org/jira/browse/HADOOP-11914
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
Priority: Critical
 Attachments: HADOOP-11914.001.patch, HADOOP-11914.002.patch, 
 HADOOP-11914.HADOOP-12111.03.patch


 A simple patch example:
 {code}
 diff --git a/start-build-env.sh b/start-build-env.sh
 old mode 100644
 new mode 100755
 {code}
 start-build-env.sh will not show up in the changed files list and therefore 
 will not get run by shellcheck.  
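
A mode-only change still carries a "diff --git a/... b/..." header even though it produces no hunks, so one way to recover such files is to derive the changed-file list from those headers directly. This is an illustrative sketch, not the attached patch, and it assumes paths without embedded spaces:

```shell
# Sketch: list changed files by parsing "diff --git" headers, which
# are present even for mode-only changes like the one quoted above.
changed_files_from_patch() {
  awk '/^diff --git /{ sub(/^a\//, "", $3); print $3 }' "$1"
}
```

Run against the example patch above, this would report start-build-env.sh even though the patch contains no content hunks.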





[jira] [Commented] (HADOOP-12111) [Umbrella] Split test-patch off into its own TLP

2015-06-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606101#comment-14606101
 ] 

Allen Wittenauer commented on HADOOP-12111:
---

I'm going to remove the CHANGES.txt file then.  Once we move our JIRAs into its 
own project and re-categorize, releasedocmaker will do the right thing 
(post-HADOOP-12135)

 [Umbrella] Split test-patch off into its own TLP
 

 Key: HADOOP-12111
 URL: https://issues.apache.org/jira/browse/HADOOP-12111
 Project: Hadoop Common
  Issue Type: New Feature
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer

 Given test-patch's tendency to get forked into a variety of different 
 projects, it makes a lot of sense to make an Apache TLP so that everyone can 
 benefit from a common code base.





[jira] [Commented] (HADOOP-12124) Add HTrace support for FsShell

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606365#comment-14606365
 ] 

Hadoop QA commented on HADOOP-12124:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 21s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   7m 53s | The applied patch generated  2  
additional warning messages. |
| {color:green}+1{color} | javadoc |  10m  1s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  7s | The applied patch generated  1 
new checkstyle issues (total was 20, now 21). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 52s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 14s | Tests passed in 
hadoop-common. |
| | |  62m 57s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742576/HADOOP-12124.002.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / fad291e |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7083/artifact/patchprocess/diffJavacWarnings.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7083/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7083/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7083/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7083/console |


This message was automatically generated.

 Add HTrace support for FsShell
 --

 Key: HADOOP-12124
 URL: https://issues.apache.org/jira/browse/HADOOP-12124
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-12124.001.patch, HADOOP-12124.002.patch


 Add HTrace support for FsShell





[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-29 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606431#comment-14606431
 ] 

Gera Shegalov commented on HADOOP-12107:


bq. You could make the same argument to stop development on almost any patch.

I disagree with such a strong statement. It's not the case in my experience.  
Thanks for pointing out the compatibility document. It gives us a formal basis 
to go on, and not delay [~sjlee0]'s important fix. Maybe one day we'll have a 
compatibility test suite based on that doc.

bq. It's simply unreasonable to try to support users who are putting their code 
inside the org.apache.hadoop.fs 
We develop new Hadoop features and often they do not make it upstream 
immediately. It happens that we have classes in their intended packages but we 
can deal with this. We are not affected by this particular change, either.

+1 for both trunk and branch 2. [~mingma], do you want to exercise your 
committer rights :) ?


 





 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Critical
 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.





[jira] [Updated] (HADOOP-12089) StorageException complaining no lease ID when updating FolderLastModifiedTime in WASB

2015-06-29 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12089:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1 for patch v02.  I committed this to trunk and branch-2.  Duo, thank you for 
contributing the patch.

 StorageException complaining  no lease ID when updating 
 FolderLastModifiedTime in WASB
 

 Key: HADOOP-12089
 URL: https://issues.apache.org/jira/browse/HADOOP-12089
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
Reporter: Duo Xu
Assignee: Duo Xu
 Fix For: 2.8.0

 Attachments: HADOOP-12089.01.patch, HADOOP-12089.02.patch


 This is a similar issue to HADOOP-11523. HADOOP-11523 happens when HBase is 
 doing distributed log splitting. This JIRA happens when HBase is deleting old 
 WALs and trying to update /hbase/oldWALs folder.
 The fix is the same as HADOOP-11523.
 {code}
 2015-06-10 08:11:40,636 WARN 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore: Error while deleting: 
 wasb://basecus...@basestoragecus1.blob.core.windows.net/hbase/oldWALs/workernode10.dthbasecus1.g1.internal.cloudapp.net%2C60020%2C1433908062461.1433921692855
 org.apache.hadoop.fs.azure.AzureException: 
 com.microsoft.azure.storage.StorageException: There is currently a lease on 
 the blob and no lease ID was specified in the request.
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2602)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2613)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1505)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1437)
   at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteFiles(CleanerChore.java:256)
   at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:157)
   at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.chore(CleanerChore.java:124)
   at org.apache.hadoop.hbase.Chore.run(Chore.java:80)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: com.microsoft.azure.storage.StorageException: There is currently a 
 lease on the blob and no lease ID was specified in the request.
   at 
 com.microsoft.azure.storage.StorageException.translateException(StorageException.java:162)
   at 
 com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:307)
   at 
 com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:177)
   at 
 com.microsoft.azure.storage.blob.CloudBlob.uploadProperties(CloudBlob.java:2991)
   at 
 org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.uploadProperties(StorageInterfaceImpl.java:372)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2597)
   ... 8 more
 {code}





[jira] [Updated] (HADOOP-12048) many undeclared used dependencies (and declared unused dependencies)

2015-06-29 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HADOOP-12048:
--
Attachment: HADOOP-12048.2.patch

 many undeclared used dependencies (and declared unused dependencies)
 

 Key: HADOOP-12048
 URL: https://issues.apache.org/jira/browse/HADOOP-12048
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Sangjin Lee
Assignee: Gabor Liptak
 Attachments: HADOOP-12048.1.patch, HADOOP-12048.2.patch, 
 dependency-analysis.txt, hadoop-unused.txt


 Currently there are numerous undeclared used dependencies and declared unused 
 dependencies in the hadoop projects.
 Undeclared used dependencies are easier errors to correct, and correcting 
 them will lead to a better management of dependencies (enabling stricter 
 dependency frameworks down the road).
 Declared unused dependencies are harder to resolve, as many may be legitimate 
 runtime dependencies. But fixing them would lead to smaller profiles for 
 hadoop projects.
 We can do a one-time scan of dependency issues and fix them. However, in the 
 long run, it would be nice to be able to enforce those rules via maven 
 plug-in.





[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606271#comment-14606271
 ] 

Colin Patrick McCabe commented on HADOOP-12107:
---

Guys, we clearly define the API contract for the project.  See 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html

You have to remember that:
1. The function that you are talking about changing (the constructor) is not 
public from Java's point of view.  It is package-private.
2. The function that you are talking about changing is not public from Hadoop's 
point of view (there is no \@Public or \@LimitedPrivate annotation on it)

There is simply no reason to treat this as public.

bq. However, at the expense of being too defensive, the only test I apply here: 
is there hypothetically a scenario where an API user can be broken? My answer 
is yes if you have some org.apache.hadoop.fs.Foo calling the constructor even 
though the user absolutely should not do it. 

You could make the same argument to stop development on almost any patch.  
Almost every patch changes things which are private or package-private inside 
Hadoop.  It's simply unreasonable to try to support users who are putting their 
code inside the org.apache.hadoop.fs namespace (or any other internal project 
namespace)

 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Critical
 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.





[jira] [Resolved] (HADOOP-12144) bundled docker image should symlink java versions

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12144.
---
Resolution: Not A Problem

I misread the installed dirs.  Closing as not a problem.

 bundled docker image should symlink java versions
 -

 Key: HADOOP-12144
 URL: https://issues.apache.org/jira/browse/HADOOP-12144
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Allen Wittenauer

 The default docker container should symlink the java versions installed to 
 something generic (e.g., oracle8, openjdk7, whatever) so that using 
 --multijdk will work without doing anything crazy.
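
A symlinking scheme of this kind could be sketched as follows. The JDK directory names, the generic aliases, and the scratch-directory demo are illustrative assumptions, not the bundled image's actual layout (in real use JVM_DIR would be something like /usr/lib/jvm):

```shell
# Sketch: link installed JDK directories to generic names such as
# "oracle8" and "openjdk7" so --multijdk can refer to stable names.
# Demo runs against a scratch directory so it is runnable anywhere;
# the real directory names below are assumptions for illustration.
JVM_DIR="$(mktemp -d)"
mkdir "${JVM_DIR}/java-8-oracle" "${JVM_DIR}/java-7-openjdk-amd64"

link_jdk() {
  local real="$1" generic="$2"
  # Only link when the real JDK exists and the generic name is free.
  if [[ -d "${JVM_DIR}/${real}" && ! -e "${JVM_DIR}/${generic}" ]]; then
    ln -s "${JVM_DIR}/${real}" "${JVM_DIR}/${generic}"
  fi
}

link_jdk java-8-oracle oracle8
link_jdk java-7-openjdk-amd64 openjdk7
```

Tools could then always invoke "${JVM_DIR}/oracle8" regardless of the exact version installed.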



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606495#comment-14606495
 ] 

Hudson commented on HADOOP-12107:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8089 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8089/])
HADOOP-12107. long running apps may have a huge number of StatisticsData 
instances under FileSystem (Sangjin Lee via Ming Ma) (mingma: rev 
8e1bdc17d9134e01115ae7c929503d8ac0325207)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FCStatisticsBaseTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Critical
 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11909) test-patch javac & javadoc output clarity

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11909:
--
Assignee: (was: Allen Wittenauer)

 test-patch javac & javadoc output clarity
 -

 Key: HADOOP-11909
 URL: https://issues.apache.org/jira/browse/HADOOP-11909
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer

 It should be possible to use similar code as is in the checkstyle plugin to 
 just print the errors/warnings of relevant lines rather than the full diff.  
 Additionally, when javac (and maybe javadoc too?) fails, it doesn't provide a 
 link to the log file.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11909) test-patch javac & javadoc output clarity

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11909:
--
Description: It should be possible to use similar code as is in the 
checkstyle plugin to just print the errors/warnings of relevant lines rather 
than the full diff.   (was: It should be possible to use similar code as is in 
the checkstyle plugin to just print the errors/warnings of relevant lines 
rather than the full diff.  Additionally, when javac (and maybe javadoc too?) 
fails, it doesn't provide a link to the log file.  )

 test-patch javac & javadoc output clarity
 -

 Key: HADOOP-11909
 URL: https://issues.apache.org/jira/browse/HADOOP-11909
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer

 It should be possible to use similar code as is in the checkstyle plugin to 
 just print the errors/warnings of relevant lines rather than the full diff. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work stopped] (HADOOP-11909) test-patch javac & javadoc output clarity

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-11909 stopped by Allen Wittenauer.
-
 test-patch javac & javadoc output clarity
 -

 Key: HADOOP-11909
 URL: https://issues.apache.org/jira/browse/HADOOP-11909
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer

 It should be possible to use similar code as is in the checkstyle plugin to 
 just print the errors/warnings of relevant lines rather than the full diff.  
 Additionally, when javac (and maybe javadoc too?) fails, it doesn't provide a 
 link to the log file.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11909) test-patch javac & javadoc output clarity

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-11909:
-

Assignee: Allen Wittenauer

 test-patch javac & javadoc output clarity
 -

 Key: HADOOP-11909
 URL: https://issues.apache.org/jira/browse/HADOOP-11909
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer

 It should be possible to use similar code as is in the checkstyle plugin to 
 just print the errors/warnings of relevant lines rather than the full diff.  
 Additionally, when javac (and maybe javadoc too?) fails, it doesn't provide a 
 link to the log file.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12145) Organize and update CodeReviewChecklist wiki

2015-06-29 Thread Ray Chiang (JIRA)
Ray Chiang created HADOOP-12145:
---

 Summary: Organize and update CodeReviewChecklist wiki
 Key: HADOOP-12145
 URL: https://issues.apache.org/jira/browse/HADOOP-12145
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor


I haven't done too many reviews yet, but I've definitely had a lot of good 
reviews from others in the community.

I've put together a preliminary update with the following things in mind:
- In the spirit of trying to lower the barrier for new developers, reorganized 
the document to be a bit more like a checklist
- Added checklist items that other reviewers have caught in my earlier patch 
submissions
- Added more checklist items based on what I've read in past JIRAs




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12089) StorageException complaining "no lease ID" when updating FolderLastModifiedTime in WASB

2015-06-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606388#comment-14606388
 ] 

Hudson commented on HADOOP-12089:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8088 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8088/])
HADOOP-12089. StorageException complaining "no lease ID" when updating 
FolderLastModifiedTime in WASB. Contributed by Duo Xu. (cnauroth: rev 
460e98f7b3ec84f3c5afcb2aad4f4e7031d16e3a)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java


 StorageException complaining "no lease ID" when updating 
 FolderLastModifiedTime in WASB
 

 Key: HADOOP-12089
 URL: https://issues.apache.org/jira/browse/HADOOP-12089
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
Reporter: Duo Xu
Assignee: Duo Xu
 Fix For: 2.8.0

 Attachments: HADOOP-12089.01.patch, HADOOP-12089.02.patch


 This is a similar issue to HADOOP-11523. HADOOP-11523 happens when HBase is 
 doing distributed log splitting. This JIRA happens when HBase is deleting old 
 WALs and trying to update /hbase/oldWALs folder.
 The fix is the same as HADOOP-11523.
 {code}
 2015-06-10 08:11:40,636 WARN 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore: Error while deleting: 
 wasb://basecus...@basestoragecus1.blob.core.windows.net/hbase/oldWALs/workernode10.dthbasecus1.g1.internal.cloudapp.net%2C60020%2C1433908062461.1433921692855
 org.apache.hadoop.fs.azure.AzureException: 
 com.microsoft.azure.storage.StorageException: There is currently a lease on 
 the blob and no lease ID was specified in the request.
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2602)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2613)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1505)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1437)
   at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteFiles(CleanerChore.java:256)
   at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:157)
   at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.chore(CleanerChore.java:124)
   at org.apache.hadoop.hbase.Chore.run(Chore.java:80)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: com.microsoft.azure.storage.StorageException: There is currently a 
 lease on the blob and no lease ID was specified in the request.
   at 
 com.microsoft.azure.storage.StorageException.translateException(StorageException.java:162)
   at 
 com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:307)
   at 
 com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:177)
   at 
 com.microsoft.azure.storage.blob.CloudBlob.uploadProperties(CloudBlob.java:2991)
   at 
 org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.uploadProperties(StorageInterfaceImpl.java:372)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2597)
   ... 8 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-29 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606514#comment-14606514
 ] 

Ming Ma commented on HADOOP-12107:
--

Also thanks to [~walter.k.su] and [~sandyr] for the review and suggestions.

 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Critical
 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-29 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HADOOP-12107:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Critical
 Fix For: 2.8.0

 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11909) test-patch javac & javadoc output clarity

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11909:
--
Assignee: (was: Jayaradha)

 test-patch javac & javadoc output clarity
 -

 Key: HADOOP-11909
 URL: https://issues.apache.org/jira/browse/HADOOP-11909
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer

 It should be possible to use similar code as is in the checkstyle plugin to 
 just print the errors/warnings of relevant lines rather than the full diff.  
 Additionally, when javac (and maybe javadoc too?) fails, it doesn't provide a 
 link to the log file.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-29 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606509#comment-14606509
 ] 

Ming Ma commented on HADOOP-12107:
--

I have committed this to trunk and branch-2. Thanks [~sjlee0] for the 
contribution and [~jira.shegalov] and [~cmccabe] for the code review!

 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Critical
 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12145) Organize and update CodeReviewChecklist wiki

2015-06-29 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-12145:

Attachment: 2015_CodeReviewChecklistWiki.001.pdf

Initial version

 Organize and update CodeReviewChecklist wiki
 

 Key: HADOOP-12145
 URL: https://issues.apache.org/jira/browse/HADOOP-12145
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Attachments: 2015_CodeReviewChecklistWiki.001.pdf


 I haven't done too many reviews yet, but I've definitely had a lot of good 
 reviews from others in the community.
 I've put together a preliminary update with the following things in mind:
 - In the spirit of trying to lower the barrier for new developers, 
 reorganized the document to be a bit more like a checklist
 - Added checklist items that other reviewers have caught in my earlier patch 
 submissions
 - Added more checklist items based on what I've read in past JIRAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-06-29 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606552#comment-14606552
 ] 

Sangjin Lee commented on HADOOP-12107:
--

Thanks [~mingma] for the commit! Many thanks to [~jira.shegalov] and [~cmccabe] 
for the invaluable review.

 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Critical
 Fix For: 2.8.0

 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12142) Test code modification is not detected if test directory is at the top level of the project

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14605253#comment-14605253
 ] 

Hadoop QA commented on HADOOP-12142:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
8s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 12s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742470/HADOOP-12142.HADOOP-12111.01.patch
 |
| git revision | HADOOP-12111 / 8e657fb |
| Optional Tests | asflicense shellcheck |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Default Java | 1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7077/console |


This message was automatically generated.

 Test code modification is not detected if test directory is at the top level 
 of the project
 ---

 Key: HADOOP-12142
 URL: https://issues.apache.org/jira/browse/HADOOP-12142
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Assignee: Kengo Seki
 Attachments: HADOOP-12142.HADOOP-12111.01.patch


 On HADOOP-12134, a sample patch contains a test code modification, but 
 test4tests failed to detect it.
 This is because the test directory is at the top level of the Pig project. 
 test-patch detects changed files as follows, and the pattern /test/ does not 
 match a top-level test/ directory:
 {code}
   for i in ${CHANGED_FILES}; do
 if [[ ${i} =~ /test/ ]]; then
   ((testReferences=testReferences + 1))
 fi
   done
 {code}
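
A fix along these lines could anchor the pattern at a path-component boundary so a top-level test/ directory is also counted. The function below is only a sketch reusing the snippet's variable names, not the actual test-patch change:

```shell
# Sketch: count changed files under any test/ path component, including
# a top-level test/ directory. CHANGED_FILES and testReferences reuse
# the names from the quoted snippet; this is not the committed fix.
count_test_references() {
  local i testReferences=0
  for i in ${CHANGED_FILES}; do
    # (^|/) matches "test/" at the start of the path or after a slash,
    # so "test/Foo.java" and "src/test/Bar.java" both count, while
    # "latest/Baz.java" does not.
    if [[ ${i} =~ (^|/)test/ ]]; then
      ((testReferences = testReferences + 1))
    fi
  done
  echo "${testReferences}"
}
```

For example, with CHANGED_FILES="test/TestFoo.java src/test/Bar.java src/main/Baz.java latest/Qux.java" this prints 2.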



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12142) Test code modification is not detected if test directory is at the top level of the project

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14605252#comment-14605252
 ] 

Hadoop QA commented on HADOOP-12142:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7077/console in case of 
problems.

 Test code modification is not detected if test directory is at the top level 
 of the project
 ---

 Key: HADOOP-12142
 URL: https://issues.apache.org/jira/browse/HADOOP-12142
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Assignee: Kengo Seki
 Attachments: HADOOP-12142.HADOOP-12111.01.patch


 On HADOOP-12134, a sample patch contains a test code modification, but 
 test4tests failed to detect it.
 This is because the test directory is at the top level of the Pig project. 
 test-patch detects changed files as follows, and the pattern /test/ does not 
 match a top-level test/ directory:
 {code}
   for i in ${CHANGED_FILES}; do
 if [[ ${i} =~ /test/ ]]; then
   ((testReferences=testReferences + 1))
 fi
   done
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12143) Add a style guide to the Hadoop documentation

2015-06-29 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12143:
---

 Summary: Add a style guide to the Hadoop documentation
 Key: HADOOP-12143
 URL: https://issues.apache.org/jira/browse/HADOOP-12143
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.7.0
Reporter: Steve Loughran
Assignee: Steve Loughran


We don't have a documented style guide for the Hadoop source or its tests other 
than "use the Java rules with two-space indentation". 

That doesn't cover policy like
# exception handling
# logging
# metrics
# what makes a good test
# why features that have O(n) or worse complexity, or that add extra memory 
load on the NN & RM, are unwelcome,
# ... etc

We have those in our heads, and we reject patches for not following them, but 
as they aren't written down, how can we expect new submitters to follow them, 
or back up our vetoes with a policy to point at?

I propose adding an up-to-date style guide that defines the best practices we 
expect of new code. It can be stricter than the existing codebase: we want 
things to improve.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12142) Test code modification is not detected if test directory is at the top level of the project

2015-06-29 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12142:

Attachment: HADOOP-12142.HADOOP-12111.01.patch

 Test code modification is not detected if test directory is at the top level 
 of the project
 ---

 Key: HADOOP-12142
 URL: https://issues.apache.org/jira/browse/HADOOP-12142
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Assignee: Kengo Seki
 Attachments: HADOOP-12142.HADOOP-12111.01.patch


 On HADOOP-12134, a sample patch contains a test code modification, but 
 test4tests failed to detect it.
 This is because the test directory is at the top level of the Pig project. 
 test-patch detects changed files as follows, and the pattern /test/ does not 
 match a top-level test/ directory:
 {code}
   for i in ${CHANGED_FILES}; do
 if [[ ${i} =~ /test/ ]]; then
   ((testReferences=testReferences + 1))
 fi
   done
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12142) Test code modification is not detected if test directory is at the top level of the project

2015-06-29 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12142:

Status: Patch Available  (was: Open)

 Test code modification is not detected if test directory is at the top level 
 of the project
 ---

 Key: HADOOP-12142
 URL: https://issues.apache.org/jira/browse/HADOOP-12142
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Assignee: Kengo Seki
 Attachments: HADOOP-12142.HADOOP-12111.01.patch


 On HADOOP-12134, a sample patch contains a test code modification, but 
 test4tests failed to detect it.
 This is because the test directory is at the top level of the Pig project. 
 test-patch detects changed files as follows, and the pattern /test/ does not 
 match a top-level test/ directory:
 {code}
   for i in ${CHANGED_FILES}; do
 if [[ ${i} =~ /test/ ]]; then
   ((testReferences=testReferences + 1))
 fi
   done
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11914) test-patch.sh confused by certain patch formats

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14605279#comment-14605279
 ] 

Hadoop QA commented on HADOOP-11914:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7078/console in case of 
problems.

 test-patch.sh confused by certain patch formats
 ---

 Key: HADOOP-11914
 URL: https://issues.apache.org/jira/browse/HADOOP-11914
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
Priority: Critical
 Attachments: HADOOP-11914.001.patch, HADOOP-11914.002.patch, 
 HADOOP-11914.HADOOP-12111.03.patch


 A simple patch example:
 {code}
 diff --git a/start-build-env.sh b/start-build-env.sh
 old mode 100644
 new mode 100755
 {code}
 start-build-env.sh will not show up in the changed files list and therefore 
 will not get run by shellcheck.  
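
One way to catch such patches is to derive the changed-file list from the "diff --git" header lines themselves rather than from content hunks. The following is only an illustrative sketch under that idea, not the test-patch implementation:

```shell
# Sketch: list changed files by parsing "diff --git a/X b/Y" headers,
# so a patch that only changes a file mode (and therefore has no
# content hunks) still reports the file. Illustration only; paths
# containing spaces would need more careful parsing.
changed_files_from_patch() {
  # $4 is the "b/<path>" field of the header; strip the "b/" prefix.
  awk '/^diff --git / { sub(/^b\//, "", $4); print $4 }' "$1"
}
```

For the mode-change-only patch quoted above, this would print start-build-env.sh, so the file would reach shellcheck.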



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-06-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14605248#comment-14605248
 ] 

Steve Loughran commented on HADOOP-11628:
-

OK: is there any reason why the patch, as is, SHOULD NOT go into 2.8?

Regarding options, SPNEGO isn't that broadly used in the Hadoop stack, at least 
with Jersey (KMS, WebHDFS & Timeline, each with their own client). I do want to 
coalesce these into a single common HTTP/Jersey client. Having this feature in 
without another config option would work better on Java 8, and would avoid 
adding another dimension to the configuration space that is Hadoop's -site.xml 
and the tests around it. Assuming this is a no-op on Java 7, enabling it will 
give consistent behaviour on Java 8, so it should not count as a regression 
there.


 SPNEGO auth does not work with CNAMEs in JDK8
 -

 Key: HADOOP-11628
 URL: https://issues.apache.org/jira/browse/HADOOP-11628
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
  Labels: jdk8
 Attachments: HADOOP-11628.patch


 Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
 principal for SPNEGO.  JDK8 no longer does this which breaks the use of 
 user-friendly CNAMEs for services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12142) Test code modification is not detected if test directory is at the top level of the project

2015-06-29 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12142:
---

 Summary: Test code modification is not detected if test directory 
is at the top level of the project
 Key: HADOOP-12142
 URL: https://issues.apache.org/jira/browse/HADOOP-12142
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Assignee: Kengo Seki


On HADOOP-12134, a sample patch contains a test code modification, but test4tests 
failed to detect it.
This is because the test directory is at the top level of the Pig project. 
test-patch detects changed files as follows, so the pattern does not match paths 
that begin with test/:

{code}
  for i in ${CHANGED_FILES}; do
if [[ ${i} =~ /test/ ]]; then
  ((testReferences=testReferences + 1))
fi
  done
{code}
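A minimal fix sketch (hypothetical, not the committed patch): anchoring the pattern with {{(^|/)test/}} makes a top-level test/ directory match as well as a nested one. {{count_test_references}} is an illustrative name, not the real test-patch function:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: anchor the regex so "test/" matches at the
# start of a path as well as after a separator.
count_test_references() {
  local testReferences=0
  local i
  for i in "$@"; do
    if [[ ${i} =~ (^|/)test/ ]]; then
      ((testReferences=testReferences + 1))
    fi
  done
  echo "${testReferences}"
}

# Top-level test file (the case that was missed), a nested one, and a
# non-test file: only the first two should count.
count_test_references "test/org/apache/pig/TestFoo.java" \
                      "src/test/java/TestBar.java" \
                      "src/main/java/Baz.java"
```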



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12048) many undeclared used dependencies (and declared unused dependencies)

2015-06-29 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14605254#comment-14605254
 ] 

Kengo Seki commented on HADOOP-12048:
-

[~gliptak] they will be resolved if HADOOP-12113 is merged into trunk. 
According to its release note:

bq. smart-apply-patch has a stray rm fixed.

 many undeclared used dependencies (and declared unused dependencies)
 

 Key: HADOOP-12048
 URL: https://issues.apache.org/jira/browse/HADOOP-12048
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Sangjin Lee
Assignee: Gabor Liptak
 Attachments: HADOOP-12048.1.patch, dependency-analysis.txt, 
 hadoop-unused.txt


 Currently there are numerous undeclared used dependencies and declared unused 
 dependencies in the hadoop projects.
 Undeclared used dependencies are easier errors to correct, and correcting 
 them will lead to a better management of dependencies (enabling stricter 
 dependency frameworks down the road).
 Declared unused dependencies are harder to resolve, as many may be legitimate 
 runtime dependencies. But fixing them would lead to smaller profiles for 
 hadoop projects.
 We can do a one-time scan of dependency issues and fix them. However, in the 
 long run, it would be nice to be able to enforce those rules via maven 
 plug-in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11914) test-patch.sh confused by certain patch formats

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14605280#comment-14605280
 ] 

Hadoop QA commented on HADOOP-11914:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 14s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742478/HADOOP-11914.HADOOP-12111.03.patch
 |
| git revision | HADOOP-12111 / 8e657fb |
| Optional Tests | asflicense shellcheck |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Default Java | 1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7078/console |


This message was automatically generated.

 test-patch.sh confused by certain patch formats
 ---

 Key: HADOOP-11914
 URL: https://issues.apache.org/jira/browse/HADOOP-11914
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
Priority: Critical
 Attachments: HADOOP-11914.001.patch, HADOOP-11914.002.patch, 
 HADOOP-11914.HADOOP-12111.03.patch


 A simple patch example:
 {code}
 diff --git a/start-build-env.sh b/start-build-env.sh
 old mode 100644
 new mode 100755
 {code}
 start-build-env.sh will not show up in the changed files list and therefore 
 will not get run by shellcheck.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11875) [JDK8] Renaming _ as a one-character identifier to another identifier

2015-06-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14605250#comment-14605250
 ] 

Steve Loughran commented on HADOOP-11875:
-

Bear in mind that the YARN Hamlet web framework is used downstream in YARN 
apps: no matter what is done here, it needs to be retained

 [JDK8] Renaming _ as a one-character identifier to another identifier
 -

 Key: HADOOP-11875
 URL: https://issues.apache.org/jira/browse/HADOOP-11875
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
  Labels: newbie

 From JDK8, _ as a one-character identifier is disallowed. Currently Web UI 
 uses it. We should fix them to compile with JDK8. 
 https://bugs.openjdk.java.net/browse/JDK-8061549



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11914) test-patch.sh confused by certain patch formats

2015-06-29 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-11914:

Attachment: HADOOP-11914.HADOOP-12111.03.patch

Patch rebased.

 test-patch.sh confused by certain patch formats
 ---

 Key: HADOOP-11914
 URL: https://issues.apache.org/jira/browse/HADOOP-11914
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
Priority: Critical
 Attachments: HADOOP-11914.001.patch, HADOOP-11914.002.patch, 
 HADOOP-11914.HADOOP-12111.03.patch


 A simple patch example:
 {code}
 diff --git a/start-build-env.sh b/start-build-env.sh
 old mode 100644
 new mode 100755
 {code}
 start-build-env.sh will not show up in the changed files list and therefore 
 will not get run by shellcheck.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12146) dockermode should support custom maven repos

2015-06-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606632#comment-14606632
 ] 

Allen Wittenauer edited comment on HADOOP-12146 at 6/29/15 11:13 PM:
-

IMO, per-run is going to be expensive and slow. Per-project doesn't protect 
projects against themselves.  I think per-project, per-branch is probably the 
correct mix. Also, I think this mode should run in effectively three ways:

* no override
* custom directory specified by the user
* some default directory path structure (e.g., 
$\{HOME\}/test-patch-maven/$\{project\}/$\{branch\})

In the case of the latter, we'll need to build some self-cleaning code similar 
to the things we're doing for docker in general.


was (Author: aw):
IMO, per-run is going to be expensive and slow. Per-project doesn't protect 
projects against themselves.  I think per-project, per-branch is probably the 
correct mix. Also, I think this mode should run in effectively three ways:

* no override
* custom directory specified by the user
* some default directory path structure (e.g., 
${HOME}/test-patch-maven/${project}/${branch})

In the case of the latter, we'll need to build some self-cleaning code similar 
to the things we're doing for docker in general.
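The three modes listed above can be sketched as follows. This is a hypothetical helper, not test-patch code; {{maven_repo_override}} and its arguments are illustrative names, and the project/branch values would come from test-patch itself:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the override resolution: a user-supplied
# directory wins, otherwise fall back to the per-project, per-branch
# default layout under $HOME.
maven_repo_override() {
  local user_override=$1 project=$2 branch=$3
  if [[ -n ${user_override} ]]; then
    # Custom directory specified by the user.
    echo "${user_override}"
  else
    # Default per-project, per-branch repo.
    echo "${HOME}/test-patch-maven/${project}/${branch}"
  fi
}

# Example: no user override, so the default layout is used.
maven_repo_override "" hadoop HADOOP-12111
```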

 dockermode should support custom maven repos
 

 Key: HADOOP-12146
 URL: https://issues.apache.org/jira/browse/HADOOP-12146
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer

 On busy jenkins servers, it only takes one bad apple doing a 
 dependency:purge-local-repository to wreak havoc on other projects. In order 
 to protect against this, test-patch should have some way to overlay the .m2 
 directory with something that is (minimally) per-project and maximally per 
 run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606715#comment-14606715
 ] 

Hadoop QA commented on HADOOP-11820:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 20s | Pre-patch HADOOP-12111 
compilation is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   0m 21s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742644/HADOOP-12048.HADOOP-12111.patch
 |
| Optional Tests | javadoc javac unit |
| git revision | HADOOP-12111 / c5815a6 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7086/console |


This message was automatically generated.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: 1.HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606764#comment-14606764
 ] 

Hadoop QA commented on HADOOP-11820:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 27m 3s 
{color} | {color:red} root in HADOOP-12111 failed. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} HADOOP-12111 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} HADOOP-12111 passed {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
6s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-auth in the patch failed. {color} |
| {color:green}+1{color} | {color:green} eclipse {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 11s {color} 
| {color:red} hadoop-auth in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s 
{color} | {color:green} hadoop-auth-examples in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s 
{color} | {color:green} hadoop-minikdc in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s 
{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 16s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742653/1.HADOOP-12111.patch |
| git revision | HADOOP-12111 / c5815a6 |
| Optional Tests | asflicense shellcheck javac javadoc mvninstall unit xml |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Default Java | 1.7.0_55 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7088/artifact/patchprocess/branch-mvninstall-root.txt
 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7088/artifact/patchprocess/patch-mvninstall-hadoop-common-project_hadoop-auth.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7088/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-auth.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7088/testReport/ |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7088/console |


This message was automatically generated.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: 1.HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: HADOOP-12121.HADOOP-12111.patch)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12048.HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: HADOOP-12048.HADOOP-12111.patch

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-12048.HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: \\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 42s | Pre-patch HADOOP-12111 
compilation is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   0m 21s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742644/HADOOP-12048.HADOOP-12111.patch
 |
| Optional Tests | javadoc javac unit |
| git revision | HADOOP-12111 / c5815a6 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7085/console |


This message was automatically generated.)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: 1.HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: (was: HADOOP-12048.HADOOP-12111.patch)

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: 1.HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: 1.HADOOP-12111.patch

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: 1.HADOOP-12111.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12147) bundled dockerfile should use the JDK verison of openjdk, not JRE

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12147:
--
Attachment: HADOOP-12147.HADOOP-12111.patch

-00:
* force openjdk 7 to be a jdk instead of a jre

 bundled dockerfile should use the JDK verison of openjdk, not JRE
 -

 Key: HADOOP-12147
 URL: https://issues.apache.org/jira/browse/HADOOP-12147
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Priority: Trivial
 Attachments: HADOOP-12147.HADOOP-12111.patch


 It's sort of dumb to have OpenJDK JRE there...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12124) Add HTrace support for FsShell

2015-06-29 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606740#comment-14606740
 ] 

Yi Liu commented on HADOOP-12124:
-

+1, thanks Colin.

 Add HTrace support for FsShell
 --

 Key: HADOOP-12124
 URL: https://issues.apache.org/jira/browse/HADOOP-12124
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-12124.001.patch, HADOOP-12124.002.patch


 Add HTrace support for FsShell



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12048) many undeclared used dependencies (and declared unused dependencies)

2015-06-29 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606756#comment-14606756
 ] 

Gabor Liptak commented on HADOOP-12048:
---

Jenkins shows a javac failure, but the log doesn't have the error.

https://builds.apache.org/job/PreCommit-HADOOP-Build/7084/console

What is the best way to identify how this failed?

 many undeclared used dependencies (and declared unused dependencies)
 

 Key: HADOOP-12048
 URL: https://issues.apache.org/jira/browse/HADOOP-12048
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Sangjin Lee
Assignee: Gabor Liptak
 Attachments: HADOOP-12048.1.patch, HADOOP-12048.2.patch, 
 HADOOP-12048.3.patch, dependency-analysis.txt, hadoop-unused.txt


 Currently there are numerous undeclared used dependencies and declared unused 
 dependencies in the hadoop projects.
 Undeclared used dependencies are easier errors to correct, and correcting 
 them will lead to a better management of dependencies (enabling stricter 
 dependency frameworks down the road).
 Declared unused dependencies are harder to resolve, as many may be legitimate 
 runtime dependencies. But fixing them would lead to smaller profiles for 
 hadoop projects.
 We can do a one-time scan of dependency issues and fix them. However, in the 
 long run, it would be nice to be able to enforce those rules via maven 
 plug-in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11914) test-patch.sh confused by certain patch formats

2015-06-29 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14606854#comment-14606854
 ] 

Kengo Seki commented on HADOOP-11914:
-

bq. Is the comment above this code still correct?

Let me confirm: is the part you think incorrect the revision info at the end? 
Other parts look good to me.
Also, I noticed the current patch can't detect the following case; I'll fix it 
later.

{code}
diff --git a/foo b/foo
new file mode 100644
index 000..e69de29
{code}
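One way to sidestep both corner cases (mode-change-only diffs and empty new files) is to take the changed-file list from the {{diff --git}} header lines themselves rather than from the hunks. A sketch, assuming paths contain no spaces; {{list_changed_files}} is an illustrative name:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: every file touched by a git patch appears on a
# "diff --git a/<path> b/<path>" header line, even when the body only
# records a mode change or an empty new file. Strips the a/ prefix
# from the first path. Assumes no spaces in file names.
list_changed_files() {
  awk '/^diff --git / { sub(/^a\//, "", $3); print $3 }' "$1" | sort -u
}
```

Run against the patch in this issue's description, this would list start-build-env.sh even though the diff body contains only mode lines.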

 test-patch.sh confused by certain patch formats
 ---

 Key: HADOOP-11914
 URL: https://issues.apache.org/jira/browse/HADOOP-11914
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
Priority: Critical
 Attachments: HADOOP-11914.001.patch, HADOOP-11914.002.patch, 
 HADOOP-11914.HADOOP-12111.03.patch


 A simple patch example:
 {code}
 diff --git a/start-build-env.sh b/start-build-env.sh
 old mode 100644
 new mode 100755
 {code}
 start-build-env.sh will not show up in the changed files list and therefore 
 will not get run by shellcheck.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12147) bundled dockerfile should use the JDK verison of openjdk, not JRE

2015-06-29 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12147:
-

 Summary: bundled dockerfile should use the JDK verison of openjdk, 
not JRE
 Key: HADOOP-12147
 URL: https://issues.apache.org/jira/browse/HADOOP-12147
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer


It's sort of dumb to have OpenJDK JRE there...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12147) bundled dockerfile should use the JDK verison of openjdk, not JRE

2015-06-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12147:
--
Affects Version/s: HADOOP-12111

 bundled dockerfile should use the JDK verison of openjdk, not JRE
 -

 Key: HADOOP-12147
 URL: https://issues.apache.org/jira/browse/HADOOP-12147
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer

 It's sort of dumb to have OpenJDK JRE there...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

