[jira] [Commented] (HADOOP-11800) Clean up some test methods in TestCodec.java
[ https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395675#comment-14395675 ]

Hudson commented on HADOOP-11800:
---------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk #887 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/887/])
HADOOP-11800. Clean up some test methods in TestCodec.java. Contributed by Brahma Reddy Battula. (aajisaka: rev 228ae9aaa40750cb796bbdfd69ba5646c28cd4e7)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java

Clean up some test methods in TestCodec.java
--------------------------------------------

        Key: HADOOP-11800
        URL: https://issues.apache.org/jira/browse/HADOOP-11800
    Project: Hadoop Common
 Issue Type: Bug
 Components: test
   Reporter: Akira AJISAKA
   Assignee: Brahma Reddy Battula
     Labels: newbie
    Fix For: 2.8.0
Attachments: HADOOP-11800.patch

Found two issues when reviewing the patches in HADOOP-11627.

1. There is no {{@Test}} annotation, so the test is not executed.
{code}
public void testCodecPoolAndGzipDecompressor() {
{code}

2. The method should be private because it is called from other tests.
{code}
public void testGzipCodecWrite(boolean useNative) throws IOException {
{code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
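The first issue above comes down to how annotation-driven runners discover tests. The self-contained sketch below (not JUnit itself, and not the TestCodec code; the class and method names are illustrative) mimics that discovery with reflection: only methods carrying the annotation are invoked, so a public method lacking it is silently skipped.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Minimal sketch of annotation-based test discovery: a runner invokes only
// methods carrying @Test, so a test method missing the annotation never runs.
public class AnnotationDiscoveryDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Test {}

    static int executed = 0;

    @Test
    public void testWithAnnotation() { executed++; }

    // Missing @Test: looks like a test, but the runner never calls it.
    public void testCodecPoolAndGzipDecompressor() { executed++; }

    public static void main(String[] args) throws Exception {
        AnnotationDiscoveryDemo suite = new AnnotationDiscoveryDemo();
        for (Method m : AnnotationDiscoveryDemo.class.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                m.invoke(suite);  // only annotated methods are executed
            }
        }
        System.out.println("tests run: " + executed);  // prints "tests run: 1"
    }
}
```

This is also why the second issue matters in the opposite direction: a {{public}} helper named {{test*}} can be picked up by some runners even though it is meant only as an internal helper, so making it {{private}} keeps it out of discovery.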
[jira] [Commented] (HADOOP-11785) Reduce number of listStatus operation in distcp buildListing()
[ https://issues.apache.org/jira/browse/HADOOP-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395679#comment-14395679 ]

Hudson commented on HADOOP-11785:
---------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk #887 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/887/])
HADOOP-11785. Reduce the number of listStatus operation in distcp buildListing (Zoran Dimitrijevic via Colin P. McCabe) (cmccabe: rev 932730df7d62077f7356464ad27f69469965d77a)
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
* hadoop-common-project/hadoop-common/CHANGES.txt

Reduce number of listStatus operation in distcp buildListing()
--------------------------------------------------------------

        Key: HADOOP-11785
        URL: https://issues.apache.org/jira/browse/HADOOP-11785
    Project: Hadoop Common
 Issue Type: Improvement
 Components: tools/distcp
Affects Versions: 3.0.0
   Reporter: Zoran Dimitrijevic
   Assignee: Zoran Dimitrijevic
   Priority: Minor
    Fix For: 2.8.0
Attachments: distcp-liststatus.patch, distcp-liststatus2.patch
Original Estimate: 1h
Remaining Estimate: 1h

Distcp was taking a long time in copyListing.buildListing() for large source trees (I was using a source of 1.5M files in a tree of about 50K directories). For input on S3, buildListing was taking more than one hour. I've noticed a performance bug in the current code, which does listStatus twice for each directory; this doubles the number of RPCs in some cases (if most directories do not contain 1000 files).
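The shape of the bug and its fix can be sketched as follows. This is a hedged illustration, not the actual SimpleCopyListing code: {{FakeFs}}, {{buildListingNaive}}, and {{buildListingFixed}} are hypothetical names, and the stub merely counts calls the way each {{listStatus}} would count as one RPC.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the double-listStatus pattern: calling listStatus once to test a
// directory and again to iterate it doubles the RPC count; caching the single
// result halves it.
public class ListStatusOnceDemo {
    static class FakeFs {
        int rpcs = 0;
        Map<String, String[]> tree = new HashMap<>();
        String[] listStatus(String dir) {
            rpcs++;                      // each call stands in for one RPC
            return tree.get(dir);
        }
    }

    static int buildListingNaive(FakeFs fs, String dir) {
        if (fs.listStatus(dir).length == 0) return 0;  // 1st RPC
        return fs.listStatus(dir).length;              // 2nd RPC, same data
    }

    static int buildListingFixed(FakeFs fs, String dir) {
        String[] children = fs.listStatus(dir);        // single RPC, reused
        return children.length == 0 ? 0 : children.length;
    }

    public static void main(String[] args) {
        FakeFs fs = new FakeFs();
        fs.tree.put("/src", new String[] {"a", "b"});
        buildListingNaive(fs, "/src");
        int naive = fs.rpcs;
        fs.rpcs = 0;
        buildListingFixed(fs, "/src");
        System.out.println("naive RPCs: " + naive + ", fixed RPCs: " + fs.rpcs);
    }
}
```

Against a remote store such as S3, where each list call is a round trip over the network, halving the per-directory call count across ~50K directories is a substantial saving.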
[jira] [Updated] (HADOOP-11806) Test issue for JIRA automation scripts
[ https://issues.apache.org/jira/browse/HADOOP-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymie Stata updated HADOOP-11806:
----------------------------------
    Status: Patch Available  (was: Open)

Test comment

Test issue for JIRA automation scripts
--------------------------------------

        Key: HADOOP-11806
        URL: https://issues.apache.org/jira/browse/HADOOP-11806
    Project: Hadoop Common
 Issue Type: Test
   Reporter: Raymie Stata
   Assignee: Raymie Stata
   Priority: Trivial

I'm writing some scripts to automate some JIRA clean-up activities. I've created this issue for testing these scripts. Please ignore...
[jira] [Updated] (HADOOP-11806) Test issue for JIRA automation scripts
[ https://issues.apache.org/jira/browse/HADOOP-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymie Stata updated HADOOP-11806:
----------------------------------
    Status: Open  (was: Patch Available)

Test comment

Test issue for JIRA automation scripts
--------------------------------------

        Key: HADOOP-11806
        URL: https://issues.apache.org/jira/browse/HADOOP-11806
    Project: Hadoop Common
 Issue Type: Test
   Reporter: Raymie Stata
   Assignee: Raymie Stata
   Priority: Trivial

I'm writing some scripts to automate some JIRA clean-up activities. I've created this issue for testing these scripts. Please ignore...
[jira] [Commented] (HADOOP-11627) Remove io.native.lib.available from trunk
[ https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395621#comment-14395621 ]

Hadoop QA commented on HADOOP-11627:
------------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12709372/HADOOP-11627-007.patch
against trunk revision ef591b1.

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 4 new or modified test files.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 javadoc{color}. There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:
    org.apache.hadoop.io.compress.TestCodec
    org.apache.hadoop.mapreduce.v2.TestMRJobsWithProfiler
    org.apache.hadoop.mapred.TestMRTimelineEventHandling
    org.apache.hadoop.mapreduce.v2.TestMRJobsWithHistoryService
    org.apache.hadoop.mapred.pipes.TestPipeApplication
    org.apache.hadoop.mapred.TestClusterMRNotification

The following test timeouts occurred in hadoop-common-project/hadoop-common hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:
    org.apache.hadoop.mapred.TestMiniMRChildTask
    org.apache.hadoop.mapred.TestLazyOutput
    org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers
    org.apache.hadoop.mapred.TestJobCleanup
    org.apache.hadoop.mapreduce.TestLargeSort
    org.apache.hadoop.mapreduce.TestMapReduceLazyOutput
    org.apache.hadoop.mapreduce.v2.TestSpeculativeExecution
    org.apache.hadoop.mapreduce.v2.TestUberAM
    org.apache.hadoop.mapreduce.v2.TestMRJobs
    org.apache.hadoop.mapreduce.TestMRJobClient
    org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/6062//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/6062//console

This message is automatically generated.

Remove io.native.lib.available from trunk
-----------------------------------------

        Key: HADOOP-11627
        URL: https://issues.apache.org/jira/browse/HADOOP-11627
    Project: Hadoop Common
 Issue Type: Improvement
Affects Versions: 3.0.0
   Reporter: Akira AJISAKA
   Assignee: Brahma Reddy Battula
Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, HADOOP-11627-007.patch, HADOOP-11627.patch

According to the discussion in HADOOP-8642, we should remove {{io.native.lib.available}} from trunk, and always use native libraries if they exist.
[jira] [Commented] (HADOOP-11785) Reduce number of listStatus operation in distcp buildListing()
[ https://issues.apache.org/jira/browse/HADOOP-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395672#comment-14395672 ]

Hudson commented on HADOOP-11785:
---------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #153 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/153/])
HADOOP-11785. Reduce the number of listStatus operation in distcp buildListing (Zoran Dimitrijevic via Colin P. McCabe) (cmccabe: rev 932730df7d62077f7356464ad27f69469965d77a)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java

Reduce number of listStatus operation in distcp buildListing()
--------------------------------------------------------------

        Key: HADOOP-11785
        URL: https://issues.apache.org/jira/browse/HADOOP-11785
    Project: Hadoop Common
 Issue Type: Improvement
 Components: tools/distcp
Affects Versions: 3.0.0
   Reporter: Zoran Dimitrijevic
   Assignee: Zoran Dimitrijevic
   Priority: Minor
    Fix For: 2.8.0
Attachments: distcp-liststatus.patch, distcp-liststatus2.patch
Original Estimate: 1h
Remaining Estimate: 1h

Distcp was taking a long time in copyListing.buildListing() for large source trees (I was using a source of 1.5M files in a tree of about 50K directories). For input on S3, buildListing was taking more than one hour. I've noticed a performance bug in the current code, which does listStatus twice for each directory; this doubles the number of RPCs in some cases (if most directories do not contain 1000 files).
[jira] [Commented] (HADOOP-11800) Clean up some test methods in TestCodec.java
[ https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395668#comment-14395668 ]

Hudson commented on HADOOP-11800:
---------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #153 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/153/])
HADOOP-11800. Clean up some test methods in TestCodec.java. Contributed by Brahma Reddy Battula. (aajisaka: rev 228ae9aaa40750cb796bbdfd69ba5646c28cd4e7)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java
* hadoop-common-project/hadoop-common/CHANGES.txt

Clean up some test methods in TestCodec.java
--------------------------------------------

        Key: HADOOP-11800
        URL: https://issues.apache.org/jira/browse/HADOOP-11800
    Project: Hadoop Common
 Issue Type: Bug
 Components: test
   Reporter: Akira AJISAKA
   Assignee: Brahma Reddy Battula
     Labels: newbie
    Fix For: 2.8.0
Attachments: HADOOP-11800.patch

Found two issues when reviewing the patches in HADOOP-11627.

1. There is no {{@Test}} annotation, so the test is not executed.
{code}
public void testCodecPoolAndGzipDecompressor() {
{code}

2. The method should be private because it is called from other tests.
{code}
public void testGzipCodecWrite(boolean useNative) throws IOException {
{code}
[jira] [Updated] (HADOOP-11807) add a lint mode to releasedocmaker
[ https://issues.apache.org/jira/browse/HADOOP-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HADOOP-11807:
--------------------------------------
    Component/s: documentation
                 build

add a lint mode to releasedocmaker
----------------------------------

        Key: HADOOP-11807
        URL: https://issues.apache.org/jira/browse/HADOOP-11807
    Project: Hadoop Common
 Issue Type: Improvement
 Components: build, documentation
Affects Versions: 3.0.0
   Reporter: Allen Wittenauer
   Priority: Minor

* check for missing components
* check for missing assignee
* check for common version problems?
[jira] [Commented] (HADOOP-11800) Clean up some test methods in TestCodec.java
[ https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395754#comment-14395754 ]

Hudson commented on HADOOP-11800:
---------------------------------

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2085 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2085/])
HADOOP-11800. Clean up some test methods in TestCodec.java. Contributed by Brahma Reddy Battula. (aajisaka: rev 228ae9aaa40750cb796bbdfd69ba5646c28cd4e7)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java
* hadoop-common-project/hadoop-common/CHANGES.txt

Clean up some test methods in TestCodec.java
--------------------------------------------

        Key: HADOOP-11800
        URL: https://issues.apache.org/jira/browse/HADOOP-11800
    Project: Hadoop Common
 Issue Type: Bug
 Components: test
   Reporter: Akira AJISAKA
   Assignee: Brahma Reddy Battula
     Labels: newbie
    Fix For: 2.8.0
Attachments: HADOOP-11800.patch

Found two issues when reviewing the patches in HADOOP-11627.

1. There is no {{@Test}} annotation, so the test is not executed.
{code}
public void testCodecPoolAndGzipDecompressor() {
{code}

2. The method should be private because it is called from other tests.
{code}
public void testGzipCodecWrite(boolean useNative) throws IOException {
{code}
[jira] [Commented] (HADOOP-11785) Reduce number of listStatus operation in distcp buildListing()
[ https://issues.apache.org/jira/browse/HADOOP-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395758#comment-14395758 ]

Hudson commented on HADOOP-11785:
---------------------------------

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2085 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2085/])
HADOOP-11785. Reduce the number of listStatus operation in distcp buildListing (Zoran Dimitrijevic via Colin P. McCabe) (cmccabe: rev 932730df7d62077f7356464ad27f69469965d77a)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java

Reduce number of listStatus operation in distcp buildListing()
--------------------------------------------------------------

        Key: HADOOP-11785
        URL: https://issues.apache.org/jira/browse/HADOOP-11785
    Project: Hadoop Common
 Issue Type: Improvement
 Components: tools/distcp
Affects Versions: 3.0.0
   Reporter: Zoran Dimitrijevic
   Assignee: Zoran Dimitrijevic
   Priority: Minor
    Fix For: 2.8.0
Attachments: distcp-liststatus.patch, distcp-liststatus2.patch
Original Estimate: 1h
Remaining Estimate: 1h

Distcp was taking a long time in copyListing.buildListing() for large source trees (I was using a source of 1.5M files in a tree of about 50K directories). For input on S3, buildListing was taking more than one hour. I've noticed a performance bug in the current code, which does listStatus twice for each directory; this doubles the number of RPCs in some cases (if most directories do not contain 1000 files).
[jira] [Created] (HADOOP-11807) add a lint mode to releasedocmaker
Allen Wittenauer created HADOOP-11807:
-----------------------------------------

    Summary: add a lint mode to releasedocmaker
        Key: HADOOP-11807
        URL: https://issues.apache.org/jira/browse/HADOOP-11807
    Project: Hadoop Common
 Issue Type: Improvement
   Reporter: Allen Wittenauer
   Priority: Minor

* check for missing components
* check for missing assignee
* check for common version problems?
[jira] [Updated] (HADOOP-11807) add a lint mode to releasedocmaker
[ https://issues.apache.org/jira/browse/HADOOP-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HADOOP-11807:
--------------------------------------
    Description:
* check for missing components (error)
* check for missing assignee (error)
* check for common version problems (warning)
* add an error message for missing release notes

was:
* check for missing components
* check for missing assignee
* check for common version problems?

add a lint mode to releasedocmaker
----------------------------------

        Key: HADOOP-11807
        URL: https://issues.apache.org/jira/browse/HADOOP-11807
    Project: Hadoop Common
 Issue Type: Improvement
 Components: build, documentation
Affects Versions: 3.0.0
   Reporter: Allen Wittenauer
   Priority: Minor

* check for missing components (error)
* check for missing assignee (error)
* check for common version problems (warning)
* add an error message for missing release notes
[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes
[ https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395800#comment-14395800 ]

Allen Wittenauer commented on HADOOP-11731:
-------------------------------------------

I've created HADOOP-11807 to add a lint mode.

bq. we do need a transition period to see if it indeed works well.

... which is a good reason to target trunk rather than branch-2. Trunk's CHANGES.txt files are very, very wrong. Any effort put into updating those files is wasted effort given that it's automated now.

Rework the changelog and releasenotes
-------------------------------------

        Key: HADOOP-11731
        URL: https://issues.apache.org/jira/browse/HADOOP-11731
    Project: Hadoop Common
 Issue Type: New Feature
 Components: documentation
Affects Versions: 3.0.0
   Reporter: Allen Wittenauer
   Assignee: Allen Wittenauer
    Fix For: 3.0.0
Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, HADOOP-11731-06.patch, HADOOP-11731-07.patch

The current way we generate these build artifacts is awful. Plus they are ugly and, in the case of release notes, very hard to pick out what is important.
[jira] [Updated] (HADOOP-11807) add a lint mode to releasedocmaker
[ https://issues.apache.org/jira/browse/HADOOP-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HADOOP-11807:
--------------------------------------
    Affects Version/s: 3.0.0

add a lint mode to releasedocmaker
----------------------------------

        Key: HADOOP-11807
        URL: https://issues.apache.org/jira/browse/HADOOP-11807
    Project: Hadoop Common
 Issue Type: Improvement
Affects Versions: 3.0.0
   Reporter: Allen Wittenauer
   Priority: Minor

* check for missing components
* check for missing assignee
* check for common version problems?
[jira] [Commented] (HADOOP-11800) Clean up some test methods in TestCodec.java
[ https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395743#comment-14395743 ]

Hudson commented on HADOOP-11800:
---------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #144 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/144/])
HADOOP-11800. Clean up some test methods in TestCodec.java. Contributed by Brahma Reddy Battula. (aajisaka: rev 228ae9aaa40750cb796bbdfd69ba5646c28cd4e7)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java

Clean up some test methods in TestCodec.java
--------------------------------------------

        Key: HADOOP-11800
        URL: https://issues.apache.org/jira/browse/HADOOP-11800
    Project: Hadoop Common
 Issue Type: Bug
 Components: test
   Reporter: Akira AJISAKA
   Assignee: Brahma Reddy Battula
     Labels: newbie
    Fix For: 2.8.0
Attachments: HADOOP-11800.patch

Found two issues when reviewing the patches in HADOOP-11627.

1. There is no {{@Test}} annotation, so the test is not executed.
{code}
public void testCodecPoolAndGzipDecompressor() {
{code}

2. The method should be private because it is called from other tests.
{code}
public void testGzipCodecWrite(boolean useNative) throws IOException {
{code}
[jira] [Commented] (HADOOP-11785) Reduce number of listStatus operation in distcp buildListing()
[ https://issues.apache.org/jira/browse/HADOOP-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395747#comment-14395747 ]

Hudson commented on HADOOP-11785:
---------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #144 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/144/])
HADOOP-11785. Reduce the number of listStatus operation in distcp buildListing (Zoran Dimitrijevic via Colin P. McCabe) (cmccabe: rev 932730df7d62077f7356464ad27f69469965d77a)
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
* hadoop-common-project/hadoop-common/CHANGES.txt

Reduce number of listStatus operation in distcp buildListing()
--------------------------------------------------------------

        Key: HADOOP-11785
        URL: https://issues.apache.org/jira/browse/HADOOP-11785
    Project: Hadoop Common
 Issue Type: Improvement
 Components: tools/distcp
Affects Versions: 3.0.0
   Reporter: Zoran Dimitrijevic
   Assignee: Zoran Dimitrijevic
   Priority: Minor
    Fix For: 2.8.0
Attachments: distcp-liststatus.patch, distcp-liststatus2.patch
Original Estimate: 1h
Remaining Estimate: 1h

Distcp was taking a long time in copyListing.buildListing() for large source trees (I was using a source of 1.5M files in a tree of about 50K directories). For input on S3, buildListing was taking more than one hour. I've noticed a performance bug in the current code, which does listStatus twice for each directory; this doubles the number of RPCs in some cases (if most directories do not contain 1000 files).
[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes
[ https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14396008#comment-14396008 ]

Allen Wittenauer commented on HADOOP-11731:
-------------------------------------------

bq. Why don't we use the git commit message instead of JIRA summary?

This has already been somewhat covered on the mailing list, but:
a) because we have to hit JIRA for the other information anyway.
b) parsing the commit logs is trickier than one thinks.

bq. The git commit message is supposed to be the same as the entry in CHANGES.txt.

Looking at https://wiki.apache.org/hadoop/HowToCommit, that isn't true. In the vast majority of cases, however, I'd place money that JIRA summary == CHANGES.txt entry == git commit log entry. But typos happen, and these are much easier to fix in the JIRA summary by leaps and bounds.

bq. I disagree since the automation may have bugs or may not work at all.

Then our release notes have been broken for over 2 years, since it's effectively the same code but with different formatting.

At this point, this JIRA is beating a horse that's been long dead. The code is committed. If there are issues with the code, then file new JIRAs against it. I'm pretty much going to ignore any new messages here and continue working on getting the rest of the release process updated to take advantage of it.

Rework the changelog and releasenotes
-------------------------------------

        Key: HADOOP-11731
        URL: https://issues.apache.org/jira/browse/HADOOP-11731
    Project: Hadoop Common
 Issue Type: New Feature
 Components: documentation
Affects Versions: 3.0.0
   Reporter: Allen Wittenauer
   Assignee: Allen Wittenauer
    Fix For: 3.0.0
Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, HADOOP-11731-06.patch, HADOOP-11731-07.patch

The current way we generate these build artifacts is awful. Plus they are ugly and, in the case of release notes, very hard to pick out what is important.
[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HADOOP-11746:
--------------------------------------
    Attachment: HADOOP-11746-06.patch

-06:
* lots of fixes and cleanup, including some minor perf fixes and removing a few needless temp files
* On Solaris, use POSIX rather than SVID binaries
* If a patch hits test-patch or smart-apply-patch, short-circuit some pre-patch logic, then use the new versions as part of testing the patch

rewrite test-patch.sh
---------------------

        Key: HADOOP-11746
        URL: https://issues.apache.org/jira/browse/HADOOP-11746
    Project: Hadoop Common
 Issue Type: Test
 Components: build, test
Affects Versions: 3.0.0
   Reporter: Allen Wittenauer
   Assignee: Allen Wittenauer
Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, HADOOP-11746-05.patch, HADOOP-11746-06.patch

This code is bad and you should feel bad.
[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HADOOP-11746:
--------------------------------------
    Release Note:
* test-patch.sh now has new output that is different from previous versions
* test-patch.sh is now pluggable via the test-patch.d directory, with checkstyle and shellcheck tests included
* JIRA comments now use much more markup to improve readability
* test-patch.sh now supports a file name, a URL, or a JIRA issue as input in developer mode
* If part of the patch-testing code is changed, test-patch.sh will now re-execute itself using those new versions.

was:
* test-patch.sh now has new output that is different than the previous versions
* test-patch.sh is now pluggable via the test-patch.d directory, with checkstyle and shellcheck tests included
* JIRA comments now use much more markup to improve readability
* test-patch.sh now supports either a file name, a URL, or a JIRA issue as input in developer mode

rewrite test-patch.sh
---------------------

        Key: HADOOP-11746
        URL: https://issues.apache.org/jira/browse/HADOOP-11746
    Project: Hadoop Common
 Issue Type: Test
 Components: build, test
Affects Versions: 3.0.0
   Reporter: Allen Wittenauer
   Assignee: Allen Wittenauer
Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, HADOOP-11746-05.patch, HADOOP-11746-06.patch

This code is bad and you should feel bad.
[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14396050#comment-14396050 ]

Hadoop QA commented on HADOOP-11746:
------------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12709449/HADOOP-11746-06.patch
against trunk revision 4b3948e.

{color:red}-1 @author{color}. The patch appears to contain 13 @author tags which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}. The patch appears to include 3 new or modified test files.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 javadoc{color}. There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:green}+1 core tests{color}. The patch passed unit tests in .

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/6064//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/6064//console

This message is automatically generated.

rewrite test-patch.sh
---------------------

        Key: HADOOP-11746
        URL: https://issues.apache.org/jira/browse/HADOOP-11746
    Project: Hadoop Common
 Issue Type: Test
 Components: build, test
Affects Versions: 3.0.0
   Reporter: Allen Wittenauer
   Assignee: Allen Wittenauer
Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, HADOOP-11746-05.patch, HADOOP-11746-06.patch

This code is bad and you should feel bad.
[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh
[ https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14396057#comment-14396057 ]

Allen Wittenauer commented on HADOOP-11746:
-------------------------------------------

Argh, just realized -06 doesn't have the uniq'ing of CHANGED_MODULES, so you'll see duplicate tests.

rewrite test-patch.sh
---------------------

        Key: HADOOP-11746
        URL: https://issues.apache.org/jira/browse/HADOOP-11746
    Project: Hadoop Common
 Issue Type: Test
 Components: build, test
Affects Versions: 3.0.0
   Reporter: Allen Wittenauer
   Assignee: Allen Wittenauer
Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, HADOOP-11746-05.patch, HADOOP-11746-06.patch

This code is bad and you should feel bad.
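The uniq'ing mentioned above is an order-preserving de-duplication: if the changed-module list contains the same module twice, its tests run twice. test-patch.sh itself is a shell script; the sketch below illustrates the idea in Java (the class and method names are hypothetical), using a LinkedHashSet so the first-seen order of modules is kept.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

// Order-preserving de-duplication of a changed-modules list: duplicates would
// cause the same module's tests to run more than once.
public class UniqueModulesDemo {
    static List<String> unique(List<String> modules) {
        // LinkedHashSet drops repeats while preserving insertion order
        return List.copyOf(new LinkedHashSet<>(modules));
    }

    public static void main(String[] args) {
        List<String> changed = Arrays.asList(
            "hadoop-common-project/hadoop-common",
            "hadoop-tools/hadoop-distcp",
            "hadoop-common-project/hadoop-common");  // duplicate entry
        List<String> modules = unique(changed);
        System.out.println("distinct modules: " + modules.size());  // prints 2
        System.out.println(modules.get(0));  // first-seen order preserved
    }
}
```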
[jira] [Commented] (HADOOP-11800) Clean up some test methods in TestCodec.java
[ https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395855#comment-14395855 ]

Hudson commented on HADOOP-11800:
---------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #154 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/154/])
HADOOP-11800. Clean up some test methods in TestCodec.java. Contributed by Brahma Reddy Battula. (aajisaka: rev 228ae9aaa40750cb796bbdfd69ba5646c28cd4e7)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java
* hadoop-common-project/hadoop-common/CHANGES.txt

Clean up some test methods in TestCodec.java
--------------------------------------------

        Key: HADOOP-11800
        URL: https://issues.apache.org/jira/browse/HADOOP-11800
    Project: Hadoop Common
 Issue Type: Bug
 Components: test
   Reporter: Akira AJISAKA
   Assignee: Brahma Reddy Battula
     Labels: newbie
    Fix For: 2.8.0
Attachments: HADOOP-11800.patch

Found two issues when reviewing the patches in HADOOP-11627.

1. There is no {{@Test}} annotation, so the test is not executed.
{code}
public void testCodecPoolAndGzipDecompressor() {
{code}

2. The method should be private because it is called from other tests.
{code}
public void testGzipCodecWrite(boolean useNative) throws IOException {
{code}
[jira] [Commented] (HADOOP-11785) Reduce number of listStatus operation in distcp buildListing()
[ https://issues.apache.org/jira/browse/HADOOP-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395859#comment-14395859 ]

Hudson commented on HADOOP-11785:
---------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #154 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/154/])
HADOOP-11785. Reduce the number of listStatus operation in distcp buildListing (Zoran Dimitrijevic via Colin P. McCabe) (cmccabe: rev 932730df7d62077f7356464ad27f69469965d77a)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java

Reduce number of listStatus operation in distcp buildListing()
--------------------------------------------------------------

        Key: HADOOP-11785
        URL: https://issues.apache.org/jira/browse/HADOOP-11785
    Project: Hadoop Common
 Issue Type: Improvement
 Components: tools/distcp
Affects Versions: 3.0.0
   Reporter: Zoran Dimitrijevic
   Assignee: Zoran Dimitrijevic
   Priority: Minor
    Fix For: 2.8.0
Attachments: distcp-liststatus.patch, distcp-liststatus2.patch
Original Estimate: 1h
Remaining Estimate: 1h

Distcp was taking a long time in copyListing.buildListing() for large source trees (I was using a source of 1.5M files in a tree of about 50K directories). For input on S3, buildListing was taking more than one hour. I've noticed a performance bug in the current code, which does listStatus twice for each directory; this doubles the number of RPCs in some cases (if most directories do not contain 1000 files).
[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
[ https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Larry McCay updated HADOOP-11717: - Attachment: RedirectingWebSSOwithJWTforHadoopWebUIs.pdf Attached overview and configuration details for this JWT based WebSSO handler. Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth - Key: HADOOP-11717 URL: https://issues.apache.org/jira/browse/HADOOP-11717 Project: Hadoop Common Issue Type: Improvement Components: security Reporter: Larry McCay Assignee: Larry McCay Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, RedirectingWebSSOwithJWTforHadoopWebUIs.pdf Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs. The actual authentication is done by some external service that the handler will redirect to when there is no hadoop.auth cookie and no JWT token found in the incoming request. Using JWT provides a number of benefits: * It is not tied to any specific authentication mechanism - so buys us many SSO integrations * It is cryptographically verifiable for determining whether it can be trusted * Checking for expiration allows for a limited lifetime and window for compromised use This will introduce the use of nimbus-jose-jwt library for processing, validating and parsing JWT tokens. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
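The redirect flow described above has three outcomes: an existing hadoop.auth cookie is honored, an unexpired JWT is accepted, and otherwise the browser is redirected to the external SSO service. The sketch below models only that decision; the names are illustrative and it deliberately omits the signature verification that the real handler (extending AltKerberosAuthenticationHandler, with nimbus-jose-jwt) would perform before trusting any token.

```java
public class WebSsoFlowSketch {
  public static String decide(boolean hasHadoopAuthCookie, String jwt,
                              long jwtExpiryMillis, long nowMillis) {
    // An existing hadoop.auth cookie means the session is already established.
    if (hasHadoopAuthCookie) {
      return "ALREADY_AUTHENTICATED";
    }
    // A JWT is honored only inside its validity window; the expiration
    // check is what bounds the exposure of a compromised token.
    if (jwt != null && jwtExpiryMillis > nowMillis) {
      return "ACCEPT_JWT";
    }
    // No cookie and no usable token: redirect to the external SSO service,
    // which performs the actual authentication.
    return "REDIRECT_TO_SSO";
  }
}
```

Because the handler only consumes a token rather than performing authentication itself, any identity provider that can mint a verifiable JWT plugs into this flow, which is the "buys us many SSO integrations" benefit listed in the description.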
[jira] [Commented] (HADOOP-11785) Reduce number of listStatus operation in distcp buildListing()
[ https://issues.apache.org/jira/browse/HADOOP-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395873#comment-14395873 ] Hudson commented on HADOOP-11785: - FAILURE: Integrated in Hadoop-Mapreduce-trunk #2103 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2103/]) HADOOP-11785. Reduce the number of listStatus operation in distcp buildListing (Zoran Dimitrijevic via Colin P. McCabe) (cmccabe: rev 932730df7d62077f7356464ad27f69469965d77a) * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java Reduce number of listStatus operation in distcp buildListing() -- Key: HADOOP-11785 URL: https://issues.apache.org/jira/browse/HADOOP-11785 Project: Hadoop Common Issue Type: Improvement Components: tools/distcp Affects Versions: 3.0.0 Reporter: Zoran Dimitrijevic Assignee: Zoran Dimitrijevic Priority: Minor Fix For: 2.8.0 Attachments: distcp-liststatus.patch, distcp-liststatus2.patch Original Estimate: 1h Remaining Estimate: 1h Distcp was taking a long time in copyListing.buildListing() for large source trees (I was using a source of 1.5M files in a tree of about 50K directories). For input on S3, buildListing was taking more than one hour. I've noticed a performance bug in the current code, which calls listStatus twice for each directory and doubles the number of RPCs in some cases (if most directories do not contain 1000 files). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11800) Clean up some test methods in TestCodec.java
[ https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395869#comment-14395869 ] Hudson commented on HADOOP-11800: - FAILURE: Integrated in Hadoop-Mapreduce-trunk #2103 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2103/]) HADOOP-11800. Clean up some test methods in TestCodec.java. Contributed by Brahma Reddy Battula. (aajisaka: rev 228ae9aaa40750cb796bbdfd69ba5646c28cd4e7) * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java Clean up some test methods in TestCodec.java Key: HADOOP-11800 URL: https://issues.apache.org/jira/browse/HADOOP-11800 Project: Hadoop Common Issue Type: Bug Components: test Reporter: Akira AJISAKA Assignee: Brahma Reddy Battula Labels: newbie Fix For: 2.8.0 Attachments: HADOOP-11800.patch Found two issues when reviewing the patches in HADOOP-11627. 1. There is no {{@Test}} annotation, so the test is not executed. {code} public void testCodecPoolAndGzipDecompressor() { {code} 2. The method should be private because it is called from other tests. {code} public void testGzipCodecWrite(boolean useNative) throws IOException { {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
[ https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395884#comment-14395884 ] Larry McCay commented on HADOOP-11717: -- Oops - tried to test the pdf attachment... Failures unrelated to actual patch. Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth - Key: HADOOP-11717 URL: https://issues.apache.org/jira/browse/HADOOP-11717 Project: Hadoop Common Issue Type: Improvement Components: security Reporter: Larry McCay Assignee: Larry McCay Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, RedirectingWebSSOwithJWTforHadoopWebUIs.pdf Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs. The actual authentication is done by some external service that the handler will redirect to when there is no hadoop.auth cookie and no JWT token found in the incoming request. Using JWT provides a number of benefits: * It is not tied to any specific authentication mechanism - so buys us many SSO integrations * It is cryptographically verifiable for determining whether it can be trusted * Checking for expiration allows for a limited lifetime and window for compromised use This will introduce the use of nimbus-jose-jwt library for processing, validating and parsing JWT tokens. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
[ https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395867#comment-14395867 ] Hadoop QA commented on HADOOP-11717: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12709428/RedirectingWebSSOwithJWTforHadoopWebUIs.pdf against trunk revision ef591b1. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/6063//console This message is automatically generated. Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth - Key: HADOOP-11717 URL: https://issues.apache.org/jira/browse/HADOOP-11717 Project: Hadoop Common Issue Type: Improvement Components: security Reporter: Larry McCay Assignee: Larry McCay Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, RedirectingWebSSOwithJWTforHadoopWebUIs.pdf Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs. The actual authentication is done by some external service that the handler will redirect to when there is no hadoop.auth cookie and no JWT token found in the incoming request. Using JWT provides a number of benefits: * It is not tied to any specific authentication mechanism - so buys us many SSO integrations * It is cryptographically verifiable for determining whether it can be trusted * Checking for expiration allows for a limited lifetime and window for compromised use This will introduce the use of nimbus-jose-jwt library for processing, validating and parsing JWT tokens. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11377) jdiff failing on java 7 and java 8, Null.java not found
[ https://issues.apache.org/jira/browse/HADOOP-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated HADOOP-11377: - Resolution: Fixed Fix Version/s: 2.7.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed this trunk, branch-2 and branch-2.7. Thanks Tsuyoshi! jdiff failing on java 7 and java 8, Null.java not found - Key: HADOOP-11377 URL: https://issues.apache.org/jira/browse/HADOOP-11377 Project: Hadoop Common Issue Type: Sub-task Components: build Affects Versions: 2.6.0, 2.7.0 Environment: Java8 jenkins Reporter: Steve Loughran Assignee: Tsuyoshi Ozawa Fix For: 2.7.0 Attachments: HADOOP-11377.001.patch Jdiff is having problems on Java 8, as it cannot find a javadoc for the new {{Null}} datatype {code} 'https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-common/dev-support/jdiff/Null.java' The ' characters around the executable and arguments are not part of the command. [javadoc] javadoc: error - Illegal package name: [javadoc] javadoc: error - File not found: https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-common/dev-support/jdiff/Null.java; {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11776) jdiff is broken in Hadoop 2
[ https://issues.apache.org/jira/browse/HADOOP-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated HADOOP-11776: - Resolution: Fixed Fix Version/s: 2.7.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed this to trunk, branch-2 and branch-2.7. Thanks Li! jdiff is broken in Hadoop 2 --- Key: HADOOP-11776 URL: https://issues.apache.org/jira/browse/HADOOP-11776 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.6.0 Reporter: Li Lu Assignee: Li Lu Priority: Blocker Fix For: 2.7.0 Attachments: HADOOP-11776-040115.patch Seems like we haven't touched the API files from jdiff under dev-support for a while. For now we're missing the jdiff API files for Hadoop 2. We're also missing YARN when generating the jdiff API files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11776) jdiff is broken in Hadoop 2
[ https://issues.apache.org/jira/browse/HADOOP-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395947#comment-14395947 ] Hudson commented on HADOOP-11776: - FAILURE: Integrated in Hadoop-trunk-Commit #7512 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7512/]) HADOOP-11776. Fixed the broken JDiff support in Hadoop 2. Contributed by Li Lu. (vinodkv: rev 4b3948ea365db07df7a9369a271009fafd1ba8f5) * hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.6.0.xml * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-project/pom.xml * hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_2.6.0.xml * hadoop-project-dist/pom.xml jdiff is broken in Hadoop 2 --- Key: HADOOP-11776 URL: https://issues.apache.org/jira/browse/HADOOP-11776 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.6.0 Reporter: Li Lu Assignee: Li Lu Priority: Blocker Fix For: 2.7.0 Attachments: HADOOP-11776-040115.patch Seems like we haven't touched the API files from jdiff under dev-support for a while. For now we're missing the jdiff API files for Hadoop 2. We're also missing YARN when generating the jdiff API files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes
[ https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395922#comment-14395922 ] Tsz Wo Nicholas Sze commented on HADOOP-11731: -- Why don't we use the git commit message instead of the JIRA summary? The git commit message is supposed to be the same as the entry in CHANGES.txt. BTW, I guess the tool won't handle the case that there are multiple contributors if it takes the JIRA assignee as the contributor. We should also retrieve the contributor list from the commit message. ... Any effort put into updating those files is wasted effort given that it's automated now. I disagree since the automation may have bugs or may not work at all. Rework the changelog and releasenotes - Key: HADOOP-11731 URL: https://issues.apache.org/jira/browse/HADOOP-11731 Project: Hadoop Common Issue Type: New Feature Components: documentation Affects Versions: 3.0.0 Reporter: Allen Wittenauer Assignee: Allen Wittenauer Fix For: 3.0.0 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, HADOOP-11731-06.patch, HADOOP-11731-07.patch The current way we generate these build artifacts is awful. Plus they are ugly and, in the case of release notes, very hard to pick out what is important. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
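The suggestion above — taking the contributor from the commit message rather than the JIRA assignee — can be sketched with a small parser. The "Contributed by ..." convention is assumed from the commit messages quoted in this thread; the class and method names are hypothetical.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ContributorSketch {
  // Matches the trailing credit in commit messages of the form
  // "HADOOP-NNNN. Summary. Contributed by Name."
  private static final Pattern CONTRIBUTED =
      Pattern.compile("Contributed by ([^.]+)\\.");

  // Returns the credited contributor, or null when the commit message
  // carries no "Contributed by" clause.
  public static String contributor(String commitMessage) {
    Matcher m = CONTRIBUTED.matcher(commitMessage);
    return m.find() ? m.group(1) : null;
  }
}
```

A real tool would need additional handling for the other credit style seen in this digest, e.g. "(Zoran Dimitrijevic via Colin P. McCabe)", which is exactly the multiple-attribution case the comment warns an assignee-based tool would miss.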
[jira] [Commented] (HADOOP-11776) jdiff is broken in Hadoop 2
[ https://issues.apache.org/jira/browse/HADOOP-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395928#comment-14395928 ] Vinod Kumar Vavilapalli commented on HADOOP-11776: -- This looks good to me. +1. There is more work to be done, but the current patch should unblock the basic reporting. Checking this in. jdiff is broken in Hadoop 2 --- Key: HADOOP-11776 URL: https://issues.apache.org/jira/browse/HADOOP-11776 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.6.0 Reporter: Li Lu Assignee: Li Lu Priority: Blocker Attachments: HADOOP-11776-040115.patch Seems like we haven't touched the API files from jdiff under dev-support for a while. For now we're missing the jdiff API files for Hadoop 2. We're also missing YARN when generating the jdiff API files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11377) jdiff failing on java 7 and java 8, Null.java not found
[ https://issues.apache.org/jira/browse/HADOOP-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395934#comment-14395934 ] Hudson commented on HADOOP-11377: - FAILURE: Integrated in Hadoop-trunk-Commit #7511 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7511/]) HADOOP-11377. Added Null.java without which jdiff completely flops. Contributed by Tsuyoshi Ozawa. (vinodkv: rev 5370e7128b4b78dabff79986a92151f1de39eeed) * hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Null.java * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-common/dev-support/jdiff/Null.java jdiff failing on java 7 and java 8, Null.java not found - Key: HADOOP-11377 URL: https://issues.apache.org/jira/browse/HADOOP-11377 Project: Hadoop Common Issue Type: Sub-task Components: build Affects Versions: 2.6.0, 2.7.0 Environment: Java8 jenkins Reporter: Steve Loughran Assignee: Tsuyoshi Ozawa Fix For: 2.7.0 Attachments: HADOOP-11377.001.patch Jdiff is having problems on Java 8, as it cannot find a javadoc for the new {{Null}} datatype {code} 'https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-common/dev-support/jdiff/Null.java' The ' characters around the executable and arguments are not part of the command. [javadoc] javadoc: error - Illegal package name: [javadoc] javadoc: error - File not found: https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-common/dev-support/jdiff/Null.java; {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)