[jira] [Commented] (MAPREDUCE-4857) Fix 126 error during map/reduce phase
[ https://issues.apache.org/jira/browse/MAPREDUCE-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619678#comment-13619678 ]

Sigehere Smith commented on MAPREDUCE-4857:
---
Hello Friends, I have made these changes in src/mapred/org/apache/hadoop/mapred/DefaultTaskController.java. But can you tell me how I should build this code?

Fix 126 error during map/reduce phase
-------------------------------------

Key: MAPREDUCE-4857
URL: https://issues.apache.org/jira/browse/MAPREDUCE-4857
Project: Hadoop Map/Reduce
Issue Type: Bug
Affects Versions: 1.0.4
Reporter: Fengdong Yu
Fix For: 1.0.4
Attachments: MAPREDUCE-4857.patch

This happens rarely during the map or reduce phase, but mostly in the map phase. The exception message is:

    java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
    Caused by: java.io.IOException: Task process exit with nonzero status of 126.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

The error logs are cleaned up, so it's very hard to debug. I compared DefaultTaskController.java with 0.22: 0.22 uses the bash command to start the job script, but 1.0.4 uses the bash -c command. I removed -c, and everything is OK; the 126 error code never happens again. The bash man page indicates that when a new process is forked while the script is still being written, another process running bash -c can also hold a writable fd on it, so the exec can occasionally return status 126. So there is only a one-line fix for this issue.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
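Exit status 126 is the shell's conventional code for "command found but could not be executed" (for example permission denied, or a text file busy because another process holds a writable fd on it, as described above). A minimal sketch, assuming a POSIX system with bash on the PATH, that reproduces the status the task runner reports:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

public class Exit126Demo {
    public static void main(String[] args) throws Exception {
        // Write a small script but leave it without execute permission,
        // so "bash -c <path>" finds the file and then fails to exec it.
        Path script = Files.createTempFile("demo", ".sh");
        Files.write(script, "#!/bin/sh\necho hi\n".getBytes("UTF-8"));
        Files.setPosixFilePermissions(script,
                PosixFilePermissions.fromString("rw-r--r--"));

        Process p = new ProcessBuilder("bash", "-c", script.toString())
                .redirectErrorStream(true)
                .start();
        int status = p.waitFor();
        System.out.println("exit status: " + status); // 126: found, not executable
        Files.deleteIfExists(script);
    }
}
```

This only demonstrates what status 126 means; the intermittent failure reported in the issue comes from a race on a writable fd, not from a permanently missing execute bit.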
[jira] [Commented] (MAPREDUCE-4857) Fix 126 error during map/reduce phase
[ https://issues.apache.org/jira/browse/MAPREDUCE-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619683#comment-13619683 ]

Fengdong Yu commented on MAPREDUCE-4857:
---
bq. But, can you tell me how will i build this code.

Under your $HADOOP_HOME/:

    mvn -Dmaven.test.skip.exec=true package
[jira] [Commented] (MAPREDUCE-5113) Streaming input/output types are ignored with java mapper/reducer
[ https://issues.apache.org/jira/browse/MAPREDUCE-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619692#comment-13619692 ]

Hudson commented on MAPREDUCE-5113:
---
Integrated in Hadoop-Yarn-trunk #173 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/173/])
MAPREDUCE-5113. Streaming input/output types are ignored with java mapper/reducer. (sandyr via tucu) (Revision 1463307)

Result = SUCCESS

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463307
Files :
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingOutputKeyValueTypes.java
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TrApp.java

Streaming input/output types are ignored with java mapper/reducer
-----------------------------------------------------------------

Key: MAPREDUCE-5113
URL: https://issues.apache.org/jira/browse/MAPREDUCE-5113
Project: Hadoop Map/Reduce
Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Attachments: HADOOP-9300-1.patch, HADOOP-9300-2.patch, HADOOP-9300-2.patch, HADOOP-9300-2.patch, HADOOP-9300-3.patch, HADOOP-9300.patch, HADOOP-9300.patch, MAPREDUCE-5113.patch

After MAPREDUCE-1888, with a java mapper or reducer, StreamJob doesn't respect stream.map.output/stream.reduce.output when setting a job's output key/value classes, even if these configs are explicitly set by the user. As MAPREDUCE-1888 is not in branch-1, this change is only needed in hadoop 2.
[jira] [Commented] (MAPREDUCE-4974) Optimising the LineRecordReader initialize() method
[ https://issues.apache.org/jira/browse/MAPREDUCE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619696#comment-13619696 ]

Hudson commented on MAPREDUCE-4974:
---
Integrated in Hadoop-Yarn-trunk #173 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/173/])
Reverted MAPREDUCE-4974 because of test failures. (Revision 1463359)
MAPREDUCE-4974. Optimising the LineRecordReader initialize() method (Gelesh via bobby) (Revision 1463221)

Result = SUCCESS

bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463359
Files :
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java

bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463221
Files :
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java

Optimising the LineRecordReader initialize() method
---------------------------------------------------

Key: MAPREDUCE-4974
URL: https://issues.apache.org/jira/browse/MAPREDUCE-4974
Project: Hadoop Map/Reduce
Issue Type: Improvement
Components: mrv1, mrv2, performance
Affects Versions: 2.0.2-alpha, 0.23.5
Environment: Hadoop Linux
Reporter: Arun A K
Assignee: Gelesh
Labels: patch, performance
Fix For: 0.23.7, 2.0.5-beta
Attachments: MAPREDUCE-4974.2.patch, MAPREDUCE-4974.3.patch, MAPREDUCE-4974.4.patch
Original Estimate: 1h
Remaining Estimate: 1h

I found there is scope for optimizing the code in initialize(): instantiate the compression codec only if the input is compressed. Meanwhile, Gelesh George Omathil added that we could avoid the null check of the key and value. This would save time, since the null check is currently done for every key/value generation. The intention is to instantiate them only once and avoid an NPE as well. Both goals could be met if the key and value are initialized in the initialize() method. We have both worked on it.
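The initialization pattern being proposed can be sketched as follows. This is an illustrative reader, not the actual Hadoop LineRecordReader: the key and value objects are created exactly once in initialize(), so nextKeyValue() needs no per-record null check.

```java
import java.util.Iterator;
import java.util.List;

class SketchLineReader {
    private StringBuilder key;      // created once in initialize()
    private StringBuilder value;
    private Iterator<String> lines;

    void initialize(List<String> input) {
        // Instantiate exactly once, up front: later calls can assume non-null.
        key = new StringBuilder();
        value = new StringBuilder();
        lines = input.iterator();
    }

    boolean nextKeyValue() {
        if (!lines.hasNext()) return false;
        // No "if (key == null) key = new ..." here: initialize() guaranteed it.
        key.setLength(0);
        value.setLength(0);
        String line = lines.next();
        key.append(line.hashCode());
        value.append(line);
        return true;
    }

    String currentValue() { return value.toString(); }
}

public class SketchLineReaderDemo {
    public static void main(String[] args) {
        SketchLineReader r = new SketchLineReader();
        r.initialize(java.util.Arrays.asList("alpha", "beta"));
        while (r.nextKeyValue()) {
            System.out.println(r.currentValue());
        }
    }
}
```

The codec half of the optimization (instantiating a decompressor only for compressed input) follows the same idea: resolve it once in initialize() rather than checking on every record.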
[jira] [Commented] (MAPREDUCE-4974) Optimising the LineRecordReader initialize() method
[ https://issues.apache.org/jira/browse/MAPREDUCE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619754#comment-13619754 ]

Hudson commented on MAPREDUCE-4974:
---
Integrated in Hadoop-Hdfs-0.23-Build #571 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/571/])
Reverted MAPREDUCE-4974 because of test failures. (Revision 1463361)
svn merge -c 1463221 FIXES: MAPREDUCE-4974. Optimising the LineRecordReader initialize() method (Gelesh via bobby) (Revision 1463224)

Result = UNSTABLE

bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463361
Files :
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java

bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463224
Files :
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java
[jira] [Commented] (MAPREDUCE-5007) fix coverage org.apache.hadoop.mapreduce.v2.hs
[ https://issues.apache.org/jira/browse/MAPREDUCE-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619758#comment-13619758 ]

Aleksey Gorshkov commented on MAPREDUCE-5007:
---
Thomas, I've tried to run this test from hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs via mvn test, with a successful result. I've also tried to run this test from the project root, again with a successful result. Could you try to rebuild your project from the root (mvn clean install -DskipTests) and repeat this test?

fix coverage org.apache.hadoop.mapreduce.v2.hs
----------------------------------------------

Key: MAPREDUCE-5007
URL: https://issues.apache.org/jira/browse/MAPREDUCE-5007
Project: Hadoop Map/Reduce
Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Aleksey Gorshkov
Assignee: Aleksey Gorshkov
Attachments: MAPREDUCE-5007-branch-0.23-a.patch, MAPREDUCE-5007-branch-0.23.patch, MAPREDUCE-5007-branch-2-a.patch, MAPREDUCE-5007-branch-2.patch, MAPREDUCE-5007-trunk-a.patch, MAPREDUCE-5007-trunk.patch

fix coverage org.apache.hadoop.mapreduce.v2.hs
MAPREDUCE-5007-trunk.patch: patch for trunk
MAPREDUCE-5007-branch-2.patch: patch for branch-2
MAPREDUCE-5007-branch-0.23.patch: patch for branch-0.23
[jira] [Commented] (MAPREDUCE-5113) Streaming input/output types are ignored with java mapper/reducer
[ https://issues.apache.org/jira/browse/MAPREDUCE-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619763#comment-13619763 ]

Hudson commented on MAPREDUCE-5113:
---
Integrated in Hadoop-Hdfs-trunk #1362 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1362/])
MAPREDUCE-5113. Streaming input/output types are ignored with java mapper/reducer. (sandyr via tucu) (Revision 1463307)

Result = FAILURE

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463307
Files :
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingOutputKeyValueTypes.java
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TrApp.java
[jira] [Commented] (MAPREDUCE-4974) Optimising the LineRecordReader initialize() method
[ https://issues.apache.org/jira/browse/MAPREDUCE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619767#comment-13619767 ]

Hudson commented on MAPREDUCE-4974:
---
Integrated in Hadoop-Hdfs-trunk #1362 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1362/])
Reverted MAPREDUCE-4974 because of test failures. (Revision 1463359)
MAPREDUCE-4974. Optimising the LineRecordReader initialize() method (Gelesh via bobby) (Revision 1463221)

Result = FAILURE

bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463359
Files :
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java

bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463221
Files :
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java
[jira] [Commented] (MAPREDUCE-4974) Optimising the LineRecordReader initialize() method
[ https://issues.apache.org/jira/browse/MAPREDUCE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619812#comment-13619812 ]

Gelesh commented on MAPREDUCE-4974:
---
[~jira.shegalov], I too apologise for not noticing your comments on the review board. I did not have much familiarity with the review board and was expecting the review comments here (in JIRA). Thanks for sharing your thoughts.
[jira] [Commented] (MAPREDUCE-4974) Optimising the LineRecordReader initialize() method
[ https://issues.apache.org/jira/browse/MAPREDUCE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619813#comment-13619813 ]

Gelesh commented on MAPREDUCE-4974:
---
[~jira.shegalov], [~revans2], I would suggest making isCompressedInput a private boolean field, false by default, instead of an isCompressedInput() method. This would help us reduce the scope of the codec object, along with the CompressionCodecFactory object, from class variables (as they are now) to locals. I will be posting a patch with this modification shortly.
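The suggestion above can be sketched as follows. Names here are illustrative, not the actual Hadoop LineRecordReader: the compressed-or-not decision is computed once in initialize() and cached in a boolean field, so the codec-resolution machinery can stay local to that method instead of living as class fields.

```java
public class CompressedFlagSketch {
    private boolean isCompressedInput;  // false by default, set once in initialize()

    void initialize(String fileName) {
        // Stand-in for CompressionCodecFactory lookup: everything needed to
        // decide stays local; only the boolean result escapes this method.
        boolean compressed = fileName.endsWith(".gz") || fileName.endsWith(".bz2");
        isCompressedInput = compressed;
    }

    long filePosition(long rawPos, long compressedPos) {
        // Later code branches on the cached flag instead of re-resolving a codec.
        return isCompressedInput ? compressedPos : rawPos;
    }

    public static void main(String[] args) {
        CompressedFlagSketch r = new CompressedFlagSketch();
        r.initialize("part-00000.gz");
        System.out.println("compressed branch: " + r.filePosition(10L, 42L));
    }
}
```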
[jira] [Commented] (MAPREDUCE-5113) Streaming input/output types are ignored with java mapper/reducer
[ https://issues.apache.org/jira/browse/MAPREDUCE-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619819#comment-13619819 ]

Hudson commented on MAPREDUCE-5113:
---
Integrated in Hadoop-Mapreduce-trunk #1389 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1389/])
MAPREDUCE-5113. Streaming input/output types are ignored with java mapper/reducer. (sandyr via tucu) (Revision 1463307)

Result = SUCCESS

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463307
Files :
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamJob.java
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingOutputKeyValueTypes.java
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TrApp.java
[jira] [Commented] (MAPREDUCE-4974) Optimising the LineRecordReader initialize() method
[ https://issues.apache.org/jira/browse/MAPREDUCE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619823#comment-13619823 ]

Hudson commented on MAPREDUCE-4974:
---
Integrated in Hadoop-Mapreduce-trunk #1389 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1389/])
Reverted MAPREDUCE-4974 because of test failures. (Revision 1463359)
MAPREDUCE-4974. Optimising the LineRecordReader initialize() method (Gelesh via bobby) (Revision 1463221)

Result = SUCCESS

bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463359
Files :
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java

bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463221
Files :
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/LineRecordReader.java
[jira] [Created] (MAPREDUCE-5124) AM lacks flow control for task events
Jason Lowe created MAPREDUCE-5124:
---

Summary: AM lacks flow control for task events
Key: MAPREDUCE-5124
URL: https://issues.apache.org/jira/browse/MAPREDUCE-5124
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: mr-am
Affects Versions: 0.23.5, 2.0.3-alpha
Reporter: Jason Lowe

The AM does not have any flow control to limit the incoming rate of events from tasks. If the AM is unable to keep pace with the rate of incoming events for a sufficient period of time then it will eventually exhaust the heap and crash. MAPREDUCE-5043 addressed a major bottleneck for event processing, but the AM could still get behind if it's starved for CPU and/or handling a very large job with tens of thousands of active tasks.
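The kind of flow control the report says is missing can be sketched with a bounded queue: producers block (backpressure) when the consumer falls behind, instead of an unbounded queue growing until the heap is exhausted. This is illustrative only, under the assumption that event producers can be made to block; it is not the MR AM's actual event dispatcher.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedEventBus {
    private final BlockingQueue<String> events;

    BoundedEventBus(int capacity) {
        events = new ArrayBlockingQueue<>(capacity);
    }

    // Producer side: put() blocks when the queue is full (backpressure),
    // rather than queueing without limit.
    void publish(String event) throws InterruptedException {
        events.put(event);
    }

    // Consumer side: drain one event.
    String take() throws InterruptedException {
        return events.take();
    }

    public static void main(String[] args) throws Exception {
        BoundedEventBus bus = new BoundedEventBus(2);
        bus.publish("TA_UPDATE-1");
        bus.publish("TA_UPDATE-2");
        // A third publish would now block until the consumer catches up.
        System.out.println("queued: " + bus.take() + ", " + bus.take());
    }
}
```

The trade-off of blocking producers is that task heartbeat handling can stall; a real fix would have to weigh that against unbounded heap growth.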
[jira] [Commented] (MAPREDUCE-5007) fix coverage org.apache.hadoop.mapreduce.v2.hs
[ https://issues.apache.org/jira/browse/MAPREDUCE-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619913#comment-13619913 ]

Thomas Graves commented on MAPREDUCE-5007:
---
Aleksey, I tried it again in a clean checkout. The first time I ran the tests they passed, but when I ran them again one failed:

testDeleteFileInfo(org.apache.hadoop.mapreduce.v2.hs.TestJobHistoryParsing)  Time elapsed: 1085 sec  FAILURE!
junit.framework.AssertionFailedError: null
    at junit.framework.Assert.fail(Assert.java:47)
    at junit.framework.Assert.assertTrue(Assert.java:20)
    at junit.framework.Assert.assertTrue(Assert.java:27)
    at org.apache.hadoop.mapreduce.v2.hs.TestJobHistoryParsing.testDeleteFileInfo(TestJobHistoryParsing.java:634)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)

I am using java7 too, so the order of tests within the class might be an issue. Note that I can run it by hand a bunch of times with mvn test -Dtest=TestJobHistoryParsing and it intermittently fails.
[jira] [Commented] (MAPREDUCE-5007) fix coverage org.apache.hadoop.mapreduce.v2.hs
[ https://issues.apache.org/jira/browse/MAPREDUCE-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619940#comment-13619940 ]

Thomas Graves commented on MAPREDUCE-5007:
---
I looked at this some more and the issue is that there is a race between when the job history server moves the files to the done directory (from the done_intermediate directory) and when you try to clean them. The times it fails are when the job history server doesn't move them quickly enough, and thus clean can't remove them.
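One common way a test tolerates a race like the one described above is to poll, with a timeout, for the asynchronous move to finish before cleaning, instead of asserting immediately. The sketch below simulates that pattern with a background thread standing in for the history server's file move; all names are illustrative, and this is not the TestJobHistoryParsing code.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class WaitForMoveSketch {
    public static void main(String[] args) throws Exception {
        AtomicBoolean movedToDone = new AtomicBoolean(false);

        // Simulate the history server moving files from done_intermediate
        // to done after an unpredictable delay.
        Thread mover = new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
            movedToDone.set(true);
        });
        mover.start();

        // Test side: poll up to 5 seconds instead of failing on the first check.
        long deadline = System.currentTimeMillis() + 5000;
        while (!movedToDone.get() && System.currentTimeMillis() < deadline) {
            Thread.sleep(50);
        }
        System.out.println(movedToDone.get() ? "files in done dir" : "timed out");
        mover.join();
    }
}
```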
[jira] [Resolved] (MAPREDUCE-4242) port gridmix tests to yarn
[ https://issues.apache.org/jira/browse/MAPREDUCE-4242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Graves resolved MAPREDUCE-4242.
---
Resolution: Duplicate

MAPREDUCE-4991 pretty much covers these tests.

port gridmix tests to yarn
--------------------------

Key: MAPREDUCE-4242
URL: https://issues.apache.org/jira/browse/MAPREDUCE-4242
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: contrib/gridmix, mrv2
Affects Versions: 0.23.3
Reporter: Thomas Graves
Priority: Minor

JIRA MAPREDUCE-3543 is mavenizing gridmix; however, some of the tests were not pulled over since they need to be ported to YARN. This JIRA is to port the remaining tests. The ones under contrib/gridmix/src/test/system should be looked at, and then there are TestSleepJob, TestGridmixSubmission, and TestDistCacheEmulation.
[jira] [Commented] (MAPREDUCE-5088) MR Client gets a renewer token exception while Oozie is submitting a job
[ https://issues.apache.org/jira/browse/MAPREDUCE-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620309#comment-13620309 ] Konstantin Boudnik commented on MAPREDUCE-5088: --- I can confirm that patch addressed oozie issue. I will commit it by the end of today if there's no objections. MR Client gets an renewer token exception while Oozie is submitting a job - Key: MAPREDUCE-5088 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5088 Project: Hadoop Map/Reduce Issue Type: Bug Affects Versions: 2.0.3-alpha Reporter: Roman Shaposhnik Assignee: Daryn Sharp Priority: Blocker Fix For: 2.0.4-alpha Attachments: HADOOP-9409.patch, HADOOP-9409.patch, MAPREDUCE-5088.patch, MAPREDUCE-5088.patch, MAPREDUCE-5088.txt After the fix for HADOOP-9299 I'm now getting the following bizzare exception in Oozie while trying to submit a job. This also seems to be KRB related: {noformat} 2013-03-15 13:34:16,555 WARN ActionStartXCommand:542 - USER[hue] GROUP[-] TOKEN[] APP[MapReduce] JOB[001-130315123130987-oozie-oozi-W] ACTION[001-130315123130987-oozie-oozi-W@Sleep] Error starting action [Sleep]. 
ErrorType [ERROR], ErrorCode [UninitializedMessageException], Message [UninitializedMessageException: Message missing required fields: renewer]
org.apache.oozie.action.ActionExecutorException: UninitializedMessageException: Message missing required fields: renewer
  at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:401)
  at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:738)
  at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:889)
  at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:211)
  at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:59)
  at org.apache.oozie.command.XCommand.call(XCommand.java:277)
  at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:326)
  at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:255)
  at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
  at java.lang.Thread.run(Thread.java:662)
Caused by: com.google.protobuf.UninitializedMessageException: Message missing required fields: renewer
  at com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:605)
  at org.apache.hadoop.security.proto.SecurityProtos$GetDelegationTokenRequestProto$Builder.build(SecurityProtos.java:973)
  at org.apache.hadoop.mapreduce.v2.api.protocolrecords.impl.pb.GetDelegationTokenRequestPBImpl.mergeLocalToProto(GetDelegationTokenRequestPBImpl.java:84)
  at org.apache.hadoop.mapreduce.v2.api.protocolrecords.impl.pb.GetDelegationTokenRequestPBImpl.getProto(GetDelegationTokenRequestPBImpl.java:67)
  at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getDelegationToken(MRClientProtocolPBClientImpl.java:200)
  at org.apache.hadoop.mapred.YARNRunner.getDelegationTokenFromHS(YARNRunner.java:194)
  at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:273)
  at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
  at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
  at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:396)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1439)
  at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215)
  at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:581)
  at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:396)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1439)
  at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:576)
  at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:723)
  ... 10 more
{noformat}
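The failure bottoms out in a protobuf builder: GetDelegationTokenRequestProto declares renewer as a required field, so build() throws when the client never sets it. The following is a minimal, self-contained sketch of that behavior; the class names mirror the stack trace but are illustrative stand-ins, not Hadoop's real generated code.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified mirror of a protobuf-generated message with one required field.
class GetDelegationTokenRequest {
    final String renewer;

    private GetDelegationTokenRequest(String renewer) { this.renewer = renewer; }

    static class Builder {
        private String renewer;  // required field; the bug was leaving it unset

        Builder setRenewer(String renewer) { this.renewer = renewer; return this; }

        // Like a protobuf builder, build() validates required fields and throws
        // (protobuf's real exception is UninitializedMessageException).
        GetDelegationTokenRequest build() {
            List<String> missing = new ArrayList<>();
            if (renewer == null) missing.add("renewer");
            if (!missing.isEmpty()) {
                throw new IllegalStateException(
                    "Message missing required fields: " + String.join(", ", missing));
            }
            return new GetDelegationTokenRequest(renewer);
        }
    }
}
```

The fix on the request path is simply to populate the renewer before building, so that build() never hits the missing-field check.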
[jira] [Commented] (MAPREDUCE-5117) With security enabled HS delegation token renewer fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620329#comment-13620329 ] Siddharth Seth commented on MAPREDUCE-5117: --- histProxy is actually an HSClientProtocolPBClientImpl, which contains the actual proxy. This, along with a couple of other interfaces, needs to implement Closeable. I will submit a patch a little later. With security enabled HS delegation token renewer fails --- Key: MAPREDUCE-5117 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5117 Project: Hadoop Map/Reduce Issue Type: Bug Components: security Affects Versions: 2.0.4-alpha Reporter: Roman Shaposhnik Priority: Blocker Fix For: 2.0.4-alpha Attachments: yarn.log It seems that HSClientProtocolPBClientImpl should implement Closeable as per the attached stack trace. The problem can be observed on a cluster running the latest branch-2.0.4-alpha with MAPREDUCE-5088 applied on top. The easiest way to reproduce it is to run an Oozie pig job: {noformat} $ oozie job -oozie http://`hostname -f`:11000/oozie -run -DjobTracker=`hostname -f`:8032 -DnameNode=hdfs://`hostname -f`:17020 -DexamplesRoot=examples -config /tmp/examples/apps/pig/job.properties {noformat} Please also note that I can successfully submit simple jobs (Pi/Sleep) from the command line using the hadoop jar command. Thus it *seems* related to the MAPREDUCE-5088 change. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
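For context on why implementing Closeable matters here: generic proxy-cleanup code can only close a proxy it can recognize as Closeable, and the PB client impl is a wrapper around the real RPC proxy, so the wrapper itself must implement Closeable and forward close(). A simplified, self-contained sketch of the pattern follows; the class and method bodies are illustrative stand-ins, not the actual Hadoop types.

```java
import java.io.Closeable;
import java.io.IOException;

// The protocol the client speaks (greatly reduced for illustration).
interface MRClientProtocol { String getDelegationToken(String renewer); }

// Stand-in for the real RPC proxy, which holds a network connection.
class FakeRpcProxy implements MRClientProtocol, Closeable {
    boolean closed = false;
    public String getDelegationToken(String renewer) { return "token-for-" + renewer; }
    public void close() { closed = true; }
}

// The fix: the wrapper implements Closeable and forwards close() to the
// inner proxy, so cleanup code can release the underlying connection.
class MRClientProtocolPBClientImpl implements MRClientProtocol, Closeable {
    private final FakeRpcProxy proxy;
    MRClientProtocolPBClientImpl(FakeRpcProxy proxy) { this.proxy = proxy; }
    public String getDelegationToken(String renewer) { return proxy.getDelegationToken(renewer); }
    public void close() throws IOException { proxy.close(); }
}

// Cleanup in the style of RPC.stopProxy: it can only close what it can see.
class Rpc {
    static void stopProxy(Object proxy) throws IOException {
        if (proxy instanceof Closeable) {
            ((Closeable) proxy).close();
        }
        // Without the Closeable marker on the wrapper, this is a no-op (or an
        // error), which is the failure mode the token renewer was hitting.
    }
}
```

The alternative mentioned later in the thread, stopping the history proxy via YarnRPC, sidesteps the instanceof check by having the RPC layer itself tear down the proxy.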
[jira] [Updated] (MAPREDUCE-5117) With security enabled HS delegation token renewer fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated MAPREDUCE-5117: -- Assignee: Siddharth Seth
[jira] [Updated] (MAPREDUCE-5117) With security enabled HS delegation token renewer fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated MAPREDUCE-5117: -- Attachment: MAPREDUCE-5117.txt Changed MRClientProtocolPBClientImpl to implement Closeable. The alternative is to stop the history proxy using YarnRPC.
[jira] [Updated] (MAPREDUCE-5117) With security enabled HS delegation token renewer fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated MAPREDUCE-5117: -- Status: Patch Available (was: Open)
[jira] [Resolved] (MAPREDUCE-3951) Tasks are not evenly spread throughout cluster in MR2
[ https://issues.apache.org/jira/browse/MAPREDUCE-3951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sandy Ryza resolved MAPREDUCE-3951. --- Resolution: Not A Problem Tasks are not evenly spread throughout cluster in MR2 - Key: MAPREDUCE-3951 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3951 Project: Hadoop Map/Reduce Issue Type: Improvement Components: scheduler Affects Versions: 0.23.0, 0.24.0 Reporter: Todd Lipcon In MR1 (at least with the fair and fifo schedulers), if you submit a job that needs fewer resources than the cluster can provide, the tasks are spread relatively evenly across the nodes. For example, submitting a 100-map job to a 50-node cluster, each node with 10 slots, results in 2 tasks on each machine. In MR2, however, the tasks would pile up on the first 10 nodes of the cluster, leaving the other nodes unused. This is highly suboptimal for many use cases. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
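The two placement behaviors described in the report can be sketched as a toy calculation (this is not scheduler code, just an illustration of "fill each node to capacity" versus "one task per node per round" for the 100-task, 50-node, 10-slot example):

```java
class SpreadDemo {
    // Fill each node to capacity before moving to the next one
    // (the MR2 behavior described in the report).
    static int[] fillFirst(int nodes, int slots, int tasks) {
        int[] load = new int[nodes];
        for (int t = 0, n = 0; t < tasks; t++) {
            while (load[n] == slots) n++;  // current node full, move on
            load[n]++;
        }
        return load;
    }

    // One task per node per round (the MR1-style even spread).
    // Assumes tasks <= nodes * slots, so no node exceeds its capacity.
    static int[] roundRobin(int nodes, int slots, int tasks) {
        int[] load = new int[nodes];
        for (int t = 0; t < tasks; t++) load[t % nodes]++;
        return load;
    }
}
```

With 100 tasks on 50 nodes of 10 slots, fill-first loads nodes 0..9 with 10 tasks each and leaves nodes 10..49 idle, while round-robin puts 2 tasks on every node, matching the numbers in the report.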
[jira] [Commented] (MAPREDUCE-5117) With security enabled HS delegation token renewer fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620340#comment-13620340 ] Daryn Sharp commented on MAPREDUCE-5117: +1 Looks ok to me.
[jira] [Commented] (MAPREDUCE-5117) With security enabled HS delegation token renewer fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620342#comment-13620342 ] Konstantin Boudnik commented on MAPREDUCE-5117: --- I assume this should go to trunk and branch-2 as well.
[jira] [Commented] (MAPREDUCE-5117) With security enabled HS delegation token renewer fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620353#comment-13620353 ] Hadoop QA commented on MAPREDUCE-5117: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12576672/MAPREDUCE-5117.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3491//testReport/ Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3491//console This message is automatically generated. 
[jira] [Updated] (MAPREDUCE-5111) RM address DNS lookup can cause unnecessary slowness on every JHS page load
[ https://issues.apache.org/jira/browse/MAPREDUCE-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sandy Ryza updated MAPREDUCE-5111: -- Attachment: MAPREDUCE-5111.patch RM address DNS lookup can cause unnecessary slowness on every JHS page load Key: MAPREDUCE-5111 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5111 Project: Hadoop Map/Reduce Issue Type: Improvement Components: jobhistoryserver Affects Versions: 2.0.3-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: MAPREDUCE-5111.patch When I run the job history server locally, every page load takes tens of seconds. I profiled the process and discovered that all the extra time was spent inside YarnConfiguration#getRMWebAppURL, trying to resolve 0.0.0.0 to a hostname. When I changed my yarn.resourcemanager.address to localhost, the page load times decreased drastically. There's no reason that we need to perform this resolution on every page load. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (MAPREDUCE-5111) RM address DNS lookup can cause unnecessary slowness on every JHS page load
[ https://issues.apache.org/jira/browse/MAPREDUCE-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sandy Ryza updated MAPREDUCE-5111: -- Status: Patch Available (was: Open)
[jira] [Commented] (MAPREDUCE-5111) RM address DNS lookup can cause unnecessary slowness on every JHS page load
[ https://issues.apache.org/jira/browse/MAPREDUCE-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620424#comment-13620424 ] Sandy Ryza commented on MAPREDUCE-5111: --- The attached patch modifies YarnConfiguration#getRMWebAppHostAndPort to use NetUtils#getConnectAddress, which avoids trying to resolve 0.0.0.0. Verified on a pseudo-distributed cluster that the patch fixes the page load slowness.
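The idea behind using NetUtils#getConnectAddress is to substitute a concrete local address when the configured bind address is the wildcard 0.0.0.0, instead of handing 0.0.0.0 to a (slow, possibly failing) reverse DNS lookup on every page load. A rough, self-contained approximation of that logic, not the actual Hadoop implementation:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

class ConnectAddress {
    // If the bind address is the wildcard (0.0.0.0), return an address for the
    // local host at the same port; otherwise return the address unchanged.
    static InetSocketAddress getConnectAddress(InetSocketAddress bindAddr)
            throws UnknownHostException {
        InetAddress addr = bindAddr.getAddress();
        if (addr != null && addr.isAnyLocalAddress()) {
            return new InetSocketAddress(InetAddress.getLocalHost(), bindAddr.getPort());
        }
        return bindAddr;
    }
}
```

A server can legitimately bind to 0.0.0.0 (listen on all interfaces), but 0.0.0.0 is never a useful address for building a link to that server, which is why the translation belongs on the URL-building path rather than in the configuration itself.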
[jira] [Commented] (MAPREDUCE-5111) RM address DNS lookup can cause unnecessary slowness on every JHS page load
[ https://issues.apache.org/jira/browse/MAPREDUCE-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620441#comment-13620441 ] Hadoop QA commented on MAPREDUCE-5111: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12576692/MAPREDUCE-5111.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3492//testReport/ Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3492//console This message is automatically generated. 
[jira] [Updated] (MAPREDUCE-5088) MR Client gets an renewer token exception while Oozie is submitting a job
[ https://issues.apache.org/jira/browse/MAPREDUCE-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Boudnik updated MAPREDUCE-5088: -- Resolution: Fixed Status: Resolved (was: Patch Available) Patch committed to branch-2.0.4-alpha as r1463804. Thanks Daryn!
[jira] [Commented] (MAPREDUCE-4991) coverage for gridmix
[ https://issues.apache.org/jira/browse/MAPREDUCE-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620500#comment-13620500 ] Thomas Graves commented on MAPREDUCE-4991: -- +1 Thanks Aleksey and Dennis! coverage for gridmix Key: MAPREDUCE-4991 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4991 Project: Hadoop Map/Reduce Issue Type: Test Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7 Reporter: Aleksey Gorshkov Assignee: Aleksey Gorshkov Attachments: MAPREDUCE-4991-branch-0.23-a.patch, MAPREDUCE-4991-branch-0.23-b.patch, MAPREDUCE-4991-branch-0.23.patch, MAPREDUCE-4991-branch-2-b.patch, MAPREDUCE-4991-branch-2.patch, MAPREDUCE-4991-trunk-a.patch, MAPREDUCE-4991-trunk-b.patch, MAPREDUCE-4991-trunk.patch Fix coverage for GridMix. MAPREDUCE-4991-trunk.patch is the patch for trunk, MAPREDUCE-4991-branch-2.patch for branch-2, and MAPREDUCE-4991-branch-0.23.patch for branch-0.23. Known failure: org.apache.hadoop.mapred.gridmix.TestGridmixSummary.testExecutionSummarizer. It is for the next issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (MAPREDUCE-4991) coverage for gridmix
[ https://issues.apache.org/jira/browse/MAPREDUCE-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Graves updated MAPREDUCE-4991: - Resolution: Fixed Fix Version/s: 2.0.5-beta, 0.23.7, 3.0.0 Status: Resolved (was: Patch Available)
[jira] [Commented] (MAPREDUCE-4991) coverage for gridmix
[ https://issues.apache.org/jira/browse/MAPREDUCE-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620516#comment-13620516 ] Hudson commented on MAPREDUCE-4991: --- Integrated in Hadoop-trunk-Commit #3551 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3551/]) MAPREDUCE-4991. coverage for gridmix (Aleksey Gorshkov via tgraves) (Revision 1463806) Result = SUCCESS tgraves : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1463806 Files : * /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/pom.xml * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/Gridmix.java * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/SerialJobFactory.java * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/CommonJobTest.java * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/DebugJobFactory.java * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/DebugJobProducer.java * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/GridmixTestUtils.java * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestDistCacheEmulation.java * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestGridMixClasses.java * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestGridmixSubmission.java * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestGridmixSummary.java * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestLoadJob.java * 
/hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestSleepJob.java * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/resources/data/wordcount.json * /hadoop/common/trunk/hadoop-tools/hadoop-gridmix/src/test/resources/data/wordcount2.json
[jira] [Commented] (MAPREDUCE-5117) With security enabled HS delegation token renewer fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13620620#comment-13620620 ] Vinod Kumar Vavilapalli commented on MAPREDUCE-5117: Looks good, +1, checking it in. With security enabled HS delegation token renewer fails --- Key: MAPREDUCE-5117 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5117 Project: Hadoop Map/Reduce Issue Type: Bug Components: security Affects Versions: 2.0.4-alpha Reporter: Roman Shaposhnik Assignee: Siddharth Seth Priority: Blocker Fix For: 2.0.4-alpha Attachments: MAPREDUCE-5117.txt, yarn.log It seems that the HSClientProtocolPBClientImpl should implement Closeable as per the attached stack trace. The problem can be observed on a cluster running the latest branch-2.0.4-alpha with MAPREDUCE-5088 applied on top. The easiest way to reproduce it is to run an Oozie Pig job: {noformat} $ oozie job -oozie http://`hostname -f`:11000/oozie -run -DjobTracker=`hostname -f`:8032 -DnameNode=hdfs://`hostname -f`:17020 -DexamplesRoot=examples -config /tmp/examples/apps/pig/job.properties {noformat} Please also note that I can successfully submit simple jobs (Pi/Sleep) from the command line using the hadoop jar command. Thus it *seems* related to the MAPREDUCE-5088 change.
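The description above says HSClientProtocolPBClientImpl should implement Closeable so the RPC layer has a hook to shut the client down cleanly. A minimal plain-Java sketch of that pattern (ClosableClientSketch and its wrapped proxy are hypothetical names for illustration, not the actual Hadoop classes):

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical sketch: a protocol client that wraps an underlying RPC
// proxy and delegates close() to it, so callers such as a delegation
// token renewer can release the connection via the Closeable contract.
class ClosableClientSketch implements Closeable {
    private final Closeable proxy; // stands in for the real RPC proxy

    ClosableClientSketch(Closeable proxy) {
        this.proxy = proxy;
    }

    @Override
    public void close() throws IOException {
        proxy.close(); // propagate shutdown to the wrapped proxy
    }
}
```

Without the Closeable implementation, code that tries to close the client after token renewal has no way to release the underlying connection, which matches the failure reported in this issue.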
[jira] [Reopened] (MAPREDUCE-5088) MR Client gets an renewer token exception while Oozie is submitting a job
[ https://issues.apache.org/jira/browse/MAPREDUCE-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth reopened MAPREDUCE-5088: --- bq. Patch committed to branch-2.0.4-alpha as r1463804. Thanks Daryn! [~cos], looks like this went to branch-2.0.4-alpha only. Would you mind pulling this into trunk and branch-2 as well? Thanks MR Client gets an renewer token exception while Oozie is submitting a job - Key: MAPREDUCE-5088 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5088 Project: Hadoop Map/Reduce Issue Type: Bug Affects Versions: 2.0.3-alpha Reporter: Roman Shaposhnik Assignee: Daryn Sharp Priority: Blocker Fix For: 2.0.4-alpha Attachments: HADOOP-9409.patch, HADOOP-9409.patch, MAPREDUCE-5088.patch, MAPREDUCE-5088.patch, MAPREDUCE-5088.txt After the fix for HADOOP-9299 I'm now getting the following bizarre exception in Oozie while trying to submit a job. This also seems to be KRB related: {noformat} 2013-03-15 13:34:16,555 WARN ActionStartXCommand:542 - USER[hue] GROUP[-] TOKEN[] APP[MapReduce] JOB[001-130315123130987-oozie-oozi-W] ACTION[001-130315123130987-oozie-oozi-W@Sleep] Error starting action [Sleep]. 
ErrorType [ERROR], ErrorCode [UninitializedMessageException], Message [UninitializedMessageException: Message missing required fields: renewer] org.apache.oozie.action.ActionExecutorException: UninitializedMessageException: Message missing required fields: renewer at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:401) at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:738) at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:889) at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:211) at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:59) at org.apache.oozie.command.XCommand.call(XCommand.java:277) at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:326) at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:255) at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Caused by: com.google.protobuf.UninitializedMessageException: Message missing required fields: renewer at com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:605) at org.apache.hadoop.security.proto.SecurityProtos$GetDelegationTokenRequestProto$Builder.build(SecurityProtos.java:973) at org.apache.hadoop.mapreduce.v2.api.protocolrecords.impl.pb.GetDelegationTokenRequestPBImpl.mergeLocalToProto(GetDelegationTokenRequestPBImpl.java:84) at org.apache.hadoop.mapreduce.v2.api.protocolrecords.impl.pb.GetDelegationTokenRequestPBImpl.getProto(GetDelegationTokenRequestPBImpl.java:67) at 
org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getDelegationToken(MRClientProtocolPBClientImpl.java:200) at org.apache.hadoop.mapred.YARNRunner.getDelegationTokenFromHS(YARNRunner.java:194) at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:273) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392) at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218) at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1439) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215) at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:581) at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1439) at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:576) at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:723) ... 10 more 2013-03-15 13:34:16,555 WARN ActionStartXCommand:542 - USER[hue] GROUP[-]
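The UninitializedMessageException in the trace above is protobuf's generated Builder.build() refusing to construct a message whose required renewer field was never set. A plain-Java analogue of that required-field check (TokenRequestSketch and its members are illustrative names, not the real generated SecurityProtos classes; real protobuf throws com.google.protobuf.UninitializedMessageException rather than IllegalStateException):

```java
// Illustrative stand-in for a protobuf message with a required field.
// The real generated code is SecurityProtos$GetDelegationTokenRequestProto.
class TokenRequestSketch {
    final String renewer;

    private TokenRequestSketch(String renewer) {
        this.renewer = renewer;
    }

    static class Builder {
        private String renewer; // required field per the .proto definition

        Builder setRenewer(String renewer) {
            this.renewer = renewer;
            return this;
        }

        TokenRequestSketch build() {
            // Mirrors the check in generated build() that raises
            // "Message missing required fields: renewer"
            if (renewer == null) {
                throw new IllegalStateException(
                        "Message missing required fields: renewer");
            }
            return new TokenRequestSketch(renewer);
        }
    }
}
```

Setting the renewer before calling build() avoids the exception, which is why the request path that leaves the field unpopulated fails only on this submission route.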
[jira] [Updated] (MAPREDUCE-5117) With security enabled HS delegation token renewer fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated MAPREDUCE-5117: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I just committed this to trunk, branch-2 and branch-2.0.4-alpha. Thanks Sid! Thanks to Roman and Daryn too for the help!
[jira] [Commented] (MAPREDUCE-5117) With security enabled HS delegation token renewer fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13620646#comment-13620646 ] Hudson commented on MAPREDUCE-5117: --- Integrated in Hadoop-trunk-Commit #3553 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3553/]) MAPREDUCE-5117. Changed MRClientProtocolPBClientImpl to be closeable and thus fix failures in renewal of HistoryServer's delegation tokens. Contributed by Siddharth Seth. (Revision 1463828) Result = SUCCESS vinodkv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463828 Files :
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/impl/pb/client/MRClientProtocolPBClientImpl.java