[jira] [Commented] (MAPREDUCE-5841) uber job doesn't terminate on getting mapred job kill
[ https://issues.apache.org/jira/browse/MAPREDUCE-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977881#comment-13977881 ] Sangjin Lee commented on MAPREDUCE-5841: Updated the patch. I added code that interrupts the task attempt for which a container clean-up is being requested via Future.cancel(). System.exit() was replaced with ExitUtil.terminate(). Also added a unit test. I ran some -kill-task scenarios to confirm the patch works for them. For example, for a 2-mapper (no reducer) uber job, if I kill any of the tasks, it gets interrupted promptly (provided the task responds to interruption), a new attempt is scheduled, and the job completes successfully. I also tried a 2-mapper, 1-reducer uber job. I killed a mapper task, and confirmed the running task is killed promptly. In this case the job eventually fails because the reducer attempt detects that not all the mappers ran successfully. I suspect this is somewhat of an expected behavior. Regarding the aforementioned unit test timeouts, it turned out to be a fluke. It may have been the state my machine was in when I ran the test network-wise. I can no longer reproduce the timeouts under a stable condition. > uber job doesn't terminate on getting mapred job kill > - > > Key: MAPREDUCE-5841 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5841 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2 >Affects Versions: 2.3.0 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Attachments: mapreduce-5841.patch, mapreduce-5841.patch > > > If you issue a "mapred job -kill" against a uberized job, the job (and the > yarn application) state transitions to KILLED, but the application master > process continues to run. The job actually runs to completion despite the > killed status. 
> This can be easily reproduced by running a sleep job: > {noformat} > hadoop jar hadoop-mapreduce-client-jobclient-2.3.0-tests.jar sleep -m 1 -r 0 > -mt 30 > {noformat} > Issue a kill with "mapred job -kill \[job-id\]". The UI will show the job > (app) is in the KILLED state. However, you can see the application master is > still running. -- This message was sent by Atlassian JIRA (v6.2#6252)
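The interruption approach described in the comment above (keeping the attempt's Future and cancelling it when a container clean-up is requested) can be sketched in isolation. The names below are illustrative stand-ins, not the actual MR AM classes:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class CancelSketch {

    // Sketch of the pattern in the patch: the AM keeps the Future for the
    // in-process task attempt and, on a kill/clean-up request, interrupts the
    // attempt via Future.cancel(true). Hypothetical names throughout.
    public static boolean killRunningAttempt() {
        ExecutorService uberExecutor = Executors.newSingleThreadExecutor();
        Future<?> attempt = uberExecutor.submit(() -> {
            try {
                Thread.sleep(30_000); // stands in for a long-running task attempt
            } catch (InterruptedException e) {
                // an interruptible task unwinds promptly on -kill-task
                Thread.currentThread().interrupt();
            }
        });
        boolean cancelled = attempt.cancel(true); // interrupts the attempt's thread
        uberExecutor.shutdown();
        try {
            uberExecutor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException ignored) {
            Thread.currentThread().interrupt();
        }
        return cancelled;
    }

    public static void main(String[] args) {
        System.out.println("attempt cancelled: " + killRunningAttempt());
    }
}
```

Note that cancel(true) only delivers an interrupt; as the comment says, the kill takes effect promptly only if the task code actually responds to interruption.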
[jira] [Updated] (MAPREDUCE-5841) uber job doesn't terminate on getting mapred job kill
[ https://issues.apache.org/jira/browse/MAPREDUCE-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated MAPREDUCE-5841: --- Status: Patch Available (was: Open)
[jira] [Updated] (MAPREDUCE-5841) uber job doesn't terminate on getting mapred job kill
[ https://issues.apache.org/jira/browse/MAPREDUCE-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated MAPREDUCE-5841: --- Attachment: mapreduce-5841.patch
[jira] [Updated] (MAPREDUCE-5841) uber job doesn't terminate on getting mapred job kill
[ https://issues.apache.org/jira/browse/MAPREDUCE-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated MAPREDUCE-5841: --- Status: Open (was: Patch Available)
[jira] [Created] (MAPREDUCE-5854) Move the search box in UI from the right side to the left side
Jinhui Liu created MAPREDUCE-5854: - Summary: Move the search box in UI from the right side to the left side Key: MAPREDUCE-5854 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5854 Project: Hadoop Map/Reduce Issue Type: Improvement Affects Versions: 0.23.9 Reporter: Jinhui Liu In the UI for the resource manager, job history, and job configuration (this might not be a complete list), there is a search box at the top-right corner of the listed content. This search box is frequently used, but it is often not visible due to its right alignment; extra scrolling is needed to make it visible, which is inconvenient. It would be good to move it to the left side, next to the "Show ... Entries" drop-down box. In the same spirit, the "First|Previous|...|Next|Last" control at the bottom-right corner of the listed content could also be moved to the left side.
[jira] [Updated] (MAPREDUCE-5831) Old MR client is not compatible with new MR application
[ https://issues.apache.org/jira/browse/MAPREDUCE-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhijie Shen updated MAPREDUCE-5831: --- Assignee: Tan, Wangda > Old MR client is not compatible with new MR application > --- > > Key: MAPREDUCE-5831 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5831 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: client, mr-am >Affects Versions: 2.2.0, 2.3.0 >Reporter: Zhijie Shen >Assignee: Tan, Wangda >Priority: Critical > > Recently, we saw the following scenario: > 1. The user set up a cluster of Hadoop 2.3, which contains YARN 2.3 and MR > 2.3. > 2. The user ran the client on a machine where MR 2.2 was installed and on the classpath. > Then, when the user submitted a simple wordcount job, he saw the following > message: > {code} > 16:00:41,027 INFO main mapreduce.Job:1345 - map 100% reduce 100% > 16:00:41,036 INFO main mapreduce.Job:1356 - Job job_1396468045458_0006 > completed successfully > 16:02:20,535 WARN main mapreduce.JobRunner:212 - Cannot start job > [wordcountJob] > java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_REDUCES > at java.lang.Enum.valueOf(Enum.java:236) > at > org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.valueOf(FrameworkCounterGroup.java:148) > at > org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.findCounter(FrameworkCounterGroup.java:182) > at > org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154) > at > org.apache.hadoop.mapreduce.TypeConverter.fromYarn(TypeConverter.java:240) > at > org.apache.hadoop.mapred.ClientServiceDelegate.getJobCounters(ClientServiceDelegate.java:370) > at > org.apache.hadoop.mapred.YARNRunner.getJobCounters(YARNRunner.java:511) > at org.apache.hadoop.mapreduce.Job$7.run(Job.java:756) > at org.apache.hadoop.mapreduce.Job$7.run(Job.java:753) > at java.security.AccessController.doPrivileged(Native Method) > at 
javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491) > at org.apache.hadoop.mapreduce.Job.getCounters(Job.java:753) > at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1361) > at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1289) > . . . > {code} > The problem is that the wordcount job was running on one or more nodes of the > YARN cluster, where MR 2.3 libs were installed, and > JobCounter.MB_MILLIS_REDUCES is available in the counters. On the other hand, > due to the classpath setting, the client was likely running with MR 2.2 libs. > After the client retrieved the counters from the MR AM, it tried to construct the > Counter object with the received counter name. Unfortunately, the enum constant didn't > exist in the client's classpath. Therefore, a "No enum constant" exception is > thrown here. > JobCounter.MB_MILLIS_REDUCES was brought to MR2 via MAPREDUCE-5464 in > Hadoop 2.3.
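The root cause is the plain Enum.valueOf() contract: an unknown constant name throws IllegalArgumentException rather than returning null. A minimal sketch of the failure mode and a defensive lookup, using a hypothetical stand-in enum rather than the real JobCounter:

```java
public class CounterCompatSketch {

    // Stand-in for an older client's JobCounter enum that lacks constants
    // added in a newer release (hypothetical subset, not the real enum).
    enum OldJobCounter { NUM_KILLED_MAPS, NUM_KILLED_REDUCES }

    // Defensive lookup: returns null for counter names the client does not
    // know, instead of letting Enum.valueOf() throw IllegalArgumentException
    // as seen in the stack trace above.
    static OldJobCounter findCounter(String name) {
        try {
            return OldJobCounter.valueOf(name);
        } catch (IllegalArgumentException e) {
            return null; // e.g. MB_MILLIS_REDUCES, which only exists in MR 2.3+
        }
    }

    public static void main(String[] args) {
        System.out.println(findCounter("NUM_KILLED_MAPS"));
        System.out.println(findCounter("MB_MILLIS_REDUCES"));
    }
}
```

Whether the real fix should skip unknown counters or negotiate versions is a design question for the JIRA; this only illustrates where the exception comes from.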
[jira] [Commented] (MAPREDUCE-5809) Enhance distcp to support preserving HDFS ACLs.
[ https://issues.apache.org/jira/browse/MAPREDUCE-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977842#comment-13977842 ] Chris Nauroth commented on MAPREDUCE-5809: -- The test failures are unrelated to this patch. {{TestBalancerWithNodeGroup}} is known to be flaky. For {{TestUniformSizeInputFormat}}, it looks like the Jenkins machine was too heavily loaded. The output showed exceptions for "too many open files". I could not repro either failure locally. > Enhance distcp to support preserving HDFS ACLs. > --- > > Key: MAPREDUCE-5809 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5809 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: distcp >Affects Versions: 2.4.0 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: MAPREDUCE-5809.1.patch, MAPREDUCE-5809.2.patch > > > This issue tracks enhancing distcp to add a new command-line argument for > preserving HDFS ACLs from the source at the copy destination. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5812) Make task context available to OutputCommitter.isRecoverySupported()
[ https://issues.apache.org/jira/browse/MAPREDUCE-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977791#comment-13977791 ] Hadoop QA commented on MAPREDUCE-5812: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12641389/MAPREDUCE-5812.4.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:red}-1 findbugs{color}. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The following test timeouts occurred in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core: org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4547//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4547//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4547//console This message is automatically generated. 
> Make task context available to OutputCommitter.isRecoverySupported() > - > > Key: MAPREDUCE-5812 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5812 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mr-am >Affects Versions: 2.3.0 >Reporter: Mohammad Kamrul Islam >Assignee: Mohammad Kamrul Islam > Attachments: MAPREDUCE-5812.1.patch, MAPREDUCE-5812.2.patch, > MAPREDUCE-5812.3.patch, MAPREDUCE-5812.4.patch > > > Background > == > Systems like Hive provide their own version of OutputCommitter. A custom > implementation of isRecoverySupported() requires the task context. From > taskContext:getConfiguration(), Hive checks whether a Hive-defined > property is set. Based on the property value, it returns true or > false. However, in the current OutputCommitter:isRecoverySupported(), there > is no way of getting the task config. As a result, users can't turn the > MRAM recovery feature on or off. > Proposed resolution: > === > 1. Pass the task context into the isRecoverySupported() method. > Pros: Easy and clean. > Cons: Possible backward compatibility issue due to API changes. (Is it true?) > 2. Call outputCommitter.setupTask(taskContext) from the MRAM: the new > OutputCommitter will store the context in a class-level variable and use it > from isRecoverySupported(). > Pros: No API changes. No backward compatibility issue. This call can be made > from the MRAppMaster.getOutputCommitter() method for the old-API case. > Cons: Might not be a very clean solution due to the class-level variable. > Please give your comments.
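Option 2 in the proposal amounts to caching the context inside the committer. A minimal sketch with hypothetical stand-in types (not the real OutputCommitter/TaskAttemptContext API):

```java
public class CommitterSketch {

    // Stand-in for the task attempt context; in reality this would expose
    // the job Configuration, from which a user-defined property is read.
    static class TaskContext {
        final boolean recoveryEnabled;
        TaskContext(boolean recoveryEnabled) { this.recoveryEnabled = recoveryEnabled; }
    }

    // Option 2 from the proposal: setupTask() stashes the context in a field,
    // and isRecoverySupported() consults it, avoiding any API change.
    static class CustomCommitter {
        private TaskContext context; // the class-level variable noted in the cons

        void setupTask(TaskContext ctx) { this.context = ctx; }

        boolean isRecoverySupported() {
            // fall back to a safe default when setupTask() was never called
            return context != null && context.recoveryEnabled;
        }
    }

    public static void main(String[] args) {
        CustomCommitter committer = new CustomCommitter();
        committer.setupTask(new TaskContext(true));
        System.out.println("recovery supported: " + committer.isRecoverySupported());
    }
}
```

The trade-off the cons mention is visible here: correctness depends on setupTask() having been called before isRecoverySupported(), an ordering the API itself does not enforce.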
[jira] [Commented] (MAPREDUCE-5809) Enhance distcp to support preserving HDFS ACLs.
[ https://issues.apache.org/jira/browse/MAPREDUCE-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977730#comment-13977730 ] Hadoop QA commented on MAPREDUCE-5809: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12641338/MAPREDUCE-5809.2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-tools/hadoop-distcp: org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4546//testReport/ Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4546//console This message is automatically generated.
[jira] [Updated] (MAPREDUCE-5812) Make task context available to OutputCommitter.isRecoverySupported()
[ https://issues.apache.org/jira/browse/MAPREDUCE-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mohammad Kamrul Islam updated MAPREDUCE-5812: - Attachment: MAPREDUCE-5812.4.patch Thanks [~jlowe] for the review. New patch addressed the review comments.
[jira] [Commented] (MAPREDUCE-5841) uber job doesn't terminate on getting mapred job kill
[ https://issues.apache.org/jira/browse/MAPREDUCE-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977697#comment-13977697 ] Sangjin Lee commented on MAPREDUCE-5841: I just wanted to clarify a couple of things regarding -kill-task or -fail-task with uber jobs. Since the uber job executes task attempts serially, we can have a situation where a mapper attempt is killed but the new mapper attempt will be queued behind an existing reducer attempt. In that case, the job will not be able to make progress if the reducer needs the killed mapper to finish. I think this happens already before these changes, and what we do here is likely not going to change that. Also, another situation is if mapper/reducer tasks do not respond to interrupt (i.e. uninterruptible). If a task does not respond to interrupt, -kill-task won't necessarily work. Outside those cases, this fix can probably improve the situation and make the job complete successfully by interrupting the targeted attempt. Is that the same as your understanding?
[jira] [Commented] (MAPREDUCE-5831) Old MR client is not compatible with new MR application
[ https://issues.apache.org/jira/browse/MAPREDUCE-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977689#comment-13977689 ] Wangda Tan commented on MAPREDUCE-5831: --- Link this issue with MAPREDUCE-4150
[jira] [Commented] (MAPREDUCE-5832) Few tests in TestJobClient fail on Windows
[ https://issues.apache.org/jira/browse/MAPREDUCE-5832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977653#comment-13977653 ] Hudson commented on MAPREDUCE-5832: --- SUCCESS: Integrated in Hadoop-trunk-Commit #5552 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5552/]) MAPREDUCE-5832. Fixed TestJobClient to not fail on JDK7 or on Windows. Contributed by Jian He and Vinod Kumar Vavilapalli. (vinodkv: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589315) * /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapred/TestJobClient.java > Few tests in TestJobClient fail on Windows > -- > > Key: MAPREDUCE-5832 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5832 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jian He >Assignee: Vinod Kumar Vavilapalli > Fix For: 2.4.1 > > Attachments: MAPREDUCE-5832.1.patch, MAPREDUCE-5832.2.patch > > > java.lang.Exception: test timed out after 1000 milliseconds > at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) > at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866) > at > java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258) > at java.net.InetAddress.getLocalHost(InetAddress.java:1434) > at sun.security.krb5.Config.getRealmFromDNS(Config.java:1174) > at sun.security.krb5.Config.getDefaultRealm(Config.java:1081) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:601) > at > org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:75) > at > 
org.apache.hadoop.security.authentication.util.KerberosName.(KerberosName.java:85) > at > org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:246) > at > org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:233) > at > org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:719) > at > org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:704) > at > org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:606) > at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:81) > at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:75) > at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470) > at org.apache.hadoop.mapred.JobClient.(JobClient.java:460) > at > org.apache.hadoop.mapred.TestJobClient.testGetStagingAreaDir(TestJobClient.java:74) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (MAPREDUCE-5832) Few tests in TestJobClient fail on Windows
[ https://issues.apache.org/jira/browse/MAPREDUCE-5832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated MAPREDUCE-5832: --- Resolution: Fixed Fix Version/s: 2.4.1 Status: Resolved (was: Patch Available) Thanks Chris for the review. I just committed this to trunk, branch-2 and branch-2.4. Thanks Jian for the early patch - credited you too in the commit.
[jira] [Commented] (MAPREDUCE-5756) CombineFileInputFormat.getSplits() including directories in its results
[ https://issues.apache.org/jira/browse/MAPREDUCE-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977557#comment-13977557 ] Jason Lowe commented on MAPREDUCE-5756: --- Sorry for the delay. As [~jdere] and I mentioned above we think the problem is caused by MAPREDUCE-4470 generating degenerate splits for directories within the input directory. I haven't verified yet that reverting that patch changes CombineFileInputFormat to its original behavior of silently skipping directories in the input directory, but if it does then I think we should tweak that fix to distinguish directories from files without blocks. From a quick perusal of that patch it doesn't appear to do so, and that's why I think it could have introduced the behavior change. [~jdere], have you already verified that reverting MAPREDUCE-4470 fixes the Hive test issue? > CombineFileInputFormat.getSplits() including directories in its results > --- > > Key: MAPREDUCE-5756 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5756 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jason Dere > > Trying to track down HIVE-6401, where we see some "is not a file" errors > because getSplits() is giving us directories. I believe the culprit is > FileInputFormat.listStatus(): > {code} > if (recursive && stat.isDirectory()) { > addInputPathRecursively(result, fs, stat.getPath(), > inputFilter); > } else { > result.add(stat); > } > {code} > Which seems to be allowing directories to be added to the results if > recursive is false. Is this meant to return directories? If not, I think it > should look like this: > {code} > if (stat.isDirectory()) { > if (recursive) { > addInputPathRecursively(result, fs, stat.getPath(), > inputFilter); > } > } else { > result.add(stat); > } > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
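The directory-skipping guard proposed in the description can be exercised in isolation. A sketch with a stand-in status type (hypothetical, not the actual FileStatus/FileInputFormat.listStatus() code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListStatusSketch {

    // Minimal stand-in for FileStatus: just a path and a directory flag.
    static class Stat {
        final String path;
        final boolean directory;
        Stat(String path, boolean directory) { this.path = path; this.directory = directory; }
    }

    // Mirrors the fix proposed above: a directory is either descended into
    // (recursive case, elided here) or skipped entirely, so only plain files
    // ever reach the result list that getSplits() consumes.
    static List<Stat> listStatus(List<Stat> listing, boolean recursive) {
        List<Stat> result = new ArrayList<>();
        for (Stat stat : listing) {
            if (stat.directory) {
                if (recursive) {
                    // stand-in for addInputPathRecursively(result, fs, stat.getPath(), inputFilter)
                }
                // non-recursive: the directory is silently skipped rather than
                // added, avoiding the "is not a file" degenerate split
            } else {
                result.add(stat);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Stat> listing = Arrays.asList(
                new Stat("/input/part-00000", false),
                new Stat("/input/subdir", true));
        System.out.println("files returned: " + listStatus(listing, false).size());
    }
}
```

As Jason notes, the real fix would also need to distinguish directories from zero-block files so the MAPREDUCE-4470 behavior for empty files is preserved; this sketch does not attempt that.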
[jira] [Commented] (MAPREDUCE-3043) Missing containers info on the nodes page
[ https://issues.apache.org/jira/browse/MAPREDUCE-3043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977487#comment-13977487 ] Eric Payne commented on MAPREDUCE-3043: --- [~ramysiha] and [~sanyalsubroto], can you please comment on this by Friday, April 25? I believe this problem has been resolved. > Missing containers info on the nodes page > - > > Key: MAPREDUCE-3043 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-3043 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: resourcemanager >Affects Versions: 0.23.0 >Reporter: Ramya Sunil >Assignee: Subroto Sanyal > Fix For: 0.24.0 > > Attachments: MAPREDUCE-3043.patch > > > The containers info on the nodes page on the RM seems to be missing. This was > useful in understanding the usage on each of the nodemanagers. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5465) Container killed before hprof dumps profile.out
[ https://issues.apache.org/jira/browse/MAPREDUCE-5465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977483#comment-13977483 ] Hadoop QA commented on MAPREDUCE-5465:
--
{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12641300/MAPREDUCE-5465-5.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 8 new or modified test files.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 javadoc{color}. There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient.

{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4545//testReport/
Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4545//console

This message is automatically generated.

> Container killed before hprof dumps profile.out
> -----------------------------------------------
>
> Key: MAPREDUCE-5465
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5465
> Project: Hadoop Map/Reduce
> Issue Type: Improvement
> Components: mr-am, mrv2
> Affects Versions: trunk, 2.0.3-alpha
> Reporter: Radim Kolar
> Assignee: Ming Ma
> Attachments: MAPREDUCE-5465-2.patch, MAPREDUCE-5465-3.patch, MAPREDUCE-5465-4.patch, MAPREDUCE-5465-5.patch, MAPREDUCE-5465.patch
>
> If profiling is enabled for a mapper or reducer, hprof dumps profile.out at process exit, after the task has signaled to the AM that its work is finished.
> The AM kills the container of a finished task without waiting for hprof to finish its dumps. If hprof is writing larger output (such as with depth=4 where depth=3 works), it cannot finish the dump before being killed, making the entire dump unusable because the cpu and heap stats are missing.
> There needs to be a better delay before the container is killed when profiling is enabled.
-- This message was sent by Atlassian JIRA (v6.2#6252)
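The issue description above asks for a delay before the container is killed when profiling is enabled, so hprof has time to flush profile.out. A minimal, self-contained sketch of that idea follows; names such as `killContainer` and `requestKill` are hypothetical stand-ins for illustration, not the actual MR AM code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DeferredKillSketch {
    static final CountDownLatch killed = new CountDownLatch(1);

    // Hypothetical stand-in for the AM's container-kill action.
    static void killContainer(String containerId) {
        System.out.println("killing " + containerId);
        killed.countDown();
    }

    // If profiling is on, wait graceMillis before killing so the profiler
    // can finish writing its output; otherwise kill immediately.
    static void requestKill(ScheduledExecutorService scheduler, String containerId,
                            boolean profilingEnabled, long graceMillis) {
        long delay = profilingEnabled ? graceMillis : 0;
        scheduler.schedule(() -> killContainer(containerId), delay, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        requestKill(scheduler, "container_001", true, 200);
        killed.await(); // the kill still happens, just after the grace period
        scheduler.shutdown();
    }
}
```

The point of the grace period is that the kill is deferred, not skipped: the container still goes away, but only after the profiler has had a chance to complete its dump.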
[jira] [Updated] (MAPREDUCE-5809) Enhance distcp to support preserving HDFS ACLs.
[ https://issues.apache.org/jira/browse/MAPREDUCE-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated MAPREDUCE-5809: - Attachment: MAPREDUCE-5809.2.patch I'm attaching patch v2, which is just a minor rebase on current trunk. > Enhance distcp to support preserving HDFS ACLs. > --- > > Key: MAPREDUCE-5809 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5809 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: distcp >Affects Versions: 2.4.0 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: MAPREDUCE-5809.1.patch, MAPREDUCE-5809.2.patch > > > This issue tracks enhancing distcp to add a new command-line argument for > preserving HDFS ACLs from the source at the copy destination. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5853) ChecksumFileSystem.getContentSummary() including contents for crc files
[ https://issues.apache.org/jira/browse/MAPREDUCE-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977402#comment-13977402 ] Harish Butani commented on MAPREDUCE-5853: -- Thanks to [~brandon li]: - This change was introduced by https://issues.apache.org/jira/browse/HADOOP-8014. - Was fixed in https://issues.apache.org/jira/browse/HADOOP-10425 > ChecksumFileSystem.getContentSummary() including contents for crc files > > > Key: MAPREDUCE-5853 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5853 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jason Dere > > Trying to track down some differences in Hive statistics between > hadoop-1/hadoop-2. It looks like although ChecksumFileSystem.listStatus() > filters out CRC files, getContentSummary() falls back to using the > FilterFileSystem.getContentSummary() implementation, which calls > fs.getContentSummary(). The underlying fs may not have the same filters as > the ChecksumFileSystem and so the CRC files can get included in the content > summary. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (MAPREDUCE-5853) ChecksumFileSystem.getContentSummary() including contents for crc files
[ https://issues.apache.org/jira/browse/MAPREDUCE-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere resolved MAPREDUCE-5853. --- Resolution: Duplicate Sorry, looks like there are other related (fixed) issues: HADOOP-8014 > ChecksumFileSystem.getContentSummary() including contents for crc files > > > Key: MAPREDUCE-5853 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5853 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jason Dere > > Trying to track down some differences in Hive statistics between > hadoop-1/hadoop-2. It looks like although ChecksumFileSystem.listStatus() > filters out CRC files, getContentSummary() falls back to using the > FilterFileSystem.getContentSummary() implementation, which calls > fs.getContentSummary(). The underlying fs may not have the same filters as > the ChecksumFileSystem and so the CRC files can get included in the content > summary. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5853) ChecksumFileSystem.getContentSummary() including contents for crc files
[ https://issues.apache.org/jira/browse/MAPREDUCE-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977383#comment-13977383 ] Jason Dere commented on MAPREDUCE-5853: --- It looks like FileSystem's implementation of getContentSummary() really just uses getFileStatus()/listStatus(). If we get rid of the overridden version of getContentSummary() in FilterFileSystem and just fall back to the FileSystem implementation, would this work correctly, since FilterFileSystem does have overridden versions of getFileStatus()/listStatus()? > ChecksumFileSystem.getContentSummary() including contents for crc files > > > Key: MAPREDUCE-5853 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5853 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jason Dere > > Trying to track down some differences in Hive statistics between > hadoop-1/hadoop-2. It looks like although ChecksumFileSystem.listStatus() > filters out CRC files, getContentSummary() falls back to using the > FilterFileSystem.getContentSummary() implementation, which calls > fs.getContentSummary(). The underlying fs may not have the same filters as > the ChecksumFileSystem and so the CRC files can get included in the content > summary. -- This message was sent by Atlassian JIRA (v6.2#6252)
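Jason's suggestion above is that a content summary derived from the same (CRC-filtered) listing that listStatus() exposes would stay consistent with the filtering filesystem. A toy sketch of the discrepancy, in plain Java rather than the Hadoop FileSystem API (the `Entry` class and method names are hypothetical, for illustration only):

```java
import java.util.List;

public class SummarySketch {
    // Toy stand-in for a directory entry; hypothetical, illustration only.
    static class Entry {
        final String name; final long length;
        Entry(String name, long length) { this.name = name; this.length = length; }
    }

    static boolean isCrc(Entry e) { return e.name.endsWith(".crc"); }

    // Summary over the raw underlying listing: CRC files are counted
    // (analogous to falling back to fs.getContentSummary()).
    static long rawLength(List<Entry> entries) {
        return entries.stream().mapToLong(e -> e.length).sum();
    }

    // Summary built on the filtered listing, the way listStatus() filters:
    // CRC files are excluded, matching what the filesystem exposes.
    static long filteredLength(List<Entry> entries) {
        return entries.stream().filter(e -> !isCrc(e)).mapToLong(e -> e.length).sum();
    }

    public static void main(String[] args) {
        List<Entry> dir = List.of(new Entry("part-00000", 1000),
                                  new Entry(".part-00000.crc", 12));
        System.out.println(rawLength(dir));      // 1012: CRC bytes leak in
        System.out.println(filteredLength(dir)); // 1000: filtered view
    }
}
```

This is the shape of the fix discussed in the thread: compute the summary from the filtered view instead of delegating to the unfiltered underlying filesystem.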
[jira] [Created] (MAPREDUCE-5853) ChecksumFileSystem.getContentSummary() including contents for crc files
Jason Dere created MAPREDUCE-5853: - Summary: ChecksumFileSystem.getContentSummary() including contents for crc files Key: MAPREDUCE-5853 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5853 Project: Hadoop Map/Reduce Issue Type: Bug Reporter: Jason Dere Trying to track down some differences in Hive statistics between hadoop-1/hadoop-2. It looks like although ChecksumFileSystem.listStatus() filters out CRC files, getContentSummary() falls back to using the FilterFileSystem.getContentSummary() implementation, which calls fs.getContentSummary(). The underlying fs may not have the same filters as the ChecksumFileSystem and so the CRC files can get included in the content summary. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (MAPREDUCE-5833) TestRMContainerAllocator fails ocassionally
[ https://issues.apache.org/jira/browse/MAPREDUCE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated MAPREDUCE-5833: - Resolution: Fixed Fix Version/s: 2.4.1 3.0.0 Status: Resolved (was: Patch Available) I committed this to trunk, branch-2 and branch-2.4. Zhijie, thank you for contributing the patch. > TestRMContainerAllocator fails ocassionally > --- > > Key: MAPREDUCE-5833 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5833 > Project: Hadoop Map/Reduce > Issue Type: Test >Reporter: Zhijie Shen >Assignee: Zhijie Shen > Fix For: 3.0.0, 2.4.1 > > Attachments: MAPREDUCE-5833-branch-2.patch, MAPREDUCE-5833.1.patch, > MAPREDUCE-5833.2.patch > > > testReportedAppProgress and testReportedAppProgressWithOnlyMaps have race > conditions. > {code} > Stacktrace > java.util.NoSuchElementException: null > at java.util.Collections$EmptyIterator.next(Collections.java:2998) > at > org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testReportedAppProgress(TestRMContainerAllocator.java:535) > {code} > {code} > Error Message > Task state is not correct (timedout) expected: but was: > Stacktrace > junit.framework.AssertionFailedError: Task state is not correct (timedout) > expected: but was: > at junit.framework.Assert.fail(Assert.java:50) > at junit.framework.Assert.failNotEquals(Assert.java:287) > at junit.framework.Assert.assertEquals(Assert.java:67) > at org.apache.hadoop.mapreduce.v2.app.MRApp.waitForState(MRApp.java:393) > at > org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testReportedAppProgressWithOnlyMaps(TestRMContainerAllocator.java:700) > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
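The timeout failures above come from a waitForState-style helper that polls until a task reaches the expected state or a deadline passes. A generic, self-contained sketch of that polling pattern (not the actual MRApp.waitForState code):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class WaitForStateSketch {
    // Poll until the supplier yields the expected value or the timeout expires.
    static <T> boolean waitForState(Supplier<T> current, T expected,
                                    long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (expected.equals(current.get())) {
                return true;
            }
            Thread.sleep(pollMillis);
        }
        return expected.equals(current.get()); // final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicReference<String> state = new AtomicReference<>("RUNNING");
        // Simulate the state transition arriving from another thread.
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            state.set("KILLED");
        }).start();
        System.out.println(waitForState(state::get, "KILLED", 2000, 20)); // true
    }
}
```

When the transition races past the deadline, such a helper reports "Task state is not correct (timedout)", which is the flavor of flakiness this patch addressed.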
[jira] [Commented] (MAPREDUCE-5833) TestRMContainerAllocator fails ocassionally
[ https://issues.apache.org/jira/browse/MAPREDUCE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977269#comment-13977269 ] Hadoop QA commented on MAPREDUCE-5833: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12641309/MAPREDUCE-5833-branch-2.patch against trunk revision . {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4544//console This message is automatically generated. > TestRMContainerAllocator fails ocassionally > --- > > Key: MAPREDUCE-5833 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5833 > Project: Hadoop Map/Reduce > Issue Type: Test >Reporter: Zhijie Shen >Assignee: Zhijie Shen > Attachments: MAPREDUCE-5833-branch-2.patch, MAPREDUCE-5833.1.patch, > MAPREDUCE-5833.2.patch > > > testReportedAppProgress and testReportedAppProgressWithOnlyMaps have race > conditions. > {code} > Stacktrace > java.util.NoSuchElementException: null > at java.util.Collections$EmptyIterator.next(Collections.java:2998) > at > org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testReportedAppProgress(TestRMContainerAllocator.java:535) > {code} > {code} > Error Message > Task state is not correct (timedout) expected: but was: > Stacktrace > junit.framework.AssertionFailedError: Task state is not correct (timedout) > expected: but was: > at junit.framework.Assert.fail(Assert.java:50) > at junit.framework.Assert.failNotEquals(Assert.java:287) > at junit.framework.Assert.assertEquals(Assert.java:67) > at org.apache.hadoop.mapreduce.v2.app.MRApp.waitForState(MRApp.java:393) > at > org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testReportedAppProgressWithOnlyMaps(TestRMContainerAllocator.java:700) > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5833) TestRMContainerAllocator fails ocassionally
[ https://issues.apache.org/jira/browse/MAPREDUCE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977255#comment-13977255 ] Chris Nauroth commented on MAPREDUCE-5833: -- +1 for the branch-2 patch also. > TestRMContainerAllocator fails ocassionally > --- > > Key: MAPREDUCE-5833 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5833 > Project: Hadoop Map/Reduce > Issue Type: Test >Reporter: Zhijie Shen >Assignee: Zhijie Shen > Attachments: MAPREDUCE-5833-branch-2.patch, MAPREDUCE-5833.1.patch, > MAPREDUCE-5833.2.patch > > > testReportedAppProgress and testReportedAppProgressWithOnlyMaps have race > conditions. > {code} > Stacktrace > java.util.NoSuchElementException: null > at java.util.Collections$EmptyIterator.next(Collections.java:2998) > at > org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testReportedAppProgress(TestRMContainerAllocator.java:535) > {code} > {code} > Error Message > Task state is not correct (timedout) expected: but was: > Stacktrace > junit.framework.AssertionFailedError: Task state is not correct (timedout) > expected: but was: > at junit.framework.Assert.fail(Assert.java:50) > at junit.framework.Assert.failNotEquals(Assert.java:287) > at junit.framework.Assert.assertEquals(Assert.java:67) > at org.apache.hadoop.mapreduce.v2.app.MRApp.waitForState(MRApp.java:393) > at > org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testReportedAppProgressWithOnlyMaps(TestRMContainerAllocator.java:700) > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (MAPREDUCE-5833) TestRMContainerAllocator fails ocassionally
[ https://issues.apache.org/jira/browse/MAPREDUCE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhijie Shen updated MAPREDUCE-5833: --- Attachment: MAPREDUCE-5833-branch-2.patch Upload a patch that applies to branch-2 > TestRMContainerAllocator fails ocassionally > --- > > Key: MAPREDUCE-5833 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5833 > Project: Hadoop Map/Reduce > Issue Type: Test >Reporter: Zhijie Shen >Assignee: Zhijie Shen > Attachments: MAPREDUCE-5833-branch-2.patch, MAPREDUCE-5833.1.patch, > MAPREDUCE-5833.2.patch > > > testReportedAppProgress and testReportedAppProgressWithOnlyMaps have race > conditions. > {code} > Stacktrace > java.util.NoSuchElementException: null > at java.util.Collections$EmptyIterator.next(Collections.java:2998) > at > org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testReportedAppProgress(TestRMContainerAllocator.java:535) > {code} > {code} > Error Message > Task state is not correct (timedout) expected: but was: > Stacktrace > junit.framework.AssertionFailedError: Task state is not correct (timedout) > expected: but was: > at junit.framework.Assert.fail(Assert.java:50) > at junit.framework.Assert.failNotEquals(Assert.java:287) > at junit.framework.Assert.assertEquals(Assert.java:67) > at org.apache.hadoop.mapreduce.v2.app.MRApp.waitForState(MRApp.java:393) > at > org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testReportedAppProgressWithOnlyMaps(TestRMContainerAllocator.java:700) > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-4931) Add user-APIs for classpath precedence control
[ https://issues.apache.org/jira/browse/MAPREDUCE-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977244#comment-13977244 ] Chen He commented on MAPREDUCE-4931: RETARGET TO 3.0 > Add user-APIs for classpath precedence control > -- > > Key: MAPREDUCE-4931 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-4931 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: client >Affects Versions: 1.0.0 >Reporter: Harsh J >Priority: Minor > > The feature config from MAPREDUCE-1938 of allowing tasks to start with > user-classes-first is fairly popular and can use its own API hooks in > Job/JobConf classes, making it easier to discover and use it rather than > continuing to keep it as an advanced param. > I propose to add two APIs to Job/JobConf: > {code} > void setUserClassesTakesPrecedence(boolean) > boolean userClassesTakesPrecedence() > {code} > Both of which, depending on their branch of commit, set the property > {{mapreduce.user.classpath.first}} (1.x) or > {{mapreduce.job.user.classpath.first}} (trunk, 2.x and if needed, in 0.23.x). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (MAPREDUCE-4931) Add user-APIs for classpath precedence control
[ https://issues.apache.org/jira/browse/MAPREDUCE-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen He updated MAPREDUCE-4931: --- Target Version/s: 3.0.0 (was: 3.0.0, 0.23.11) > Add user-APIs for classpath precedence control > -- > > Key: MAPREDUCE-4931 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-4931 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: client >Affects Versions: 1.0.0 >Reporter: Harsh J >Priority: Minor > > The feature config from MAPREDUCE-1938 of allowing tasks to start with > user-classes-first is fairly popular and can use its own API hooks in > Job/JobConf classes, making it easier to discover and use it rather than > continuing to keep it as an advanced param. > I propose to add two APIs to Job/JobConf: > {code} > void setUserClassesTakesPrecedence(boolean) > boolean userClassesTakesPrecedence() > {code} > Both of which, depending on their branch of commit, set the property > {{mapreduce.user.classpath.first}} (1.x) or > {{mapreduce.job.user.classpath.first}} (trunk, 2.x and if needed, in 0.23.x). -- This message was sent by Atlassian JIRA (v6.2#6252)
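The two methods proposed in this issue are thin wrappers over a single boolean configuration key. A sketch of that mapping using plain java.util.Properties in place of Hadoop's Configuration (the key name is taken from the proposal; the wrapper class itself is hypothetical):

```java
import java.util.Properties;

public class ClasspathPrecedenceSketch {
    // Conf key from the proposal (trunk/2.x spelling).
    static final String KEY = "mapreduce.job.user.classpath.first";

    final Properties conf = new Properties();

    // Proposed setter: record whether user classes win on the task classpath.
    void setUserClassesTakesPrecedence(boolean value) {
        conf.setProperty(KEY, Boolean.toString(value));
    }

    // Proposed getter: defaults to false when the key is unset.
    boolean userClassesTakesPrecedence() {
        return Boolean.parseBoolean(conf.getProperty(KEY, "false"));
    }

    public static void main(String[] args) {
        ClasspathPrecedenceSketch job = new ClasspathPrecedenceSketch();
        System.out.println(job.userClassesTakesPrecedence()); // false by default
        job.setUserClassesTakesPrecedence(true);
        System.out.println(job.userClassesTakesPrecedence()); // true
    }
}
```

The appeal of the API is discoverability: callers flip a typed boolean instead of remembering the raw property string.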
[jira] [Commented] (MAPREDUCE-4734) The history server should link back to NM logs if aggregation is incomplete / disabled
[ https://issues.apache.org/jira/browse/MAPREDUCE-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977234#comment-13977234 ] Chen He commented on MAPREDUCE-4734: retarget it to 3.0 > The history server should link back to NM logs if aggregation is incomplete / > disabled > -- > > Key: MAPREDUCE-4734 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-4734 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: jobhistoryserver, mrv2 >Affects Versions: 0.23.4 >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: MR4734_WIP.txt > > -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (MAPREDUCE-4734) The history server should link back to NM logs if aggregation is incomplete / disabled
[ https://issues.apache.org/jira/browse/MAPREDUCE-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen He updated MAPREDUCE-4734: --- Target Version/s: 3.0.0 (was: 3.0.0, 0.23.11) > The history server should link back to NM logs if aggregation is incomplete / > disabled > -- > > Key: MAPREDUCE-4734 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-4734 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: jobhistoryserver, mrv2 >Affects Versions: 0.23.4 >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: MR4734_WIP.txt > > -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (MAPREDUCE-3476) Optimize YARN API calls
[ https://issues.apache.org/jira/browse/MAPREDUCE-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen He resolved MAPREDUCE-3476. Resolution: Later > Optimize YARN API calls > --- > > Key: MAPREDUCE-3476 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-3476 > Project: Hadoop Map/Reduce > Issue Type: Sub-task > Components: mrv2 >Affects Versions: 0.23.0 >Reporter: Ravi Prakash >Assignee: Vinod Kumar Vavilapalli > > Several YARN API calls are taking inordinately long. This might be a > performance blocker. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-3476) Optimize YARN API calls
[ https://issues.apache.org/jira/browse/MAPREDUCE-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977231#comment-13977231 ] Chen He commented on MAPREDUCE-3476: Close it, and reopen it if necessary. Thank you [~raviprak] and [~vinodkv] > Optimize YARN API calls > --- > > Key: MAPREDUCE-3476 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-3476 > Project: Hadoop Map/Reduce > Issue Type: Sub-task > Components: mrv2 >Affects Versions: 0.23.0 >Reporter: Ravi Prakash >Assignee: Vinod Kumar Vavilapalli > > Several YARN API calls are taking inordinately long. This might be a > performance blocker. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5827) TestSpeculativeExecutionWithMRApp fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977209#comment-13977209 ] Hudson commented on MAPREDUCE-5827: --- SUCCESS: Integrated in Hadoop-trunk-Commit #5550 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5550/]) MAPREDUCE-5827. Move attribution from 2.5.0 to 2.4.1 in CHANGES.txt. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589238) * /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt > TestSpeculativeExecutionWithMRApp fails > --- > > Key: MAPREDUCE-5827 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5827 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Zhijie Shen >Assignee: Zhijie Shen > Labels: test > Fix For: 3.0.0, 2.4.1 > > Attachments: MAPREDUCE-5827.1.patch, MAPREDUCE-5827.2.patch, > MAPREDUCE-5827.3.patch > > > {code} > junit.framework.AssertionFailedError: Couldn't speculate successfully > at junit.framework.Assert.fail(Assert.java:50) > at junit.framework.Assert.assertTrue(Assert.java:20) > at > org.apache.hadoop.mapreduce.v2.TestSpeculativeExecutionWithMRApp.testSpeculateSuccessfulWithoutUpdateEvents(TestSpeculativeExecutionWithMRApp.java:122 > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5833) TestRMContainerAllocator fails ocassionally
[ https://issues.apache.org/jira/browse/MAPREDUCE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977208#comment-13977208 ] Hudson commented on MAPREDUCE-5833: --- SUCCESS: Integrated in Hadoop-trunk-Commit #5550 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5550/]) MAPREDUCE-5833. TestRMContainerAllocator fails ocassionally. Contributed by Zhijie Shen. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589248) * /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java > TestRMContainerAllocator fails ocassionally > --- > > Key: MAPREDUCE-5833 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5833 > Project: Hadoop Map/Reduce > Issue Type: Test >Reporter: Zhijie Shen >Assignee: Zhijie Shen > Attachments: MAPREDUCE-5833.1.patch, MAPREDUCE-5833.2.patch > > > testReportedAppProgress and testReportedAppProgressWithOnlyMaps have race > conditions. 
> {code} > Stacktrace > java.util.NoSuchElementException: null > at java.util.Collections$EmptyIterator.next(Collections.java:2998) > at > org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testReportedAppProgress(TestRMContainerAllocator.java:535) > {code} > {code} > Error Message > Task state is not correct (timedout) expected: but was: > Stacktrace > junit.framework.AssertionFailedError: Task state is not correct (timedout) > expected: but was: > at junit.framework.Assert.fail(Assert.java:50) > at junit.framework.Assert.failNotEquals(Assert.java:287) > at junit.framework.Assert.assertEquals(Assert.java:67) > at org.apache.hadoop.mapreduce.v2.app.MRApp.waitForState(MRApp.java:393) > at > org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator.testReportedAppProgressWithOnlyMaps(TestRMContainerAllocator.java:700) > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (MAPREDUCE-5827) TestSpeculativeExecutionWithMRApp fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhijie Shen updated MAPREDUCE-5827: --- Fix Version/s: (was: 2.4.0) 2.4.1 > TestSpeculativeExecutionWithMRApp fails > --- > > Key: MAPREDUCE-5827 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5827 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Zhijie Shen >Assignee: Zhijie Shen > Labels: test > Fix For: 3.0.0, 2.4.1 > > Attachments: MAPREDUCE-5827.1.patch, MAPREDUCE-5827.2.patch, > MAPREDUCE-5827.3.patch > > > {code} > junit.framework.AssertionFailedError: Couldn't speculate successfully > at junit.framework.Assert.fail(Assert.java:50) > at junit.framework.Assert.assertTrue(Assert.java:20) > at > org.apache.hadoop.mapreduce.v2.TestSpeculativeExecutionWithMRApp.testSpeculateSuccessfulWithoutUpdateEvents(TestSpeculativeExecutionWithMRApp.java:122 > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (MAPREDUCE-5465) Container killed before hprof dumps profile.out
[ https://issues.apache.org/jira/browse/MAPREDUCE-5465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming Ma updated MAPREDUCE-5465: --- Attachment: MAPREDUCE-5465-5.patch Updated version that fixes javac and findbug warning. > Container killed before hprof dumps profile.out > --- > > Key: MAPREDUCE-5465 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5465 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: mr-am, mrv2 >Affects Versions: trunk, 2.0.3-alpha >Reporter: Radim Kolar >Assignee: Ming Ma > Attachments: MAPREDUCE-5465-2.patch, MAPREDUCE-5465-3.patch, > MAPREDUCE-5465-4.patch, MAPREDUCE-5465-5.patch, MAPREDUCE-5465.patch > > > If there is profiling enabled for mapper or reducer then hprof dumps > profile.out at process exit. It is dumped after task signaled to AM that work > is finished. > AM kills container with finished work without waiting for hprof to finish > dumps. If hprof is dumping larger outputs (such as with depth=4 while depth=3 > works) , it could not finish dump in time before being killed making entire > dump unusable because cpu and heap stats are missing. > There needs to be better delay before container is killed if profiling is > enabled. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (MAPREDUCE-5827) TestSpeculativeExecutionWithMRApp fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated MAPREDUCE-5827: - Resolution: Fixed Fix Version/s: 2.4.0 3.0.0 Status: Resolved (was: Patch Available) I committed this to trunk, branch-2 and branch-2.4. Zhijie, thank you for the patch. > TestSpeculativeExecutionWithMRApp fails > --- > > Key: MAPREDUCE-5827 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5827 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Zhijie Shen >Assignee: Zhijie Shen > Labels: test > Fix For: 3.0.0, 2.4.0 > > Attachments: MAPREDUCE-5827.1.patch, MAPREDUCE-5827.2.patch, > MAPREDUCE-5827.3.patch > > > {code} > junit.framework.AssertionFailedError: Couldn't speculate successfully > at junit.framework.Assert.fail(Assert.java:50) > at junit.framework.Assert.assertTrue(Assert.java:20) > at > org.apache.hadoop.mapreduce.v2.TestSpeculativeExecutionWithMRApp.testSpeculateSuccessfulWithoutUpdateEvents(TestSpeculativeExecutionWithMRApp.java:122 > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5827) TestSpeculativeExecutionWithMRApp fails
[ https://issues.apache.org/jira/browse/MAPREDUCE-5827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977168#comment-13977168 ] Hudson commented on MAPREDUCE-5827: --- SUCCESS: Integrated in Hadoop-trunk-Commit #5549 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5549/]) MAPREDUCE-5827. TestSpeculativeExecutionWithMRApp fails. Contributed by Zhijie Shen. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589223) * /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestSpeculativeExecutionWithMRApp.java > TestSpeculativeExecutionWithMRApp fails > --- > > Key: MAPREDUCE-5827 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5827 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Zhijie Shen >Assignee: Zhijie Shen > Labels: test > Attachments: MAPREDUCE-5827.1.patch, MAPREDUCE-5827.2.patch, > MAPREDUCE-5827.3.patch > > > {code} > junit.framework.AssertionFailedError: Couldn't speculate successfully > at junit.framework.Assert.fail(Assert.java:50) > at junit.framework.Assert.assertTrue(Assert.java:20) > at > org.apache.hadoop.mapreduce.v2.TestSpeculativeExecutionWithMRApp.testSpeculateSuccessfulWithoutUpdateEvents(TestSpeculativeExecutionWithMRApp.java:122 > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (MAPREDUCE-5603) Ability to disable FileInputFormat listLocatedStatus optimization to save client memory
[ https://issues.apache.org/jira/browse/MAPREDUCE-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated MAPREDUCE-5603: -- Resolution: Won't Fix Target Version/s: 2.3.0, 3.0.0, 0.23.11 (was: 3.0.0, 0.23.11, 2.3.0) Status: Resolved (was: Patch Available) Closing this as won't fix for now. This was originally filed as a stop-gap in case there were any situations where the client needed the extra memory and simply couldn't use a larger heap. However after running with the listLocatedStatus fix for quite some time we've only seen a couple of instances where the client needed more memory to handle the situation and it was an easy fix to simply increase the memory used by the client. There's plenty of confs in Hadoop as it is, and I'd rather not add yet another conf if it's not necessary. We can reopen this if the need arises. > Ability to disable FileInputFormat listLocatedStatus optimization to save > client memory > --- > > Key: MAPREDUCE-5603 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5603 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: client, mrv2 >Affects Versions: 0.23.10, 2.2.0 >Reporter: Jason Lowe >Assignee: Jason Lowe >Priority: Minor > Attachments: MAPREDUCE-5603.patch, MAPREDUCE-5603.patch > > > It would be nice if users had the option to disable the listLocatedStatus > optimization in FileInputFormat to save client memory. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5652) NM Recovery. ShuffleHandler should handle NM restarts
[ https://issues.apache.org/jira/browse/MAPREDUCE-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977065#comment-13977065 ] Hadoop QA commented on MAPREDUCE-5652: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12641270/MAPREDUCE-5652-v6.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4543//testReport/ Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4543//console This message is automatically generated. > NM Recovery. 
ShuffleHandler should handle NM restarts > - > > Key: MAPREDUCE-5652 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5652 > Project: Hadoop Map/Reduce > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Karthik Kambatla >Assignee: Jason Lowe > Labels: shuffle > Attachments: MAPREDUCE-5652-v2.patch, MAPREDUCE-5652-v3.patch, > MAPREDUCE-5652-v4.patch, MAPREDUCE-5652-v5.patch, MAPREDUCE-5652-v6.patch, > MAPREDUCE-5652.patch > > > ShuffleHandler should work across NM restarts and not require re-running > map-tasks. On NM restart, the map outputs are cleaned up requiring > re-execution of map tasks and should be avoided. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-3755) Add the equivalent of JobStatus to end of JobHistory file
[ https://issues.apache.org/jira/browse/MAPREDUCE-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977062#comment-13977062 ] Bikas Saha commented on MAPREDUCE-3755: --- Probably not. We may close this. > Add the equivalent of JobStatus to end of JobHistory file > -- > > Key: MAPREDUCE-3755 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-3755 > Project: Hadoop Map/Reduce > Issue Type: Sub-task > Components: jobhistoryserver, mrv2 >Affects Versions: 0.23.0 >Reporter: Arun C Murthy >Assignee: Bikas Saha > Fix For: 0.23.2 > > > In MR1 we have the notion of CompletedJobStatus store to aid fast responses > to job.getStatus. We need the equivalent for MR2, an option is to add the > jobStatus to the end of the JobHistory file to which the JHS can easily jump > ahead to and serve the query, it should also cache this for a fair number of > recently completed jobs. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (MAPREDUCE-3755) Add the equivalent of JobStatus to end of JobHistory file
[ https://issues.apache.org/jira/browse/MAPREDUCE-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bikas Saha resolved MAPREDUCE-3755. --- Resolution: Won't Fix Target Version/s: 2.0.0-alpha, 0.23.3, 3.0.0 (was: 0.23.3, 2.0.0-alpha, 3.0.0) > Add the equivalent of JobStatus to end of JobHistory file > -- > > Key: MAPREDUCE-3755 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-3755 > Project: Hadoop Map/Reduce > Issue Type: Sub-task > Components: jobhistoryserver, mrv2 >Affects Versions: 0.23.0 >Reporter: Arun C Murthy >Assignee: Bikas Saha > Fix For: 0.23.2 > > > In MR1 we have the notion of CompletedJobStatus store to aid fast responses > to job.getStatus. We need the equivalent for MR2, an option is to add the > jobStatus to the end of the JobHistory file to which the JHS can easily jump > ahead to and serve the query, it should also cache this for a fair number of > recently completed jobs. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5841) uber job doesn't terminate on getting mapred job kill
[ https://issues.apache.org/jira/browse/MAPREDUCE-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977047#comment-13977047 ] Sangjin Lee commented on MAPREDUCE-5841: That's a good point. I had a pretty narrow focus on the kill job case, and hadn't thought about kill/fail task. I'll look at it again and investigate why that test is behaving that way. I'll update the JIRA once I have more findings. > uber job doesn't terminate on getting mapred job kill > - > > Key: MAPREDUCE-5841 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5841 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2 >Affects Versions: 2.3.0 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Attachments: mapreduce-5841.patch > > > If you issue a "mapred job -kill" against a uberized job, the job (and the > yarn application) state transitions to KILLED, but the application master > process continues to run. The job actually runs to completion despite the > killed status. > This can be easily reproduced by running a sleep job: > {noformat} > hadoop jar hadoop-mapreduce-client-jobclient-2.3.0-tests.jar sleep -m 1 -r 0 > -mt 30 > {noformat} > Issue a kill with "mapred job -kill \[job-id\]". The UI will show the job > (app) is in the KILLED state. However, you can see the application master is > still running. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5841) uber job doesn't terminate on getting mapred job kill
[ https://issues.apache.org/jira/browse/MAPREDUCE-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976992#comment-13976992 ] Jason Lowe commented on MAPREDUCE-5841: --- bq. Anyhow, the reason I arrived at the current fix is that once the state transition is completed, LocalContainerLauncher.stop() is invoked, and that shuts down the ExecutorService for the task runner. And that shut down interrupts any running task. So the end result is pretty much the same. Yes I understand how the end result is the same for the mapred job -kill case, but doesn't the job still hang in the mapred job -fail-task or mapred job -kill-task case if the task is stuck? Since we fake the exit of the "container" the AM will attempt to re-launch the task, the new task attempt will get submitted to the executor queue, but I think it hangs in the queue waiting for the stuck task since it's only executing one task at a time. At that point the only recourse is to kill the entire job. If you could poke at it a bit more I think it'd be good to understand what's going on if we cancel the task. If it proves particularly sticky then we can consider putting this in and filing a followup JIRA to track the remaining fail-task/kill-task issue. > uber job doesn't terminate on getting mapred job kill > - > > Key: MAPREDUCE-5841 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5841 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2 >Affects Versions: 2.3.0 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Attachments: mapreduce-5841.patch > > > If you issue a "mapred job -kill" against a uberized job, the job (and the > yarn application) state transitions to KILLED, but the application master > process continues to run. The job actually runs to completion despite the > killed status. 
> This can be easily reproduced by running a sleep job: > {noformat} > hadoop jar hadoop-mapreduce-client-jobclient-2.3.0-tests.jar sleep -m 1 -r 0 > -mt 30 > {noformat} > Issue a kill with "mapred job -kill \[job-id\]". The UI will show the job > (app) is in the KILLED state. However, you can see the application master is > still running. -- This message was sent by Atlassian JIRA (v6.2#6252)
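The hang Jason describes above (a re-launched attempt queued behind a stuck task in a single-threaded executor) can be reproduced with plain JDK classes. This is a minimal stand-alone sketch with illustrative names, not Hadoop's actual LocalContainerLauncher code:

```java
import java.util.concurrent.*;

public class SingleExecutorQueueDemo {
    // Returns true if a task submitted after a stuck one gets to run within the timeout.
    public static boolean secondTaskRuns(long timeoutMs) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CountDownLatch stuck = new CountDownLatch(1);   // never counted down: the "stuck" task
        Future<?> first = pool.submit(() -> {
            try { stuck.await(); } catch (InterruptedException ie) { /* task killed */ }
        });
        Future<?> second = pool.submit(() -> { });      // the re-launched attempt
        try {
            second.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException te) {
            // Still queued behind the stuck task: the single worker never frees up.
            return false;
        } finally {
            first.cancel(true);     // interrupting the stuck task is what frees the queue
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(secondTaskRuns(200L));   // prints false
    }
}
```

Only the `cancel(true)` in the `finally` block ever frees the worker thread, which is why tracking and cancelling the in-flight Future matters for the fail-task/kill-task case.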
[jira] [Updated] (MAPREDUCE-5652) NM Recovery. ShuffleHandler should handle NM restarts
[ https://issues.apache.org/jira/browse/MAPREDUCE-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated MAPREDUCE-5652: -- Attachment: MAPREDUCE-5652-v6.patch bq. Do you think renaming initStateStore to initAndOpenStore or startStore is reasonable? Changed it to startStore. bq. MAPREDUCE-5362 . Let us try to get that in first. Mind taking a look? Posted some comments to that JIRA but haven't seen any activity for a bit. In the interim this patch works without the changes from MAPREDUCE-5362. If MAPREDUCE-5362 happens to go in before this is committed then I'll update it accordingly. > NM Recovery. ShuffleHandler should handle NM restarts > - > > Key: MAPREDUCE-5652 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5652 > Project: Hadoop Map/Reduce > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Karthik Kambatla >Assignee: Jason Lowe > Labels: shuffle > Attachments: MAPREDUCE-5652-v2.patch, MAPREDUCE-5652-v3.patch, > MAPREDUCE-5652-v4.patch, MAPREDUCE-5652-v5.patch, MAPREDUCE-5652-v6.patch, > MAPREDUCE-5652.patch > > > ShuffleHandler should work across NM restarts and not require re-running > map-tasks. On NM restart, the map outputs are cleaned up requiring > re-execution of map tasks and should be avoided. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-3755) Add the equivalent of JobStatus to end of JobHistory file
[ https://issues.apache.org/jira/browse/MAPREDUCE-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976993#comment-13976993 ] Jonathan Eagles commented on MAPREDUCE-3755: [~bikassaha], does this change still make sense with the addition of the ATS? I don't have much context for this issue, but I'm trying to cleanup some old tickets and not sure whether this ticket is still an issue we want to pursue. > Add the equivalent of JobStatus to end of JobHistory file > -- > > Key: MAPREDUCE-3755 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-3755 > Project: Hadoop Map/Reduce > Issue Type: Sub-task > Components: jobhistoryserver, mrv2 >Affects Versions: 0.23.0 >Reporter: Arun C Murthy >Assignee: Bikas Saha > Fix For: 0.23.2 > > > In MR1 we have the notion of CompletedJobStatus store to aid fast responses > to job.getStatus. We need the equivalent for MR2, an option is to add the > jobStatus to the end of the JobHistory file to which the JHS can easily jump > ahead to and serve the query, it should also cache this for a fair number of > recently completed jobs. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5116) PUMA Benchmark Suite
[ https://issues.apache.org/jira/browse/MAPREDUCE-5116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976976#comment-13976976 ] Jonathan Eagles commented on MAPREDUCE-5116: There hasn't been any traffic on this jira in some time. [~algol], are you still interested in getting this benchmark suite into Hadoop? Please comment on your interest in finishing up this work and I'll be happy to keep this ticket open. Otherwise, I'll close this ticket at the end of the week. > PUMA Benchmark Suite > > > Key: MAPREDUCE-5116 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5116 > Project: Hadoop Map/Reduce > Issue Type: New Feature > Components: benchmarks >Reporter: Faraz Ahmad >Assignee: Faraz Ahmad > Original Estimate: 48h > Time Spent: 48h > Remaining Estimate: 0h > > A benchmark suite which represents a broad range of "real-world" MapReduce > applications exhibiting application characteristics with high/low computation > and high/low shuffle volumes. These benchmarks have been published as part of > MaRCO (http://dx.doi.org/10.1016/j.jpdc.2012.12.012) project in JPDC '12. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5841) uber job doesn't terminate on getting mapred job kill
[ https://issues.apache.org/jira/browse/MAPREDUCE-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976959#comment-13976959 ] Sangjin Lee commented on MAPREDUCE-5841: Thanks for the review [~jlowe]! I am almost done with a unit test, and am going to update the patch to include it soon. Yes, that's the approach I took (although it took a fair amount of mocking). With the test, you can easily reproduce the bug against the existing code, and confirm the fix. bq. The System.exit call should be ExitUtil.terminate instead, as it allows unit tests to disable the system exit in case something goes wrong during the test or to verify exit behavior if that's desired. Good point. I'll change it to ExitUtil.terminate. bq. I don't see any attempt to stop the task when CONTAINER_REMOTE_CLEANUP is received. Before it didn't make sense to do so because by the time we received it there was no task running. Now that there could be, I think we would want to track the Future for each task submitted that's in-flight and attempt to cancel the running task when the cleanup event is received. That is quite interesting. In fact, that's the very first approach I took, but when I ran the TestUberAM unit tests, interestingly it resulted in all unit tests hanging indefinitely. I spent some time trying to figure out why it was hanging, but wasn't successful. Anyhow, the reason I arrived at the current fix is that once the state transition is completed, LocalContainerLauncher.stop() is invoked, and that shuts down the ExecutorService for the task runner. And that shutdown interrupts any running task. So the end result is pretty much the same. Let me know what you think. If needed, I can take another look into the TestUberAM issue with cancelling the future task. 
> uber job doesn't terminate on getting mapred job kill > - > > Key: MAPREDUCE-5841 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5841 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2 >Affects Versions: 2.3.0 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Attachments: mapreduce-5841.patch > > > If you issue a "mapred job -kill" against a uberized job, the job (and the > yarn application) state transitions to KILLED, but the application master > process continues to run. The job actually runs to completion despite the > killed status. > This can be easily reproduced by running a sleep job: > {noformat} > hadoop jar hadoop-mapreduce-client-jobclient-2.3.0-tests.jar sleep -m 1 -r 0 > -mt 30 > {noformat} > Issue a kill with "mapred job -kill \[job-id\]". The UI will show the job > (app) is in the KILLED state. However, you can see the application master is > still running. -- This message was sent by Atlassian JIRA (v6.2#6252)
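The mechanism Sangjin relies on above, namely that shutting down the task runner's ExecutorService interrupts the in-flight task, can be verified with plain JDK classes. A small stand-alone sketch (not Hadoop code):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;

public class ShutdownInterruptDemo {
    // Returns true if shutdownNow() delivered an interrupt to the running task.
    public static boolean interruptDelivered() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        AtomicBoolean interrupted = new AtomicBoolean(false);
        CountDownLatch started = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(1);
        pool.execute(() -> {
            started.countDown();
            try {
                Thread.sleep(60_000L);          // a long-running "task attempt"
            } catch (InterruptedException ie) {
                interrupted.set(true);          // the shutdown interrupted us
            }
            done.countDown();
        });
        started.await();                        // make sure the task is actually running
        pool.shutdownNow();                     // interrupts the worker thread
        done.await(2, TimeUnit.SECONDS);
        return interrupted.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(interruptDelivered());   // prints true
    }
}
```

Note this only works if the task responds to interruption; a task busy-looping without checking the interrupt flag would keep running regardless of the shutdown.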
[jira] [Commented] (MAPREDUCE-4065) Add .proto files to built tarball
[ https://issues.apache.org/jira/browse/MAPREDUCE-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976953#comment-13976953 ] Jonathan Eagles commented on MAPREDUCE-4065: As 0.23.x is going to maintenance mode, I have re-targeted this jira to an active line that accepts features. The issue of allowing 3rd party tools is especially compelling, but as Bobby mentioned, very difficult at this point. > Add .proto files to built tarball > - > > Key: MAPREDUCE-4065 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-4065 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: build >Affects Versions: 0.23.2 >Reporter: Ralph H Castain > > Please add the .proto files to the built tarball so that users can build 3rd > party tools that use protocol buffers without having to do an svn checkout of > the source code. > Sorry I don't know more about Maven, or I would provide a patch. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (MAPREDUCE-4065) Add .proto files to built tarball
[ https://issues.apache.org/jira/browse/MAPREDUCE-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Eagles updated MAPREDUCE-4065: --- Target Version/s: 2.5.0 (was: 0.23.0) > Add .proto files to built tarball > - > > Key: MAPREDUCE-4065 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-4065 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: build >Affects Versions: 0.23.2 >Reporter: Ralph H Castain > > Please add the .proto files to the built tarball so that users can build 3rd > party tools that use protocol buffers without having to do an svn checkout of > the source code. > Sorry I don't know more about Maven, or I would provide a patch. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-3838) MapReduce job submission time has increased in 0.23 when compared to 0.20.206
[ https://issues.apache.org/jira/browse/MAPREDUCE-3838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976912#comment-13976912 ] Mit Desai commented on MAPREDUCE-3838: -- I will close this jira at the end of the day unless somebody wants to go a different way. > MapReduce job submission time has increased in 0.23 when compared to 0.20.206 > - > > Key: MAPREDUCE-3838 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-3838 > Project: Hadoop Map/Reduce > Issue Type: Sub-task > Components: client >Affects Versions: 0.23.0 >Reporter: Amar Kamat > Labels: gridmix, job-submit-time, yarn > Fix For: 0.23.2 > > > While running Gridmix on 0.23, we found that the job submission time has > increased when compared to 0.20.206. > Here are some stats:
> ||Submit-Time||Total number of jobs in YARN||Total number of jobs in FRED||
> |> 25secs|3|1|
> |> 20secs|6|2|
> |> 15secs|14|4|
> |> 10secs|24|4|
> |> 5secs|67|28|
> Note that Gridmix was run using the same trace. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (MAPREDUCE-5852) Prepare MapReduce codebase for JUnit 4.11.
[ https://issues.apache.org/jira/browse/MAPREDUCE-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976881#comment-13976881 ] Hudson commented on MAPREDUCE-5852: --- SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1765 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1765/]) MAPREDUCE-5852. Prepare MapReduce codebase for JUnit 4.11. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589006) * /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRApp.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestAMInfos.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFail.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRApp.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppComponentDependencies.java * 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppMaster.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRecovery.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestStagingCleanup.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/commit/TestCommitterEventHandler.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttempt.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttemptContainerRequest.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java * 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapred/TestJobClientGetJob.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapred/TestMRWithDistributedCache.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/TestRPCFactories.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/TestRecordFactory.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestClusterStatus.java * /hadoop/common/trunk/hadoop
[jira] [Commented] (MAPREDUCE-5841) uber job doesn't terminate on getting mapred job kill
[ https://issues.apache.org/jira/browse/MAPREDUCE-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976866#comment-13976866 ] Jason Lowe commented on MAPREDUCE-5841: --- Thanks for the patch, Sangjin. The approach looks fine to me. A few comments: - I don't see any attempt to stop the task when CONTAINER_REMOTE_CLEANUP is received. Before it didn't make sense to do so because by the time we received it there was no task running. Now that there could be, I think we would want to track the Future for each task submitted that's in-flight and attempt to cancel the running task when the cleanup event is received. - The System.exit call should be ExitUtil.terminate instead, as it allows unit tests to disable the system exit in case something goes wrong during the test or to verify exit behavior if that's desired. - Speaking of unit tests, it'd be nice to verify the LocalContainerLauncher is able to respond to events while a long-running sleep task is executing. > uber job doesn't terminate on getting mapred job kill > - > > Key: MAPREDUCE-5841 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5841 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2 >Affects Versions: 2.3.0 >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Attachments: mapreduce-5841.patch > > > If you issue a "mapred job -kill" against a uberized job, the job (and the > yarn application) state transitions to KILLED, but the application master > process continues to run. The job actually runs to completion despite the > killed status. > This can be easily reproduced by running a sleep job: > {noformat} > hadoop jar hadoop-mapreduce-client-jobclient-2.3.0-tests.jar sleep -m 1 -r 0 > -mt 30 > {noformat} > Issue a kill with "mapred job -kill \[job-id\]". The UI will show the job > (app) is in the KILLED state. However, you can see the application master is > still running. -- This message was sent by Atlassian JIRA (v6.2#6252)
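The first review comment above, tracking the Future of each in-flight task and cancelling it when the cleanup event arrives, can be sketched as follows. The class, method, and attempt-id names are assumptions for illustration only, not the actual LocalContainerLauncher API:

```java
import java.util.concurrent.*;

public class FutureTrackerDemo {
    // Single worker, mirroring the uber AM's one-task-at-a-time executor.
    private static final ExecutorService POOL =
        Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r, "uber-task-runner");
            t.setDaemon(true);
            return t;
        });
    private static final ConcurrentHashMap<String, Future<?>> IN_FLIGHT =
        new ConcurrentHashMap<>();

    // Register the attempt's Future before it can start, then hand it to the executor.
    public static void launch(String attemptId, Runnable task) {
        FutureTask<Void> ft = new FutureTask<>(task, null);
        IN_FLIGHT.put(attemptId, ft);
        POOL.execute(ft);
    }

    // On a container clean-up request: cancel the attempt, interrupting it if running.
    public static boolean cleanup(String attemptId) {
        Future<?> f = IN_FLIGHT.remove(attemptId);
        return f != null && f.cancel(true);
    }

    public static void main(String[] args) {
        launch("attempt_m_000000_0", () -> {
            try { Thread.sleep(60_000L); }              // a "stuck" attempt
            catch (InterruptedException ie) { /* killed promptly */ }
        });
        System.out.println(cleanup("attempt_m_000000_0"));   // prints true
        System.out.println(cleanup("attempt_m_000000_0"));   // prints false: already removed
    }
}
```

Registering the FutureTask in the map before submitting it avoids a race where a fast task finishes before its Future is recorded.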
[jira] [Commented] (MAPREDUCE-5852) Prepare MapReduce codebase for JUnit 4.11.
[ https://issues.apache.org/jira/browse/MAPREDUCE-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976799#comment-13976799 ] Hudson commented on MAPREDUCE-5852: --- SUCCESS: Integrated in Hadoop-Hdfs-trunk #1740 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1740/]) MAPREDUCE-5852. Prepare MapReduce codebase for JUnit 4.11. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589006) * /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRApp.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestAMInfos.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFail.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRApp.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppComponentDependencies.java * 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppMaster.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRecovery.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestStagingCleanup.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/commit/TestCommitterEventHandler.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttempt.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttemptContainerRequest.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java * 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapred/TestJobClientGetJob.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapred/TestMRWithDistributedCache.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/TestRPCFactories.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/TestRecordFactory.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestClusterStatus.java * /hadoop/common/trunk/hadoop-mapreduce
[jira] [Commented] (MAPREDUCE-5812) Make task context available to OutputCommitter.isRecoverySupported()
[ https://issues.apache.org/jira/browse/MAPREDUCE-5812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976798#comment-13976798 ] Jason Lowe commented on MAPREDUCE-5812: --- Thanks for updating the patch, Mohammad. I think the second version of the patch is closer to what we want, as the most recent patch has the mapred OutputCommitter#isRecoverySupported method receiving a mapreduce JobContext when all of the other methods receive a mapred JobContext. I think for consistency we should do the bridging functions as was done for the other mapred OutputCommitter methods so the JobContext type is consistent across the methods that derived types would override. This means the MRAppMaster will need to create the appropriate job context object as JobImpl does when the job initializes. > Make task context available to OutputCommitter.isRecoverySupported() > - > > Key: MAPREDUCE-5812 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5812 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mr-am >Affects Versions: 2.3.0 >Reporter: Mohammad Kamrul Islam >Assignee: Mohammad Kamrul Islam > Attachments: MAPREDUCE-5812.1.patch, MAPREDUCE-5812.2.patch, > MAPREDUCE-5812.3.patch > > > Background > == > The system like Hive provides its version of OutputCommitter. The custom > implementation of isRecoverySupported() requires task context. From > taskContext:getConfiguration(), hive checks if hive-defined specific > property is set or not. Based on the property value, it returns true or > false. However, in the current OutputCommitter:isRecoverySupported(), there > is no way of getting task config. As a result, user can't turn on/off the > MRAM recovery feature. > Proposed resolution: > === > 1. Pass Task Context into isRecoverySupported() method. > Pros: Easy and clean > Cons: Possible backward compatibility issue due to aPI changes. (Is it true?) > 2. 
Call outputCommitter.setupTask(taskContext) from the MRAM: The new > OutputCommitter will store the context in a class-level variable and use it > from isRecoverySupported(). > Pros: No API changes and no backward compatibility issue. This call can be made > from the MRAppMaster.getOutputCommitter() method for the old API case. > Cons: Might not be a very clean solution due to the class-level variable. > Please give your comments. -- This message was sent by Atlassian JIRA (v6.2#6252)
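Proposed option 2 above (storing the context at setupTask() time and consulting it from isRecoverySupported()) can be sketched as follows. All class and property names here are hypothetical stand-ins for illustration, not the actual Hadoop or Hive API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical minimal stand-in for a task context carrying job configuration.
class TaskContext {
    private final Map<String, String> conf = new HashMap<>();
    void set(String key, String value) { conf.put(key, value); }
    String get(String key, String defaultValue) { return conf.getOrDefault(key, defaultValue); }
}

// Sketch of option 2: the committer captures the context in a class-level
// field during setupTask() so isRecoverySupported() can read configuration.
class RecoveryAwareCommitter {
    private TaskContext context; // class-level variable (the "Cons" in option 2)

    void setupTask(TaskContext ctx) {
        this.context = ctx;
    }

    boolean isRecoverySupported() {
        // Decide based on a job property from the stored context, as a
        // Hive-style committer would; defaults to false with no context.
        return context != null
            && Boolean.parseBoolean(context.get("committer.recovery.enabled", "false"));
    }
}

public class CommitterSketch {
    public static void main(String[] args) {
        RecoveryAwareCommitter committer = new RecoveryAwareCommitter();
        System.out.println(committer.isRecoverySupported()); // false: no context yet

        TaskContext ctx = new TaskContext();
        ctx.set("committer.recovery.enabled", "true");
        committer.setupTask(ctx);
        System.out.println(committer.isRecoverySupported()); // true after setup
    }
}
```

The trade-off the comment raises is visible here: no method signature changes, but the behavior of isRecoverySupported() now depends on setupTask() having been called first.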
[jira] [Updated] (MAPREDUCE-4718) MapReduce fails If I pass a parameter as a S3 folder
[ https://issues.apache.org/jira/browse/MAPREDUCE-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen He updated MAPREDUCE-4718: --- Target Version/s: 1.0.3 (was: 1.0.3, 0.23.3) > MapReduce fails If I pass a parameter as a S3 folder > > > Key: MAPREDUCE-4718 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-4718 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: job submission >Affects Versions: 1.0.0, 1.0.3 > Environment: Hadoop with default configurations >Reporter: Benjamin Kim > > I'm running a wordcount MR job as follows: > hadoop jar WordCount.jar wordcount.WordCountDriver > s3n://bucket/wordcount/input s3n://bucket/wordcount/output > > s3n://bucket/wordcount/input is an S3 object that contains other input files. > However, I get the following NPE: > 12/10/02 18:56:23 INFO mapred.JobClient: map 0% reduce 0% > 12/10/02 18:56:54 INFO mapred.JobClient: map 50% reduce 0% > 12/10/02 18:56:56 INFO mapred.JobClient: Task Id : > attempt_201210021853_0001_m_01_0, Status : FAILED > java.lang.NullPointerException > at > org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106) > at java.io.BufferedInputStream.close(BufferedInputStream.java:451) > at java.io.FilterInputStream.close(FilterInputStream.java:155) > at org.apache.hadoop.util.LineReader.close(LineReader.java:83) > at > org.apache.hadoop.mapreduce.lib.input.LineRecordReader.close(LineRecordReader.java:144) > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.close(MapTask.java:497) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:765) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370) > at org.apache.hadoop.mapred.Child$4.run(Child.java:255) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:396) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121) > at 
org.apache.hadoop.mapred.Child.main(Child.java:249) > MR runs fine if I specify a more specific input path such as > s3n://bucket/wordcount/input/file.txt > MR fails if I pass an S3 folder as a parameter. > In summary: > This works > hadoop jar ./hadoop-examples-1.0.3.jar wordcount > /user/hadoop/wordcount/input/ s3n://bucket/wordcount/output/ > This doesn't work > hadoop jar ./hadoop-examples-1.0.3.jar wordcount > s3n://bucket/wordcount/input/ s3n://bucket/wordcount/output/ > (both input paths are directories) -- This message was sent by Atlassian JIRA (v6.2#6252)
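The stack trace shows the NPE originating in NativeS3FsInputStream.close(), which suggests the close path dereferences an underlying stream that can be null. A null-safe, idempotent close() along the following lines would avoid that failure mode; this is an illustrative sketch with assumed class names, not the actual NativeS3FileSystem code:

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch of a stream wrapper whose close() guards against a
// null delegate, illustrating the kind of fix the NPE above calls for.
class GuardedStream extends InputStream {
    private InputStream in; // may be null if never opened, or after close()

    GuardedStream(InputStream in) { this.in = in; }

    @Override
    public int read() throws IOException {
        if (in == null) {
            throw new IOException("Stream closed");
        }
        return in.read();
    }

    @Override
    public void close() throws IOException {
        // Null-check the delegate before closing instead of dereferencing it
        if (in != null) {
            in.close();
            in = null; // makes a second close() a harmless no-op
        }
    }
}

public class GuardedStreamDemo {
    public static void main(String[] args) throws IOException {
        GuardedStream s = new GuardedStream(null);
        s.close(); // no NullPointerException even with a null delegate
        s.close(); // close() is also idempotent
        System.out.println("closed safely");
    }
}
```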
[jira] [Commented] (MAPREDUCE-5852) Prepare MapReduce codebase for JUnit 4.11.
[ https://issues.apache.org/jira/browse/MAPREDUCE-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976753#comment-13976753 ] Hudson commented on MAPREDUCE-5852: --- SUCCESS: Integrated in Hadoop-Yarn-trunk #548 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/548/]) MAPREDUCE-5852. Prepare MapReduce codebase for JUnit 4.11. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589006) * /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRApp.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestAMInfos.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFail.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRApp.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppComponentDependencies.java * 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRAppMaster.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRecovery.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestStagingCleanup.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/commit/TestCommitterEventHandler.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestMapReduceChildJVM.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttempt.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttemptContainerRequest.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java * 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapred/TestJobClientGetJob.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapred/TestMRWithDistributedCache.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/TestRPCFactories.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/TestRecordFactory.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestClusterStatus.java * /hadoop/common/trunk/hadoop-mapreduce-p
[jira] [Commented] (MAPREDUCE-5363) Fix doc and spelling for TaskCompletionEvent#getTaskStatus and getStatus
[ https://issues.apache.org/jira/browse/MAPREDUCE-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1397#comment-1397 ] Akira AJISAKA commented on MAPREDUCE-5363: -- The patch is just to fix the javadoc, so new tests are not needed. > Fix doc and spelling for TaskCompletionEvent#getTaskStatus and getStatus > > > Key: MAPREDUCE-5363 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5363 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv1, mrv2 >Affects Versions: 1.1.2, 2.1.0-beta >Reporter: Sandy Ryza >Assignee: Akira AJISAKA >Priority: Minor > Labels: newbie > Attachments: MAPREDUCE-5363-1.patch, MAPREDUCE-5363-2.patch, > MAPREDUCE-5363-3.patch > > > The doc for TaskCompletionEvent#get(Task)Status in both MR1 and MR2 is > {code} > Returns enum Status.SUCESS or Status.FAILURE. > @return task tracker status > {code} > The actual values that the Status enum can take are > FAILED, KILLED, SUCCEEDED, OBSOLETE, TIPFAILED -- This message was sent by Atlassian JIRA (v6.2#6252)
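The doc/code mismatch the issue describes can be shown with a minimal sketch. The five enum values come straight from the issue description, while the enclosing class below is a hypothetical stand-in for TaskCompletionEvent.Status:

```java
// Sketch of the values the Status enum can actually take per the report:
// FAILED, KILLED, SUCCEEDED, OBSOLETE, TIPFAILED. The old javadoc incorrectly
// claimed only "Status.SUCESS or Status.FAILURE", neither of which exists.
public class StatusValues {
    enum Status { FAILED, KILLED, SUCCEEDED, OBSOLETE, TIPFAILED }

    public static void main(String[] args) {
        for (Status s : Status.values()) {
            System.out.println(s); // none of these is SUCCESS or FAILURE
        }
    }
}
```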
[jira] [Commented] (MAPREDUCE-5363) Fix doc and spelling for TaskCompletionEvent#getTaskStatus and getStatus
[ https://issues.apache.org/jira/browse/MAPREDUCE-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976616#comment-13976616 ] Hadoop QA commented on MAPREDUCE-5363: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12590719/MAPREDUCE-5363-3.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4542//testReport/ Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4542//console This message is automatically generated. 
-- This message was sent by Atlassian JIRA (v6.2#6252)