[ https://issues.apache.org/jira/browse/MAPREDUCE-5240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Roman Shaposhnik updated MAPREDUCE-5240:
----------------------------------------

    Description: 
I am attaching a modified wordcount job that clearly demonstrates the problem we've encountered in running Sqoop2 on YARN (BIGTOP-949).

Here's what running it produces:

{noformat}
$ hadoop fs -mkdir in
$ hadoop fs -put /etc/passwd in
$ hadoop jar ./bug.jar org.myorg.LostCreds
13/05/12 03:13:46 WARN mapred.JobConf: The variable mapred.child.ulimit is no longer used.
numberOfSecretKeys: 1
numberOfTokens: 0
..............
..............
..............
13/05/12 03:05:35 INFO mapreduce.Job: Job job_1368318686284_0013 failed with state FAILED due to: Job commit failed: java.io.IOException: numberOfSecretKeys: 0
numberOfTokens: 0
	at org.myorg.LostCreds$DestroyerFileOutputCommitter.commitJob(LostCreds.java:43)
	at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:249)
	at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:212)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:619)
{noformat}

As you can see, even though we've clearly initialized the credentials via:

{noformat}
job.getCredentials().addSecretKey(new Text("mykey"), "mysecret".getBytes());
{noformat}

the secret key doesn't seem to appear later in the job.

This is a pretty critical issue for Sqoop 2, since it appears to be DOA on YARN in Hadoop 2.0.4-alpha.

  was:
I am attaching a modified wordcount job that clearly demonstrates the problem we've encountered in running Sqoop2 on YARN (BIGTOP-949).

Here's what running it produces:

{noformat}
$ hadoop fs -mkdir in
$ hadoop fs -put /etc/passwd in
$ hadoop jar ./bug.jar org.myorg.LostCreds
13/05/12 03:13:46 WARN mapred.JobConf: The variable mapred.child.ulimit is no longer used.
numberOfSecretKeys: 1
numberOfTokens: 0
..............
..............
..............
13/05/12 03:05:35 INFO mapreduce.Job: Job job_1368318686284_0013 failed with state FAILED due to: Job commit failed: java.io.IOException: numberOfSecretKeys: 0
numberOfTokens: 0
	at org.myorg.LostCreds$DestroyerFileOutputCommitter.commitJob(LostCreds.java:43)
	at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:249)
	at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:212)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:619)
{noformat}

As you can see, even though we've clearly initialized the creds via:

{noformat}
job.getCredentials().addSecretKey(new Text("mykey"), "mysecret".getBytes());
{noformat}

It doesn't seem to appear later in the job.

This is a pretty critical issue for Sqoop 2 since it appears to be DOA for YARN in Hadoop 2.0.4-alpha


> inside of FileOutputCommitter the initialized Credentials cache appears to be empty
> -----------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5240
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5240
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv1
>    Affects Versions: 2.0.4-alpha
>            Reporter: Roman Shaposhnik
>            Priority: Blocker
>             Fix For: 2.0.5-beta
>
>         Attachments: LostCreds.java
>
>
> I am attaching a modified wordcount job that clearly demonstrates the problem we've encountered in running Sqoop2 on YARN (BIGTOP-949).
> Here's what running it produces:
> {noformat}
> $ hadoop fs -mkdir in
> $ hadoop fs -put /etc/passwd in
> $ hadoop jar ./bug.jar org.myorg.LostCreds
> 13/05/12 03:13:46 WARN mapred.JobConf: The variable mapred.child.ulimit is no longer used.
> numberOfSecretKeys: 1
> numberOfTokens: 0
> ..............
> ..............
> ..............
> 13/05/12 03:05:35 INFO mapreduce.Job: Job job_1368318686284_0013 failed with state FAILED due to: Job commit failed: java.io.IOException: numberOfSecretKeys: 0
> numberOfTokens: 0
> 	at org.myorg.LostCreds$DestroyerFileOutputCommitter.commitJob(LostCreds.java:43)
> 	at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:249)
> 	at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:212)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> 	at java.lang.Thread.run(Thread.java:619)
> {noformat}
> As you can see, even though we've clearly initialized the creds via:
> {noformat}
> job.getCredentials().addSecretKey(new Text("mykey"), "mysecret".getBytes());
> {noformat}
> It doesn't seem to appear later in the job.
> This is a pretty critical issue for Sqoop 2 since it appears to be DOA for YARN in Hadoop 2.0.4-alpha

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
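
[Editor's note] For readers without access to the attachment: the following is a hedged sketch of what the DestroyerFileOutputCommitter in LostCreds.java presumably does, reconstructed only from the class and method names visible in the stack trace above. The constructor signature and the exact credential check are assumptions, not the attachment's actual code:

{noformat}
// Hypothetical reconstruction -- not the attached LostCreds.java itself.
// A FileOutputCommitter subclass whose commitJob() fails if the credentials
// added by the client at submission time are no longer visible.
public static class DestroyerFileOutputCommitter extends FileOutputCommitter {

  public DestroyerFileOutputCommitter(Path outputPath, TaskAttemptContext context)
      throws IOException {
    super(outputPath, context);
  }

  @Override
  public void commitJob(JobContext context) throws IOException {
    Credentials creds = context.getCredentials();
    // The client added one secret key before submission; per the report,
    // on 2.0.4-alpha this observes 0 secret keys and 0 tokens instead.
    if (creds.numberOfSecretKeys() != 1) {
      throw new IOException("numberOfSecretKeys: " + creds.numberOfSecretKeys()
          + "\nnumberOfTokens: " + creds.numberOfTokens());
    }
    super.commitJob(context);
  }
}
{noformat}

Note that commitJob() is invoked by the CommitterEventHandler inside the MapReduce ApplicationMaster, not in the submitting client's JVM, so the Credentials object observed there is only as complete as what was propagated to the AM at job submission.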