Re: Issues Running Hadoop 1.1.2 on multi-node cluster
I figured out the issue! The problem was with the permissions on the Hadoop scripts after running them as the root user. I created a dedicated hadoop user to run the Hadoop cluster, but at one point I accidentally started Hadoop as root, so the ownership of some of the Hadoop scripts changed. The solution is to change the ownership of the Hadoop folder back to the dedicated user using chown. It's working fine now. Thanks a lot for the pointers!

Regards,
Siddharth
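For reference, the fix can be sketched as a small helper that hands a directory tree back to the dedicated user and reports anything still owned by someone else. The hduser:hadoop names and /usr/local/hadoop path below are the tutorial's conventions, not values confirmed in this thread:

```shell
# restore_ownership DIR USER GROUP
# Recursively give DIR back to USER:GROUP, then print any path that is
# still owned by a different user (prints nothing on success).
restore_ownership() {
  dir=$1; user=$2; group=$3
  chown -R "$user:$group" "$dir"
  find "$dir" ! -user "$user" -print
}

# On a real node you would run it as root, e.g.:
#   sudo chown -R hduser:hadoop /usr/local/hadoop
# and repeat for hadoop.tmp.dir / mapred.local.dir if they live elsewhere.
```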
Re: Issues Running Hadoop 1.1.2 on multi-node cluster
Make sure your mapred.local.dir (check it in mapred-site.xml) actually exists and is writable by your MapReduce user.

Thank you!

Sincerely,
Leonid Fedotov
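Leonid's check is easy to script. A minimal sketch; the path passed at the bottom is an assumed example, so substitute the real value from your mapred-site.xml:

```shell
# check_local_dir DIR
# Report whether DIR exists and is writable by the current user
# (on a real node, the user that runs the TaskTracker).
check_local_dir() {
  dir=$1
  if [ -d "$dir" ] && [ -w "$dir" ]; then
    echo "ok: $dir"
  else
    echo "missing or not writable: $dir"
  fi
}

# Assumed example path -- use the value from mapred-site.xml:
check_local_dir /app/hadoop/mapred/local
```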
Re: Issues Running Hadoop 1.1.2 on multi-node cluster
Hi,

Please check that all the directories/files configured in mapred-site.xml exist on the local system, and that their permissions have mapred as the user and hadoop as the group.

From,
P.Ramesh Babu,
+91-7893442722.
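One way to audit what Ramesh suggests (mapred as owner, hadoop as group) is to print owner:group for each configured path. The two paths below are illustrative, not values from this thread:

```shell
# audit_ownership PATH...
# Print "owner:group path" for each path, or flag it as missing.
audit_ownership() {
  for d in "$@"; do
    if [ -e "$d" ]; then
      stat -c '%U:%G %n' "$d"
    else
      echo "missing: $d"
    fi
  done
}

# Illustrative paths; on a real node expect mapred:hadoop in the output:
audit_ownership /app/hadoop/mapred/local /app/hadoop/tmp
```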
Re: Issues Running Hadoop 1.1.2 on multi-node cluster
Siddharth,

The error messages point to file system issues. Make sure that the file system locations you specified in the config files are accurate and accessible.

-Sreedhar

From: siddharth mathur <sidh1...@gmail.com>
To: user@hadoop.apache.org
Sent: Tuesday, July 9, 2013 9:56 AM
Subject: Issues Running Hadoop 1.1.2 on multi-node cluster

Hi,

I have installed Hadoop 1.1.2 on a 5-node cluster, following this tutorial:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/

When I start up Hadoop, I get the following error in all the tasktrackers:

2013-07-09 12:15:22,301 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201307051203_0001 for user-log deletion with retainTimeStamp:1373472921775
2013-07-09 12:15:22,301 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201307051611_0001 for user-log deletion with retainTimeStamp:1373472921775
2013-07-09 12:15:22,601 INFO org.apache.hadoop.mapred.TaskTracker: Failed to get system directory...
2013-07-09 12:15:25,164 INFO org.apache.hadoop.mapred.TaskTracker: Failed to get system directory...
2013-07-09 12:15:27,901 INFO org.apache.hadoop.mapred.TaskTracker: Failed to get system directory...
2013-07-09 12:15:30,144 INFO org.apache.hadoop.mapred.TaskTracker: Failed to get system directory...

But everything looks fine in the web UI.

When I run a job, I get the following error, but the job completes anyway. I have attached the screenshots of the failed map task's error log in the UI.

13/07/09 12:29:37 INFO input.FileInputFormat: Total input paths to process : 2
13/07/09 12:29:37 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/07/09 12:29:37 WARN snappy.LoadSnappy: Snappy native library not loaded
13/07/09 12:29:37 INFO mapred.JobClient: Running job: job_201307091215_0001
13/07/09 12:29:38 INFO mapred.JobClient: map 0% reduce 0%
13/07/09 12:29:41 INFO mapred.JobClient: Task Id : attempt_201307091215_0001_m_01_0, Status : FAILED
Error initializing attempt_201307091215_0001_m_01_0:
ENOENT: No such file or directory
    at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
    at org.apache.hadoop.fs.FileUtil.execSetPermission(FileUtil.java:699)
    at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:654)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
    at org.apache.hadoop.mapred.JobLocalizer.initializeJobLogDir(JobLocalizer.java:240)
    at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:205)
    at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1331)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1306)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1221)
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2581)
    at java.lang.Thread.run(Thread.java:724)
13/07/09 12:29:41 WARN mapred.JobClient: Error reading task output http://dmkd-1:50060/tasklog?plaintext=true&attemptid=attempt_201307091215_0001_m_01_0&filter=stdout
13/07/09 12:29:41 WARN mapred.JobClient: Error reading task output http://dmkd-1:50060/tasklog?plaintext=true&attemptid=attempt_201307091215_0001_m_01_0&filter=stderr
13/07/09 12:29:45 INFO mapred.JobClient: map 50% reduce 0%
13/07/09 12:29:53 INFO mapred.JobClient: map 50% reduce 16%
13/07/09 12:30:38 INFO mapred.JobClient: Task Id : attempt_201307091215_0001_m_00_1, Status : FAILED
Error initializing attempt_201307091215_0001_m_00_1:
ENOENT: No such file or directory
    at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
    (same stack trace as above)
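Sreedhar's advice amounts to walking every path-valued property in the config files and confirming each location exists. A sketch with standard Hadoop 1.x property names; the example paths are assumptions, not values from this thread:

```shell
# check_property_paths NAME=PATH...
# For each pair, report whether PATH exists on the local filesystem.
check_property_paths() {
  for pair in "$@"; do
    name=${pair%%=*}
    path=${pair#*=}
    if [ -e "$path" ]; then
      echo "$name ok: $path"
    else
      echo "$name MISSING: $path"
    fi
  done
}

# Standard Hadoop 1.x property names with assumed example values:
check_property_paths \
  "hadoop.tmp.dir=/app/hadoop/tmp" \
  "mapred.local.dir=/app/hadoop/mapred/local"
```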
Re: Issues Running Hadoop 1.1.2 on multi-node cluster
Hi Siddharth,

When running a multi-node cluster we need to take care with the localhost configuration on the slave machines. From the error messages, the TaskTracker is not able to get its system directory from the master. Please check this and rerun.

Thanks,
Kiran
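On Ubuntu, the localhost pitfall Kiran points at is usually the 127.0.1.1 line in /etc/hosts, which the linked tutorial also warns about: a hostname mapped to 127.0.1.1 makes daemons advertise the loopback address instead of the real one. A quick check to run on each slave (a sketch; the 127.0.1.1 diagnosis is the tutorial's advice, not something stated in this thread):

```shell
# check_hosts_file [FILE]
# Warn if FILE (default /etc/hosts) maps a hostname to 127.0.1.1.
check_hosts_file() {
  file=${1:-/etc/hosts}
  if grep -q '^127\.0\.1\.1' "$file"; then
    echo "warning: 127.0.1.1 entry in $file - map the host's real IP instead"
  else
    echo "ok: no 127.0.1.1 entry in $file"
  fi
}

check_hosts_file   # checks /etc/hosts on this machine
```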