Re: No job can run in YARN (Hadoop-2.2)

2014-05-13 Thread Tao Xiao
The *FileNotFoundException* was thrown when I tried to submit a job
calculating PI. No such exception is thrown when I submit a wordcount job,
but I can still see "Exception from container-launch ...", and any other
jobs would throw such exceptions too.

Every job runs successfully after I commented out the properties
*mapreduce.map.java.opts* and *mapreduce.reduce.java.opts*.

It does sound odd, but I think it may be because these two properties
conflict with other memory-related properties, so the container cannot be
launched.
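
To illustrate what I mean by keeping them consistent: if I put the two
properties back, the JVM heap they set has to fit inside the container sizes.
A minimal sketch using the command-line override form, with purely
illustrative values (these are not my actual settings):

# heap sizes (-Xmx) deliberately kept below the matching container sizes
# (mapreduce.*.memory.mb) so that the JVMs fit inside their YARN containers
yarn jar /var/soft/apache/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi \
  -Dmapreduce.map.memory.mb=1024 \
  -Dmapreduce.map.java.opts=-Xmx768m \
  -Dmapreduce.reduce.memory.mb=1024 \
  -Dmapreduce.reduce.java.opts=-Xmx768m \
  2 4

If the two kinds of settings disagree, or the opts value itself is malformed,
I would expect the container launch to fail in roughly the way shown below.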


2014-05-12 3:37 GMT+08:00 Jay Vyas jayunit...@gmail.com:

 Sounds odd... So (1) you got a FileNotFoundException and (2) you fixed it
 by commenting out memory-specific config parameters?

 Not sure how that would work... Any other details or am I missing
 something else?

 On May 11, 2014, at 4:16 AM, Tao Xiao xiaotao.cs@gmail.com wrote:

 I'm sure this problem is caused by incorrect configuration. I commented
 out all the configuration related to memory, and then jobs could run
 successfully.


 2014-05-11 0:01 GMT+08:00 Tao Xiao xiaotao.cs@gmail.com:

 I installed Hadoop-2.2 in a cluster of 4 nodes, following Hadoop YARN
 Installation: The definitive guide
 (http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide).

 The configurations are as follows:

 ~/.bashrc (http://pastebin.com/zQgwuQv2)
 core-site.xml (http://pastebin.com/rBAaqZps)
 hdfs-site.xml (http://pastebin.com/bxazvp2G)
 mapred-site.xml (http://pastebin.com/N00SsMbz)
 slaves (http://pastebin.com/8VjsZ1uu)
 yarn-site.xml (http://pastebin.com/XwLQZTQb)


 I started NameNode, DataNodes, ResourceManager and NodeManagers
 successfully, but no job can run successfully. For example, I ran the
 following job:

 [root@Single-Hadoop ~]#yarn jar
 /var/soft/apache/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
 pi 2 4

 The output is as follows:

 14/05/10 23:56:25 INFO mapreduce.Job: Task Id :
 attempt_1399733823963_0004_m_00_0, Status : FAILED
 Exception from container-launch:
 org.apache.hadoop.util.Shell$ExitCodeException:
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
  at java.lang.Thread.run(Thread.java:662)



 14/05/10 23:56:25 INFO mapreduce.Job: Task Id :
 attempt_1399733823963_0004_m_01_0, Status : FAILED
 Exception from container-launch:
 org.apache.hadoop.util.Shell$ExitCodeException:
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
  at java.lang.Thread.run(Thread.java:662)

 ... ...


 14/05/10 23:56:36 INFO mapreduce.Job:  map 100% reduce 100%
 14/05/10 23:56:37 INFO mapreduce.Job: Job job_1399733823963_0004 failed
 with state FAILED due to: Task failed task_1399733823963_0004_m_00
 Job failed as tasks failed. failedMaps:1 failedReduces:0

 14/05/10 23:56:37 INFO mapreduce.Job: Counters: 10
 Job Counters
 Failed map tasks=7
  Killed map tasks=1
 Launched map tasks=8
 Other local map tasks=6
  Data-local map tasks=2
 Total time spent by all maps in occupied slots (ms)=21602
 Total time spent by all reduces in occupied slots (ms)=0
  Map-Reduce Framework
 CPU time spent (ms)=0
 Physical memory (bytes) snapshot=0
  Virtual memory (bytes) snapshot=0
 Job Finished in 24.515 seconds
 java.io.FileNotFoundException: File does not exist: hdfs://
 

Re: No job can run in YARN (Hadoop-2.2)

2014-05-12 Thread Stanley Shi
The FileNotFoundException doesn't mean anything by itself in the pi program:
whenever something goes wrong and the job doesn't run successfully, it will
always throw this exception.
What do you have in the opts?
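
To see why the container launch itself fails, the aggregated container logs
are usually more useful than the client-side output. A quick sketch, assuming
log aggregation is enabled (otherwise the same logs are under the
NodeManager's local log directories):

# application id taken from the attempt ids in the job output
# (attempt_1399733823963_0004_... -> application_1399733823963_0004)
yarn logs -applicationId application_1399733823963_0004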

Regards,
*Stanley Shi,*



On Mon, May 12, 2014 at 2:09 PM, Tao Xiao xiaotao.cs@gmail.com wrote:

 The *FileNotFoundException* was thrown when I tried to submit a job
 calculating PI. No such exception is thrown when I submit a wordcount job,
 but I can still see "Exception from container-launch ...", and any other
 jobs would throw such exceptions too.

 Every job runs successfully after I commented out the properties
 *mapreduce.map.java.opts* and *mapreduce.reduce.java.opts*.

 It does sound odd, but I think it may be because these two properties
 conflict with other memory-related properties, so the container cannot be
 launched.


 2014-05-12 3:37 GMT+08:00 Jay Vyas jayunit...@gmail.com:

 Sounds odd... So (1) you got a FileNotFoundException and (2) you fixed it
 by commenting out memory-specific config parameters?

 Not sure how that would work... Any other details or am I missing
 something else?

 On May 11, 2014, at 4:16 AM, Tao Xiao xiaotao.cs@gmail.com wrote:

 I'm sure this problem is caused by incorrect configuration. I commented
 out all the configuration related to memory, and then jobs could run
 successfully.


 2014-05-11 0:01 GMT+08:00 Tao Xiao xiaotao.cs@gmail.com:

 I installed Hadoop-2.2 in a cluster of 4 nodes, following Hadoop YARN
 Installation: The definitive guide
 (http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide).

 The configurations are as follows:

 ~/.bashrc (http://pastebin.com/zQgwuQv2)
 core-site.xml (http://pastebin.com/rBAaqZps)
 hdfs-site.xml (http://pastebin.com/bxazvp2G)
 mapred-site.xml (http://pastebin.com/N00SsMbz)
 slaves (http://pastebin.com/8VjsZ1uu)
 yarn-site.xml (http://pastebin.com/XwLQZTQb)


 I started NameNode, DataNodes, ResourceManager and NodeManagers
 successfully, but no job can run successfully. For example, I ran the
 following job:

 [root@Single-Hadoop ~]#yarn jar
 /var/soft/apache/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
 pi 2 4

 The output is as follows:

 14/05/10 23:56:25 INFO mapreduce.Job: Task Id :
 attempt_1399733823963_0004_m_00_0, Status : FAILED
 Exception from container-launch:
 org.apache.hadoop.util.Shell$ExitCodeException:
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
  at java.lang.Thread.run(Thread.java:662)



 14/05/10 23:56:25 INFO mapreduce.Job: Task Id :
 attempt_1399733823963_0004_m_01_0, Status : FAILED
 Exception from container-launch:
 org.apache.hadoop.util.Shell$ExitCodeException:
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
  at java.lang.Thread.run(Thread.java:662)

 ... ...


 14/05/10 23:56:36 INFO mapreduce.Job:  map 100% reduce 100%
 14/05/10 23:56:37 INFO mapreduce.Job: Job job_1399733823963_0004 failed
 with state FAILED due to: Task failed task_1399733823963_0004_m_00
 Job failed as tasks failed. failedMaps:1 failedReduces:0

 14/05/10 23:56:37 INFO mapreduce.Job: Counters: 10
 Job Counters
 Failed map tasks=7
  Killed map tasks=1
 Launched map tasks=8
 Other local map tasks=6
  Data-local map tasks=2
 Total time spent by all maps in occupied slots (ms)=21602
 

Re: No job can run in YARN (Hadoop-2.2)

2014-05-11 Thread Tao Xiao
This is caused by the properties *mapreduce.map.java.opts* and
*mapreduce.reduce.java.opts*.


2014-05-11 16:16 GMT+08:00 Tao Xiao xiaotao.cs@gmail.com:

 I'm sure this problem is caused by incorrect configuration. I commented
 out all the configuration related to memory, and then jobs could run
 successfully.


 2014-05-11 0:01 GMT+08:00 Tao Xiao xiaotao.cs@gmail.com:

 I installed Hadoop-2.2 in a cluster of 4 nodes, following Hadoop YARN
 Installation: The definitive guide
 (http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide).

 The configurations are as follows:

 ~/.bashrc (http://pastebin.com/zQgwuQv2)
 core-site.xml (http://pastebin.com/rBAaqZps)
 hdfs-site.xml (http://pastebin.com/bxazvp2G)
 mapred-site.xml (http://pastebin.com/N00SsMbz)
 slaves (http://pastebin.com/8VjsZ1uu)
 yarn-site.xml (http://pastebin.com/XwLQZTQb)


 I started NameNode, DataNodes, ResourceManager and NodeManagers
 successfully, but no job can run successfully. For example, I ran the
 following job:

 [root@Single-Hadoop ~]#yarn jar
 /var/soft/apache/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
 pi 2 4

 The output is as follows:

 14/05/10 23:56:25 INFO mapreduce.Job: Task Id :
 attempt_1399733823963_0004_m_00_0, Status : FAILED
 Exception from container-launch:
 org.apache.hadoop.util.Shell$ExitCodeException:
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
  at java.lang.Thread.run(Thread.java:662)



 14/05/10 23:56:25 INFO mapreduce.Job: Task Id :
 attempt_1399733823963_0004_m_01_0, Status : FAILED
 Exception from container-launch:
 org.apache.hadoop.util.Shell$ExitCodeException:
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
  at java.lang.Thread.run(Thread.java:662)

 ... ...


 14/05/10 23:56:36 INFO mapreduce.Job:  map 100% reduce 100%
 14/05/10 23:56:37 INFO mapreduce.Job: Job job_1399733823963_0004 failed
 with state FAILED due to: Task failed task_1399733823963_0004_m_00
 Job failed as tasks failed. failedMaps:1 failedReduces:0

 14/05/10 23:56:37 INFO mapreduce.Job: Counters: 10
 Job Counters
 Failed map tasks=7
  Killed map tasks=1
 Launched map tasks=8
 Other local map tasks=6
  Data-local map tasks=2
 Total time spent by all maps in occupied slots (ms)=21602
 Total time spent by all reduces in occupied slots (ms)=0
  Map-Reduce Framework
 CPU time spent (ms)=0
 Physical memory (bytes) snapshot=0
  Virtual memory (bytes) snapshot=0
 Job Finished in 24.515 seconds
 java.io.FileNotFoundException: File does not exist: hdfs://Single-Hadoop.zd.com/user/root/QuasiMonteCarlo_1399737371038_1022927375/out/reduce-out
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
  at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1749)
  at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1773)
  at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
  at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
  at 

No job can run in YARN (Hadoop-2.2)

2014-05-11 Thread Tao Xiao
I installed Hadoop-2.2 in a cluster of 4 nodes, following Hadoop YARN
Installation: The definitive guide
(http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide).

The configurations are as follows:

~/.bashrc (http://pastebin.com/zQgwuQv2)
core-site.xml (http://pastebin.com/rBAaqZps)
hdfs-site.xml (http://pastebin.com/bxazvp2G)
mapred-site.xml (http://pastebin.com/N00SsMbz)
slaves (http://pastebin.com/8VjsZ1uu)
yarn-site.xml (http://pastebin.com/XwLQZTQb)


I started NameNode, DataNodes, ResourceManager and NodeManagers
successfully, but no job can run successfully. For example, I ran the
following job:

[root@Single-Hadoop ~]#yarn jar
/var/soft/apache/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
pi 2 4

The output is as follows:

14/05/10 23:56:25 INFO mapreduce.Job: Task Id :
attempt_1399733823963_0004_m_00_0, Status : FAILED
Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
  at java.lang.Thread.run(Thread.java:662)



14/05/10 23:56:25 INFO mapreduce.Job: Task Id :
attempt_1399733823963_0004_m_01_0, Status : FAILED
Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
  at java.lang.Thread.run(Thread.java:662)

... ...


14/05/10 23:56:36 INFO mapreduce.Job:  map 100% reduce 100%
14/05/10 23:56:37 INFO mapreduce.Job: Job job_1399733823963_0004 failed
with state FAILED due to: Task failed task_1399733823963_0004_m_00
Job failed as tasks failed. failedMaps:1 failedReduces:0

14/05/10 23:56:37 INFO mapreduce.Job: Counters: 10
Job Counters
Failed map tasks=7
Killed map tasks=1
Launched map tasks=8
Other local map tasks=6
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=21602
Total time spent by all reduces in occupied slots (ms)=0
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Job Finished in 24.515 seconds
java.io.FileNotFoundException: File does not exist: hdfs://Single-Hadoop.zd.com/user/root/QuasiMonteCarlo_1399737371038_1022927375/out/reduce-out
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
  at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1749)
  at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1773)
  at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
  at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
  at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
  at 

Re: No job can run in YARN (Hadoop-2.2)

2014-05-11 Thread Tao Xiao
I'm sure this problem is caused by incorrect configuration. I commented
out all the configuration related to memory, and then jobs could run
successfully.


2014-05-11 0:01 GMT+08:00 Tao Xiao xiaotao.cs@gmail.com:

 I installed Hadoop-2.2 in a cluster of 4 nodes, following Hadoop YARN
 Installation: The definitive guide
 (http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide).

 The configurations are as follows:

 ~/.bashrc (http://pastebin.com/zQgwuQv2)
 core-site.xml (http://pastebin.com/rBAaqZps)
 hdfs-site.xml (http://pastebin.com/bxazvp2G)
 mapred-site.xml (http://pastebin.com/N00SsMbz)
 slaves (http://pastebin.com/8VjsZ1uu)
 yarn-site.xml (http://pastebin.com/XwLQZTQb)


 I started NameNode, DataNodes, ResourceManager and NodeManagers
 successfully, but no job can run successfully. For example, I ran the
 following job:

 [root@Single-Hadoop ~]#yarn jar
 /var/soft/apache/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
 pi 2 4

 The output is as follows:

 14/05/10 23:56:25 INFO mapreduce.Job: Task Id :
 attempt_1399733823963_0004_m_00_0, Status : FAILED
 Exception from container-launch:
 org.apache.hadoop.util.Shell$ExitCodeException:
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
  at java.lang.Thread.run(Thread.java:662)



 14/05/10 23:56:25 INFO mapreduce.Job: Task Id :
 attempt_1399733823963_0004_m_01_0, Status : FAILED
 Exception from container-launch:
 org.apache.hadoop.util.Shell$ExitCodeException:
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
  at java.lang.Thread.run(Thread.java:662)

 ... ...


 14/05/10 23:56:36 INFO mapreduce.Job:  map 100% reduce 100%
 14/05/10 23:56:37 INFO mapreduce.Job: Job job_1399733823963_0004 failed
 with state FAILED due to: Task failed task_1399733823963_0004_m_00
 Job failed as tasks failed. failedMaps:1 failedReduces:0

 14/05/10 23:56:37 INFO mapreduce.Job: Counters: 10
 Job Counters
 Failed map tasks=7
  Killed map tasks=1
 Launched map tasks=8
 Other local map tasks=6
  Data-local map tasks=2
 Total time spent by all maps in occupied slots (ms)=21602
 Total time spent by all reduces in occupied slots (ms)=0
  Map-Reduce Framework
 CPU time spent (ms)=0
 Physical memory (bytes) snapshot=0
  Virtual memory (bytes) snapshot=0
 Job Finished in 24.515 seconds
 java.io.FileNotFoundException: File does not exist: hdfs://Single-Hadoop.zd.com/user/root/QuasiMonteCarlo_1399737371038_1022927375/out/reduce-out
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
  at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1749)
  at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1773)
  at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
  at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
  at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
  at 

Re: No job can run in YARN (Hadoop-2.2)

2014-05-11 Thread Jay Vyas
Sounds odd... So (1) you got a FileNotFoundException and (2) you fixed it by
commenting out memory-specific config parameters?

Not sure how that would work... Any other details or am I missing something 
else?

 On May 11, 2014, at 4:16 AM, Tao Xiao xiaotao.cs@gmail.com wrote:
 
 I'm sure this problem is caused by incorrect configuration. I commented
 out all the configuration related to memory, and then jobs could run successfully.
 
 
 2014-05-11 0:01 GMT+08:00 Tao Xiao xiaotao.cs@gmail.com:
 I installed Hadoop-2.2 in a cluster of 4 nodes, following Hadoop YARN 
 Installation: The definitive guide. 
 
 The configurations are as follows:
 
 ~/.bashrc, core-site.xml, hdfs-site.xml, mapred-site.xml, slaves, yarn-site.xml
 
 
 I started NameNode, DataNodes, ResourceManager and NodeManagers
 successfully, but no job can run successfully. For example, I ran the
 following job:
 
 [root@Single-Hadoop ~]#yarn jar 
 /var/soft/apache/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
  pi 2 4
 
 The output is as follows:
 
 14/05/10 23:56:25 INFO mapreduce.Job: Task Id : 
 attempt_1399733823963_0004_m_00_0, Status : FAILED
 Exception from container-launch: 
 org.apache.hadoop.util.Shell$ExitCodeException: 
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
  at java.lang.Thread.run(Thread.java:662)
 
 
 
 14/05/10 23:56:25 INFO mapreduce.Job: Task Id : 
 attempt_1399733823963_0004_m_01_0, Status : FAILED
 Exception from container-launch: 
 org.apache.hadoop.util.Shell$ExitCodeException: 
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
  at java.lang.Thread.run(Thread.java:662)
 
 ... ...
 
 
 14/05/10 23:56:36 INFO mapreduce.Job:  map 100% reduce 100%
 14/05/10 23:56:37 INFO mapreduce.Job: Job job_1399733823963_0004 failed with 
 state FAILED due to: Task failed task_1399733823963_0004_m_00
 Job failed as tasks failed. failedMaps:1 failedReduces:0
 
 14/05/10 23:56:37 INFO mapreduce.Job: Counters: 10
  Job Counters 
  Failed map tasks=7
  Killed map tasks=1
  Launched map tasks=8
  Other local map tasks=6
  Data-local map tasks=2
  Total time spent by all maps in occupied slots (ms)=21602
  Total time spent by all reduces in occupied slots (ms)=0
  Map-Reduce Framework
  CPU time spent (ms)=0
  Physical memory (bytes) snapshot=0
  Virtual memory (bytes) snapshot=0
 Job Finished in 24.515 seconds
 java.io.FileNotFoundException: File does not exist: hdfs://Single-Hadoop.zd.com/user/root/QuasiMonteCarlo_1399737371038_1022927375/out/reduce-out
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
  at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1749)
  at