Hi,

I am trying to submit a remote job to YARN MapReduce, but it fails with the error below [1]. I don’t see any other exceptions in the other logs.

My MapReduce runtime has 1 /ResourceManager/ and 3 /NodeManagers/, and HDFS is running properly (all nodes are alive).

I have looked through all the logs, and I still don’t understand why I get this error. Any help fixing this? Is it a problem with the remote job that I am submitting?
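
In case it helps, the client code I use to submit the remote job is roughly like the sketch below (simplified; the hostnames, ports, and input/output paths are placeholders, not the real values):

|import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RemoteSubmit {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the remote cluster (placeholder addresses).
        conf.set("fs.defaultFS", "hdfs://namenode-host:9000");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.hostname", "resourcemanager-host");

        // Identity map/reduce is enough to reproduce the problem: the client
        // still has to write job.split and job.jar into the HDFS staging dir.
        Job job = Job.getInstance(conf, "remote-submit-test");
        job.setJarByClass(RemoteSubmit.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path("/user/xeon/input"));
        FileOutputFormat.setOutputPath(job, new Path("/user/xeon/output"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
|

The failure seems to happen while the job client is copying the staging files (job.split) into HDFS, which matches the NameNode log in [1].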

[1]

|$ less logs/hadoop-ubuntu-namenode-ip-172-31-17-45.log

2015-05-18 10:42:16,570 DEBUG org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.addBlock: file /tmp/hadoop-yarn/staging/xeon/.staging/job_1431945660897_0001/job.split fileId=16394 for DFSClient_NONMAPREDUCE_-1923902075_14
2015-05-18 10:42:16,570 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getAdditionalBlock: /tmp/hadoop-yarn/staging/xeon/.staging/job_1431945660897_0001/job.split inodeId 16394 for DFSClient_NONMAPREDUCE_-1923902075_14
2015-05-18 10:42:16,571 DEBUG org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to choose remote rack (location = ~/default-rack), fallback to local rack
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:691)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:580)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:348)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:214)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:111)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:126)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1545)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
|

