Netflix Genie is what we use for submitting jobs.

William Watson
Software Engineer
(904) 705-7056 PCS
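
If you want plain remote submission without Genie, the submitting machine's Hadoop client config just has to point at the cluster. A minimal sketch — `master` is a placeholder hostname, not from this thread, and the NameNode port may differ on your install:

```xml
<!-- core-site.xml on the *client* machine: point HDFS at the cluster.
     "master" and port 9000 are assumptions; use your NameNode address. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

<!-- yarn-site.xml on the *client* machine: point at the ResourceManager. -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
</configuration>
```

With those in place, `hadoop jar myjob.jar ...` run on the client submits to the remote cluster; the client also needs network access to the NameNode, ResourceManager, and DataNode ports.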

On Mon, May 18, 2015 at 7:07 AM, xeonmailinglist-gmail <
xeonmailingl...@gmail.com> wrote:

>  I also can't find a good site/book that explains well how to submit
> remote jobs. Does anyone know where I can get more useful info?
>
>
>
> -------- Forwarded Message --------
> Subject: can't submit remote job
> Date: Mon, 18 May 2015 11:54:56 +0100
> From: xeonmailinglist-gmail <xeonmailingl...@gmail.com>
> To: user@hadoop.apache.org
>
>  Hi,
>
> I am trying to submit a remote job to YARN MapReduce, but I can't because
> I get the error [1]. There are no other exceptions in the other logs.
>
> My MapReduce runtime has 1 *ResourceManager* and 3 *NodeManagers*, and
> HDFS is running properly (all nodes are alive).
>
> I have looked at all the logs, and I still don't understand why I get this
> error. Any help fixing this? Is it a problem with the remote job that I am
> submitting?
>
> [1]
>
> $ less logs/hadoop-ubuntu-namenode-ip-172-31-17-45.log
>
> 2015-05-18 10:42:16,570 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameNode.addBlock: file /tmp/hadoop-yarn/staging/xeon/.staging/job_1431945660897_0001/job.split fileId=16394 for DFSClient_NONMAPREDUCE_-1923902075_14
> 2015-05-18 10:42:16,570 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getAdditionalBlock: /tmp/hadoop-yarn/staging/xeon/.staging/job_1431945660897_0001/job.split inodeId 16394 for DFSClient_NONMAPREDUCE_-1923902075_14
> 2015-05-18 10:42:16,571 DEBUG org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to choose remote rack (location = ~/default-rack), fallback to local rack
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:691)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:580)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:348)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:214)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:111)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:126)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1545)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
>
>
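
On the NotEnoughReplicasException in [1]: it typically means the NameNode could not find enough eligible DataNodes for the requested replication — e.g. nodes dead or excluded, disks full, or a replication factor larger than the number of usable nodes. A quick check, assuming the standard `hdfs` CLI is on the cluster's path:

```shell
# Show live/dead DataNodes and remaining capacity per node.
hdfs dfsadmin -report

# Effective replication factor requested for new files; block placement
# fails when this exceeds the number of usable DataNodes.
hdfs getconf -confKey dfs.replication
```

If `dfs.replication` is higher than the count of live DataNodes with free space, either lower it or bring the missing nodes back.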
