Shahab, I think so, but Hadoop's site says |The user@ mailing list is the preferred mailing list for end-user questions and discussion.| So I am using the right mailing list.

Back to my problem: I think this is an HDFS permissions issue. But the strangest thing is that I have already disabled permission checking in |hdfs-site.xml| [1].

I think this error happens when MapReduce tries to write the job configuration files (the staging files under |/tmp/hadoop-yarn/staging|) to HDFS.

I have set up the username of the remote client on the NameNode host using the commands in [2].

Now I am looking at Netflix Genie to figure out how they do it, but I still haven't found a working way to submit a remote job from Java (a minimal sketch of what I am attempting is after [2] below). If anyone has a hint or advice, please tell me. I really don't understand why I get this error.

[1]

|$ cat etc/hadoop/hdfs-site.xml

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
|
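
One thing I am not sure about: if I remember correctly, in Hadoop 2.x this property was renamed to |dfs.permissions.enabled| (the old name is only kept as a deprecated alias), so I may also need something like this (just an assumption about my version):

|<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
|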

[2]

|On the NameNode host:

$ sudo adduser xeon
$ sudo adduser xeon ubuntu
|
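
For reference, this is roughly what I am attempting on the client side. It is only a minimal sketch: the NameNode/ResourceManager host names, the ports, and the input/output paths are placeholders for my setup, and the mapper/reducer are just the identity classes.

|import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.security.UserGroupInformation;

public class RemoteSubmit {
  public static void main(String[] args) throws Exception {
    // Placeholder addresses: replace with the real NameNode / ResourceManager hosts and ports.
    final Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode-host:9000");
    conf.set("mapreduce.framework.name", "yarn");
    conf.set("yarn.resourcemanager.address", "resourcemanager-host:8032");

    // Run the submission as the remote user "xeon" (the account created on the cluster in [2]).
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser("xeon");
    ugi.doAs(new PrivilegedExceptionAction<Void>() {
      public Void run() throws Exception {
        Job job = Job.getInstance(conf, "remote-test");
        job.setJarByClass(RemoteSubmit.class);
        job.setMapperClass(Mapper.class);     // identity mapper, just to test submission
        job.setReducerClass(Reducer.class);   // identity reducer
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path("/user/xeon/input"));    // placeholder path
        FileOutputFormat.setOutputPath(job, new Path("/user/xeon/output")); // placeholder path
        job.waitForCompletion(true);
        return null;
      }
    });
  }
}
|

If I understand correctly, when security is disabled, exporting |HADOOP_USER_NAME=xeon| on the client machine should have a similar effect to the |createRemoteUser| call.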

On 05/18/2015 02:46 PM, Shahab Yunus wrote:

I think the poster wanted to unsubscribe from the mailing list?

Gopy, if that is the case, then please see: https://hadoop.apache.org/mailing_lists.html

Regards,
Shahab

On Mon, May 18, 2015 at 9:42 AM, xeonmailinglist-gmail <xeonmailingl...@gmail.com <mailto:xeonmailingl...@gmail.com>> wrote:

    Why "Remove"?


    On 05/18/2015 02:25 PM, Gopy Krishna wrote:
    REMOVE

    On Mon, May 18, 2015 at 6:54 AM, xeonmailinglist-gmail
    <xeonmailingl...@gmail.com <mailto:xeonmailingl...@gmail.com>> wrote:

        Hi,

        I am trying to submit a remote job to YARN MapReduce, but I
        can't because I get the error in [1]. There are no other
        exceptions in the other logs.

        My MapReduce runtime has 1 /ResourceManager/ and 3
        /NodeManagers/, and HDFS is running properly (all nodes
        are alive).

        I have looked at all the logs, and I still don't understand why
        I get this error. Any help to fix this? Is it a problem with the
        remote job that I am submitting?

        [1]

        |$ less logs/hadoop-ubuntu-namenode-ip-172-31-17-45.log

        2015-05-18 10:42:16,570 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameNode.addBlock: file /tmp/hadoop-yarn/staging/xeon/.staging/job_1431945660897_0001/job.split fileId=16394 for DFSClient_NONMAPREDUCE_-1923902075_14
        2015-05-18 10:42:16,570 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getAdditionalBlock: /tmp/hadoop-yarn/staging/xeon/.staging/job_1431945660897_0001/job.split inodeId 16394 for DFSClient_NONMAPREDUCE_-1923902075_14
        2015-05-18 10:42:16,571 DEBUG org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to choose remote rack (location = ~/default-rack), fallback to local rack
        org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
                at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:691)
                at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:580)
                at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:348)
                at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:214)
                at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:111)
                at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:126)
                at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1545)
                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
                at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
        |

    --
    Thanks & Regards
    Gopy
    Rapisource LLC
    Direct: 732-419-9663 <tel:732-419-9663>
    Fax: 512-287-4047 <tel:512-287-4047>
    Email: g...@rapisource.com <mailto:g...@rapisource.com>
    www.rapisource.com <http://www.rapisource.com>
    http://www.linkedin.com/in/gopykrishna

    According to Bill S.1618 Title III passed by the 105th US
    Congress, this message is not considered as "Spam" as we have
    included the contact information. If you wish to be removed from
    our mailing list, please respond with "remove" in the subject
    field. We apologize for any inconvenience caused.

