Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.

2014-08-05 Thread S.L
There is no entry there.

Sent from my HTC

- Reply message -
From: "hadoop hive" 
To: 
Subject: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 
cluster.
Date: Tue, Aug 5, 2014 6:36 AM

Remove the entry from dfs.exclude if there is any
On Aug 4, 2014 3:28 AM, "S.L"  wrote:

> [snip]

Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.

2014-08-05 Thread hadoop hive
Remove the entry from dfs.exclude if there is any
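
For reference, a sketch of how that exclude list is usually wired up, with an
illustrative file path; dfs.hosts.exclude points hdfs-site.xml at a plain-text
file listing one excluded hostname per line:

<property>
  <name>dfs.hosts.exclude</name>
  <!-- illustrative path; the named file holds one hostname per line -->
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>

After removing a hostname from that file, make the namenode re-read it:

hdfs dfsadmin -refreshNodes
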
On Aug 4, 2014 3:28 AM, "S.L"  wrote:

> [snip]


Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.

2014-08-05 Thread Wellington Chevreuil
You should have /etc/hosts properly configured on all your cluster nodes.
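
For illustration, every node (master and slaves) would need /etc/hosts entries
along these lines; the namenode address comes from the log in the original
message, and the slave addresses are placeholders:

127.0.0.1       localhost localhost.localdomain
170.75.152.162  server1.mydomain.com  server1   # namenode (the log shows this
                                                # address as server1.dealyaft.com)
192.168.1.11    slave1.mydomain.com   slave1    # placeholder slave entries
192.168.1.12    slave2.mydomain.com   slave2
192.168.1.13    slave3.mydomain.com   slave3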

On 5 Aug 2014, at 07:28, S.L  wrote:

> When you say the /etc/hosts file, do you mean only on the master or on both
> the master and slaves?
> 
> 
> 
> 
> On Tue, Aug 5, 2014 at 1:20 AM, Satyam Singh  
> wrote:
> [snip]



Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.

2014-08-04 Thread S.L
When you say the /etc/hosts file, do you mean only on the master or on both
the master and slaves?




On Tue, Aug 5, 2014 at 1:20 AM, Satyam Singh 
wrote:

> [snip]


Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.

2014-08-04 Thread Satyam Singh
You have not put the namenode URI's hostname in the /etc/hosts file, so it
can't be resolved to an IP address; your namenode would not have started
either. The preferable practice is to start your cluster through the
start-dfs.sh command; it implicitly starts the namenode first and then all
its datanodes.

Also make sure you have given IP addresses in the slaves file; if not, then
also make entries for the hostnames in the /etc/hosts file.
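
For example, assuming a standard tarball layout under $HADOOP_HOME:

$HADOOP_HOME/sbin/stop-dfs.sh    # stop any individually started daemons first
$HADOOP_HOME/sbin/start-dfs.sh   # starts the namenode, then every datanode
                                 # listed in etc/hadoop/slaves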



BR,
Satyam

On 08/05/2014 12:21 AM, S.L wrote:


The contents are

127.0.0.1   localhost localhost.localdomain localhost4 
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 
localhost6.localdomain6




[snip]







Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.

2014-08-04 Thread S.L
The contents are

127.0.0.1   localhost localhost.localdomain localhost4
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6
localhost6.localdomain6



On Sun, Aug 3, 2014 at 11:21 PM, Ritesh Kumar Singh <
riteshoneinamill...@gmail.com> wrote:

> check the contents of '/etc/hosts' file
>
>
> On Mon, Aug 4, 2014 at 3:27 AM, S.L  wrote:
>
>> [snip]


Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.

2014-08-03 Thread Ritesh Kumar Singh
check the contents of '/etc/hosts' file
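
A quick way to do that check on each node; server1.mydomain.com here is the
namenode hostname from core-site.xml, and the commands are illustrative:

cat /etc/hosts                        # look for namenode and slave entries
hostname -f                           # the FQDN this datanode will report
getent hosts server1.mydomain.com     # does the namenode hostname resolve?
getent hosts "$(hostname -f)"         # does this node's own name resolve?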


On Mon, Aug 4, 2014 at 3:27 AM, S.L  wrote:

> [snip]


Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.

2014-08-03 Thread S.L
Hi All,

I am trying to set up an Apache Hadoop 2.3.0 cluster. I have a master and
three slave nodes; the slave nodes are listed in the
$HADOOP_HOME/etc/hadoop/slaves file, and I can telnet from the slaves to the
master namenode on port 9000. However, when I start the datanode on any of
the slaves I get the following exception.

2014-08-03 08:04:27,952 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
block pool Block pool BP-1086620743-170.75.152.162-1407064313305 (Datanode
Uuid null) service to server1.dealyaft.com/170.75.152.162:9000
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException):
Datanode denied communication with namenode because hostname cannot be
resolved .

The following are the contents of my core-site.xml.



<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://server1.mydomain.com:9000</value>
  </property>
</configuration>



Also, in my hdfs-site.xml I am not setting any value for the dfs.hosts or
dfs.hosts.exclude properties.

Thanks.