Remove the entry from dfs.exclude, if there is one.
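
If dfs.hosts.exclude is set anywhere in your hdfs-site.xml, it names a
plain-text file of hosts the namenode refuses to register. A minimal sketch
of what that entry looks like (the file path here is illustrative, not your
actual setting):

    <property>
        <!-- plain-text file, one excluded hostname per line -->
        <name>dfs.hosts.exclude</name>
        <value>/etc/hadoop/conf/dfs.exclude</value>
    </property>

After deleting the datanode's hostname from that file, make the namenode
re-read it:

    hdfs dfsadmin -refreshNodes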
On Aug 4, 2014 3:28 AM, "S.L" <simpleliving...@gmail.com> wrote:

> Hi All,
>
> I am trying to set up an Apache Hadoop 2.3.0 cluster with a master and
> three slave nodes. The slave nodes are listed in the
> $HADOOP_HOME/etc/hadoop/slaves file, and I can telnet from the slaves to
> the master NameNode on port 9000. However, when I start the datanode on
> any of the slaves I get the following exception.
>
> 2014-08-03 08:04:27,952 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool BP-1086620743-170.75.152.162-1407064313305 (Datanode
> Uuid null) service to server1.dealyaft.com/170.75.152.162:9000
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException):
> Datanode denied communication with namenode because hostname cannot be
> resolved.
>
> The following are the contents of my core-site.xml.
>
> <configuration>
>     <property>
>         <name>fs.default.name</name>
>         <value>hdfs://server1.mydomain.com:9000</value>
>     </property>
> </configuration>
>
> Also, in my hdfs-site.xml I am not setting any value for the dfs.hosts or
> dfs.hosts.exclude properties.
>
> Thanks.
>
