Re: Hadoop shows only one live datanode

2018-12-24 Thread Akira Ajisaka
Hi Jérémy,

Would you set "dfs.namenode.rpc-address" to "master:9000" in
hdfs-site.xml? The NameNode RPC address is "localhost:8020" by default
and that's why only the DataNode running on master is registered.
DataNodes running on slave1/slave2 want to connect to "localhost:8020"
and cannot find the NameNode because NameNode is not running on slave1
or slave2.
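
For reference, a minimal hdfs-site.xml fragment with that setting might look
like this (a sketch, assuming the hostname "master" resolves from all three
nodes):

```xml
<property>
  <!-- Bind the NameNode RPC endpoint to the master host so DataNodes on
       slave1/slave2 can reach it; the default is effectively localhost:8020,
       which only works on the master itself. -->
  <name>dfs.namenode.rpc-address</name>
  <value>master:9000</value>
</property>
```

Restart HDFS after changing this so the DataNodes re-register against the new
address.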

-Akira

2018年12月24日(月) 0:13 Jérémy C :
>
> Hello everyone,
>
>
> I installed Hadoop 3.1.1 on 3 virtual machines with VMware on Ubuntu. When I 
> run hdfs namenode -format and start-all.sh, jps shows the expected processes 
> on my master and both slave nodes.
>
> However, with the command hdfs dfsadmin -report, I can see only one live data 
> node (I get the same result when I check on master:50070 or 8088).
>
>
> I tried to disable the firewall (ufw disable), but it didn't solve the 
> problem. The 3 machines can connect to each other (without a password) via 
> ping and ssh. I also deleted the hadoop tmp folder with its datanode and 
> namenode folders, but that didn't help either. The log files show no 
> errors.
>
>
> Do you have any solutions to get three live datanodes instead of one? Thanks.
>
>
> You will find attached my configurations files.
>
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: user-h...@hadoop.apache.org




Re: Hadoop shows only one live datanode

2018-12-29 Thread Gurmukh Singh

Your core-site.xml is wrong.

The property is "fs.defaultFS", not "fs.default.FS".

Also remove the trailing "/" after the port.



<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000/</value>
</property>
<property>
  <name>fs.default.FS</name>
  <value>hdfs://master:9000/</value>
</property>


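With those two fixes applied, the core-site.xml entry would read something like
this (a sketch; fs.default.name is only the deprecated alias of fs.defaultFS,
so a single property is enough):

```xml
<property>
  <!-- Correct property name, no trailing slash after the port -->
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>
```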
Also, you are running YARN, so you do not need the following:

 
<property>
  <name>mapreduce.job.tracker</name>
  <value>master:5431</value>
</property>
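mapreduce.job.tracker was a Hadoop 1.x (JobTracker) property and is ignored
under YARN. If you want to be explicit about the framework, the relevant
mapred-site.xml setting is mapreduce.framework.name:

```xml
<property>
  <!-- Tell MapReduce jobs to run on YARN (the Hadoop 2.x/3.x default path) -->
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```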

On 24/12/18 1:07 am, Jérémy C wrote:


 
 <property>
   <name>fs.default.name</name>
   <value>hdfs://master:9000/</value>
 </property>
 <property>
   <name>fs.default.FS</name>
   <value>hdfs://master:9000/</value>
 </property>
 


