Hello,
We configured Hadoop successfully, but after some days the configuration file
on the datanode (hadoop-site.xml) went missing and the datanode stopped coming
up. We redid the same configuration, and now the Hadoop web UI shows one
datanode, but its name is listed as localhost rather than, as expected, the
hostname or IP address of the actual datanode machine.
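
For reference, the hadoop-site.xml we put back on the datanode looks roughly
like the following. The directory paths are only placeholders (not our exact
paths), and the namenode address and port are the ones that appear in the log
output further down:

    <?xml version="1.0"?>
    <configuration>
      <!-- Namenode address and IPC port the datanode connects to -->
      <property>
        <name>fs.default.name</name>
        <value>hdfs://172.16.6.102:21011</value>
      </property>
      <!-- Local directory for namenode metadata (placeholder path) -->
      <property>
        <name>dfs.name.dir</name>
        <value>/home/hadoop/dfs/name</value>
      </property>
      <!-- Local directory where the datanode stores blocks (placeholder path) -->
      <property>
        <name>dfs.data.dir</name>
        <value>/home/hadoop/dfs/data</value>
      </property>
    </configuration>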

The capacity shows as 80.0 GB (we have one namenode with 40 GB and one
datanode with 40 GB), so the capacity is updated. We can browse the
filesystem, and it shows whatever directories we create from the namenode.

But when we try to access the same filesystem from the datanode machine,
i.e. by doing ssh to it and executing a series of commands, it is not able to
connect to the server and keeps retrying:

09/03/26 11:25:11 INFO ipc.Client: Retrying connect to server: /172.16.6.102:21011. Already tried 0 time(s).

09/03/26 11:25:11 INFO ipc.Client: Retrying connect to server: /172.16.6.102:21011. Already tried 1 time(s).
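
To be concrete, the "series of commands" is just the usual HDFS shell run
from the Hadoop install directory on the datanode machine; hadoop fs -ls is
only one example, and any command that has to contact the namenode fails the
same way:

    # on the datanode machine, from the Hadoop install directory
    bin/hadoop fs -ls /
    # -> produces the "Retrying connect to server" messages shown above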


Moreover, we added one more datanode to the cluster and formatted the
namenode, but that datanode is not getting added. We do not understand what
the problem is.
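
Roughly, the steps we followed to add the new datanode were the standard ones
(the hostname below is illustrative, not the real one):

    # on the namenode machine, from the Hadoop install directory
    echo "new-datanode-hostname" >> conf/slaves
    bin/stop-dfs.sh
    bin/hadoop namenode -format    # the format step mentioned above
    bin/start-dfs.sh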

Can the configuration files on a datanode get lost automatically after some
days?

I also have one doubt. According to my understanding, the namenode does not
store any data; it stores the metadata for all the data. So when I execute
mkdir on the namenode machine and copy some files in, that data actually gets
stored on the datanode attached to it. Please correct me if I am wrong; I am
very new to Hadoop.
So if I am able to view the data through the interface, it means the data is
being stored properly on the respective datanode. Why, then, is it showing
localhost as the datanode name rather than the actual datanode name?
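
In case it helps to answer this, I assume the way to see which datanodes the
namenode has actually registered, and under which names, is something like
the following (run on the namenode machine; I have not pasted our actual
output here):

    bin/hadoop dfsadmin -report
    # lists each registered datanode with its name/address and capacity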

Can you please help?


Regards,
Snehal Nagmote
IIIT Hyderabad
