Hey,

Can we access the NameNode's HDFS from our slave machines?

I am just running the command "hadoop dfs -ls" on my slave machine (which
runs a TaskTracker and a DataNode), and it's giving me the following output:

hadoop@ub12:~$ hadoop dfs -ls
11/05/05 18:31:54 INFO ipc.Client: Retrying connect to server: ub13/
162.192.100.53:54310. Already tried 0 time(s).
11/05/05 18:31:55 INFO ipc.Client: Retrying connect to server: ub13/
162.192.100.53:54310. Already tried 1 time(s).
11/05/05 18:31:56 INFO ipc.Client: Retrying connect to server: ub13/
162.192.100.53:54310. Already tried 2 time(s).
11/05/05 18:31:57 INFO ipc.Client: Retrying connect to server: ub13/
162.192.100.53:54310. Already tried 3 time(s).
11/05/05 18:31:58 INFO ipc.Client: Retrying connect to server: ub13/
162.192.100.53:54310. Already tried 4 time(s).
11/05/05 18:31:59 INFO ipc.Client: Retrying connect to server: ub13/
162.192.100.53:54310. Already tried 5 time(s).
11/05/05 18:32:00 INFO ipc.Client: Retrying connect to server: ub13/
162.192.100.53:54310. Already tried 6 time(s).
11/05/05 18:32:01 INFO ipc.Client: Retrying connect to server: ub13/
162.192.100.53:54310. Already tried 7 time(s).
11/05/05 18:32:02 INFO ipc.Client: Retrying connect to server: ub13/
162.192.100.53:54310. Already tried 8 time(s).
11/05/05 18:32:03 INFO ipc.Client: Retrying connect to server: ub13/
162.192.100.53:54310. Already tried 9 time(s).
Bad connection to FS. command aborted.
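(For context: as I understand it, the slave finds the NameNode through
fs.default.name in conf/core-site.xml, so that value should match on every
node. A minimal sketch of what I expect mine to look like, assuming the
host/port from the log above, ub13:54310:)

```xml
<!-- conf/core-site.xml on the slave: fs.default.name should point at the
     NameNode's RPC address (ub13:54310 per the retry log), not localhost. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ub13:54310</value>
  </property>
</configuration>
```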
I just restarted my master node (and ran start-all.sh).

The output on my master node is:

hadoop@ub13:/usr/local/hadoop$ start-all.sh
starting namenode, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-ub13.out
ub11: starting datanode, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-ub11.out
ub10: starting datanode, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-ub10.out
ub12: starting datanode, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-ub12.out
ub13: starting datanode, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-ub13.out
ub13: starting secondarynamenode, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-ub13.out
starting jobtracker, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-ub13.out
ub10: starting tasktracker, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-ub10.out
ub11: starting tasktracker, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-ub11.out
ub12: starting tasktracker, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-ub12.out
ub13: starting tasktracker, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-ub13.out
hadoop@ub13:/usr/local/hadoop$ jps
6471 NameNode
7070 Jps
6875 JobTracker
6632 DataNode
7030 TaskTracker
6795 SecondaryNameNode
Thanks,
Praveenesh
