Hi All,

I have set up a single-node cluster (release hadoop-1.0.4). Following is the 
configuration used -

core-site.xml :-

<property>
     <name>fs.default.name</name>
     <value>hdfs://localhost:54310</value>
</property>

masters:-
localhost

slaves:-
localhost

I am able to successfully format the NameNode and perform file system 
operations by running the CLIs on the NameNode machine.

But I am receiving the following error when I try to access HDFS from a remote 
machine -

$ bin/hadoop fs -ls /
Warning: $HADOOP_HOME is deprecated.

13/04/08 07:13:56 INFO ipc.Client: Retrying connect to server: 
10.209.10.206/10.209.10.206:54310. Already tried 0 time(s).
13/04/08 07:13:57 INFO ipc.Client: Retrying connect to server: 
10.209.10.206/10.209.10.206:54310. Already tried 1 time(s).
13/04/08 07:13:58 INFO ipc.Client: Retrying connect to server: 
10.209.10.206/10.209.10.206:54310. Already tried 2 time(s).
13/04/08 07:13:59 INFO ipc.Client: Retrying connect to server: 
10.209.10.206/10.209.10.206:54310. Already tried 3 time(s).
13/04/08 07:14:00 INFO ipc.Client: Retrying connect to server: 
10.209.10.206/10.209.10.206:54310. Already tried 4 time(s).
13/04/08 07:14:01 INFO ipc.Client: Retrying connect to server: 
10.209.10.206/10.209.10.206:54310. Already tried 5 time(s).
13/04/08 07:14:02 INFO ipc.Client: Retrying connect to server: 
10.209.10.206/10.209.10.206:54310. Already tried 6 time(s).
13/04/08 07:14:03 INFO ipc.Client: Retrying connect to server: 
10.209.10.206/10.209.10.206:54310. Already tried 7 time(s).
13/04/08 07:14:04 INFO ipc.Client: Retrying connect to server: 
10.209.10.206/10.209.10.206:54310. Already tried 8 time(s).
13/04/08 07:14:05 INFO ipc.Client: Retrying connect to server: 
10.209.10.206/10.209.10.206:54310. Already tried 9 time(s).
Bad connection to FS. command aborted. exception: Call to 
10.209.10.206/10.209.10.206:54310 failed on connection exception: 
java.net.ConnectException: Connection refused

Here 10.209.10.206 is the IP of the server hosting the NameNode, and it is 
also the configured value for "fs.default.name" in the core-site.xml file on 
the remote machine.
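
For clarity, the property on the remote machine's core-site.xml therefore 
looks like this (same property as above, but pointing at the NameNode's 
address rather than localhost):

<property>
     <name>fs.default.name</name>
     <value>hdfs://10.209.10.206:54310</value>
</property>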

Executing 'bin/hadoop fs -fs hdfs://10.209.10.206:54310 -ls /' also results in 
the same output.

Also, I am writing a C application that uses libhdfs to communicate with HDFS. 
How do we provide credentials when connecting to HDFS?
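
What I have so far is roughly the following sketch (assuming the 
three-argument hdfsConnectAsUser() of the 1.x libhdfs API, and with 
"hadoopuser" as a placeholder username - as I understand it, in Hadoop 1.x 
without Kerberos this username is asserted by the client rather than 
authenticated, which is why I am unsure how real credentials fit in):

```c
#include <stdio.h>
#include "hdfs.h"  /* libhdfs header shipped with the Hadoop distribution */

int main(void) {
    /* Connect to the NameNode as a named user. Without Kerberos,
       Hadoop 1.x simply trusts the supplied username. */
    hdfsFS fs = hdfsConnectAsUser("10.209.10.206", 54310, "hadoopuser");
    if (fs == NULL) {
        fprintf(stderr, "failed to connect to HDFS\n");
        return 1;
    }

    /* ... perform file system operations on fs ... */

    hdfsDisconnect(fs);
    return 0;
}
```

Building this requires linking against libhdfs and a JVM 
(e.g. -lhdfs -ljvm), so it only runs where Hadoop's native library is 
installed.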

Thanks
Saurabh

