If you change the hostname, you must also update /etc/hosts and
/etc/sysconfig/network.

for example:

-bash-3.00$ more /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.102.205 hadoop205.test.com

-bash-3.00$ more /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop205.test.com
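Editing the file alone does not rename the running system; a small sketch of reading the value back to confirm the edit (the extract_hostname helper and the /tmp example path are illustrative, not part of any standard tool):

```shell
# Sketch: pull the HOSTNAME= value back out of a
# /etc/sysconfig/network-style file to confirm the edit.
# extract_hostname and /tmp/network.example are illustrative only.
extract_hostname() {
  sed -n 's/^HOSTNAME=//p' "$1"
}

# Example file with the contents shown above
cat > /tmp/network.example <<'EOF'
NETWORKING=yes
HOSTNAME=hadoop205.test.com
EOF

extract_hostname /tmp/network.example   # prints hadoop205.test.com

# To apply the name to the running kernel (requires root):
#   hostname hadoop205.test.com
```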


If you want to resolve other hosts, you must add an IP address/hostname
pair for each of them, for example:
-bash-3.00$ more /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.102.205 hadoop205.test.com
192.168.102.206 slave.test.com
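Before starting the daemons it is worth confirming that every cluster hostname really appears in the hosts file. A sketch (check_host and the /tmp example path are mine; on a live node point HOSTS_FILE at the real /etc/hosts):

```shell
# Sketch: verify that required cluster hostnames appear in a hosts file.
# check_host and /tmp/hosts.example are illustrative; use /etc/hosts
# on a real node.
HOSTS_FILE=/tmp/hosts.example
cat > "$HOSTS_FILE" <<'EOF'
127.0.0.1 localhost.localdomain localhost
192.168.102.205 hadoop205.test.com
192.168.102.206 slave.test.com
EOF

check_host() {
  # Match the name preceded by whitespace, followed by whitespace or EOL
  grep -q "[[:space:]]$1\([[:space:]]\|$\)" "$HOSTS_FILE"
}

for h in hadoop205.test.com slave.test.com; do
  check_host "$h" && echo "$h: ok" || echo "$h: MISSING"
done
```

With the entries above, both names report "ok"; a "MISSING" line means the master and that slave will not resolve each other by name.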

David Wei wrote:
> Dear all,
>
> I had configured all the nodes(master/slaves) with the correct hostname
> and all the slaves can be reached with hostname from master, and vice versa.
>
> But in my hadoop-site.xml file, if I configure master's
> "fs.default.name" and "mapred.job.tracker" with hostname, e.g.
> datacenter5:9000 and datacenter5:9001. All the slaves will not be able
> to connect to master:
>
> ************************************************************/
> 9 2008-10-17 14:58:10,940 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: datacenter5/192.168.52.129:9000. Already tried 0 time(s).
> 10 2008-10-17 14:58:11,943 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: datacenter5/192.168.52.129:9000. Already tried 1 time(s).
>
> If changed the settings with IP, e.g. 192.168.52.129. All the slaves
> could be mounted, but when you try to run something, you will get
> following exceptions:
> FAILED
> Error initializing attempt_200810170708_0003_m_000000_0:
> java.lang.IllegalArgumentException: Wrong FS:
> hdfs://192.168.52.129:9000/tmp/hadoop-root/mapred/system/job_200810170708_0003/job.xml,
> expected: hdfs://datacenter5:9000
>
> Can anybody help?
>
> Thx!
>
> David
>
>


