Hi,

The documentation (1) suggests setting the `dfs.namenode.rpc-address.ns1` property to `hdfs://nn-host1:rpc-port` in its example. Changing the value to `nn-host1:rpc-port` (removing the `hdfs://` prefix) solved the problem. The document needs to be updated.
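For reference, this is the shape of the property that worked for me (the host name and port below are from my own setup; substitute your own):

```
<!-- hdfs-site.xml: the RPC address for nameservice ns1 must be a bare
     host:port pair, with no hdfs:// scheme prefix -->
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>nn-host1:9001</value>
</property>
```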
(1) - http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/Federation.html

Praveen

On Wed, Jan 11, 2012 at 3:40 PM, Praveen Sripati <praveensrip...@gmail.com> wrote:

> Hi,
>
> I got the latest code to see if any bugs were fixed and tried federation
> with the same configuration, but got a similar exception.
>
> 2012-01-11 15:25:35,321 ERROR namenode.NameNode (NameNode.java:main(803))
> - Exception in namenode join
> java.io.IOException: Failed on local exception: java.net.SocketException:
> Unresolved address; Host Details : local host is: "hdfs"; destination host
> is: "(unknown):0;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:895)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:231)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:313)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1600)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
>         at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:322)
>         at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:282)
>         at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:145)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:356)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:334)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:458)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:450)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:799)
> Caused by: java.net.SocketException: Unresolved address
>         at sun.nio.ch.Net.translateToSocketException(Net.java:58)
>         at sun.nio.ch.Net.translateException(Net.java:84)
>         at sun.nio.ch.Net.translateException(Net.java:90)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:61)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:229)
>         ... 14 more
> Caused by: java.nio.channels.UnresolvedAddressException
>         at sun.nio.ch.Net.checkAddress(Net.java:30)
>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:122)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>         ... 15 more
>
> Regards,
> Praveen
>
> On Wed, Jan 11, 2012 at 12:24 PM, Praveen Sripati <praveensrip...@gmail.com> wrote:
>
>> Hi,
>>
>> I am trying to set up HDFS federation and am getting the error below. I have
>> also pasted core-site.xml and hdfs-site.xml at the bottom of the mail. Did I
>> miss something in the configuration files?
>>
>> 2012-01-11 12:12:15,759 ERROR namenode.NameNode (NameNode.java:main(803))
>> - Exception in namenode join
>> java.lang.IllegalArgumentException: Can't parse port ''
>>         at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:198)
>>         at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:174)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:205)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:266)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:317)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:329)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:458)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:450)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:799)
>>
>> *core-site.xml*
>>
>> <?xml version="1.0"?>
>> <configuration>
>>   <property>
>>     <name>hadoop.tmp.dir</name>
>>     <value>/home/praveensripati/tmp/hadoop-0.23.0/tmp</value>
>>   </property>
>> </configuration>
>>
>> *hdfs-site.xml*
>>
>> <?xml version="1.0"?>
>> <configuration>
>>   <property>
>>     <name>dfs.replication</name>
>>     <value>1</value>
>>   </property>
>>   <property>
>>     <name>dfs.permissions</name>
>>     <value>false</value>
>>   </property>
>>   <property>
>>     <name>dfs.federation.nameservices</name>
>>     <value>ns1</value>
>>   </property>
>>   <property>
>>     <name>dfs.namenode.rpc-address.ns1</name>
>>     <value>hdfs://praveen-laptop:9001</value>
>>   </property>
>>   <property>
>>     <name>dfs.namenode.http-address.ns1</name>
>>     <value>praveen-laptop:50071</value>
>>   </property>
>>   <property>
>>     <name>dfs.namenode.secondaryhttp-address.ns1</name>
>>     <value>praveen-laptop:50091</value>
>>   </property>
>> </configuration>
>>
>> Regards,
>> Praveen
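P.S. A quick way to catch this class of misconfiguration before starting the NameNode is to validate the address values yourself. A minimal sketch (this is not Hadoop code; the function name and checks are my own, mimicking the bare host:port format the rpc-address properties expect):

```python
def parse_rpc_address(value):
    """Validate a Hadoop rpc-address style value: bare host:port, no URI scheme."""
    if "://" in value:
        # A scheme-qualified URI like hdfs://host:port is not accepted here
        raise ValueError(f"{value!r} contains a scheme; expected plain host:port")
    host, sep, port = value.rpartition(":")
    if not sep or not host or not port.isdigit():
        raise ValueError(f"cannot parse {value!r} as host:port")
    return host, int(port)

print(parse_rpc_address("praveen-laptop:9001"))  # ('praveen-laptop', 9001)
```

Running this against the original `hdfs://praveen-laptop:9001` value raises an error immediately, instead of failing later inside the NameNode startup.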