Check dfs.include on your namenode (the include file that the dfs.hosts property in hdfs-site.xml points to). The entries in there should resolve to the new addresses.
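
For example, a rough sketch of what to verify, assuming the include file lives at /etc/hadoop/conf/dfs.include (that path and the hostname below are placeholders; substitute your own values):

    # hdfs-site.xml on the namenode names the include file, e.g.
    #   <property>
    #     <name>dfs.hosts</name>
    #     <value>/etc/hadoop/conf/dfs.include</value>
    #   </property>

    # every datanode hostname listed in that file must resolve to its NEW address
    getent hosts some-datanode.example.com

    # after correcting the file (or DNS), tell the namenode to re-read it
    hdfs dfsadmin -refreshNodes

If the file already lists hostnames rather than IPs, the old addresses may also be coming from a stale DNS cache on the namenode host.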

On Feb 19, 2013, at 18:23, Henry JunYoung KIM <henry.jy...@gmail.com> wrote:

> hi, hadoopers.
> 
> Recently, we moved our clusters to another IDC.
> We kept the same hostnames, but they now have different IP addresses.
> 
> Without any configuration changes, we got the following error after starting the
> cluster.
> 
> 13.110.239.218 <-- old IP
> 13.271.6.54 <-- new IP
> 
> 2013-02-20 10:26:10,536 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool BP-846907911-13.110.239.218-1359529186091 (storage id 
> DS-2127506481-13.110.239.155-50010-1359529245747) service to 
> search-ddm-test2.daum.net/13.271.5.233:8020
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException):
>  Datanode denied communication with namenode: 
> DatanodeRegistration(13.271.6.54, 
> storageID=DS-2127506481-13.110.239.155-50010-1359529245747, infoPort=50075, 
> ipcPort=50020, 
> storageInfo=lv=-40;cid=CID-c497f9b4-77e1-4b04-acfe-31aceea9b0b1;nsid=582785493;c=0)
>    at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:566)
>    at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3358)
>    at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:854)
>    at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:91)
> 
> 
> Any suggestions for resolving this problem?
> Thanks for your concern.
