Finally done!
We need to remove the old filesystem directory before we try to format it! It
seems that the format command will not fully re-format the filesystem if one
already exists.

Hopefully others will not run into the same problem as me. :=)
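For reference, the recovery amounts to the commands below. This is only a sketch: it assumes the default hadoop.tmp.dir location of /tmp/hadoop-${USER} — substitute whatever path your conf/hadoop-site.xml actually sets.

```shell
# Sketch of the fix; the /tmp/hadoop-${USER} path is an assumption
# (the default hadoop.tmp.dir) -- use the path from your hadoop-site.xml.
bin/stop-all.sh                  # stop the whole cluster first
rm -rf /tmp/hadoop-${USER}       # remove the old filesystem data (on EVERY node)
bin/hadoop namenode -format      # now the format really starts from scratch
bin/start-all.sh                 # bring the cluster back up
```

The rm step is the important one: with the old directory gone, the format cannot silently reuse the stale storage IDs that caused the mismatch.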

David

David Wei wrote:
> I think my config is okay regarding this temp folder. I just used the
> default setting of hadoop for temp folder.
> Right now I found that the problem is following:
>
> 2008-10-17 01:52:57,167 FATAL org.apache.hadoop.dfs.StateChange:
> BLOCK* NameSystem.getDatanode: Data node 192.168.49.148:50010 is
> attempting to report storage ID
> DS-2140035130-127.0.0.1-50010-1223898963914. Node 192.168.55.104:50010
> is expected to serve this storage.
>
> I tried the following steps:
> 1. Ensure every node's settings are identical
> 2. Format every node again
> 3. Remove the temp folders on each node
> 4. Ensure the master and slaves can ssh to each other without passwords
> 5. Restart the whole cluster
>
> But the situation is still the same... :=(
>
> David
>
> [EMAIL PROTECTED] wrote:
>   
>> Have you configured the property hadoop.tmp.dir in the configuration
>> file conf/hadoop-site.xml?
>> Some files will be stored in that directory. Maybe you could try to rm
>> that directory and run
>> "bin/hadoop namenode -format" again. I met the same problem, and I just did
>> the same thing.
>> It ran okay.
>>
>> 2008-10-16
>>
>>
>>
>> [EMAIL PROTECTED]
>>

