Hey Zheng,
You can explicitly exit safe mode by running:
hadoop dfsadmin -safemode leave
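(If you just want to check the current state first, dfsadmin should also accept a "get" action, e.g.:
hadoop dfsadmin -safemode get
which prints whether safe mode is ON or OFF.)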
Then you can delete the files with the corrupt blocks by running:
hadoop fsck / -delete
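If you'd rather not delete anything outright, fsck should also take a -move option that relocates the corrupt files to /lost+found instead:
hadoop fsck / -move
That gives you a chance to inspect what's left of them before throwing anything away.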
Joey: You can list the block locations for a specific file with:
hadoop fsck /user/brian/path/to/file -files -blocks
However, if the blocks are missing, there obviously won't be any
replica locations listed for them.
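For blocks that do still have live replicas, adding -locations should print the datanode that holds each replica:
hadoop fsck /user/brian/path/to/file -files -blocks -locations
(There's also a -racks option that tacks the rack name onto each datanode, if that helps narrow things down.)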
Hope it helps,
Brian
On Oct 21, 2008, at 4:33 PM, Zheng Shao wrote:
http://markmail.org/message/2xtywnnppacywsya shows we can exit safe
mode explicitly and just delete these corrupted files.
But I don't know how to exit safe mode explicitly.
Zheng
-----Original Message-----
From: Joey Pan [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, October 21, 2008 2:01 PM
To: core-user@hadoop.apache.org
Subject: replication issue and safe mode pending
When starting hadoop, it stays in safe mode forever. Looking into
the issue, it seems there is a problem with block replication.
The command hadoop fsck / shows this error message:
/XXXXX/part-00007.deflate: CORRUPT block blk_1402039344260425079
/XXXXX/part-00007.deflate: MISSING 1 blocks of total size 2294 B
......................
I tried restarting the fs, but it didn't solve the problem.
Question: how do I find the data node that contains the
corrupted/missing blocks?
Thanks,
Joey