Hi Varun,

Thank you for the response. No, there don't seem to be any corrupted blocks in my cluster. I ran "hadoop fsck -blocks /" and it didn't report any corrupted blocks.
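For reference, this is roughly how I checked the fsck summary. The report is normally produced with `hadoop fsck / -blocks > fsck-report.txt`; the snippet below parses an inline sample in the same summary format (the report path and sample values here are just for illustration, not from my cluster):

```shell
# Parse a saved fsck report for the corrupt-block count.
# Normally generated with:  hadoop fsck / -blocks > /tmp/fsck-report.txt
# An inline sample is used here so the parsing step is self-contained.
cat > /tmp/fsck-report.txt <<'EOF'
 Total blocks (validated):      80000 (avg. block size 67108864 B)
 Corrupt blocks:                0
 Missing replicas:              0
EOF
# Extract the number from the "Corrupt blocks" summary line.
corrupt=$(grep -i 'corrupt blocks' /tmp/fsck-report.txt | tr -dc '0-9')
echo "corrupt=$corrupt"
```

A non-zero count here would point to actual block corruption rather than a stalled decommission.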
However, these two WARNINGs in the namenode log have been constantly repeating since the decommission:

- 2013-01-22 09:16:30,908 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Cannot roll edit log, edits.new files already exists in all healthy directories:
- 2013-01-22 09:12:10,885 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1

There isn't any WARN or ERROR in the decommissioning datanode's log.

Ben

On Mon, Jan 21, 2013 at 3:05 PM, varun kumar <varun....@gmail.com> wrote:
> Hi Ben,
>
> Are there any corrupted blocks in your hadoop cluster?
>
> Regards,
> Varun Kumar
>
>
> On Mon, Jan 21, 2013 at 8:22 AM, Ben Kim <benkimkim...@gmail.com> wrote:
>> Hi!
>>
>> I followed the decommissioning guide on the Hadoop HDFS wiki.
>>
>> The HDFS web UI shows that the decommissioning process has successfully begun.
>>
>> It started redeploying 80,000 blocks through the Hadoop cluster, but for some reason it stopped at 9,059 blocks. I've waited 30 hours and still no progress.
>>
>> Anyone with any ideas?
>> --
>>
>> *Benjamin Kim*
>> *benkimkimben at gmail*
>
>
> --
> Regards,
> Varun Kumar.P

--
*Benjamin Kim*
*benkimkimben at gmail*