OK, I've just solved the problem, with minor data loss. Steps to solve:

1) comment out FSEditLog.java:542
2) compile the hadoop-core jar
3) start the cluster with the new jar
The namenode will then skip the bad records in "name/current/edits" and write a 
new edits file back to disk (a rough sketch of the skipping idea is after the 
steps). Since the bad records represent actual I/O operations, some files in 
HDFS may be deleted, because their blocks no longer correspond to any edits 
entries. In my situation, I lost the files from the last fortnight.
4) wait a while as the datanodes remove the blocks that no longer correspond 
to entries in the edits file
5) stop the cluster
6) replace the hadoop-core jar with the release one
7) start the cluster
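
For completeness, here is a rough, hypothetical Java sketch of the idea behind 
step 1: replay a length-prefixed log, stop at the first unreadable record 
instead of aborting, and write the surviving records to a new file. None of 
these names come from Hadoop; the real FSEditLog code is version-dependent and 
structured differently, and what sits on line 542 in your copy may not match 
this at all.

// Hypothetical sketch only -- not the actual FSEditLog code.
import java.io.*;

public class SkipBadEditsSketch {

    // Copies records from a (hypothetical) length-prefixed edits file,
    // stopping at the first unreadable record instead of throwing.
    public static void rewriteSkippingBadRecords(File in, File out) throws IOException {
        try (DataInputStream src = new DataInputStream(
                 new BufferedInputStream(new FileInputStream(in)));
             DataOutputStream dst = new DataOutputStream(
                 new BufferedOutputStream(new FileOutputStream(out)))) {
            while (true) {
                int len;
                try {
                    len = src.readInt();          // record length (assumed framing)
                } catch (EOFException eof) {
                    break;                        // clean end of the log
                }
                if (len < 0) {
                    System.err.println("Skipping bad record: negative length " + len);
                    break;                        // corrupted length field
                }
                byte[] payload = new byte[len];
                try {
                    src.readFully(payload);       // fails on a truncated/corrupt record
                } catch (IOException corrupt) {
                    System.err.println("Skipping bad record of claimed length " + len);
                    break;                        // give up on the rest of the log
                }
                dst.writeInt(len);                // keep the good record
                dst.write(payload);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        rewriteSkippingBadRecords(new File(args[0]), new File(args[1]));
    }
}

The point of the sketch is only the control flow: whatever the bad record is, 
the loader drops it (and anything after it) rather than refusing to start, 
which is why the files touched by those lost records disappear from HDFS.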
