[ https://issues.apache.org/jira/browse/HDFS-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13881506#comment-13881506 ]
Kihwal Lee commented on HDFS-5728:
----------------------------------

The build server has been down for more than a day, so precommit won't run any time soon. +1. I won't wait for the build server to return. The previous version of the patch was fine except for the lines removed in the latest version.

> [Diskfull] Block recovery will fail if the metafile not having crc for all
> chunks of the block
> ----------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5728
>                 URL: https://issues.apache.org/jira/browse/HDFS-5728
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 0.23.10, 2.2.0
>            Reporter: Vinay
>            Assignee: Vinay
>            Priority: Critical
>         Attachments: HDFS-5728.patch, HDFS-5728.patch, HDFS-5728.patch
>
> 1. The client (a region server) opened a stream to write its WAL to HDFS. This is not a one-time upload; data is written slowly.
> 2. One of the DataNodes ran out of disk space (other data filled up the disks).
> 3. Unfortunately, the block was being written to only this DataNode in the cluster, so the client write also failed.
> 4. After some time the disk space was freed and all processes were restarted.
> 5. HMaster then tried to recover the file by calling recoverLease.
>
> At this point recovery kept failing with a file length mismatch.
> On inspection:
> actual block file length: 62484480
> calculated block length: 62455808
> This happened because the metafile contained CRCs for only 62455808 bytes, and recovery treated 62455808 as the block size.
> No matter how many times it was retried, recovery failed continuously.

-- This message was sent by Atlassian JIRA (v6.1.5#6160)
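The length mismatch in the report follows from how a DataNode derives a replica's validated length from its checksum file rather than from the block file itself. A minimal sketch of that arithmetic, assuming the HDFS defaults of 512 bytes per checksum chunk, a 4-byte CRC32 per chunk, and a 7-byte metadata header (these values are configuration-dependent, not taken from the report):

```python
BYTES_PER_CHECKSUM = 512  # assumed default dfs.bytes-per-checksum
CHECKSUM_SIZE = 4         # CRC32 is 4 bytes per chunk
META_HEADER_SIZE = 7      # assumed block metadata header size (version + checksum header)

def length_covered_by_meta(meta_file_len: int) -> int:
    """Block length implied by the meta file: full chunks that have a CRC."""
    num_chunks = (meta_file_len - META_HEADER_SIZE) // CHECKSUM_SIZE
    return num_chunks * BYTES_PER_CHECKSUM

# Plugging in the numbers from the report: a meta file holding CRCs for
# 121984 chunks covers only 62455808 bytes ...
meta_file_len = META_HEADER_SIZE + (62455808 // BYTES_PER_CHECKSUM) * CHECKSUM_SIZE
calculated = length_covered_by_meta(meta_file_len)

# ... while the block file on disk is 62484480 bytes, i.e. 28672 bytes
# (56 chunks) were written without CRCs before the disk filled up.
shortfall = 62484480 - calculated
```

Recovery keeps failing because this calculated length is always compared against the larger on-disk block file length, and nothing ever reconciles the two.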