[ 
https://issues.apache.org/jira/browse/HDFS-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-5728:
-----------------------------------
    Fix Version/s:     (was: 3.0.0)

> [Diskfull] Block recovery will fail if the metafile does not have crc for all 
> chunks of the block
> -------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5728
>                 URL: https://issues.apache.org/jira/browse/HDFS-5728
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 0.23.10, 2.2.0
>            Reporter: Vinayakumar B
>            Assignee: Vinayakumar B
>            Priority: Critical
>             Fix For: 0.23.11, 2.3.0
>
>         Attachments: HDFS-5728.branch-0.23.patch, HDFS-5728.patch, 
> HDFS-5728.patch, HDFS-5728.patch
>
>
> 1. A client (regionserver) has opened a stream to write its WAL to HDFS. This 
> is not a one-time upload; data is written slowly over time.
> 2. One of the DataNodes ran out of disk space (other data had filled up its 
> disks).
> 3. Unfortunately, the block was being written to only this DataNode in the 
> cluster, so the client write also failed.
> 4. After some time, disk space was freed and all processes were restarted.
> 5. The HMaster then tried to recover the file by calling recoverLease.
> At this point, recovery failed with a file length mismatch.
> When checked:
>  actual block file length: 62484480
>  Calculated block length: 62455808
> This was because the metafile had CRCs for only 62455808 bytes, and so 
> 62455808 was taken as the block size.
> No matter how many times it was retried, recovery kept failing continuously.
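For reference, a minimal sketch of how a block length can be derived from the
metafile length, which shows where the 62455808 figure comes from. It assumes
the default on-disk layout (a 7-byte metafile header, one 4-byte CRC32 per
512-byte chunk); the class, helper, and the 487943-byte metafile length below
are illustrative assumptions, not the actual DataNode code or values from the
report:

    /**
     * Illustrative sketch: the block length implied by a metafile.
     * Assumes the default HDFS layout: 7-byte header, one 4-byte CRC32
     * per 512-byte chunk. Not the actual FsDataset code.
     */
    public class MetaLengthSketch {
        static final int HEADER_LEN = 7;          // version (2) + checksum type (1) + bytesPerChecksum (4)
        static final int CHECKSUM_SIZE = 4;       // CRC32
        static final int BYTES_PER_CHECKSUM = 512;

        // Length covered by CRCs in the metafile: full chunks only.
        static long lengthFromMeta(long metaFileLen) {
            long numChunks = (metaFileLen - HEADER_LEN) / CHECKSUM_SIZE;
            return numChunks * BYTES_PER_CHECKSUM;
        }

        public static void main(String[] args) {
            long metaFileLen = 487943L;           // hypothetical: CRCs for 121984 chunks
            long blockFileLen = 62484480L;        // actual on-disk block file length
            long calculated = lengthFromMeta(metaFileLen);          // 62455808
            long uncovered = (blockFileLen - calculated) / BYTES_PER_CHECKSUM;
            System.out.println("calculated=" + calculated
                + ", actual=" + blockFileLen
                + ", chunks without CRC=" + uncovered);             // 56
        }
    }

Under these assumptions the metafile covers 62455808 of the 62484480 on-disk
bytes, leaving 56 chunks without a CRC. Since retrying recovery recomputes the
same length from the unchanged metafile, the mismatch never resolves on its
own, which matches the continuous failure described above.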



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
