[ https://issues.apache.org/jira/browse/HDFS-1350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12901527#action_12901527 ]

sam rash commented on HDFS-1350:
--------------------------------

Actually, it's in our latest branch here, which is >= 20-append and includes 
your patch. The problem is that getBlockMetaDataInfo() has this at the end:

{code}

    // paranoia! verify that the contents of the stored block
    // matches the block file on disk.
    data.validateBlockMetadata(stored);
{code}

which includes this check:

{code}

    if (f.length() > maxDataSize || f.length() <= minDataSize) {
      throw new IOException("Block " + b +
                            " is of size " + f.length() +
                            " but has " + (numChunksInMeta + 1) +
                            " checksums and each checksum size is " +
                            checksumsize + " bytes.");
    }
{code}

A block is not allowed to participate in lease recovery if this check fails.
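
For illustration, roughly the size relationship that check enforces (a sketch only, not the exact FSDataset code; the constants and names below are assumptions):

{code}
import java.io.IOException;

// Sketch of the data-length vs. checksum-count consistency check above.
// Constants and names are illustrative, not the real FSDataset fields.
class BlockMetaSizeSketch {
  static final long BYTES_PER_CHECKSUM = 512; // typical io.bytes.per.checksum
  static final long CHECKSUM_SIZE = 4;        // CRC32 checksum is 4 bytes

  static void validate(long dataFileLength, long numChunksInMeta)
      throws IOException {
    // a data file covered by N checksums must be longer than N-1 full chunks
    // and no longer than N full chunks
    long maxDataSize = numChunksInMeta * BYTES_PER_CHECKSUM;
    long minDataSize = maxDataSize - BYTES_PER_CHECKSUM;
    if (dataFileLength > maxDataSize || dataFileLength <= minDataSize) {
      // a hard datanode shutdown can leave the data and meta files out of
      // sync (e.g. data flushed but the last checksum not yet written);
      // the block then fails here and is excluded from lease recovery
      throw new IOException("block data length " + dataFileLength
          + " is inconsistent with " + numChunksInMeta + " checksums of "
          + CHECKSUM_SIZE + " bytes each");
    }
  }
}
{code}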

> make datanodes do graceful shutdown
> -----------------------------------
>
>                 Key: HDFS-1350
>                 URL: https://issues.apache.org/jira/browse/HDFS-1350
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node
>            Reporter: sam rash
>            Assignee: sam rash
>
> we found that the Datanode doesn't do a graceful shutdown and a block can be 
> corrupted (data and checksum sizes end up inconsistent)
> we can make the DN do a graceful shutdown in case there are open files. if 
> this presents a problem for a timely shutdown, we can add a parameter for 
> how long to wait for the full graceful shutdown before just exiting
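
One possible shape for the bounded graceful shutdown described above (purely a sketch; the names, the open-writer tracking, and the wait parameter are assumptions, not existing DataNode code):

{code}
// Illustrative sketch: on shutdown, wait up to a configurable bound for open
// block writers to finish flushing data and checksums before exiting.
class GracefulShutdownSketch {
  private final long maxWaitMillis; // hypothetical "how long to wait" config
  private int openWriters = 0;      // would be maintained by the write paths

  GracefulShutdownSketch(long maxWaitMillis) {
    this.maxWaitMillis = maxWaitMillis;
  }

  synchronized void writerOpened() { openWriters++; }

  synchronized void writerClosed() { openWriters--; notifyAll(); }

  // called from the shutdown path instead of exiting immediately
  synchronized void awaitQuiesce() throws InterruptedException {
    long deadline = System.currentTimeMillis() + maxWaitMillis;
    while (openWriters > 0) {
      long remaining = deadline - System.currentTimeMillis();
      if (remaining <= 0) {
        break; // give up after the configured bound so shutdown stays timely
      }
      wait(remaining);
    }
  }
}
{code}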

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
