[ https://issues.apache.org/jira/browse/HDFS-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14951004#comment-14951004 ]

Jing Zhao commented on HDFS-9173:
---------------------------------

Thanks a lot for working on this, [~walter.k.su]! As [~zhz] suggested, using 
"the smallest length that covers at least 6 internal blocks" as the safe length 
is a good starting point. In some other scenarios, though, we can save more 
data. E.g., if the data looks like:
{code}
blk_0  blk_1  blk_2  blk_3  blk_4  blk_5  blk_6  blk_7  blk_8
 64k    64k    64k    64k    64k    64k    64k    64k    64k
 64k    13k    ___    ___    ___    ___    64k    64k    64k
{code}
i.e., the failure happens while the client is trying to close the file. But I'm 
fine with not handling this kind of scenario right now.
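
To make the rule concrete, here is a rough sketch of the safe-length 
calculation, assuming an RS-6-3 layout with a fixed cell size. The method name, 
signature, and helper are illustrative only, not the actual patch:
{code}
// Sketch: safe length = the largest stripe-aligned length that at least
// dataBlkNum (6) internal blocks fully cover. Assumes blockLens holds the
// current length of each of the 9 internal blocks (data + parity).
static long getSafeLength(long[] blockLens, int dataBlkNum, int cellSize) {
  long[] sorted = java.util.Arrays.copyOf(blockLens, blockLens.length);
  java.util.Arrays.sort(sorted); // ascending
  // the dataBlkNum-th largest internal block length
  long coveredLen = sorted[sorted.length - dataBlkNum];
  // round down to whole cells: at least dataBlkNum internal blocks have
  // this many full cells, so the corresponding full stripes are decodable
  long fullCells = coveredLen / cellSize;
  return fullCells * cellSize * dataBlkNum;
}
{code}
For the example above the internal block lengths are {128k, 77k, 64k, 64k, 64k, 
64k, 128k, 128k, 128k}, so this returns one full stripe (6 x 64k), and the 77k 
of data in the second stripe is dropped even though blk_0 and blk_1 still hold 
it directly.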

For the patch, do you think we can divide it into several smaller ones? 
Currently I'm thinking maybe we can separate the safe length calculation out 
into another jira, and add more tests there. These tests can focus only on the 
calculation, and thus do not need to be end-to-end; in this way we can cover 
more scenarios.
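
As an illustration of a calculation-only test, assuming the getSafeLength() 
sketch above is visible to the test class (again, all names are hypothetical):
{code}
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TestSafeLengthCalculation {
  @Test
  public void testPartialSecondStripe() {
    final int k = 1024;
    final int cellSize = 64 * k;
    // internal block lengths from the example above (RS-6-3)
    long[] lens = {128 * k, 77 * k, 64 * k, 64 * k, 64 * k, 64 * k,
        128 * k, 128 * k, 128 * k};
    // only the first stripe is covered by at least 6 internal blocks
    assertEquals(6 * cellSize, getSafeLength(lens, 6, cellSize));
  }
}
{code}
No mini-cluster is needed, so many more length combinations can be covered 
cheaply.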

I will review the whole patch and post comments later.

> Erasure Coding: Lease recovery for striped file
> -----------------------------------------------
>
>                 Key: HDFS-9173
>                 URL: https://issues.apache.org/jira/browse/HDFS-9173
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Walter Su
>            Assignee: Walter Su
>         Attachments: HDFS-9173.00.wip.patch, HDFS-9173.01.patch
>
>



