[ 
https://issues.apache.org/jira/browse/HDFS-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103161#comment-14103161
 ] 

Zhe Zhang commented on HDFS-6867:
---------------------------------

How about starting a new block while recovering the partially failed block in the 
background? This leaves a small block in the middle of the file, which is 
suboptimal. To mitigate that, we can create the new block with a given size: 
(default block size) - (size of the partial block being recovered). Then we can 
merge the recovered block and the new block into one full-sized block.
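To illustrate the size arithmetic, here is a minimal sketch. The class and method names are hypothetical, not actual HDFS client APIs; it only shows how the new block would be sized so that merging it with the recovered partial block yields exactly one default-sized block.

```java
// Hypothetical sketch of the sizing scheme described above;
// names are illustrative, not real HDFS APIs.
public class PartialBlockRecoverySketch {

    // Size the new block so that, after merging with the recovered
    // partial block, the result is exactly one default-sized block.
    static long newBlockSize(long defaultBlockSize, long partialBlockSize) {
        return defaultBlockSize - partialBlockSize;
    }

    public static void main(String[] args) {
        long defaultBlockSize = 128L * 1024 * 1024; // typical HDFS default: 128 MB
        long partialBlockSize = 48L * 1024 * 1024;  // bytes already written to the failed block
        long size = newBlockSize(defaultBlockSize, partialBlockSize);
        // New block plus recovered partial block add up to one default block.
        System.out.println(size + partialBlockSize == defaultBlockSize);
    }
}
```

With this sizing, the eventual merge produces a block of exactly the default size, so the file's block layout ends up the same as if the failure had never happened.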

> For DFSOutputStream, do pipeline recovery for a single block in the background
> ------------------------------------------------------------------------------
>
>                 Key: HDFS-6867
>                 URL: https://issues.apache.org/jira/browse/HDFS-6867
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>
> For DFSOutputStream, we should be able to do pipeline recovery in the 
> background, while the user is continuing to write to the file.  This is 
> especially useful for long-lived clients that write to an HDFS file slowly. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)
