[ 
https://issues.apache.org/jira/browse/HDFS-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14112927#comment-14112927
 ] 

Yongjun Zhang commented on HDFS-4257:
-------------------------------------

Hi Nicholas, thanks for your earlier work on this. You have done all the work 
except addressing some cosmetic review comments from us. If you have time to 
finish the rest, that would be great; if not, I can post a revision, 
certainly without changing the assignee of this jira. Thanks.
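
For anyone following along, the replacement behavior is driven by client-side 
configuration. A minimal hdfs-site.xml sketch, assuming the standard 
dfs.client.block.write.replace-datanode-on-failure.* keys from 
hdfs-default.xml (note the best-effort flag is the option being added here, 
so it only takes effect with the patch applied):

{code:xml}
<!-- Enable/disable the datanode-replacement feature on pipeline failure
     (default: true). Disabling it throws an error at the server. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>

<!-- Replacement policy: DEFAULT, ALWAYS, or NEVER. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>DEFAULT</value>
</property>

<!-- The forgiving option proposed in this jira: if replacement fails,
     continue the write with the remaining datanodes instead of failing,
     and rely on re-replication afterwards. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>true</value>
</property>
{code}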


> The ReplaceDatanodeOnFailure policies could have a forgiving option
> -------------------------------------------------------------------
>
>                 Key: HDFS-4257
>                 URL: https://issues.apache.org/jira/browse/HDFS-4257
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs-client
>    Affects Versions: 2.0.2-alpha
>            Reporter: Harsh J
>            Assignee: Tsz Wo Nicholas Sze
>            Priority: Minor
>         Attachments: h4257_20140325.patch, h4257_20140325b.patch, 
> h4257_20140326.patch, h4257_20140819.patch
>
>
> A similar question has previously come up on HDFS-3091 and friends, but the 
> essential problem is: "Why can't I write to my cluster of 3 nodes when I 
> have just 1 node available at a point in time?"
> The policies cover the 4 options, with {{Default}} being the default:
> {{Disable}} -> Disables the whole replacement concept by throwing an error 
> (at the server), or acts as {{Never}} at the client.
> {{Never}} -> Never replaces a DN upon pipeline failures (not desirable in 
> many cases).
> {{Default}} -> Replaces based on a few conditions, but never tolerates 
> going down to 1: we always fail if only one DN remains and no others can be 
> added.
> {{Always}} -> Replaces no matter what; fails if it can't replace.
> Would it not make sense to have an option similar to Always/Default where, 
> despite _trying_, if it isn't possible to have > 1 DN in the pipeline, we 
> do not fail? I think that is what the former write behavior was, and it fit 
> with the minimum allowed replication factor.
> Why is it grossly wrong to pass a write from a client for a block with just 
> 1 remaining replica in the pipeline (the minimum of 1 grows with the 
> replication factor demanded by the write), when replication is taken care 
> of immediately afterwards? How often have we seen missing blocks arise from 
> allowing this plus facing a big rack failure or so?



--
This message was sent by Atlassian JIRA
(v6.2#6252)
