[ 
https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142981#comment-16142981
 ] 

Yongjun Zhang commented on HDFS-11799:
--------------------------------------

Sorry for the delay [~brahmareddy], here are my comments, largely cosmetic:

1. HdfsClientConfigKeys
1.1. String  REPLICATION = PREFIX + "replication";
Need a better name than "replication", maybe "min-setup-replication"?
1.2 Change hdfs-default accordingly
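
For reference, the hdfs-default.xml entry might then look like the sketch below. The key name is only my guess at how the suggested rename would be spelled under the existing replace-datanode-on-failure prefix; the description wording is illustrative, not from the patch:

```xml
<!-- Hypothetical entry; key name assumes the suggested
     "min-setup-replication" under the existing prefix. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.min-setup-replication</name>
  <value>0</value>
  <description>
    The minimum number of datanodes required to set up (or continue)
    a write pipeline when fewer nodes than the replication factor are
    available. If set to 0, an exception is thrown when a replacement
    can not be found.
  </description>
</property>
```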

2. DataStreamer

2.1 change
{code}
       //check the minimal numbers nodes available to continue the wrtie.
{code}
to
{code}
       // check the minimal number of nodes available to decide whether to
       // continue the write.
{code}
2.2 change
{code}
        // threshold value, if yes continue writing to the two remaining nodes.
{code}
to
{code}
        // threshold value, continue writing to the remaining nodes.
{code}

2.3 Change
{code}
                  + " nodes since it's bigger than minimum replication: "
                  + dfsClient.dtpReplaceDatanodeOnFailureReplication
                  + " configued by "
{code}
to
{code}
                  + " nodes since it's no less than minimum setup replication: "
                  + dfsClient.dtpReplaceDatanodeOnFailureReplication
                  + " configured by "
{code}

3. hdfs-default.xml

Suggest changing
"If this is set to 0 or a negative number, an exception will be thrown
      when a replacement can not be found."
to
 "If this is set to 0, an exception will be thrown when a replacement 
  can not be found."
and the code accordingly. Not supporting negative numbers here seems less
confusing.
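
To make the stricter semantics concrete, here is a minimal sketch of the check I have in mind. The class and method names are hypothetical (not from the patch), and the key name assumes the rename suggested in point 1; the idea is just: reject negative values at config-parse time, and treat 0 as "never continue with a reduced pipeline":

```java
// Hypothetical sketch only -- MinReplicationCheck, parseMinReplication,
// and canContinueWrite are illustrative names, not from the HDFS-11799
// patch.
public class MinReplicationCheck {

  // Assumed key name; the patch under review still calls the suffix
  // "replication".
  static final String KEY =
      "dfs.client.block.write.replace-datanode-on-failure.min-setup-replication";

  // Reject negative values up front instead of treating them like 0.
  static int parseMinReplication(int configured) {
    if (configured < 0) {
      throw new IllegalArgumentException(
          "Expected a non-negative value for " + KEY + ", got " + configured);
    }
    return configured;
  }

  // 0 means "never write with a reduced pipeline": the caller throws
  // when a replacement datanode cannot be found. A positive value lets
  // the write continue as long as at least that many nodes remain.
  static boolean canContinueWrite(int remainingNodes, int minReplication) {
    return minReplication > 0 && remainingNodes >= minReplication;
  }
}
```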

Thanks.


> Introduce a config to allow setting up write pipeline with fewer nodes than 
> replication factor
> ----------------------------------------------------------------------------------------------
>
>                 Key: HDFS-11799
>                 URL: https://issues.apache.org/jira/browse/HDFS-11799
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Yongjun Zhang
>            Assignee: Brahma Reddy Battula
>         Attachments: HDFS-11799-002.patch, HDFS-11799-003.patch, 
> HDFS-11799-004.patch, HDFS-11799.patch
>
>
> During pipeline recovery, if not enough DNs can be found and 
> dfs.client.block.write.replace-datanode-on-failure.best-effort
> is enabled, we let the pipeline continue, even if there is a single DN.
> Similarly, when we create the write pipeline initially, if for some reason we 
> can't find enough DNs, we can have a similar config to enable writing with a 
> single DN.
> More study will be done.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
