[ https://issues.apache.org/jira/browse/HDFS-5924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13907471#comment-13907471 ]
Brandon Li commented on HDFS-5924:
----------------------------------

{quote}This feature does not guarantee all client writes to continue across restart.{quote}

Would it cause data loss, especially when one datanode, or more than one, in the pipeline is shutting down for an upgrade?

> Utilize OOB upgrade message processing for writes
> -------------------------------------------------
>
>                 Key: HDFS-5924
>                 URL: https://issues.apache.org/jira/browse/HDFS-5924
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, ha, hdfs-client, namenode
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>         Attachments: HDFS-5924_RBW_RECOVERY.patch, HDFS-5924_RBW_RECOVERY.patch
>
>
> After HDFS-5585 and HDFS-5583, clients and datanodes can coordinate shutdown-restart in order to minimize failures or locality loss.
> In this jira, the HDFS client is made aware of the restart OOB ack and performs a special write pipeline recovery. The datanode is also modified to load marked RBW replicas as RBW instead of RWR, as long as the restart did not take too long.
> The client considers doing this kind of recovery only when there is a single node left in the pipeline, or when the restarting node is a local datanode.
> For both clients and datanodes, the timeout or expiration is configurable, meaning this feature can be turned off by setting the timeout variables to 0.

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
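Since the description notes that the feature is controlled by configurable timeouts, a minimal hdfs-site.xml sketch of what that tuning might look like is below. The property names dfs.client.datanode-restart.timeout (client-side wait for a restarting datanode) and dfs.datanode.restart.replica.expiration (how long the datanode keeps marked RBW replicas loadable as RBW) are assumed from the related OOB-restart work; verify them against your Hadoop version's hdfs-default.xml before relying on them.

```xml
<!-- Illustrative only: property names assumed from the OOB-restart work
     (HDFS-5585/HDFS-5583); confirm against hdfs-default.xml. -->
<configuration>
  <!-- Client: how long to wait for a datanode that sent a restart OOB ack
       before giving up on pipeline recovery. Setting 0 disables the feature. -->
  <property>
    <name>dfs.client.datanode-restart.timeout</name>
    <value>30</value>
  </property>
  <!-- Datanode: window (seconds) within which a restart allows marked RBW
       replicas to be reloaded as RBW instead of being demoted to RWR. -->
  <property>
    <name>dfs.datanode.restart.replica.expiration</name>
    <value>50</value>
  </property>
</configuration>
```

With either value set to 0, the description above implies writes fall through to the ordinary pipeline-failure handling rather than waiting out the restart.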