[ https://issues.apache.org/jira/browse/HDFS-10178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220639#comment-15220639 ]
Arpit Agarwal edited comment on HDFS-10178 at 3/31/16 8:43 PM:
---------------------------------------------------------------
Hi Kihwal, I think we can check {{replica.isOnTransientStorage()}} instead of passing the new flag. Something like this should work in {{BlockSender}}.
{code}
if (!replica.isOnTransientStorage() &&
    metaIn.getLength() >= BlockMetadataHeader.getHeaderSize()) {
{code}

was (Author: arpitagarwal):
Hi Kihwal, I think we can check {{replica.isOnTransientStorage()}} instead of passing the new flag. Something like this should work:
{code}
if (!replica.isOnTransientStorage() &&
    metaIn.getLength() >= BlockMetadataHeader.getHeaderSize()) {
{code}

> Permanent write failures can happen if pipeline recoveries occur for the
> first packet
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-10178
>                 URL: https://issues.apache.org/jira/browse/HDFS-10178
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>            Priority: Critical
>     Attachments: HDFS-10178.patch, HDFS-10178.v2.patch, HDFS-10178.v3.patch
>
>
> We have observed that writes fail permanently if the first packet doesn't go
> through properly and pipeline recovery happens. If the packet header is sent
> out, but the data portion of the packet does not reach one or more datanodes
> in time, the pipeline recovery is done against the 0-byte partial block.
>
> If additional datanodes are added, the block is transferred to the new nodes.
> After the transfer, each node has a meta file containing only the header and
> a 0-length block data file. The pipeline recovery appears to work correctly
> up to this point, but the write fails when the actual data packet is resent.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
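The guard suggested above can be sketched as a standalone Java snippet. This is not HDFS code: the class name, the helper method, and the hard-coded header size are illustrative stand-ins for {{BlockSender}}'s logic, where the real values come from {{replica.isOnTransientStorage()}}, {{metaIn.getLength()}}, and {{BlockMetadataHeader.getHeaderSize()}}.

```java
// Minimal sketch (assumed names, not Hadoop code) of the proposed guard:
// read a checksum header from the meta file only when the replica is NOT
// on transient storage and the meta file actually contains a full header.
public class BlockSenderSketch {

    // Stand-in for BlockMetadataHeader.getHeaderSize(): 2-byte version
    // plus a 5-byte DataChecksum header (1-byte type + 4-byte
    // bytesPerChecksum). Hard-coded here for illustration only.
    static final int HEADER_SIZE = 7;

    // Stand-in for the condition guarding the header read in BlockSender.
    static boolean shouldReadChecksumHeader(boolean onTransientStorage,
                                            long metaFileLength) {
        return !onTransientStorage && metaFileLength >= HEADER_SIZE;
    }

    public static void main(String[] args) {
        // Normal replica with a complete meta file: read the header.
        System.out.println(shouldReadChecksumHeader(false, HEADER_SIZE + 512));
        // Replica on transient storage (e.g. RAM disk): skip the header.
        System.out.println(shouldReadChecksumHeader(true, HEADER_SIZE + 512));
        // Empty or truncated meta file left behind by a failed pipeline:
        // skip the header instead of failing the read.
        System.out.println(shouldReadChecksumHeader(false, 0));
    }
}
```

The point of the check is that it folds the transient-storage case and the truncated-meta-file case into one condition, so no extra flag needs to be threaded through to {{BlockSender}}.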