[
https://issues.apache.org/jira/browse/HDFS-101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hairong Kuang updated HDFS-101:
-------------------------------
Attachment: (was: detectDownDN1.patch)
> DFS write pipeline : DFSClient sometimes does not detect second datanode
> failure
> ---------------------------------------------------------------------------------
>
> Key: HDFS-101
> URL: https://issues.apache.org/jira/browse/HDFS-101
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 0.20.1
> Reporter: Raghu Angadi
> Assignee: Hairong Kuang
> Priority: Blocker
> Fix For: 0.21.0
>
> Attachments: detectDownDN-0.20.patch, detectDownDN1-0.20.patch,
> detectDownDN2.patch, hdfs-101.tar.gz
>
>
> When the first datanode's write to the second datanode fails or times out,
> DFSClient ends up marking the first datanode as the bad one and removes it
> from the pipeline. A similar problem existed on the DataNode side and was
> fixed in HADOOP-3339. From HADOOP-3339:
> "The main issue is that BlockReceiver thread (and DataStreamer in the case of
> DFSClient) interrupt() the 'responder' thread. But interrupting is a pretty
> coarse control. We don't know what state the responder is in and interrupting
> has different effects depending on responder state. To fix this properly we
> need to redesign how we handle these interactions."
> When the first datanode closes its side of the socket to DFSClient,
> DFSClient should properly read all the data left in the socket. Also, the
> DataNode's close should not result in a TCP reset; otherwise I think
> DFSClient will not be able to read the remaining data from the socket.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.