[ https://issues.apache.org/jira/browse/HDFS-915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12803978#action_12803978 ]

Todd Lipcon commented on HDFS-915:
----------------------------------

Looking closer, I think the crux of this issue is that the DFSClient uses 
HdfsConstants.WRITE_TIMEOUT to time out its writes, and that value defaults to 
something like 8 minutes. If the reader sees an error, it should probably 
interrupt() the writer, which should then treat the interrupt the same way it 
treats an IOException.
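
To make that concrete, here is a minimal sketch of the proposed control flow 
using plain Java threads rather than the real DFSClient / DataStreamer / 
ResponseProcessor classes (the sleeps and helper methods below are illustrative 
only): the reader interrupts the writer on error, and the writer folds the 
interrupt into its normal IOException handling.

{code:java}
import java.io.IOException;
import java.io.InterruptedIOException;

public class InterruptingPipelineSketch {

  // Stand-in for the DataStreamer: loops sending packets until it fails or is
  // interrupted by the reader.
  static class Writer extends Thread {
    @Override
    public void run() {
      try {
        while (!isInterrupted()) {
          sendPacket();
        }
      } catch (IOException e) {
        // InterruptedIOException is an IOException, so an interrupt from the
        // reader lands in the same recovery path as any other pipeline failure.
        recoverPipeline(e);
      }
    }

    // Placeholder for a blocking socket write that would otherwise only give up
    // after the (roughly 8 minute) write timeout.
    private void sendPacket() throws IOException {
      try {
        Thread.sleep(1000);
      } catch (InterruptedException ie) {
        throw new InterruptedIOException("writer interrupted: " + ie.getMessage());
      }
    }

    private void recoverPipeline(IOException cause) {
      System.err.println("recovering pipeline after: " + cause);
    }
  }

  public static void main(String[] args) throws Exception {
    Writer writer = new Writer();
    writer.start();

    // Stand-in for the ResponseProcessor: on a (simulated) ack error, interrupt
    // the writer instead of waiting out the writer's much longer write timeout.
    Thread.sleep(3000);
    System.err.println("reader saw an error; interrupting writer");
    writer.interrupt();
    writer.join();
  }
}
{code}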

> Hung DN stalls write pipeline for far longer than its timeout
> -------------------------------------------------------------
>
>                 Key: HDFS-915
>                 URL: https://issues.apache.org/jira/browse/HDFS-915
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.20.1
>            Reporter: Todd Lipcon
>         Attachments: local-dn.log
>
>
> After running kill -STOP on the datanode in the middle of a write pipeline, 
> the client takes far longer to recover than it should. The ResponseProcessor 
> times out in the correct interval, but doesn't interrupt the DataStreamer, 
> which appears to not be subject to the same timeout. The client only recovers 
> once the OS actually declares the TCP stream dead, which can take a very long 
> time.
> I've experienced this on 0.20.1; I haven't yet tried it on trunk or 0.21.
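
For reference, the reproduction described in the quoted report can be scripted 
roughly as below (a sketch only: the file path, buffer size, and the use of 
sync() as the 0.20-era flush call are assumptions, not taken from the report). 
Write continuously, run kill -STOP on one of the pipeline datanodes, and watch 
how long a single write/sync call stays blocked.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Hdfs915Repro {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    FSDataOutputStream out = fs.create(new Path("/tmp/hdfs-915-repro"));

    byte[] chunk = new byte[64 * 1024];
    long worst = 0;
    while (true) {
      long t0 = System.currentTimeMillis();
      out.write(chunk);  // run `kill -STOP <datanode pid>` on a pipeline DN while this loops
      out.sync();        // ask the pipeline to acknowledge the data before returning
      long took = System.currentTimeMillis() - t0;
      if (took > worst) {
        worst = took;
        System.err.println("longest blocked write/sync so far: " + worst + " ms");
      }
    }
  }
}
{code}

With a healthy pipeline each iteration should return quickly; after stopping a 
datanode, the longest reported stall shows how long the client sits blocked 
before recovery kicks in.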

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
