[ https://issues.apache.org/jira/browse/HDFS-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12974455#action_12974455 ]

Hairong Kuang commented on HDFS-1529:
-------------------------------------

>  even though we slightly overrun the MAX_PACKETS length.
Can you simply throw away the current packet and throw an 
InterruptedIOException? Since the write is interrupted, you do not need to 
guarantee that the packet gets delivered, right? By overrunning the MAX_PACKETS 
length, you may make the client hit OOM and jeopardize all the queued packets.
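-
A minimal sketch of what that suggestion could look like (hypothetical class
and field names, not the actual DFSOutputStream code): if the writer thread is
interrupted while waiting for room in the packet queue, drop the current
packet and surface an InterruptedIOException instead of growing the queue past
MAX_PACKETS.

import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.ArrayDeque;
import java.util.Deque;

class PacketQueueSketch {
  private static final int MAX_PACKETS = 80;
  private final Deque<byte[]> dataQueue = new ArrayDeque<byte[]>();
  private final Deque<byte[]> ackQueue = new ArrayDeque<byte[]>();

  void queuePacket(byte[] packet) throws IOException {
    synchronized (dataQueue) {
      while (dataQueue.size() + ackQueue.size() >= MAX_PACKETS) {
        try {
          dataQueue.wait();
        } catch (InterruptedException ie) {
          // The write was interrupted: give up on this packet rather than
          // overrunning MAX_PACKETS, preserve interrupt status, and let the
          // caller see an InterruptedIOException.
          Thread.currentThread().interrupt();
          throw new InterruptedIOException(
              "Interrupted while waiting for room in the data queue");
        }
      }
      dataQueue.addLast(packet);
      dataQueue.notifyAll();
    }
  }
}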

> Incorrect handling of interrupts in waitForAckedSeqno can cause deadlock
> ------------------------------------------------------------------------
>
>                 Key: HDFS-1529
>                 URL: https://issues.apache.org/jira/browse/HDFS-1529
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.22.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Blocker
>         Attachments: hdfs-1529.txt, hdfs-1529.txt, hdfs-1529.txt, Test.java
>
>
> In HDFS-895 the handling of interrupts during hflush/close was changed to 
> preserve interrupt status. This ends up creating an infinite loop in 
> waitForAckedSeqno if the waiting thread gets interrupted, since Object.wait() 
> has the strange semantic that it does not give up the lock even momentarily if 
> the thread is already in the interrupted state at the beginning of the call.
> We should decide what the correct behavior is here - if a thread is 
> interrupted while it's calling hflush() or close() should we (a) throw an 
> exception, perhaps InterruptedIOException (b) ignore, or (c) wait for the 
> flush to finish but preserve interrupt status on exit?
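
For context, here is a sketch of the loop pattern the description refers to (a
hypothetical simplification, not the actual HDFS client source). Once the
writer thread's interrupt status is set and then preserved inside the catch
block, every subsequent wait() throws InterruptedException again immediately,
so the loop spins while holding the dataQueue lock and the ack-handling thread
can never acquire it to advance lastAckedSeqno.

class WaitForAckSketch {
  private final Object dataQueue = new Object();
  private long lastAckedSeqno = -1;

  void waitForAckedSeqno(long seqno) {
    synchronized (dataQueue) {
      while (lastAckedSeqno < seqno) {
        try {
          dataQueue.wait();
        } catch (InterruptedException ie) {
          // Preserving interrupt status (the HDFS-895 behavior) makes the
          // next wait() throw right away, so this loop never blocks and
          // never lets go of the dataQueue lock.
          Thread.currentThread().interrupt();
        }
      }
    }
  }

  // Called from the ack-processing thread; it blocks forever on the monitor
  // if the writer thread above is spinning without releasing it.
  void setLastAcked(long seqno) {
    synchronized (dataQueue) {
      lastAckedSeqno = seqno;
      dataQueue.notifyAll();
    }
  }
}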

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
