Incorrect handling of interrupts in waitForAckedSeqno can cause deadlock
------------------------------------------------------------------------

                 Key: HDFS-1529
                 URL: https://issues.apache.org/jira/browse/HDFS-1529
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs client
    Affects Versions: 0.22.0
            Reporter: Todd Lipcon
            Assignee: Todd Lipcon
            Priority: Blocker


In HDFS-895 the handling of interrupts during hflush/close was changed to 
preserve interrupt status. This creates an infinite loop in waitForAckedSeqno 
if the waiting thread gets interrupted, since Object.wait() has the surprising 
semantic that, if the thread's interrupt status is already set when the call 
begins, it throws InterruptedException immediately without giving up the lock 
even momentarily.
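The spin can be reproduced outside HDFS with a minimal sketch of the buggy pattern. The names (ackMonitor, lastAckedSeqno, buggyWaitForAckedSeqno) are hypothetical stand-ins for the DFSClient internals, and the loop is capped so the demo terminates instead of spinning forever:

```java
public class InterruptedWaitDemo {
    // Hypothetical stand-ins for the DFSClient ack state.
    static final Object ackMonitor = new Object();
    static volatile long lastAckedSeqno = 0;

    // Buggy pattern after HDFS-895: re-set interrupt status inside the wait
    // loop. With the flag set, the next Object.wait() throws
    // InterruptedException immediately, never releasing the lock or sleeping,
    // so the loop spins. 'cap' bounds the spin so this demo terminates.
    static int buggyWaitForAckedSeqno(long seqno, int cap) {
        int spins = 0;
        synchronized (ackMonitor) {
            while (lastAckedSeqno < seqno && spins < cap) {
                spins++;
                try {
                    ackMonitor.wait(10);
                } catch (InterruptedException ie) {
                    // Preserve interrupt status and keep waiting -- this is
                    // what turns the wait into a busy loop.
                    Thread.currentThread().interrupt();
                }
            }
        }
        return spins;
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt();  // arrive with interrupt status set
        long start = System.nanoTime();
        int spins = buggyWaitForAckedSeqno(1, 1000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // If wait(10) actually waited, 1000 iterations would take ~10 seconds;
        // instead the loop burns through all of them almost instantly.
        System.out.println("spins=" + spins + " elapsedMs=" + elapsedMs);
        Thread.interrupted();                // clear status before exit
    }
}
```

Note that while this thread spins it never releases the monitor, so the responder thread that would advance lastAckedSeqno can never acquire it, which is how the spin becomes a deadlock.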

We should decide what the correct behavior is here: if a thread is interrupted 
while calling hflush() or close(), should we (a) throw an exception, perhaps 
InterruptedIOException; (b) ignore the interrupt; or (c) wait for the flush to 
finish but preserve interrupt status on exit?
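For option (c), the usual idiom is to swallow InterruptedException inside the wait loop (so the next wait() can actually block) and restore the interrupt status once, after the loop exits. A minimal sketch, again with hypothetical names standing in for the DFSClient internals:

```java
public class FlushInterruptFix {
    static final Object ackMonitor = new Object();
    static long lastAckedSeqno = 0;

    // Sketch of option (c): wait for the ack even if interrupted, recording
    // the interrupt and restoring it only on exit. Re-interrupting inside the
    // loop would make the next wait() throw immediately and spin.
    static void waitForAckedSeqno(long seqno) {
        boolean interrupted = false;
        synchronized (ackMonitor) {
            while (lastAckedSeqno < seqno) {
                try {
                    ackMonitor.wait();
                } catch (InterruptedException ie) {
                    interrupted = true;  // remember it; don't re-set yet
                }
            }
        }
        if (interrupted) {
            Thread.currentThread().interrupt();  // preserve status for caller
        }
    }

    // Stand-in for the responder thread delivering an ack.
    static void ack(long seqno) {
        synchronized (ackMonitor) {
            lastAckedSeqno = seqno;
            ackMonitor.notifyAll();
        }
    }

    public static void main(String[] args) throws Exception {
        Thread.currentThread().interrupt();  // arrive with interrupt status set
        Thread acker = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            ack(1);
        });
        acker.start();
        waitForAckedSeqno(1);  // blocks normally, returns once acked
        System.out.println("interrupted after return: " + Thread.interrupted());
    }
}
```

The first wait() still throws immediately (clearing the interrupt status), but every subsequent wait() blocks normally, so the monitor is released and the acker can make progress.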

-- 
This message is automatically generated by JIRA.