[ https://issues.apache.org/jira/browse/HDFS-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12989470#comment-12989470 ]
Hairong Kuang commented on HDFS-1529:
-------------------------------------

+1. Let's commit this fix for now.

> Incorrect handling of interrupts in waitForAckedSeqno can cause deadlock
> ------------------------------------------------------------------------
>
>                 Key: HDFS-1529
>                 URL: https://issues.apache.org/jira/browse/HDFS-1529
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.22.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Blocker
>             Fix For: 0.22.0
>
>         Attachments: Test.java, hdfs-1529.txt, hdfs-1529.txt, hdfs-1529.txt
>
>
> In HDFS-895 the handling of interrupts during hflush/close was changed to
> preserve interrupt status. This ends up creating an infinite loop in
> waitForAckedSeqno if the waiting thread gets interrupted, since Object.wait()
> has a strange semantic: it does not give up the lock even momentarily if the
> thread is already in interrupted state at the beginning of the call.
> We should decide what the correct behavior is here: if a thread is
> interrupted while it is calling hflush() or close(), should we (a) throw an
> exception, perhaps InterruptedIOException, (b) ignore the interrupt, or
> (c) wait for the flush to finish but preserve interrupt status on exit?

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
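The wait() semantic described above can be sketched with a minimal standalone Java program (not the actual waitForAckedSeqno code). Per the Java specification, if a thread's interrupt status is already set when it enters Object.wait(), the call throws InterruptedException right away instead of blocking; a retry loop that catches the exception and restores interrupt status before calling wait() again will therefore spin forever while holding the monitor:

```java
public class WaitInterruptDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        // Set interrupt status before calling wait(), mimicking a thread
        // that was interrupted earlier and preserved its interrupt status.
        Thread.currentThread().interrupt();
        synchronized (lock) {
            try {
                // Throws InterruptedException immediately because the
                // interrupt status is already set on entry.
                lock.wait(1000);
                System.out.println("waited normally");
            } catch (InterruptedException e) {
                // In a loop that re-interrupts itself and retries wait(),
                // this path repeats forever: the thread never parks, so a
                // notifier never gets a chance to make progress.
                System.out.println("InterruptedException thrown immediately");
            }
        }
    }
}
```

This is why option (c) in the question above, as implemented by HDFS-895, loops: restoring interrupt status before re-entering wait() guarantees the next wait() also throws immediately.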