[ https://issues.apache.org/jira/browse/HDFS-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12974470#action_12974470 ]
Hairong Kuang commented on HDFS-1529:
-------------------------------------

You have a valid case. We should not throw away the current packet. But I am still concerned about the solution. Suppose we really do use a buffer instead of a queue of packets, and the buffer is full. The problem is that you would not be able to queue an additional packet, right? Shall we check whether the current packet is full, and do the queuing, at the beginning of writeChunk, before writing the chunk to the current packet?

> Incorrect handling of interrupts in waitForAckedSeqno can cause deadlock
> ------------------------------------------------------------------------
>
>                 Key: HDFS-1529
>                 URL: https://issues.apache.org/jira/browse/HDFS-1529
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.22.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Blocker
>         Attachments: hdfs-1529.txt, hdfs-1529.txt, hdfs-1529.txt, Test.java
>
>
> In HDFS-895 the handling of interrupts during hflush/close was changed to preserve interrupt status. This ends up creating an infinite loop in waitForAckedSeqno if the waiting thread gets interrupted, since Object.wait() has the unusual semantic that it does not give up the lock even momentarily if the thread is already in the interrupted state at the beginning of the call.
> We should decide what the correct behavior is here: if a thread is interrupted while it is calling hflush() or close(), should we (a) throw an exception, perhaps InterruptedIOException, (b) ignore the interrupt, or (c) wait for the flush to finish but preserve the interrupt status on exit?
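
To make the failure mode in the description concrete, below is a minimal sketch of the wait loop; the class and member names (AckWaitSketch, dataQueue, lastAckedSeqno, ackReceived) are hypothetical stand-ins, not the actual DFSOutputStream code. Once the interrupt flag is set, wait() throws InterruptedException immediately; re-setting the flag to preserve interrupt status means the next wait() throws again right away, so the loop spins while holding the monitor and the responder thread can never record the ack.

{code:java}
// Hypothetical sketch only; it illustrates the interaction described in
// HDFS-1529 and is not the real DFSOutputStream implementation.
public class AckWaitSketch {
    private final Object dataQueue = new Object(); // stand-in for the real lock object
    private long lastAckedSeqno = -1;

    // Writer thread: blocks until the given sequence number has been acked.
    void waitForAckedSeqno(long seqno) {
        synchronized (dataQueue) {
            while (lastAckedSeqno < seqno) {
                try {
                    // If the thread's interrupt status is already set, wait()
                    // throws InterruptedException immediately without blocking.
                    dataQueue.wait();
                } catch (InterruptedException ie) {
                    // Preserving interrupt status (the HDFS-895 change): the flag
                    // stays set, so the next wait() throws again at once. The loop
                    // spins without ever blocking, and dataQueue is never released
                    // long enough for the responder to update lastAckedSeqno.
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

    // Responder thread: records an ack and wakes the waiter. While the loop
    // above is spinning it holds dataQueue, so this method cannot proceed.
    void ackReceived(long seqno) {
        synchronized (dataQueue) {
            lastAckedSeqno = seqno;
            dataQueue.notifyAll();
        }
    }
}
{code}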