[ https://issues.apache.org/jira/browse/HDFS-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13669684#comment-13669684 ]
Colin Patrick McCabe commented on HDFS-4504:
--------------------------------------------

I think the vast majority of these cases will simply be handled by block recovery. The rest of the time, block recovery has gotten into a state where it will never succeed, and we simply need to deal with that situation. Probably the best way is adding a force flag to {{completeFile}}.

> DFSOutputStream#close doesn't always release resources (such as leases)
> -----------------------------------------------------------------------
>
>                 Key: HDFS-4504
>                 URL: https://issues.apache.org/jira/browse/HDFS-4504
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-4504.001.patch, HDFS-4504.002.patch
>
>
> {{DFSOutputStream#close}} can throw an {{IOException}} in some cases. One example is if there is a pipeline error and then pipeline recovery fails. Unfortunately, in this case, some of the resources used by the {{DFSOutputStream}} are leaked. One particularly important resource is file leases.
>
> So it's possible for a long-lived HDFS client, such as Flume, to write many blocks to a file, but then fail to close it. Unfortunately, the {{LeaseRenewerThread}} inside the client will continue to renew the lease for the "undead" file. Future attempts to close the file will just rethrow the previous exception, and no progress can be made by the client.
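The failure mode in the report, and the shape of the fix being discussed, can be sketched as follows. This is a minimal illustration, not HDFS code: the class name {{SketchOutputStream}} and its fields are hypothetical. It shows the defensive pattern of releasing the client-side lease (so renewal stops) even when {{close()}} itself fails:

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical sketch -- not actual HDFS code. It illustrates releasing
// the lease even when close() fails, so a long-lived client cannot be
// left renewing the lease of an "undead" file forever.
class SketchOutputStream implements Closeable {
    private boolean leaseHeld = true;     // in real HDFS, renewed by a background thread
    private final boolean pipelineBroken; // simulates an unrecoverable pipeline error

    SketchOutputStream(boolean pipelineBroken) {
        this.pipelineBroken = pipelineBroken;
    }

    boolean leaseHeld() {
        return leaseHeld;
    }

    @Override
    public void close() throws IOException {
        try {
            if (pipelineBroken) {
                // In HDFS-4504, this is the point where close() throws and,
                // before the fix, returned without releasing resources.
                throw new IOException("pipeline recovery failed");
            }
        } finally {
            // The defensive pattern: always stop holding/renewing the lease,
            // even when the close fails.
            leaseHeld = false;
        }
    }

    public static void main(String[] args) throws Exception {
        SketchOutputStream s = new SketchOutputStream(true);
        IOException thrown = null;
        try {
            s.close();
        } catch (IOException e) {
            thrown = e;
        }
        // close() still surfaces the error, but the lease is released,
        // so the client can make progress instead of leaking the file.
        if (thrown == null || s.leaseHeld()) {
            throw new AssertionError("lease leaked or error swallowed");
        }
        System.out.println("lease released despite close() failure");
    }
}
```

Without the {{finally}} release, every subsequent {{close()}} would just rethrow the old exception while the renewer kept the lease alive, which is exactly the stuck state the comment says a force flag on {{completeFile}} would need to break.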