[ https://issues.apache.org/jira/browse/HADOOP-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12847537#action_12847537 ]

Ted Yu commented on HADOOP-4829:
--------------------------------

Clarification:
The comment about Cache.remove() was made against a patched 0.20.1.

> Allow FileSystem shutdown hook to be disabled
> ---------------------------------------------
>
>                 Key: HADOOP-4829
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4829
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs
>    Affects Versions: 0.18.1
>            Reporter: Bryan Duxbury
>            Assignee: Todd Lipcon
>            Priority: Minor
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-4829-0.18.3.patch, hadoop-4829-v2.txt, 
> hadoop-4829-v3.txt, hadoop-4829.txt
>
>
> FileSystem sets a JVM shutdown hook so that it can clean up the FileSystem 
> cache. This is great behavior when you are writing a client application, but 
> when you're writing a server application, like the Collector or an HBase 
> RegionServer, you need to control the shutdown of the application and HDFS 
> much more closely. If you set your own shutdown hook, there's no guarantee 
> that your hook will run before the HDFS one, preventing you from taking some 
> shutdown actions.
> The current workaround I've used is to snag the FileSystem shutdown hook via 
> Java reflection, disable it, and then run it on my own schedule. I'd really 
> appreciate not having to take this hacky approach. It seems like the right 
> way to go about this is just to add a method to disable the hook directly 
> on FileSystem. That way, server applications can elect to disable the 
> automatic cleanup and just call FileSystem.closeAll themselves when the time 
> is right.
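
The reflection workaround described above can be sketched as follows. This is a minimal, self-contained illustration of the technique, not Hadoop code: `HookOwner` and its `cleanupHook` field are stand-ins for FileSystem and its private shutdown-hook field (named `clientFinalizer` in the 0.18 source), used here so the example runs without a Hadoop dependency.

```java
import java.lang.reflect.Field;

// Stand-in for FileSystem: registers a cleanup hook held in a private
// static field, the same shape as FileSystem's clientFinalizer.
// (Class and field names here are illustrative, not Hadoop's API.)
class HookOwner {
    private static final Thread cleanupHook = new Thread(() -> {});
    static {
        Runtime.getRuntime().addShutdownHook(cleanupHook);
    }
}

public class DisableHookDemo {
    public static void main(String[] args) throws Exception {
        // Trigger HookOwner's static initializer, which registers the hook.
        Class<?> cls = Class.forName("HookOwner");
        // Reflectively read the private static hook field...
        Field f = cls.getDeclaredField("cleanupHook");
        f.setAccessible(true);
        Thread hook = (Thread) f.get(null);
        // ...and deregister it, so shutdown ordering is under our control.
        boolean removed = Runtime.getRuntime().removeShutdownHook(hook);
        System.out.println("removed=" + removed);
        // A server would later invoke hook.run() (or, against Hadoop,
        // FileSystem.closeAll()) at a time of its own choosing.
    }
}
```

`removeShutdownHook` returns true only if the hook was actually registered, which makes the technique easy to verify; the fragility is that the private field name can change between Hadoop releases, which is exactly why a public disable method was requested.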

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
