[
https://issues.apache.org/jira/browse/HADOOP-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12575789#action_12575789
]
Olga Natkovich commented on HADOOP-2815:
----------------------------------------
If we are closing this JIRA, I would like to see a proposal for managing an
application's temp data filed as a separate JIRA issue.
Also, Java allows setting deleteOnExit on a directory, which would work really
well for us, so I am not sure that doing it that way is a bad idea for Hadoop.
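The JVM mechanism Olga refers to can be sketched with plain java.io (a minimal illustration, not Hadoop code; the class name and helper are hypothetical). Note that deleteOnExit only fires on a normal JVM shutdown, not on SIGKILL, and for a directory it only succeeds if the directory is empty at exit:

```java
import java.io.File;
import java.io.IOException;

public class TempCleanup {
    // Create a temp file and register it for deletion when the JVM exits.
    public static File createTemp() throws IOException {
        File tmp = File.createTempFile("pig-temp-", ".dat");
        // The JVM deletes this file during a normal shutdown; this does
        // not cover a killed process, and on a directory it only works
        // if the directory is empty at exit.
        tmp.deleteOnExit();
        return tmp;
    }

    public static void main(String[] args) throws IOException {
        File tmp = createTemp();
        System.out.println(tmp.exists()); // the file exists while the JVM runs
    }
}
```

Hadoop's FileSystem API offers an analogous deleteOnExit(Path) that removes the registered paths when the filesystem client itself is closed, which sidesteps the shutdown-hook ordering problem described below.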
> Allowing processes to cleanup dfs on shutdown
> ---------------------------------------------
>
> Key: HADOOP-2815
> URL: https://issues.apache.org/jira/browse/HADOOP-2815
> Project: Hadoop Core
> Issue Type: New Feature
> Components: dfs
> Reporter: Olga Natkovich
> Assignee: dhruba borthakur
> Fix For: 0.16.1
>
>
> Pig creates temp files that it wants removed at the end of processing. The
> code that removes the temp files is in a shutdown hook, so that they get
> removed both under a normal shutdown and when the process gets killed.
> The problem we are seeing is that by the time the code is called, the DFS
> might already be closed, and the delete fails, leaving temp files behind.
> Since we have no control over the shutdown order, we have no way to make
> sure that the files get removed.
> One way to solve this issue would be the ability to mark the files as temp
> files so that hadoop can remove them during its own shutdown.
> The stack trace I am seeing is:
> at org.apache.hadoop.dfs.DFSClient.checkOpen(DFSClient.java:158)
> at org.apache.hadoop.dfs.DFSClient.delete(DFSClient.java:417)
> at org.apache.hadoop.dfs.DistributedFileSystem.delete(DistributedFileSystem.java:144)
> at org.apache.pig.backend.hadoop.datastorage.HPath.delete(HPath.java:96)
> at org.apache.pig.impl.io.FileLocalizer$1.run(FileLocalizer.java:275)
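The race in the quoted description can be simulated without Hadoop (a hypothetical sketch; the class, flag, and method names are illustrative). JVM shutdown hooks run concurrently in an unspecified order, so the hook that deletes temp files may run after whatever closed the DFS client, producing exactly the checkOpen failure in the trace above:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ShutdownOrder {
    // Simulated DFS client state; in real code this is the open/closed
    // state that DFSClient.checkOpen() verifies.
    static final AtomicBoolean dfsOpen = new AtomicBoolean(true);

    // Simulates the cleanup hook: the delete only succeeds while the
    // (simulated) DFS client is still open.
    static boolean tryDelete() {
        return dfsOpen.get();
    }

    public static void main(String[] args) {
        // If some other shutdown path closes the client first...
        dfsOpen.set(false);
        // ...the cleanup hook's delete fails and temp files remain.
        System.out.println(tryDelete());
    }
}
```

Because the JVM gives no ordering guarantee between independent shutdown hooks, no arrangement of hooks on the Pig side can fix this; the cleanup has to happen inside Hadoop's own close path, as the issue proposes.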