[ https://issues.apache.org/jira/browse/MAPREDUCE-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127432#comment-14127432 ]

Daryn Sharp commented on MAPREDUCE-6075:
----------------------------------------

I'm +1 on the change.  The close/null/cleanup idiom is a rather common pattern 
in Hadoop.  Using flush() isn't a substitute for close() on all filesystems.  
Close must always be allowed to throw an exception, and that exception should 
only be swallowed when another exception has already occurred.
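
For reference, a minimal sketch of that pattern (the class, method, and 
variable names here are illustrative, not taken from the actual patch): close 
explicitly on the success path so a close() failure propagates, null out the 
reference, and leave IOUtils.cleanup() in the finally block so the secondary 
close is only swallowed when an exception is already in flight.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class TokenFileWriter {
  public void writeTokenFile(FileSystem fs, Path file, byte[] data)
      throws IOException {
    FSDataOutputStream out = fs.create(file);
    try {
      out.write(data);
      // Close explicitly on the success path so a failed close (e.g. a
      // failed flush of buffered bytes) propagates to the caller.
      out.close();
      out = null;
    } finally {
      // Reached with out != null only when an exception is already in
      // flight; cleanup() logs and swallows the secondary close failure.
      IOUtils.cleanup(null, out);
    }
  }
}
{code}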

In Java, close() is supposed to be idempotent, so a double close is fine.  
Double-closing a raw file descriptor, by contrast, is bad because the fd may 
have already been recycled by another thread.
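
A trivial standalone demonstration (plain JDK code, not from Hadoop):

{code:java}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class DoubleCloseDemo {
  public static void main(String[] args) throws IOException {
    File tmp = File.createTempFile("double-close", ".tmp");
    tmp.deleteOnExit();
    FileOutputStream out = new FileOutputStream(tmp);
    out.write(42);
    out.close();
    out.close();  // no effect: java.io.Closeable requires close() to be idempotent
  }
}
{code}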

> HistoryServerFileSystemStateStore can create zero-length files
> --------------------------------------------------------------
>
>                 Key: MAPREDUCE-6075
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6075
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: jobhistoryserver
>    Affects Versions: 2.3.0
>            Reporter: Jason Lowe
>            Assignee: Jason Lowe
>         Attachments: MAPREDUCE-6075.patch
>
>
> When the history server state store writes a token file, it uses 
> IOUtils.cleanup() to close the file, which silently ignores errors.  This 
> can lead to empty token files in the state store.
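
A hedged reconstruction of the failure mode described above (identifiers are 
illustrative rather than copied from the state store code):

{code:java}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class BuggyTokenWriter {
  private static final Log LOG = LogFactory.getLog(BuggyTokenWriter.class);

  void writeTokenFile(FileSystem fs, Path tokenFile, byte[] tokenData)
      throws IOException {
    FSDataOutputStream out = fs.create(tokenFile);
    try {
      out.write(tokenData);
    } finally {
      // BUG: cleanup() swallows any close() exception, so a failed
      // flush-on-close can silently leave a zero-length token file.
      IOUtils.cleanup(LOG, out);
    }
  }
}
{code}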


