[ https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288320#comment-16288320 ]
Jason Lowe commented on HDFS-12881:
-----------------------------------

Thanks for updating the patch! The patch looks much better, but it modifies more places than intended. The changes in hadoop-common should go under HADOOP-15085, and the YARN changes are already covered by YARN-7595. One minor nit: it is cleaner to call IOUtils.closeStream(x) rather than IOUtils.cleanupWithLogger(null, x) when there is only one stream to close. An IOUtils.closeStreams(...) method would be nice, but that is not part of this JIRA.

> Output streams closed with IOUtils suppressing write errors
> -----------------------------------------------------------
>
>                 Key: HDFS-12881
>                 URL: https://issues.apache.org/jira/browse/HDFS-12881
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Jason Lowe
>            Assignee: Ajay Kumar
>         Attachments: HDFS-12881.001.patch, HDFS-12881.002.patch, HDFS-12881.003.patch
>
>
> There are a few places in HDFS code that close an output stream with
> IOUtils.cleanupWithLogger like this:
> {code}
> try {
>   ...write to outStream...
> } finally {
>   IOUtils.cleanupWithLogger(LOG, outStream);
> }
> {code}
> This suppresses any IOException that occurs during the close() method, which
> could lead to partial or corrupted output without a corresponding exception
> being thrown. The code should either use try-with-resources or explicitly
> close the stream within the try block, so that an exception thrown during
> close() is propagated just as exceptions during write operations are.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
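The failure mode described in the issue can be sketched as follows. FailingStream, writeSwallowing, and writePropagating are hypothetical names used only for illustration; FailingStream stands in for any output stream whose close() flushes buffered data and can fail, and the swallowing helper mimics what IOUtils.cleanupWithLogger does in a finally block.

```java
import java.io.IOException;
import java.io.OutputStream;

public class CloseDemo {
    // Hypothetical stream whose close() always fails, standing in for a
    // stream that flushes buffered bytes on close and hits a disk error.
    static class FailingStream extends OutputStream {
        @Override public void write(int b) { /* discard */ }
        @Override public void close() throws IOException {
            throw new IOException("close failed");
        }
    }

    // Buggy pattern: the close() error is swallowed in finally, the way
    // IOUtils.cleanupWithLogger(LOG, out) only logs it. Returns true even
    // though the data may never have reached disk.
    static boolean writeSwallowing() {
        OutputStream out = new FailingStream();
        try {
            out.write(42);
            return true; // caller believes the write succeeded
        } catch (IOException e) {
            return false;
        } finally {
            try { out.close(); } catch (IOException ignored) { /* suppressed */ }
        }
    }

    // Fixed pattern: try-with-resources closes the stream before the method
    // returns, so the close() failure surfaces as an IOException.
    static boolean writePropagating() {
        try (OutputStream out = new FailingStream()) {
            out.write(42);
            return true;
        } catch (IOException e) {
            return false; // close() failure is reported to the caller
        }
    }

    public static void main(String[] args) {
        System.out.println(writeSwallowing());   // true: error hidden
        System.out.println(writePropagating());  // false: error reported
    }
}
```

With the first pattern the caller cannot distinguish a clean write from one whose final flush failed; with try-with-resources the same failure becomes an observable exception.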