[ https://issues.apache.org/jira/browse/HADOOP-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12575545#action_12575545 ]

Hadoop QA commented on HADOOP-2926:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12377211/closeStream.patch
against trunk revision 619744.

    @author +1.  The patch does not contain any @author tags.

    tests included -1.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no tests are needed for this patch.

    javadoc +1.  The javadoc tool did not generate any warning messages.

    javac +1.  The applied patch does not generate any new javac compiler warnings.

    release audit +1.  The applied patch does not generate any new release audit warnings.

    findbugs -1.  The patch appears to introduce 2 new Findbugs warnings.

    core tests +1.  The patch passed core unit tests.

    contrib tests +1.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1899/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1899/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1899/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1899/console

This message is automatically generated.

> Ignoring IOExceptions on close
> ------------------------------
>
>                 Key: HADOOP-2926
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2926
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Owen O'Malley
>            Assignee: dhruba borthakur
>            Priority: Critical
>             Fix For: 0.16.1
>
>         Attachments: closeStream.patch
>
>
> Currently in HDFS there are a lot of calls to IOUtils.closeStream made from 
> finally blocks. I'm worried that this can lead to data corruption in the 
> file system. Take the first instance in DataNode.copyBlock: it writes the 
> block and then calls closeStream on the output stream. If an error at the 
> end of the file is detected in the close, it will be *completely* ignored. 
> Note that logging the error is not enough; the error should be thrown so 
> that the client knows the failure happened.
> {code}
>    try {
>      file1.write(...);
>      file2.write(...);
>    } finally {
>      IOUtils.closeStream(file1);
>      IOUtils.closeStream(file2);
>    }
> {code}
> is *bad*. It must be rewritten as:
> {code}
>    try {
>      file1.write(...);
>      file2.write(...);
>      file1.close();
>      file2.close();
>    } catch (IOException ie) {
>      IOUtils.closeStream(file1);
>      IOUtils.closeStream(file2);
>      throw ie;
>    }
> {code}
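> As an aside (not part of this patch): when several streams are involved, the 
> close-in-try half of this pattern could be factored into a small hypothetical 
> helper that closes every stream and rethrows the first failure. This is only 
> a sketch, assuming the streams implement java.io.Closeable:
> {code}
>    /** Hypothetical helper: close all streams, rethrowing the first IOException. */
>    public static void closeAll(java.io.Closeable... streams) throws IOException {
>      IOException first = null;
>      for (java.io.Closeable s : streams) {
>        if (s == null) {
>          continue;
>        }
>        try {
>          s.close();
>        } catch (IOException e) {
>          if (first == null) {
>            first = e;  // remember the first failure, but keep closing the rest
>          }
>        }
>      }
>      if (first != null) {
>        throw first;
>      }
>    }
> {code}
> The try block then becomes the write calls followed by closeAll(file1, file2), 
> with the catch block left unchanged.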
> I also think that IOUtils.closeStream should be renamed 
> IOUtils.cleanupFailedStream or something to make it clear it can only be used 
> after the write operation has failed and is being cleaned up.
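> For illustration, a minimal sketch of what such a renamed helper might look 
> like; the name, the java.io.Closeable parameter, and the LOG field are 
> assumptions for this sketch, not the actual IOUtils code:
> {code}
>    /**
>     * Close the given stream, logging and discarding any IOException.
>     * Only for cleanup paths where the operation has already failed and
>     * the original exception is about to be rethrown.
>     */
>    public static void cleanupFailedStream(java.io.Closeable stream) {
>      if (stream != null) {
>        try {
>          stream.close();
>        } catch (IOException e) {
>          // Deliberately swallowed: we are already on a failure path and the
>          // caller will rethrow the original exception.
>          LOG.warn("Exception while closing stream during cleanup", e);
>        }
>      }
>    }
> {code}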

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
