[ https://issues.apache.org/jira/browse/HADOOP-3914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12631618#action_12631618 ]
Hadoop QA commented on HADOOP-3914:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12390147/patch.HADOOP-3914
against trunk revision 696040.
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests.
Please justify why no tests are needed for this patch.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
-1 core tests. The patch failed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3278/testReport/
Findbugs warnings:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3278/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3278/artifact/trunk/build/test/checkstyle-errors.html
Console output:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3278/console
This message is automatically generated.
> checksumOk implementation in DFSClient can break applications
> -------------------------------------------------------------
>
> Key: HADOOP-3914
> URL: https://issues.apache.org/jira/browse/HADOOP-3914
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.17.1
> Reporter: Christian Kunz
> Fix For: 0.19.0
>
> Attachments: patch.HADOOP-3914
>
>
> One of our non-map-reduce applications (written in C, using libhdfs to access
> dfs) stopped working after switching from 0.16 to 0.17.
> The problem was finally traced to failures in checksumOk.
> I assume the purpose of checksumOk is for a DFSClient to indicate to the
> sending Datanode that the checksum of the received block is okay; this is
> presumably useful in the replication pipeline.
> checksumOk is implemented so that any IOException is caught and ignored,
> probably because it is not essential to the client that the message succeed.
> But this proved fatal for our application, because it links in a 3rd-party
> library which appears to catch socket exceptions before libhdfs does.
> Why was there an exception? In our case the application reads a file into the
> local buffer of the DFSInputStream, which is large enough to hold all the data;
> the application reads to the end, and checksumOk is sent successfully at that
> point. But then the application does some other work and comes back to re-read
> the file (still open). It is then that it calls checksumOk again and crashes.
> This can easily be avoided by adding a boolean that ensures checksumOk is
> called exactly once when EOS is encountered; redundant calls to checksumOk do
> not seem to make sense anyway. A sketch of such a guard follows below.
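A minimal sketch of the guard described above, assuming hypothetical names
(BlockReaderSketch, checksumOkSent, gotEOS, and the writeShort(1) signal); it is
illustrative only and not the actual DFSClient code:

    import java.io.DataOutputStream;
    import java.io.IOException;

    // Illustrative only: a boolean flag so the "checksum ok" message is sent to
    // the DataNode at most once per block, even if the stream is re-read after EOS.
    class BlockReaderSketch {
      private boolean checksumOkSent = false;

      /** Called whenever the reader reaches end-of-stream for the current block. */
      void gotEOS(DataOutputStream out) {
        if (checksumOkSent) {
          return;              // redundant calls are skipped entirely
        }
        checksumOkSent = true;
        try {
          out.writeShort(1);   // hypothetical "checksum ok" signal to the DataNode
          out.flush();
        } catch (IOException e) {
          // Best-effort message: the real client also ignores failures here,
          // which is what exposed the 3rd-party library's socket-exception handling.
        }
      }
    }

With such a guard, re-reading the still-open file never touches the (possibly
already closed) socket a second time, so no IOException can escape into the
caller's environment.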
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.