[ https://issues.apache.org/jira/browse/HADOOP-2955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12577394#action_12577394 ]

Hadoop QA commented on HADOOP-2955:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12377561/HADOOP-2955.patch
against trunk revision 619744.

    @author +1.  The patch does not contain any @author tags.

    tests included -1.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no tests are needed for this patch.

    javadoc +1.  The javadoc tool did not generate any warning messages.

    javac +1.  The applied patch does not generate any new javac compiler warnings.

    release audit +1.  The applied patch does not generate any new release audit warnings.

    findbugs +1.  The patch does not introduce any new Findbugs warnings.

    core tests +1.  The patch passed core unit tests.

    contrib tests +1.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1937/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1937/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1937/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1937/console

This message is automatically generated.

> ant test fail for TestCrcCorruption with OutofMemory.
> -----------------------------------------------------
>
>                 Key: HADOOP-2955
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2955
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: Mahadev konar
>            Assignee: Raghu Angadi
>            Priority: Blocker
>         Attachments: HADOOP-2955.java, HADOOP-2955.patch
>
>
> TestCrcCorruption sometimes corrupts the checksum metadata in a way that
> corrupts the bytes-per-checksum length (the second field in the metadata).
> This does not happen on every run, but it does happen occasionally, since the
> corruption the test introduces is random.
> I put a debug statement in the allocation to see how many bytes were being
> allocated and ran the test a few times. This is one of the allocations in
> BlockSender:sendBlock():
>   int maxChunksPerPacket = Math.max(1,
>       (BUFFER_SIZE + bytesPerChecksum - 1) / bytesPerChecksum);
>   int sizeofPacket = PKT_HEADER_LEN +
>       (bytesPerChecksum + checksumSize) * maxChunksPerPacket;
>   LOG.info("Comment: bytes to allocate " + sizeofPacket);
>   ByteBuffer pktBuf = ByteBuffer.allocate(sizeofPacket);
> The output from one of those runs was:
>   dfs.DataNode (DataNode.java:sendBlock(1766)) - Comment: bytes to allocate 1232596786
> So we should sanity-check the number of bytes being allocated in sendBlock
> (it should be less than the block size? -- that seems like a good default).
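The quoted report proposes bounding the buffer allocation in BlockSender.sendBlock(). Below is a minimal, self-contained Java sketch of that kind of sanity check, not the actual HADOOP-2955 patch: the constants (PKT_HEADER_LEN, BUFFER_SIZE, checksumSize, blockSize) are placeholder values and validatePacketSize() is a hypothetical helper; only the allocation arithmetic mirrors the snippet quoted above.

  import java.io.IOException;
  import java.nio.ByteBuffer;

  public class PacketSizeCheck {

    // Hypothetical guard (not the actual fix): a single packet buffer should
    // never need more memory than the block it serves, so a larger value points
    // at corrupted checksum metadata rather than a legitimate request.
    static int validatePacketSize(int sizeofPacket, long blockSize) throws IOException {
      if (sizeofPacket <= 0 || sizeofPacket > blockSize) {
        throw new IOException("Implausible packet buffer size " + sizeofPacket
            + " for a block of " + blockSize + " bytes; checksum header looks corrupt");
      }
      return sizeofPacket;
    }

    public static void main(String[] args) {
      // Placeholder constants for the sketch; the real values live in DataNode.
      final int PKT_HEADER_LEN = 21;
      final int BUFFER_SIZE = 4096;
      final int checksumSize = 4;               // CRC-32 is 4 bytes
      final long blockSize = 64L * 1024 * 1024; // assumed block size

      // A corrupted bytes-per-checksum on the order of the value logged above.
      int bytesPerChecksum = 1200000000;

      int maxChunksPerPacket = Math.max(1,
          (BUFFER_SIZE + bytesPerChecksum - 1) / bytesPerChecksum);
      int sizeofPacket = PKT_HEADER_LEN
          + (bytesPerChecksum + checksumSize) * maxChunksPerPacket;

      try {
        // With sane metadata this allocates normally; with the corrupted value
        // it fails fast instead of attempting a ~1.2 GB allocation.
        ByteBuffer pktBuf = ByteBuffer.allocate(validatePacketSize(sizeofPacket, blockSize));
        System.out.println("Allocated " + pktBuf.capacity() + " bytes");
      } catch (IOException e) {
        System.out.println("Rejected allocation: " + e.getMessage());
      }
    }
  }

With well-formed metadata the check is a no-op; with a corrupted bytes-per-checksum like the one logged above it rejects the request before the OutOfMemoryError can occur.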

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
