[ https://issues.apache.org/jira/browse/HDFS-1527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12968486#action_12968486 ]

Patrick Kling commented on HDFS-1527:
-------------------------------------

I have now run the complete set of HDFS tests. The only tests that fail are the 
ones that also fail on a clean trunk:

{code}
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery FAILED
    [junit] Test org.apache.hadoop.hdfs.TestHDFSServerPorts FAILED
    [junit] Test org.apache.hadoop.hdfs.TestHDFSTrash FAILED (timeout)
    [junit] Test org.apache.hadoop.hdfs.server.namenode.TestBackupNode FAILED
    [junit] Test org.apache.hadoop.hdfs.server.namenode.TestStorageRestore FAILED
    [junit] Test org.apache.hadoop.hdfs.TestFileConcurrentReader FAILED (timeout)
    [junit] Test org.apache.hadoop.hdfs.server.balancer.TestBalancer FAILED
    [junit] Test org.apache.hadoop.hdfs.server.namenode.TestBlockTokenWithDFS FAILED
    [junit] Test org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete FAILED (timeout)
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery FAILED
{code}
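For context, the fix amounts to the fallback described in the issue text quoted below: when transferTo() fails on a 32 bit JVM, copy the bytes through an ordinary buffered read/write loop instead. The following is only a minimal sketch of that idea; the class name, method name, and buffer size are hypothetical and do not reflect the attached patch:

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class TransferToFallbackSketch {
  /**
   * Transfer count bytes starting at position from fc to target.
   * On 32 bit JVMs, FileChannel.transferTo() can fail with EOVERFLOW
   * ("Value too large for defined data type") once position/count reach
   * 2GB, so on failure we fall back to a plain read/write copy.
   * Illustrative only; not the actual HDFS-1527 patch.
   */
  public static void transferFully(FileChannel fc, long position, long count,
                                   WritableByteChannel target) throws IOException {
    try {
      while (count > 0) {
        long n = fc.transferTo(position, count, target);
        position += n;
        count -= n;
      }
    } catch (IOException e) {
      // Fallback: normal transfer through a user-space buffer. position and
      // count already reflect whatever transferTo() managed to send.
      ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
      while (count > 0) {
        buf.clear();
        buf.limit((int) Math.min(buf.capacity(), count));
        int read = fc.read(buf, position);
        if (read < 0) {
          throw new IOException("unexpected EOF during fallback transfer");
        }
        buf.flip();
        while (buf.hasRemaining()) {
          target.write(buf);
        }
        position += read;
        count -= read;
      }
    }
  }
}
{code}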

> SocketOutputStream.transferToFully fails for blocks >= 2GB on 32 bit JVM
> ------------------------------------------------------------------------
>
>                 Key: HDFS-1527
>                 URL: https://issues.apache.org/jira/browse/HDFS-1527
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.23.0
>         Environment: 32 bit JVM
>            Reporter: Patrick Kling
>            Assignee: Patrick Kling
>             Fix For: 0.23.0
>
>         Attachments: HDFS-1527.2.patch, HDFS-1527.patch
>
>
> On 32 bit JVM, SocketOutputStream.transferToFully() fails if the block size 
> is >= 2GB. We should fall back to a normal transfer in this case. 
> {code}
> 2010-12-02 19:04:23,490 ERROR datanode.DataNode (BlockSender.java:sendChunks(399)) - BlockSender.sendChunks() exception: java.io.IOException: Value too large for defined data type
>         at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
>         at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:418)
>         at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:519)
>         at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:204)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:386)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:475)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opReadBlock(DataXceiver.java:196)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opReadBlock(DataTransferProtocol.java:356)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:328)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
>         at java.lang.Thread.run(Thread.java:619)
> {code}
