[ 
https://issues.apache.org/jira/browse/HDFS-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8347:
----------------------------
    Description: While investigating a test failure in 
{{TestRecoverStripedFile}}, one issue was found: an extra configurable buffer 
size, instead of the chunkSize defined in the schema, is used to perform the 
decoding. This needs further discussion and can cause test failures with the 
latest erasure coder change.  (was: While investigating a test failure in 
{{TestRecoverStripedFile}}, one issue was found: an extra configurable buffer 
size, instead of the chunkSize defined in the schema, is used to perform the 
decoding, which is incorrect and will cause a decoding failure as below. This 
is exposed by the latest change in the erasure coder.
{noformat}
2015-05-08 18:50:06,607 WARN  datanode.DataNode 
(ErasureCodingWorker.java:run(386)) - Transfer failed for all targets.
2015-05-08 18:50:06,608 WARN  datanode.DataNode 
(ErasureCodingWorker.java:run(399)) - Failed to recover striped block: 
BP-1597876081-10.239.12.51-1431082199073:blk_-9223372036854775792_1001
2015-05-08 18:50:06,609 INFO  datanode.DataNode 
(BlockReceiver.java:receiveBlock(826)) - Exception for 
BP-1597876081-10.239.12.51-1431082199073:blk_-9223372036854775784_1001
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
        at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:787)
        at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:803)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
        at java.lang.Thread.run(Thread.java:745)
{noformat})
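
To make the buffer-size mismatch concrete, here is a minimal toy sketch (plain Java, NOT the Hadoop coder API): a single-parity XOR code where encoding is chunk-aligned, so decoding only recovers the lost chunk when the decode buffer size equals the schema's chunkSize. All names and the chunk size here are illustrative assumptions, not taken from the HDFS code.

```java
import java.util.Arrays;

// Toy sketch (not the Hadoop erasure coder API): an XOR parity code
// illustrating why decoding must use the schema's chunkSize rather than
// a separately configured buffer size.
public class ChunkSizeDecodeSketch {
    // Hypothetical chunk size that the "schema" defines.
    static final int CHUNK_SIZE = 4;

    // Encode: the parity chunk is the XOR of all data chunks, chunk-aligned.
    static byte[] parity(byte[][] dataChunks) {
        byte[] p = new byte[CHUNK_SIZE];
        for (byte[] chunk : dataChunks)
            for (int i = 0; i < CHUNK_SIZE; i++)
                p[i] ^= chunk[i];
        return p;
    }

    // Decode: a lost data chunk is the XOR of the parity chunk with the
    // surviving data chunks. If bufSize differs from CHUNK_SIZE, the
    // recovered output is truncated or misaligned, so the reconstructed
    // block is shorter than expected downstream.
    static byte[] recover(byte[][] surviving, byte[] parityChunk, int bufSize) {
        byte[] out = new byte[bufSize];
        for (int i = 0; i < Math.min(bufSize, CHUNK_SIZE); i++) {
            out[i] = parityChunk[i];
            for (byte[] chunk : surviving)
                out[i] ^= chunk[i];
        }
        return out;
    }

    public static void main(String[] args) {
        byte[][] data = { {1, 2, 3, 4}, {5, 6, 7, 8} };
        byte[] p = parity(data);
        // Decoding with the schema's chunkSize recovers the lost chunk.
        System.out.println(Arrays.equals(
                recover(new byte[][]{data[1]}, p, CHUNK_SIZE), data[0]));
        // Decoding with an arbitrary configured buffer size does not.
        System.out.println(Arrays.equals(
                recover(new byte[][]{data[1]}, p, 2), data[0]));
    }
}
```

In the sketch, only the chunkSize-sized buffer reproduces the original chunk; any other size yields a short or wrong result, analogous to the premature-EOF failure above.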

> Using chunkSize to perform erasure decoding in striped blocks recovering
> ------------------------------------------------------------------------
>
>                 Key: HDFS-8347
>                 URL: https://issues.apache.org/jira/browse/HDFS-8347
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Kai Zheng
>
> While investigating a test failure in {{TestRecoverStripedFile}}, one issue 
> was found: an extra configurable buffer size, instead of the chunkSize 
> defined in the schema, is used to perform the decoding. This needs further 
> discussion and can cause test failures with the latest erasure coder change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)