Hi Jeff,

It is not causing the DataNode to re-register with the NN.

The PacketResponder gets interrupted when the write pipeline fails in
certain situations - the interruption should cause the pipeline to tear
down, and then the client will retry with a new pipeline (minus the
offending node).
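
To illustrate the recovery step, here is a minimal, hypothetical sketch (not actual HDFS client code) of what "retry with a new pipeline minus the offending node" amounts to: the client keeps the surviving datanodes and drops the one that failed.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not the real DFSClient logic: when the write
// pipeline fails, the client rebuilds it without the offending datanode
// and retries the write through the remaining nodes.
public class PipelineRecoverySketch {

    // Return a new pipeline that excludes the failed datanode.
    static List<String> rebuildPipeline(List<String> pipeline, String failedNode) {
        List<String> next = new ArrayList<>(pipeline);
        next.remove(failedNode); // drop the offending node
        return next;
    }

    public static void main(String[] args) {
        List<String> pipeline = List.of("dn1:50010", "dn2:50010", "dn3:50010");
        // Suppose dn2 hit the EOFException; the retried pipeline skips it.
        System.out.println(rebuildPipeline(pipeline, "dn2:50010"));
        // prints [dn1:50010, dn3:50010]
    }
}
```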

-Todd

On Sun, Jul 4, 2010 at 10:30 PM, Jeff Zhang <zjf...@gmail.com> wrote:

> Hi all,
>
> In my data node logs, it says the PacketResponder is interrupted, and it
> causes my data node to re-register with the namenode.
> The following is the log:  (Anyone have any clues? Thanks)
>
> 868_1195229 java.io.EOFException: while trying to read 65557 bytes
> 2010-07-05 00:00:36,719 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
> blk_-6478248321374328051_1195260 0 : Thread is interrupted.
> 2010-07-05 00:00:36,720 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block
> blk_-6478248321374328051_1195260 terminating
> 2010-07-05 00:00:36,720 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock
> for block blk_-4909137647091005195_1195309 java.io.EOFException: while
> trying to read 65557 bytes
> 2010-07-05 00:00:36,720 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
> blk_8591229225807253868_1195229 0 : Thread is interrupted.
> 2010-07-05 00:00:36,720 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block
> blk_8591229225807253868_1195229 terminating
> 2010-07-05 00:00:36,720 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
> blk_-4909137647091005195_1195309 1 : Thread is interrupted.
> 2010-07-05 00:00:36,720 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 1 for block
> blk_-4909137647091005195_1195309 terminating
> 2010-07-05 00:00:36,721 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
> blk_-6478248321374328051_1195260 received exception java.io.EOFException:
> while trying to read 65557 bytes
> 2010-07-05 00:00:36,721 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
> 10.1.8.143:50010, storageID=DS-1665371331-127.0.0.1-50010-1278047732176,
> infoPort=50075, ipcPort=50020):DataXceiver
> java.io.EOFException: while trying to read 65557 bytes
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:309)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:353)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:409)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:617)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:352)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:390)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)
> --
> Best Regards
>
> Jeff Zhang
>



-- 
Todd Lipcon
Software Engineer, Cloudera
