[ https://issues.apache.org/jira/browse/HDFS-941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046162#comment-13046162 ]

stack commented on HDFS-941:
----------------------------

On occasion I see these new additions to the datanode log:

{code}
2011-06-08 12:37:20,478 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Client did not send a valid status code after reading. Will close connection.
2011-06-08 12:37:20,480 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Client did not send a valid status code after reading. Will close connection.
2011-06-08 12:37:20,482 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Client did not send a valid status code after reading. Will close connection.
2011-06-08 12:37:20,483 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Client did not send a valid status code after reading. Will close connection.
{code}

Should these be logged as DEBUG and not ERROR?

I see this too; I don't think it's related:

{code}
2011-06-08 12:40:09,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-2049668997072761677_6556 src: /10.4.9.34:36343 dest: /10.4.9.34:10010
2011-06-08 12:40:09,661 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
        at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:415)
        at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:516)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:204)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:481)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opReadBlock(DataXceiver.java:237)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opReadBlock(DataTransferProtocol.java:356)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:328)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:169)
        at java.lang.Thread.run(Thread.java:662)
{code}

What's odd is that this is the machine talking to itself.
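For reference, the failing frame in that trace is the zero-copy FileChannel.transferTo path that BlockSender.sendChunks uses to ship block data to the socket; "Connection reset by peer" surfaces there when the reader has already closed its end. A minimal standalone sketch of the call (a temp file and an in-memory sink stand in for the block file and the socket):

```java
import java.io.ByteArrayOutputStream;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of the transferTo call in the stack trace above. When the
// destination is a real socket whose peer has closed, this call throws
// java.io.IOException: Connection reset by peer.
public class TransferToSketch {
    public static void main(String[] args) throws Exception {
        Path block = Files.createTempFile("blk", ".data");
        Files.write(block, "0123456789".getBytes());          // pretend block contents
        ByteArrayOutputStream sink = new ByteArrayOutputStream(); // stand-in for the socket
        try (FileChannel src = FileChannel.open(block, StandardOpenOption.READ);
             WritableByteChannel dst = Channels.newChannel(sink)) {
            // Zero-copy transfer: kernel moves bytes file -> channel without
            // staging them in a user-space buffer (when dst is a socket).
            long sent = src.transferTo(0, src.size(), dst);
            System.out.println("sent=" + sent);
        }
        Files.delete(block);
    }
}
```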

> Datanode xceiver protocol should allow reuse of a connection
> ------------------------------------------------------------
>
>                 Key: HDFS-941
>                 URL: https://issues.apache.org/jira/browse/HDFS-941
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, hdfs client
>    Affects Versions: 0.22.0
>            Reporter: Todd Lipcon
>            Assignee: bc Wong
>         Attachments: HDFS-941-1.patch, HDFS-941-2.patch, HDFS-941-3.patch, 
> HDFS-941-3.patch, HDFS-941-4.patch, HDFS-941-5.patch, HDFS-941-6.22.patch, 
> HDFS-941-6.patch, HDFS-941-6.patch, HDFS-941-6.patch, hdfs-941.txt, 
> hdfs-941.txt, hdfs941-1.png
>
>
> Right now, each connection into the datanode xceiver only processes one 
> operation.
> In the case that an operation leaves the stream in a well-defined state (e.g. a 
> client reads to the end of a block successfully), the same connection could be 
> reused for a second operation. This should improve random read performance 
> significantly.
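The reuse described above can be sketched with plain sockets. Everything here is a toy stand-in, not the actual DataTransferProtocol: the op request is just an int, and the status byte the client sends after each read (the "valid status code" from the log messages above) is a made-up constant. The point is that acknowledging each op leaves the stream in a well-defined state, so the next op can go over the same connection.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;

// Toy xceiver-style connection reuse: one socket, several "read ops".
public class ReuseSketch {
    static final int OK = 0; // hypothetical status code, not HDFS's constant

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread t = new Thread(() -> serve(server));
            t.start();
            try (Socket s = new Socket("127.0.0.1", server.getLocalPort())) {
                DataInputStream in = new DataInputStream(s.getInputStream());
                DataOutputStream out = new DataOutputStream(s.getOutputStream());
                for (int op = 1; op <= 3; op++) {   // three ops, one connection
                    out.writeInt(op);               // request a "block"
                    System.out.println("op " + op + " -> " + in.readUTF());
                    out.writeByte(OK);              // ack: stream stays reusable
                }
                out.writeInt(-1);                   // tell the server we're done
            }
            t.join();
        }
    }

    static void serve(ServerSocket server) {
        try (Socket s = server.accept()) {
            DataInputStream in = new DataInputStream(s.getInputStream());
            DataOutputStream out = new DataOutputStream(s.getOutputStream());
            int op;
            while ((op = in.readInt()) != -1) {
                out.writeUTF("block-" + op);        // pretend block data
                if (in.readByte() != OK) break;     // no valid status: close connection
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Without the per-op acknowledgement, the server can't tell a finished read from a half-consumed stream, which is why the connection-per-op model was the safe default.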

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
