[
https://issues.apache.org/jira/browse/HDFS-941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13021862#comment-13021862
]
sam rash commented on HDFS-941:
-------------------------------
The last failure I saw with this test was basically unrelated to the test
itself -- it was a socket leak in the datanode, I think related to RPCs.
I glanced at the first test failure output and found a similar error:
2011-04-11 21:29:36,962 INFO datanode.DataNode
(DataXceiver.java:opWriteBlock(458)) - writeBlock blk_-6878114854540472276_1001
received exception java.io.FileNotFoundException:
/grid/0/hudson/hudson-slave/workspace/PreCommit-HDFS-Build/trunk/build/test/data/dfs/data/data1/current/rbw/blk_-6878114854540472276_1001.meta
(Too many open files)
Note that this test implicitly finds any socket/fd leaks because it
opens/closes files repeatedly.
If you can look into this, that'd be great. I'll have some more time later
this week to help out more.
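
To make the leak-detection point concrete, here is a minimal sketch of the pattern
(not the actual HDFS test; the class and file names are hypothetical): open and
close files many more times than the process fd limit, so any per-operation socket
or descriptor leak eventually surfaces as "Too many open files".

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FdLeakProbe {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);        // cluster under test (e.g. a MiniDFSCluster)
        Path probe = new Path("/tmp/fd-leak-probe"); // hypothetical test file, assumed to exist

        byte[] buf = new byte[4096];
        // Iterate well past the usual fd ulimit (often 1024): if any open/read
        // path leaks a socket or descriptor, later opens fail with
        // "Too many open files" long before the loop finishes.
        for (int i = 0; i < 10000; i++) {
          FSDataInputStream in = fs.open(probe);     // each open talks to a datanode xceiver
          try {
            in.read(buf);                            // read a little...
          } finally {
            in.close();                              // ...and close cleanly every time
          }
        }
        fs.close();
      }
    }
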
> Datanode xceiver protocol should allow reuse of a connection
> ------------------------------------------------------------
>
> Key: HDFS-941
> URL: https://issues.apache.org/jira/browse/HDFS-941
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: data-node, hdfs client
> Affects Versions: 0.22.0
> Reporter: Todd Lipcon
> Assignee: bc Wong
> Attachments: HDFS-941-1.patch, HDFS-941-2.patch, HDFS-941-3.patch,
> HDFS-941-3.patch, HDFS-941-4.patch, HDFS-941-5.patch, HDFS-941-6.patch,
> HDFS-941-6.patch, HDFS-941-6.patch, hdfs941-1.png
>
>
> Right now each connection into the datanode xceiver only processes one
> operation.
> In the case that an operation leaves the stream in a well-defined state (e.g.,
> a client reads to the end of a block successfully), the same connection could
> be reused for a second operation. This should improve random read performance
> significantly.
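
A rough illustration of the reuse idea, using purely hypothetical names (the
patch's actual classes and wire protocol details differ): the client keeps the
datanode connection in a small cache after an operation ends in a well-defined
state, and pulls it back out for the next operation instead of opening a new
socket each time.

    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;

    public class CachedXceiverConnections {
      // Idle, reusable connections keyed by datanode address.
      private final Map<InetSocketAddress, Deque<Socket>> idle =
          new HashMap<InetSocketAddress, Deque<Socket>>();

      // Hand back an idle connection to this datanode, or null to force a new one.
      public synchronized Socket take(InetSocketAddress datanode) {
        Deque<Socket> q = idle.get(datanode);
        return (q == null || q.isEmpty()) ? null : q.pop();
      }

      // Called only after an op left the stream in a well-defined state
      // (e.g. the client read to the end of the block); otherwise the
      // caller closes the socket instead of caching it.
      public synchronized void giveBack(InetSocketAddress datanode, Socket s) {
        Deque<Socket> q = idle.get(datanode);
        if (q == null) {
          q = new ArrayDeque<Socket>();
          idle.put(datanode, q);
        }
        q.push(s);
      }
    }

The benefit is largest for random reads, where each positioned read is short and
connection setup otherwise dominates; a cache like this also needs an eviction
policy so idle sockets do not themselves become the next fd leak.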
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira