[ https://issues.apache.org/jira/browse/HADOOP-3672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12613485#action_12613485 ]
Raghu Angadi commented on HADOOP-3672:
--------------------------------------
> [...] So I don't see that RPC adds more copies.
The current implementation? I think the current implementation serializes/deserializes
all the arguments and returned objects; doesn't that imply extra copies?
Also, to match the current implementation, RPC would somehow need to support kernel-space
transfer too (this is the reason the datanode takes 10 times less CPU compared to
0.16 while serving data). It is of course possible to enhance RPC with all of these,
and maybe some of these CPU benefits are not required.
I should probably just wait for a design for RPC transfers.
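To make the "kernel transfer" point concrete, here is a minimal sketch of the zero-copy read path, presumably built on `java.nio.channels.FileChannel.transferTo`, which hands the file-to-socket copy to the kernel instead of staging bytes through user-space buffers (and hence through RPC serialization). The class and method names (`TransferToSketch`, `sendRange`) and the retry loop are illustrative assumptions, not Hadoop's actual DataNode code:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch of kernel-space ("zero-copy") transfer of a byte
// range, the mechanism the comment credits for the 0.17 datanode's lower
// CPU usage. Not Hadoop source; just a FileChannel.transferTo demo.
public class TransferToSketch {
    // transferTo may move fewer bytes than requested, so loop until done.
    public static long sendRange(FileChannel src, long offset, long count,
                                 FileChannel dst) throws IOException {
        long sent = 0;
        while (sent < count) {
            long n = src.transferTo(offset + sent, count - sent, dst);
            if (n <= 0) break;
            sent += n;
        }
        return sent;
    }

    public static void main(String[] args) throws IOException {
        Path in = Files.createTempFile("block", ".dat");   // stand-in for a block file
        Path out = Files.createTempFile("out", ".dat");    // stand-in for a socket channel
        Files.write(in, new byte[]{1, 2, 3, 4, 5, 6, 7, 8});
        try (FileChannel src = FileChannel.open(in, StandardOpenOption.READ);
             FileChannel dst = FileChannel.open(out, StandardOpenOption.WRITE)) {
            // Transfer a mid-file range, as a pread would request.
            long sent = sendRange(src, 2, 4, dst);
            System.out.println(sent);
        }
        System.out.println(java.util.Arrays.toString(Files.readAllBytes(out)));
    }
}
```

In production the destination would be the socket's `WritableByteChannel`, so the block data never enters the JVM heap at all; a file-backed destination is used above only to keep the sketch self-contained.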
> support for persistent connections to improve random read performance.
> ----------------------------------------------------------------------
>
> Key: HADOOP-3672
> URL: https://issues.apache.org/jira/browse/HADOOP-3672
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Affects Versions: 0.17.0
> Environment: Linux 2.6.9-55 , Dual Core Opteron 280 2.4Ghz , 4GB
> memory
> Reporter: George Wu
> Attachments: pread_test.java
>
>
> pread() establishes a new connection per request. YourKit Java profiles show
> that this connection overhead is pretty significant on the DataNode.
> I wrote a simple microbenchmark program which does many iterations of pread()
> from different offsets of a large file. I hacked DFSClient/DataNode code to
> re-use the same connection/DataNode request handler thread. The performance
> improvement was 7% when the data is served from disk and 80% when the data is
> served from the OS page cache.
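The connection-reuse idea the reporter benchmarked can be sketched as a toy persistent-connection server: one handler thread answers many offset/length read requests over a single TCP connection, instead of a fresh connection (and handler thread) per pread(). The framing below (two ints per request, negative length to finish) is invented for illustration and is not Hadoop's wire protocol:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical sketch of a persistent connection serving multiple reads,
// the pattern the reporter hacked into DFSClient/DataNode for the benchmark.
public class PersistentConnSketch {
    public static void main(String[] args) throws Exception {
        byte[] block = "0123456789".getBytes("US-ASCII"); // stand-in for block data
        try (ServerSocket server = new ServerSocket(0)) {
            Thread handler = new Thread(() -> {
                try (Socket s = server.accept();
                     DataInputStream in = new DataInputStream(s.getInputStream());
                     DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                    while (true) { // one thread serves many requests on one socket
                        int off = in.readInt();
                        int len = in.readInt();
                        if (len < 0) break; // client signals it is done
                        out.write(block, off, len);
                        out.flush();
                    }
                } catch (IOException ignored) { }
            });
            handler.start();
            try (Socket s = new Socket("localhost", server.getLocalPort());
                 DataOutputStream out = new DataOutputStream(s.getOutputStream());
                 DataInputStream in = new DataInputStream(s.getInputStream())) {
                for (int off : new int[]{2, 7}) { // two preads, same connection
                    out.writeInt(off);
                    out.writeInt(3);
                    out.flush();
                    byte[] buf = new byte[3];
                    in.readFully(buf);
                    System.out.println(new String(buf, "US-ASCII"));
                }
                out.writeInt(0);
                out.writeInt(-1); // done
                out.flush();
            }
            handler.join();
        }
    }
}
```

The benchmark numbers make sense under this model: when data comes from the page cache the per-request work is dominated by connection setup and thread creation, so reuse helps most there (80%), while disk latency dilutes the saving for uncached reads (7%).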