[ 
https://issues.apache.org/jira/browse/HADOOP-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14226273#comment-14226273
 ] 

Daryn Sharp commented on HADOOP-11339:
--------------------------------------

I made a patch for this last fall.  In my old age I forgot about it...  It 
implemented reusable fixed-size buffers and transparently handled the 
segmentation.  It also had a conf option to use direct byte buffers.  I'll 
try to dig it up.
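For illustration, a minimal sketch of the buffer-reuse idea described above (this is not the actual patch; the class and method names here are hypothetical): a pool hands out fixed-size buffers and takes them back after each RPC, with a flag mirroring the mentioned conf option to use direct byte buffers. A large message such as a block report would be read through the pool in fixed-size segments rather than into one fresh heap buffer per RPC.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Hypothetical sketch, not the HADOOP-11339 patch: reusable fixed-size
// buffers so each RPC does not allocate a fresh heap byte buffer.
public class RpcBufferPool {
  private final ArrayDeque<ByteBuffer> free = new ArrayDeque<>();
  private final int bufferSize;
  private final boolean direct;  // stands in for the conf option mentioned above

  public RpcBufferPool(int bufferSize, boolean direct) {
    this.bufferSize = bufferSize;
    this.direct = direct;
  }

  // Reuse a pooled buffer if one is available; otherwise allocate one.
  public synchronized ByteBuffer acquire() {
    ByteBuffer b = free.poll();
    if (b == null) {
      b = direct ? ByteBuffer.allocateDirect(bufferSize)
                 : ByteBuffer.allocate(bufferSize);
    }
    b.clear();  // reset position/limit for the next RPC
    return b;
  }

  // Return the buffer to the pool instead of letting it become garbage.
  public synchronized void release(ByteBuffer b) {
    free.offer(b);
  }
}
```

A caller would acquire a buffer per segment of the incoming RPC and release it when the segment is processed, keeping allocation rates (and full-GC pressure) flat regardless of message size.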

> Reuse buffer for Hadoop RPC
> ---------------------------
>
>                 Key: HADOOP-11339
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11339
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: ipc, performance
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>
> For Hadoop RPCs, we try to reuse the available connections.
> But when we process each RPC on the same connection, we allocate a fresh 
> heap byte buffer to store the RPC bytes. The RPC message may be very 
> large, e.g., a datanode block report. 
> This can cause a full GC, as discussed in HDFS-7435.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
