[ https://issues.apache.org/jira/browse/HADOOP-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14226074#comment-14226074 ]

Yi Liu commented on HADOOP-11339:
---------------------------------

For the buffer, the RPC data size varies from message to message, so it's hard to
reuse a single byte array.
In my initial patch, I use a chunked byte array: when a new RPC message is larger
than the current capacity, we allocate an additional chunk.
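A minimal sketch of the chunked approach described above (class and method names
are illustrative, not taken from the actual patch): data lives in fixed-size
chunks, a larger message only allocates the extra chunks it needs, and reset()
lets the same chunks serve the next RPC without a fresh heap allocation per call.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical chunked, reusable buffer sketch (not the actual patch). */
public class ChunkedBuffer {
  private static final int CHUNK_SIZE = 64 * 1024; // assumed chunk size
  private final List<byte[]> chunks = new ArrayList<>();
  private int size = 0; // bytes written for the current message

  /** Append data, allocating new chunks only when capacity runs out. */
  public void write(byte[] src, int off, int len) {
    while (len > 0) {
      int chunkIdx = size / CHUNK_SIZE;
      int chunkOff = size % CHUNK_SIZE;
      if (chunkIdx == chunks.size()) {
        // Larger message than seen before: grow by one chunk.
        chunks.add(new byte[CHUNK_SIZE]);
      }
      int n = Math.min(len, CHUNK_SIZE - chunkOff);
      System.arraycopy(src, off, chunks.get(chunkIdx), chunkOff, n);
      size += n;
      off += n;
      len -= n;
    }
  }

  /** Prepare for the next RPC; chunks are retained for reuse. */
  public void reset() {
    size = 0;
  }

  public int size() {
    return size;
  }

  public int allocatedChunks() {
    return chunks.size();
  }
}
```

After a large message has grown the buffer, smaller follow-up messages in the
same connection reuse the existing chunks instead of allocating new arrays,
which is what avoids the per-RPC heap churn.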

Thoughts?

> Reuse buffer for Hadoop RPC
> ---------------------------
>
>                 Key: HADOOP-11339
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11339
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: ipc, performance
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>
> For Hadoop RPCs, we try to reuse the available connections.
> But when we process each RPC on the same connection, we allocate a fresh 
> heap byte buffer to store the RPC bytes. The RPC message may be very 
> large, e.g., a datanode block report. 
> This can trigger a full GC, as discussed in HDFS-7435.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)