[ https://issues.apache.org/jira/browse/HBASE-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144218#comment-15144218 ]

Anoop Sam John commented on HBASE-14490:
----------------------------------------

bq. Do we need to create another pool for read and write separately? IMHO I feel 
yes.
Ya, reading read requests into one pool's buffers and write requests into 
another's... that is not really possible, because at this point in the RPC layer 
we don't know the op type. Maybe we can have one large buffer pool and one small 
buffer pool; write requests normally land in the large category. The worry was 
reserving more memory once we have 2 pools.
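
For illustration, a minimal sketch of the two-pool idea. The bucket sizes and 
class name are made up, not from any patch here; requests are routed to a 
bucket by the size they ask for:

{code:java}
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical two-tier pool: a small bucket for typical read requests and a
// large bucket where write requests normally land. Sizes are assumptions.
public class TieredBufferPool {
  private static final int SMALL = 8 * 1024;
  private static final int LARGE = 64 * 1024;

  private final ConcurrentLinkedQueue<ByteBuffer> smallQ = new ConcurrentLinkedQueue<>();
  private final ConcurrentLinkedQueue<ByteBuffer> largeQ = new ConcurrentLinkedQueue<>();

  /** Routes by requested size; assumes size <= LARGE for brevity. */
  public ByteBuffer getBuffer(int size) {
    boolean small = size <= SMALL;
    ByteBuffer b = (small ? smallQ : largeQ).poll();
    if (b == null) {
      // Both tiers reserve off-heap memory up front -- the concern noted above.
      b = ByteBuffer.allocateDirect(small ? SMALL : LARGE);
    }
    b.clear();
    b.limit(size);
    return b;
  }

  public void putBuffer(ByteBuffer b) {
    (b.capacity() <= SMALL ? smallQ : largeQ).offer(b);
  }
}
{code}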

bq. Moreover, as you know, when you use a heap buffer the Oracle implementation 
uses an internal direct buffer pool and copies data between them, and not only 
might that cause a shortage of the off-heap area, but it also adds the overhead 
of redundant copying.
Yes. That is why, along with the HBASE-11425 work, we changed the BBBPool to be 
off heap rather than on heap, and did some perf testing at that time. Having the 
BBBPool create and cache on-heap buffers had a negative impact on G1GC, as 
Elliot noticed. I forget which Jira he mentioned it in.
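
For reference, the copy being discussed: when a heap ByteBuffer is written to a 
SocketChannel, the JDK first copies it into a cached internal direct buffer 
before the native write, while a direct buffer is handed to the OS as-is. A 
tiny illustration (the class and method names are just for the example):

{code:java}
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public final class DirectVsHeapWrite {
  static void write(SocketChannel ch, byte[] payload, boolean direct) throws Exception {
    ByteBuffer buf = direct
        ? ByteBuffer.allocateDirect(payload.length) // passed to the native write as-is
        : ByteBuffer.allocate(payload.length);      // JDK copies this into an internal direct buffer first
    buf.put(payload);
    buf.flip();
    while (buf.hasRemaining()) {
      ch.write(buf);
    }
  }
}
{code}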

bq. I prefer to use a pool of direct buffers which have a fixed length, 
NIO_BUFFER_LIMIT, that is assumed to be larger than the size of the buffer of a 
native socket.
When you say this, do you mean that when we have a write request larger than 
this buffer size, we read it into N buffers rather than one and use those? I had 
some thinking in that direction and had some offline discussion with Stack and 
Ram, and was also working on a PoC patch there. We are actively working in this 
area to see what best suits us, and will be able to tell more in the near future.
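
Roughly the direction I mean, as a sketch only (BUF_SIZE and the vectored read 
are assumptions, not the PoC patch):

{code:java}
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ScatteringByteChannel;

public final class ChunkedRead {
  static final int BUF_SIZE = 64 * 1024; // assumed fixed buffer length

  /** Reads dataLength bytes into N fixed-size buffers instead of one big one. */
  static ByteBuffer[] readRequest(ScatteringByteChannel ch, int dataLength) throws IOException {
    int n = (dataLength + BUF_SIZE - 1) / BUF_SIZE;
    ByteBuffer[] bufs = new ByteBuffer[n];
    for (int i = 0; i < n; i++) {
      bufs[i] = ByteBuffer.allocateDirect(BUF_SIZE); // in real code these would come from the pool
    }
    bufs[n - 1].limit(dataLength - (n - 1) * BUF_SIZE); // trim the tail buffer
    long read = 0;
    while (read < dataLength) {
      long r = ch.read(bufs); // vectored read fills the buffers in order
      if (r < 0) {
        throw new EOFException();
      }
      read += r;
    }
    return bufs;
  }
}
{code}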

bq. So here getBuffer will create a direct buffer if the queue is empty, and we 
will throw that DBB away if the capacity is not enough? When will such off-heap 
memory areas get cleared?
Ya, that is another concern for me. We make off-heap buffers and those are not 
pooled... If we end up with many such allocate-and-throw-away cycles, won't that 
have a negative impact when a full GC is needed? When we are not able to get a 
buffer from our BBBPool (these buffers are off heap), should we be creating only 
on-heap buffers? data = ByteBuffer.allocate(dataLength); -> here I can see it is 
on heap. I mean the case when we ask the pool to give one and it is not able to 
(its capacity is reached and no buffer is available). This is becoming more 
important in the request path, as there we know the size needed.
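
Sketched with illustrative names (this is not the actual BBBPool API): pool a 
capped number of direct buffers, and once the cap is reached fall back to plain 
on-heap allocation so the GC can reclaim the overflow instead of us churning 
unpooled off-heap memory:

{code:java}
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class CappedDirectPool {
  private final int bufSize;
  private final int maxDirect;
  private final ConcurrentLinkedQueue<ByteBuffer> pool = new ConcurrentLinkedQueue<>();
  private final AtomicInteger created = new AtomicInteger();

  public CappedDirectPool(int bufSize, int maxDirect) {
    this.bufSize = bufSize;
    this.maxDirect = maxDirect;
  }

  public ByteBuffer getBuffer() {
    ByteBuffer b = pool.poll();
    if (b != null) {
      b.clear();
      return b;
    }
    if (created.incrementAndGet() <= maxDirect) {
      return ByteBuffer.allocateDirect(bufSize); // still under the off-heap cap
    }
    created.decrementAndGet();
    return ByteBuffer.allocate(bufSize); // cap reached: on heap, GC-managed
  }

  public void putBuffer(ByteBuffer b) {
    if (b.isDirect()) {
      pool.offer(b); // only direct buffers are pooled; heap fallbacks are dropped
    }
  }
}
{code}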


> [RpcServer] reuse request read buffer
> -------------------------------------
>
>                 Key: HBASE-14490
>                 URL: https://issues.apache.org/jira/browse/HBASE-14490
>             Project: HBase
>          Issue Type: Improvement
>          Components: IPC/RPC
>    Affects Versions: 2.0.0, 1.0.2
>            Reporter: Zephyr Guo
>            Assignee: Zephyr Guo
>              Labels: performance
>             Fix For: 2.0.0, 1.0.2
>
>         Attachments: 14490.hack.to.1.2.patch, ByteBufferPool.java, 
> HBASE-14490-v1.patch, HBASE-14490-v10.patch, HBASE-14490-v11.patch, 
> HBASE-14490-v12.patch, HBASE-14490-v2.patch, HBASE-14490-v3.patch, 
> HBASE-14490-v4.patch, HBASE-14490-v5.patch, HBASE-14490-v6.patch, 
> HBASE-14490-v7.patch, HBASE-14490-v8.patch, HBASE-14490-v9.patch, gc.png, 
> hits.png, test-v12-patch
>
>
> Reuse the buffer used to read requests. It's not necessary to allocate and 
> free a buffer for every request. The idea of this optimization is to reduce 
> the number of ByteBuffer allocations.
> *Modification*
> 1. {{saslReadAndProcess}}, {{processOneRpc}}, {{processUnwrappedData}} and 
> {{processConnectionHeader}} accept a ByteBuffer instead of byte[]. They can 
> move {{ByteBuffer.position}} correctly when we have read the data.
> 2. {{processUnwrappedData}} no longer uses any extra memory.
> 3. Maintain a buffer pool in each {{Connection}}.
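
For illustration, a minimal sketch of modification 1 above (the handler body is 
hypothetical): passing a ByteBuffer lets the callee consume data in place and 
advance {{position}} correctly, with no intermediate byte[] copy:

{code:java}
import java.nio.ByteBuffer;

public final class ProcessOneRpcSketch {
  /** Parses one length-prefixed request in place, leaving position() just past it. */
  static void processOneRpc(ByteBuffer buf) {
    int len = buf.getInt();          // reading the header advances position by 4
    int end = buf.position() + len;
    // ... decode the request directly from buf between position() and end ...
    buf.position(end);               // the caller sees the position moved correctly
  }
}
{code}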



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
