[ https://issues.apache.org/jira/browse/HBASE-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762979#comment-13762979 ]

stack commented on HBASE-8884:
------------------------------

[~stepinto] I was reading rpc code so I could better review an incoming rpc 
patch, and because I have notions I will likely never get to (see below).  While 
reading, I was trying to write up documentation of how it all worked.  This is 
where I ran into how opaque and convoluted its operation is, what w/ unused 
thread locals used to pass messages and then the stuff added by this patch -- 
complications we can hopefully clean up in subsequent refactorings, as you 
suggest.  Do you have any pushback on my review comments?

'Pooling of buffers across requests' is the notion that, rather than doing

          data = ByteBuffer.allocate(dataLength);

inside an rpc Reader thread every time we get a new request, we could instead -- 
since we have already read the total rpc size and so know how big the request is 
-- go to a pool of buffers and ask it for a buffer of appropriate size.  We'd 
check it out for the length of the request.  We'd need to check it back in when 
done (a likely good spot is at the tail of the Handler, when it adds the 
response to the Responder queue).  This could save us a bunch of allocations 
(and GC load, etc.).  I think we could get away w/ this given how KeyValues are 
copied into the MSLAB when we add them to the MemStore (we'd have to figure out 
what to do about those that are not copied, i.e. KeyValues that are large).
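
For concreteness, here is a minimal sketch of the check-out/check-in idea; the 
ReaderBufferPool class and its method names are mine for illustration, not 
from any attached patch:

    import java.nio.ByteBuffer;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.atomic.AtomicInteger;

    /**
     * Sketch only.  Pools fixed-capacity buffers; oversized requests fall
     * back to a one-off allocation just as we do today.
     */
    public class ReaderBufferPool {
      private final ConcurrentLinkedQueue<ByteBuffer> pool =
          new ConcurrentLinkedQueue<ByteBuffer>();
      private final AtomicInteger pooled = new AtomicInteger();
      private final int bufferSize;  // capacity of every pooled buffer
      private final int maxPooled;   // cap so an idle server does not hoard memory

      public ReaderBufferPool(int bufferSize, int maxPooled) {
        this.bufferSize = bufferSize;
        this.maxPooled = maxPooled;
      }

      /** Called by the Reader once the total rpc size has been read. */
      public ByteBuffer checkOut(int dataLength) {
        if (dataLength > bufferSize) {
          // Too big to pool (e.g. a large KeyValue); allocate as we do today.
          return ByteBuffer.allocate(dataLength);
        }
        ByteBuffer b = pool.poll();
        if (b == null) {
          b = ByteBuffer.allocate(bufferSize);
        } else {
          pooled.decrementAndGet();
        }
        b.clear();
        b.limit(dataLength);
        return b;
      }

      /** Called at the tail of the Handler once the response is on the Responder queue. */
      public void checkIn(ByteBuffer b) {
        // Racy check against the cap, but overshooting by a buffer or two is harmless.
        if (b.capacity() == bufferSize && pooled.get() < maxPooled) {
          pooled.incrementAndGet();
          pool.offer(b);
        } // otherwise drop it and let GC have it
      }
    }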

If the above worked, we could then entertain making the pool a pool of direct 
byte buffers.  (Downsides of DBBs are that they take a while to allocate and 
their cleanup is unpredictable -- having them in a pool that we set up on server 
start would skirt some of these downsides.)  The copy from the socket channel 
to the DBB would be offheap, making for more savings.
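
A sketch of what the Reader side might look like, assuming a variant of the 
ReaderBufferPool above that allocates w/ ByteBuffer.allocateDirect and is 
filled once at server start:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    public final class DirectReadSketch {
      /** Read a request of dataLength bytes into a pooled direct buffer. */
      static ByteBuffer readRequest(SocketChannel channel, ReaderBufferPool pool,
          int dataLength) throws IOException {
        // With a direct buffer, the copy from the kernel stays offheap.
        ByteBuffer data = pool.checkOut(dataLength);
        while (data.hasRemaining()) {
          if (channel.read(data) < 0) {
            throw new IOException("connection closed before full rpc was read");
          }
        }
        data.flip();
        return data;  // the Handler checks it back in when done
      }
    }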

If the memstore implementation were itself offheap... but now I am into fantasy 
so will stop.

Thanks
                
> Pluggable RpcScheduler
> ----------------------
>
>                 Key: HBASE-8884
>                 URL: https://issues.apache.org/jira/browse/HBASE-8884
>             Project: HBase
>          Issue Type: Improvement
>          Components: IPC/RPC
>            Reporter: Chao Shi
>            Assignee: Chao Shi
>             Fix For: 0.98.0
>
>         Attachments: hbase-8884.patch, hbase-8884-v2.patch, 
> hbase-8884-v3.patch, hbase-8884-v4.patch, hbase-8884-v5.patch, 
> hbase-8884-v6.patch, hbase-8884-v7.patch, hbase-8884-v8.patch
>
>
> Today, the RPC scheduling mechanism is pretty simple: it executes requests in 
> isolated thread-pools based on their priority. In the current implementation, 
> all normal get/put requests use the same pool. We'd like to add some 
> per-user or per-region isolation, so that a misbehaving user/region cannot 
> easily saturate the thread-pool and cause a DoS for others. The idea is 
> similar to the FairScheduler in MR. The current scheduling code is not 
> standalone and is mixed in with other logic (Connection#processRequest). This 
> issue is the first step: extract it to an interface, so that people are free 
> to write and test their own implementations.
> This patch doesn't make it completely pluggable yet, as some parameters are 
> passed via the constructor. This is because HMaster and HRegionServer both use 
> RpcServer and they have different thread-pool size configs. Let me know if you 
> have a solution to this.
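
For reference, a rough sketch of what the extracted interface might look like 
(the type and method names here are illustrative, not necessarily what the 
attached patches use):

    public interface RpcScheduler {
      /** Start any handler pools the implementation owns. */
      void start();

      /** Stop and drain. */
      void stop();

      /**
       * Takes over the queueing now done inline in Connection#processRequest;
       * an implementation might pick a per-user or per-region handler pool here.
       */
      void dispatch(RpcServer.Call call) throws InterruptedException;
    }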

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
