[ https://issues.apache.org/jira/browse/HBASE-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hiroshi Ikeda updated HBASE-14479:
----------------------------------
    Attachment: HBASE-14479-V3-experimental_branch-1.patch

Added an experimental patch for branch-1. Sorry, I don't even know whether it 
compiles :P

Reader threads are used for handling requests directly, except the last thread, 
which still reads requests from connections that have no queued request. 
RpcScheduler is completely bypassed (it is still created, but in vain); since 
RpcScheduler seems to feed some metrics, those metrics will give no information. 
At most one request is queued from each connection (although each connection 
might parse its data into multiple requests, for now).
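
For illustration only, here is a minimal Java sketch of the Leader/Followers 
reader loop described above, not the patch itself: threads take turns owning the 
shared read selector, and whichever thread is the leader picks up one readable 
connection, promotes a follower, and then handles the request on the same 
thread, so RpcScheduler never sees it. The class and method names below 
(ReaderPool, processRequest) are made up, and interest-op handling and request 
parsing are omitted.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.Semaphore;

public class ReaderPool {
  private final Selector readSelector;
  private final Semaphore leaderToken = new Semaphore(1); // holder is the leader

  public ReaderPool(Selector readSelector) {
    this.readSelector = readSelector;
  }

  public void start(int numReaders) {
    for (int i = 0; i < numReaders; i++) {
      new Thread(this::readerLoop, "reader-" + i).start();
    }
  }

  private void readerLoop() {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        leaderToken.acquire();                 // become the leader
        SocketChannel channel = null;
        try {
          readSelector.select();
          Iterator<SelectionKey> it = readSelector.selectedKeys().iterator();
          if (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();
            channel = (SocketChannel) key.channel();
          }
        } finally {
          leaderToken.release();               // promote a follower before working
        }
        if (channel != null) {
          processRequest(channel);             // handle the request on this thread
        }
      }
    } catch (InterruptedException | IOException e) {
      Thread.currentThread().interrupt();
    }
  }

  private void processRequest(SocketChannel channel) throws IOException {
    // Placeholder: a real reader would parse the RPC header/body and run the call.
    channel.read(ByteBuffer.allocate(4096));
  }
}
{code}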

SelectionKey.OP_READ is not restored until no more data is found available in 
the stream, to avoid the overhead of repeatedly changing the registration with 
the read selector.
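
As a rough sketch of that interest-op handling, assuming OP_READ is dropped 
while a connection still has a queued request, a hypothetical helper like the 
following captures the idea (drainAndRearm is a made-up name):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

final class ReadInterestToggle {
  static void drainAndRearm(SelectionKey key, ByteBuffer buf) throws IOException {
    SocketChannel channel = (SocketChannel) key.channel();
    // Stop selecting this connection while we are still consuming its data.
    key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
    int n;
    do {
      buf.clear();
      n = channel.read(buf);
      if (n > 0) {
        buf.flip();
        // parseRequests(buf) would go here in a real reader.
      }
    } while (n > 0);                   // keep reading until no data is left
    if (n == 0) {
      // Only now is OP_READ restored, so the registration changes once per burst
      // of data instead of once per request.
      key.interestOps(key.interestOps() | SelectionKey.OP_READ);
      key.selector().wakeup();
    }
    // n < 0 means the peer closed the connection; the caller would clean up.
  }
}
{code}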

This patch simply uses one FIFO queue because it is experimental, but it is 
possible that a single FIFO queue would turn out to be the best choice anyway.
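
Purely for illustration, the single shared FIFO could be as simple as the sketch 
below; the class and method names are made up, not taken from the patch:

{code:java}
import java.util.concurrent.LinkedBlockingQueue;

// Holds the queued requests (at most one pending request per connection).
final class RequestFifo<R> {
  private final LinkedBlockingQueue<R> fifo = new LinkedBlockingQueue<>();

  void enqueue(R request) { fifo.offer(request); }                 // from a reader
  R dequeue() throws InterruptedException { return fifo.take(); }  // FIFO order
}
{code}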

> Apply the Leader/Followers pattern to RpcServer's Reader
> --------------------------------------------------------
>
>                 Key: HBASE-14479
>                 URL: https://issues.apache.org/jira/browse/HBASE-14479
>             Project: HBase
>          Issue Type: Improvement
>          Components: IPC/RPC, Performance
>            Reporter: Hiroshi Ikeda
>            Assignee: Hiroshi Ikeda
>            Priority: Minor
>         Attachments: HBASE-14479-V2 (1).patch, HBASE-14479-V2.patch, 
> HBASE-14479-V2.patch, HBASE-14479-V3-experimental_branch-1.patch, 
> HBASE-14479.patch, flamegraph-19152.svg, flamegraph-32667.svg, gc.png, 
> gets.png, io.png, median.png
>
>
> {{RpcServer}} uses multiple selectors to read data for load distribution, but 
> the distribution is done simply by round-robin. It is uncertain, especially 
> over a long run, whether the load is divided equally and resources are used 
> without being wasted.
> Moreover, the multiple selectors may cause excessive context switches that give 
> priority to low latency (even though we just add the requests to queues), and 
> this can reduce the throughput of the whole server.
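
For readers unfamiliar with the current code, the round-robin distribution 
mentioned in the description amounts to something like the sketch below; the 
class and field names are illustrative, not the actual RpcServer internals:

{code:java}
import java.nio.channels.SocketChannel;

final class RoundRobinReaders {
  interface Reader { void add(SocketChannel connection); }

  private final Reader[] readers;
  private int next = 0;

  RoundRobinReaders(Reader[] readers) { this.readers = readers; }

  // Each accepted connection is pinned to whichever reader the rotating index
  // lands on; nothing rebalances the load afterwards.
  synchronized void register(SocketChannel connection) {
    readers[next].add(connection);
    next = (next + 1) % readers.length;
  }
}
{code}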



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
