Yu:
With such positive results, more HBase users are likely to ask for a backport 
of the offheap read path patches.

Do you think you or your coworkers have the bandwidth to publish a backport 
for branch-1?

Thanks 

> On Nov 18, 2016, at 12:11 AM, Yu Li <car...@gmail.com> wrote:
> 
> Dear all,
> 
> We have backported the read path offheap work (HBASE-11425) to our customized 
> hbase-1.1.2 (thanks @Anoop for the help/support) and have run it online for 
> more than a month, and we would like to share our experience, for what it's 
> worth (smile).
> 
> Generally speaking, we gained better and more stable throughput/performance 
> with offheap; some details below:
> 1. QPS becomes more stable with offheap
> 
> Performance w/o offheap: [chart image, not preserved in the archive]
> 
> Performance w/ offheap: [chart image, not preserved in the archive]
> 
> These data come from our online A/B test cluster (450 physical machines, 
> each with 256GB memory and 64 cores) running real-world workloads. They show 
> that with offheap we get more stable throughput as well as better 
> performance.
> 
> We are not showing full production data here because the online release 
> bundled both offheap and NettyRpcServer, so there is no standalone 
> comparison for offheap alone.
> 
> 2. Full GC frequency and cost
> 
> Average full GC STW time was reduced from 11s to 7s with offheap.
> 
> 3. Young GC frequency and cost
> 
> No degradation in young GC frequency or cost was observed with offheap (for 
> how such pauses can be measured, see the GC logging sketch after item 4).
> 
> 4. Peak throughput of one single RS
> 
> On Singles' Day (11/11), the peak throughput of a single RS reached 100K 
> QPS, of which 90K came from Gets. Combining this with the network in/out 
> data (average result size ≈ outbound bytes/s divided by Get QPS), we could 
> tell that the average result size of a Get request is ~1KB.
> 
> [chart image, not preserved in the archive]
> 
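> As referenced under item 3, here is a minimal sketch of JDK8 GC logging 
> flags that could be used to measure these GC pauses, set in hbase-env.sh (a 
> typical setup, not necessarily the exact flags we run with):
> 
>   # Log GC details and total application stopped (STW) time for the RS JVM.
>   export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
>     -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
>     -XX:+PrintGCApplicationStoppedTime \
>     -Xloggc:${HBASE_LOG_DIR}/gc-regionserver.log"
> 
> Full GC pause times can then be read from the PrintGCDetails entries, and 
> total STW time from the PrintGCApplicationStoppedTime lines.
> 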
> Offheap BucketCache is used instead of the LruBlockCache on all of our 
> online machines (more than 1,600 nodes), so the above QPS comes from the 
> offheap bucket cache, together with NettyRpcServer (HBASE-15756).
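> 
> For anyone who wants to try the same setup, here is a minimal sketch of the 
> relevant hbase-site.xml properties (values are illustrative rather than our 
> production tuning, and the keys in our 1.1.2 backport may differ slightly 
> from upstream):
> 
>   <property>
>     <name>hbase.bucketcache.ioengine</name>
>     <value>offheap</value>
>   </property>
>   <property>
>     <!-- Total bucket cache size in MB; illustrative value. -->
>     <name>hbase.bucketcache.size</name>
>     <value>40960</value>
>   </property>
>   <property>
>     <!-- Upstream key for selecting the Netty RPC server (HBASE-15756). -->
>     <name>hbase.rpc.server.impl</name>
>     <value>org.apache.hadoop.hbase.ipc.NettyRpcServer</value>
>   </property>
> 
> HBASE_OFFHEAPSIZE in hbase-env.sh also needs to be set high enough to cover 
> the bucket cache plus direct-buffer overhead.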
> 
> Just let us know if you have any comments. Thanks.
> 
> Best Regards,
> Yu