I cannot see the images either... Du, Jingcheng <jingcheng...@intel.com> wrote on Friday, November 18, 2016 at 16:57:
> Thanks Yu for sharing, great achievements.
> It seems the images cannot be displayed? Or maybe it is just me?
>
> Regards,
> Jingcheng
>
> From: Yu Li [mailto:car...@gmail.com]
> Sent: Friday, November 18, 2016 4:11 PM
> To: user@hbase.apache.org; d...@hbase.apache.org
> Subject: Use experience and performance data of offheap from Alibaba
> online cluster
>
> Dear all,
>
> We have backported the read-path offheap work (HBASE-11425) to our
> customized hbase-1.1.2 (thanks @Anoop for the help/support) and have run
> it online for more than a month. We would like to share our experience,
> for what it's worth (smile).
>
> Generally speaking, we gained better and more stable
> throughput/performance with offheap. Some details below:
>
> 1. QPS becomes more stable with offheap
>
> Performance w/o offheap:
>
> [cid:part1.582d4b6424f071c]
>
> Performance w/ offheap:
>
> [cid:part2.582d4b6424f071c]
>
> These data come from our online A/B test cluster (450 physical machines,
> each with 256G memory and 64 cores) running real-world workloads. They
> show that with offheap we get a more stable throughput as well as better
> performance.
>
> We are not showing the full online data here because online we published
> the version with both offheap and NettyRpcServer together, so there is no
> standalone comparison for offheap alone.
>
> 2. Full GC frequency and cost
>
> Average Full GC STW time is reduced from 11s to 7s with offheap.
>
> 3. Young GC frequency and cost
>
> No performance degradation observed with offheap.
>
> 4. Peak throughput of a single RS
>
> On Singles' Day (11/11), the peak throughput of a single RegionServer
> reached 100K QPS, of which 90K came from Get. Combined with the network
> in/out data, this tells us the average result size of a Get request is
> ~1KB.
>
> [cid:part3.582d4b6424f071c]
>
> Offheap is used on all online machines (more than 1600 nodes) instead of
> LruCache, so the above QPS is served from the offheap BucketCache, along
> with NettyRpcServer (HBASE-15756).
>
> Just let us know if you have any comments. Thanks.
>
> Best Regards,
> Yu
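For anyone who wants to try a similar setup: offheap BucketCache is enabled through standard HBase configuration keys. Below is a minimal sketch in Java, only to show the relevant keys in one place; the property names follow the HBase reference guide, and the sizes are illustrative assumptions, not the values used on the cluster described above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class OffheapBucketCacheSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Put the L2 block cache off the Java heap instead of relying on LruCache alone.
            conf.set("hbase.bucketcache.ioengine", "offheap");
            // BucketCache size in MB (illustrative; must fit within the
            // RegionServer's -XX:MaxDirectMemorySize setting).
            conf.set("hbase.bucketcache.size", "4096");
            // Keep a small on-heap LRU cache for index and bloom blocks.
            conf.setFloat("hfile.block.cache.size", 0.2f);
            System.out.println("ioengine = " + conf.get("hbase.bucketcache.ioengine"));
        }
    }

In practice these keys would go into hbase-site.xml on the RegionServers rather than being set programmatically; the code form above is just a compact way to show them.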