[ 
https://issues.apache.org/jira/browse/HBASE-21657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733059#comment-16733059
 ] 

Zheng Hu edited comment on HBASE-21657 at 1/4/19 3:31 AM:
----------------------------------------------------------

I made a performance comparison between hbase2.0.4 without patch.v2 and 
hbase2.0.4 with patch.v2:
||Comparison||QPS||FlameGraph||L2 CacheHitRatio||
|HBase2.0.4 without patch.v2|9979.8 ops/sec|[^hbase2.0.4-ssd-scan-traces.svg]|~95%|
|HBase2.0.4 with patch.v2|14392.7 ops/sec|[^hbase2.0.4-ssd-scan-traces.2.svg]|~95%|

Later, I'll provide more details about the QPS & latency.

BTW, my testing environment was: 
 5 nodes: 12*800G SSD / 24 cores / 128GB memory per node (50G on-heap + 50G 
off-heap for each RS, with 36G allocated to the BucketCache). 
 I used the YCSB 100% scan workload (by default, YCSB generates each scan 
with a random limit in [1...1000]):
{code:java}
table=ycsb-test
columnfamily=C
recordcount=100000000
operationcount=100000000
workload=com.yahoo.ycsb.workloads.CoreWorkload
fieldlength=100
fieldcount=1

clientbuffering=true
  
readallfields=true
writeallfields=true
  
readproportion=0
updateproportion=0
scanproportion=1.0
insertproportion=0
  
requestdistribution=zipfian
{code}
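For reference, a workload file like the one above would typically be launched along these lines; the binding name, workload path, and thread count below are assumptions for illustration, not taken from the original test:
{code}
# Hypothetical invocation (paths, binding, and thread count are assumptions):
# run the 100% scan workload above against the ycsb-test table.
bin/ycsb run hbase20 -P workloads/scan-100 \
    -cp $HBASE_HOME/conf \
    -p table=ycsb-test -p columnfamily=C \
    -threads 100 -s
{code}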




> PrivateCellUtil#estimatedSerializedSizeOf has been the bottleneck in 100% 
> scan case.
> ------------------------------------------------------------------------------------
>
>                 Key: HBASE-21657
>                 URL: https://issues.apache.org/jira/browse/HBASE-21657
>             Project: HBase
>          Issue Type: Bug
>          Components: Performance
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>             Fix For: 3.0.0, 2.2.0, 2.1.3, 2.0.5
>
>         Attachments: HBASE-21657.v1.patch, HBASE-21657.v2.patch, 
> hbase2.0.4-ssd-scan-traces.2.svg, hbase2.0.4-ssd-scan-traces.svg, 
> hbase20-ssd-100-scan-traces.svg
>
>
> We are evaluating the performance of branch-2, and found that the scan 
> throughput on an SSD cluster is almost the same as on an HDD cluster. So I 
> made a FlameGraph on the RS, and found that 
> PrivateCellUtil#estimatedSerializedSizeOf costs about 29% of CPU; obviously, 
> it has become the bottleneck in the 100% scan case.
> See the [^hbase20-ssd-100-scan-traces.svg]
> BTW, in our XiaoMi branch, we introduced 
> HRegion#updateReadRequestsByCapacityUnitPerSecond to sum up the sizes of 
> cells (for metrics monitoring), so the performance loss seems to have been 
> amplified there.
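For readers outside the HBase codebase: the method in question estimates, for every cell returned by a scan, the KeyValue-serialized size from the cell's component lengths. A minimal, hypothetical sketch (NOT HBase's actual PrivateCellUtil implementation; the interface and overhead constant are illustrative only) of why this arithmetic lands on the hot path:

{code:java}
// Hypothetical sketch, not HBase's actual code: a KeyValue-style
// serialized-size estimate computed from component lengths. In a 100% scan
// workload this runs once per cell returned, which is why it can dominate
// the flame graph.
interface SimpleCell {
    short rowLength();
    byte familyLength();
    int qualifierLength();
    int valueLength();
    int tagsLength();
}

final class SizeEstimator {
    // KeyValue fixed overhead: key length (4) + value length (4)
    // + row length (2) + family length (1) + timestamp (8) + type (1).
    static final int FIXED_OVERHEAD = 4 + 4 + 2 + 1 + 8 + 1;

    static int estimatedSerializedSizeOf(SimpleCell c) {
        return FIXED_OVERHEAD + c.rowLength() + c.familyLength()
            + c.qualifierLength() + c.valueLength() + c.tagsLength();
    }

    public static void main(String[] args) {
        // A cell shaped like the YCSB rows above: family "C", one 100-byte field.
        SimpleCell cell = new SimpleCell() {
            public short rowLength() { return 23; }     // e.g. a "user..." key
            public byte familyLength() { return 1; }    // family "C"
            public int qualifierLength() { return 6; }  // "field0"
            public int valueLength() { return 100; }    // fieldlength=100
            public int tagsLength() { return 0; }
        };
        System.out.println(estimatedSerializedSizeOf(cell)); // prints 150
    }
}
{code}

Cheap on its own, but at ~10k scans/sec with up to 1000 cells per scan it is invoked millions of times per second, so even a few extra dispatches per call show up as whole percentage points of CPU.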



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
