[ https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16425066#comment-16425066 ]

stack edited comment on HBASE-20188 at 4/4/18 7:44 AM:
-------------------------------------------------------

I added a third sheet named "Short Circuit Reads 25M Run" at [1] with timings 
taken with short-circuit reads in place for hbase1 and hbase2. Here are the findings:

{quote}
Findings: hbase1.x performs better than 2.x in pure-read and pure-write modes. 
Under mixed load (workloada, 50/50 read/write), hbase2 is better no matter what 
combination. The FastPath RPC scheduler, the default for hbase2, is better than 
the hbase1 RPC scheduler, though it looks ugly in thread dumps with all threads 
seemingly backed up on its Semaphore coordinator. hbase2 uses more CPU but seems 
to have a flatter GC profile. In-memory compaction has a cost: for load, with no 
in-memory compaction we are 5% slower than hbase1, but with in-memory compaction, 
11% slower. For workloada, with no in-memory compaction we are 24% faster than 
hbase1, and with in-memory compaction, 17% faster. For workloadc, with no 
in-memory compaction we are 2% slower; with it, 13% slower.
{quote}

[Edited the above. The numbers are a bit worse than first thought. I mistakenly 
ran with CCSMap in place.]

1. https://docs.google.com/spreadsheets/d/1w2NBqAPFthG8Ib4C0pHpLARYpWoIF2Vck2vHZW77zE4/edit#gid=1651250875
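For reference, a minimal sketch of how the two knobs being compared above are 
toggled. The in-memory compaction property name and values are as in hbase 2.0; 
the table/family names, the YCSB binding name, and the record counts are just 
example assumptions to match the 25M-row runs, not the exact commands used here:

```shell
# Cluster-wide default, set in hbase-site.xml (values: NONE, BASIC, EAGER):
#   <property>
#     <name>hbase.hregion.compacting.memstore.type</name>
#     <value>NONE</value>
#   </property>
#
# Or per column family, from the hbase shell (hypothetical table/family names):
#   alter 'usertable', {NAME => 'family', IN_MEMORY_COMPACTION => 'NONE'}

# Example YCSB load + mixed 50/50 run (binding name depends on the YCSB build;
# recordcount sized to match the 25M-row sheet):
bin/ycsb load hbase12 -P workloads/workloada \
  -p table=usertable -p columnfamily=family -p recordcount=25000000
bin/ycsb run hbase12 -P workloads/workloada \
  -p table=usertable -p columnfamily=family -p operationcount=25000000
```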


was (Author: stack):
I added a third sheet named "Short Circuit Reads 25M Run" at [1] with timings 
taken with short-circuit reads in place for hbase1 and hbase2. Here are the findings:

{quote}
Findings: hbase1.x performs better than 2.x in pure-read and pure-write modes 
(but we are now within 10%). Under mixed load (workloada, 50/50 read/write), 
hbase2 is better no matter what combination. The FastPath RPC scheduler, the 
default for hbase2, is better than the hbase1 RPC scheduler, though it looks 
ugly in thread dumps with all threads seemingly backed up on its Semaphore 
coordinator. hbase2 uses more CPU but seems to have a flatter GC profile. 
In-memory compaction has a cost: for load, with no in-memory compaction we are 
4% slower than hbase1, but with in-memory compaction, 9% slower. For workloada, 
with no in-memory compaction we are 25% faster than hbase1, and with in-memory 
compaction, 17% faster. For workloadc, with no in-memory compaction we are 2% 
slower; with it, 5% slower.
{quote}

1. https://docs.google.com/spreadsheets/d/1w2NBqAPFthG8Ib4C0pHpLARYpWoIF2Vck2vHZW77zE4/edit#gid=1651250875

> [TESTING] Performance
> ---------------------
>
>                 Key: HBASE-20188
>                 URL: https://issues.apache.org/jira/browse/HBASE-20188
>             Project: HBase
>          Issue Type: Umbrella
>          Components: Performance
>            Reporter: stack
>            Assignee: stack
>            Priority: Blocker
>             Fix For: 2.0.0
>
>         Attachments: CAM-CONFIG-V01.patch, HBASE-20188.sh, HBase 2.0 
> performance evaluation - Basic vs None_ system settings.pdf, 
> ITBLL2.5B_1.2.7vs2.0.0_cpu.png, ITBLL2.5B_1.2.7vs2.0.0_gctime.png, 
> ITBLL2.5B_1.2.7vs2.0.0_iops.png, ITBLL2.5B_1.2.7vs2.0.0_load.png, 
> ITBLL2.5B_1.2.7vs2.0.0_memheap.png, ITBLL2.5B_1.2.7vs2.0.0_memstore.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png, 
> YCSB_GC_TIME.png, YCSB_IN_MEMORY_COMPACTION=NONE.ops.png, YCSB_MEMSTORE.png, 
> YCSB_OPs.png, YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png, 
> flamegraph-1072.1.svg, flamegraph-1072.2.svg, 
> lock.127.workloadc.20180402T200918Z.svg, 
> lock.2.memsize2.c.20180403T160257Z.svg, tree.txt
>
>
> How does 2.0.0 compare to old versions? Is it faster, slower? There is rumor 
> that it is much slower, that the problem is the asyncwal writing. Does 
> in-memory compaction slow us down or speed us up? What happens when you 
> enable offheaping?
> Keep notes here in this umbrella issue. Need to be able to say something 
> about perf when 2.0.0 ships.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
