[ https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16440606#comment-16440606 ]

Anoop Sam John edited comment on HBASE-20188 at 4/19/18 3:46 AM:
-----------------------------------------------------------------

PE command:
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --presplit=40 --size=30 --columns=10 --valueSize=100 --writeToWAL=false --inmemoryCompaction=NONE randomWrite 100
The RS has a 40 GB Xmx, and it is a single-server setup.
I still need to calculate the exact numbers, but the net PE run time for 2.0 is 3x that of 1.4.2.
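
For reference, a heap like that is normally set in hbase-env.sh. A minimal sketch, assuming only the 40 GB Xmx stated above (pinning Xms to Xmx is illustrative, not stated in this run):

  # hbase-env.sh: 40 GB RegionServer heap; Xms pinned to Xmx to avoid heap resizing
  export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms40g -Xmx40g"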

[~eshcar]  Please note that we have only 40 regions and the global memstore size is enough for all regions to grow to 4x the flush size. If the global size is smaller, you might not see the RegionTooBusyException and retries, but writes would still be blocked, and in that case the perf difference might not be this large. If you test, please do it this way only.
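
To reproduce that condition, the relevant knobs are the per-region flush size, the block multiplier, and the global memstore fraction: with the default 128 MB flush size and 4x multiplier, 40 regions need 40 x 128 MB x 4 = 20 GB of memstore headroom, i.e. a global fraction of at least 0.5 on a 40 GB heap. A hedged hbase-site.xml sketch (the property names are standard; the values actually used in this run are not stated above, so treat the numbers as illustrative):

  <!-- hbase-site.xml: let all 40 regions grow to 4x flush size before blocking -->
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>134217728</value> <!-- 128 MB per-region flush trigger (default) -->
  </property>
  <property>
    <name>hbase.hregion.memstore.block.multiplier</name>
    <value>4</value> <!-- block a region's updates at 4 x flush size (default) -->
  </property>
  <property>
    <name>hbase.regionserver.global.memstore.size</name>
    <value>0.5</value> <!-- heap fraction: 0.5 x 40 GB = 20 GB >= 40 x 128 MB x 4 -->
  </property>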



> [TESTING] Performance
> ---------------------
>
>                 Key: HBASE-20188
>                 URL: https://issues.apache.org/jira/browse/HBASE-20188
>             Project: HBase
>          Issue Type: Umbrella
>          Components: Performance
>            Reporter: stack
>            Assignee: stack
>            Priority: Blocker
>             Fix For: 2.0.0
>
>         Attachments: CAM-CONFIG-V01.patch, HBASE-20188-xac.sh, 
> HBASE-20188.sh, HBase 2.0 performance evaluation - 8GB(1).pdf, HBase 2.0 
> performance evaluation - 8GB.pdf, HBase 2.0 performance evaluation - Basic vs 
> None_ system settings.pdf, ITBLL2.5B_1.2.7vs2.0.0_cpu.png, 
> ITBLL2.5B_1.2.7vs2.0.0_gctime.png, ITBLL2.5B_1.2.7vs2.0.0_iops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_load.png, ITBLL2.5B_1.2.7vs2.0.0_memheap.png, 
> ITBLL2.5B_1.2.7vs2.0.0_memstore.png, ITBLL2.5B_1.2.7vs2.0.0_ops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png, 
> YCSB_GC_TIME.png, YCSB_IN_MEMORY_COMPACTION=NONE.ops.png, YCSB_MEMSTORE.png, 
> YCSB_OPs.png, YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png, 
> flamegraph-1072.1.svg, flamegraph-1072.2.svg, hbase-env.sh, hbase-site.xml, 
> hbase-site.xml, hits.png, hits_with_fp_scheduler.png, 
> lock.127.workloadc.20180402T200918Z.svg, 
> lock.2.memsize2.c.20180403T160257Z.svg, perregion.png, run_ycsb.sh, 
> total.png, tree.txt, workloadx, workloadx
>
>
> How does 2.0.0 compare to old versions? Is it faster or slower? There is a 
> rumor that it is much slower, and that the problem is the asyncwal writing. 
> Does in-memory compaction slow us down or speed us up? What happens when you 
> enable offheaping?
> Keep notes here in this umbrella issue. Need to be able to say something 
> about perf when 2.0.0 ships.


