[
https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424604#comment-16424604
]
Eshcar Hillel commented on HBASE-20188:
---------------------------------------
Attached are the results of evaluations on *SSD* machines [^HBase 2.0
performance evaluation - Basic vs None_ system settings.pdf], and the script
to run them [^HBASE-20188.sh] (which is based on the script by Stack).
The setup is also similar: 1 master, 1 RS with an 8GB heap, 1 YCSB client, and
the underlying HDFS set to 3-way replication.
We compare Basic with its default configuration vs None under different system
settings: cms/mslab vs cms/no-mslab vs g1gc/no-mslab.
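The compared settings map onto a small set of knobs; a minimal sketch is below
(the exact values used are in the attached [^HBASE-20188.sh]; the property names
and JVM flags are the standard HBase 2.0 ones, the values are shown only for
illustration):
{code}
# hbase-site.xml -- in-memory compaction policy under test:
#   hbase.hregion.compacting.memstore.type = BASIC    (or NONE)
# hbase-site.xml -- MSLAB on/off:
#   hbase.hregion.memstore.mslab.enabled   = true     (false for the no-mslab runs)

# hbase-env.sh -- 8GB RS heap, collector chosen per run:
export HBASE_REGIONSERVER_OPTS="-Xms8g -Xmx8g -XX:+UseConcMarkSweepGC"   # cms runs
# export HBASE_REGIONSERVER_OPTS="-Xms8g -Xmx8g -XX:+UseG1GC"            # g1gc runs
{code}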
Summary of results:
1) None outperforms Basic in a uniform distribution of insert-only operations
that includes multiple split events
2) Basic outperforms None in a mixed workload with zipfian distribution
3) None is slightly better than Basic in read-only zipfian workload
4) Not using MSLAB improves performance in the zipfian-distribution workloads
and has a negative effect on the insert-only uniform workload
5) g1gc performs worse in all cases; this could be due to a lack of tuning
It is important to note that each configuration was tested only once, so any of
these runs could be an outlier, either good or bad.
Next we will come up with a workload that demonstrates the advantage of
in-memory compaction, and we will continue with benchmarks to determine optimal
default values for in-memory compaction, namely the portion of the active
segment, the length of the pipeline, etc. (the relevant knobs are sketched
below).
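For those follow-up benchmarks, a hypothetical sweep over the two knobs could
look as follows (the property names are my reading of the 2.0 CompactingMemStore
code and should be confirmed; the values are placeholders, not recommendations):
{code}
# Hypothetical parameter sweep for the in-memory compaction defaults:
#   threshold factor = portion of the flush size at which the active segment
#                      is pushed into the compaction pipeline
#   pipeline limit   = number of segments the pipeline may hold before merging
for factor in 0.01 0.05 0.1; do
  for pipeline in 1 2 4; do
    echo "run YCSB with" \
         "hbase.memstore.inmemoryflush.threshold.factor=$factor" \
         "hbase.hregion.compacting.pipeline.segments.limit=$pipeline"
    # ...restart the RS with these overrides and rerun the attached workload script
  done
done
{code}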
> [TESTING] Performance
> ---------------------
>
> Key: HBASE-20188
> URL: https://issues.apache.org/jira/browse/HBASE-20188
> Project: HBase
> Issue Type: Umbrella
> Components: Performance
> Reporter: stack
> Assignee: stack
> Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: CAM-CONFIG-V01.patch, HBASE-20188.sh, HBase 2.0
> performance evaluation - Basic vs None_ system settings.pdf,
> ITBLL2.5B_1.2.7vs2.0.0_cpu.png, ITBLL2.5B_1.2.7vs2.0.0_gctime.png,
> ITBLL2.5B_1.2.7vs2.0.0_iops.png, ITBLL2.5B_1.2.7vs2.0.0_load.png,
> ITBLL2.5B_1.2.7vs2.0.0_memheap.png, ITBLL2.5B_1.2.7vs2.0.0_memstore.png,
> ITBLL2.5B_1.2.7vs2.0.0_ops.png,
> ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png,
> YCSB_GC_TIME.png, YCSB_IN_MEMORY_COMPACTION=NONE.ops.png, YCSB_MEMSTORE.png,
> YCSB_OPs.png, YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png,
> flamegraph-1072.1.svg, flamegraph-1072.2.svg, tree.txt
>
>
> How does 2.0.0 compare to old versions? Is it faster, slower? There is rumor
> that it is much slower, that the problem is the asyncwal writing. Does
> in-memory compaction slow us down or speed us up? What happens when you
> enable offheaping?
> Keep notes here in this umbrella issue. Need to be able to say something
> about perf when 2.0.0 ships.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)