[ https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403691#comment-16403691 ]
stack commented on HBASE-20188:
-------------------------------

[~eshcar] and [~anastas] Did you two see the svgs and the tree.txt file attached here? The CSLM is the umbrella under which much of the CPU usage in a mostly-write workload (ITBLL) occurs. The leaves are Cell compares (rowCompare in particular, given rows are mostly unique in this dataset). You can see some CPU usage by in-memory compaction, again as an umbrella over compares.

I was wondering what your thoughts were regards our doing MORE aggressive in-memory compaction, moving Cells from the CSLM to your flat structures: could it save on the number of overall compares (and hence CPU)? Even if not on compares, the overhead from the CSLM itself shows up as a pretty big CPU user too. What you reckon?

> [TESTING] Performance
> ---------------------
>
>                 Key: HBASE-20188
>                 URL: https://issues.apache.org/jira/browse/HBASE-20188
>             Project: HBase
>          Issue Type: Umbrella
>          Components: Performance
>            Reporter: stack
>            Priority: Critical
>             Fix For: 2.0.0
>
>         Attachments: flamegraph-1072.1.svg, flamegraph-1072.2.svg, tree.txt
>
>
> How does 2.0.0 compare to old versions? Is it faster, slower? There is rumor
> that it is much slower, that the problem is the asyncwal writing. Does
> in-memory compaction slow us down or speed us up? What happens when you
> enable offheaping?
> Keep notes here in this umbrella issue. Need to be able to say something
> about perf when 2.0.0 ships.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
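As a rough illustration of the compare-count argument in the comment above (not HBase code; a minimal standalone Java sketch that stands in plain Integer keys for Cells and a sorted array for the flat structures): a counting Comparator makes visible how many comparator invocations a ConcurrentSkipListMap incurs on the write path, versus binary search over a flat sorted snapshot of the same keys on the read path. The class name and method are hypothetical, for illustration only.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

public class CompareCount {
    // Returns {compares during n CSLM puts, compares during n flat binary searches}.
    static long[] countCompares(int n) {
        // Count every comparator call the skip list makes while inserting.
        AtomicLong cslmCompares = new AtomicLong();
        ConcurrentSkipListMap<Integer, Integer> cslm =
            new ConcurrentSkipListMap<>((a, b) -> {
                cslmCompares.incrementAndGet();
                return Integer.compare(a, b);
            });
        for (int i = 0; i < n; i++) {
            cslm.put(i, i);  // mostly-unique keys, like ITBLL rows
        }

        // "Flat structure": the same keys in one sorted array, read via binary search
        // (each lookup costs at most ~log2(n)+1 compares).
        Integer[] flat = cslm.keySet().toArray(new Integer[0]);
        AtomicLong flatCompares = new AtomicLong();
        Comparator<Integer> counting = (a, b) -> {
            flatCompares.incrementAndGet();
            return Integer.compare(a, b);
        };
        for (int i = 0; i < n; i++) {
            Arrays.binarySearch(flat, i, counting);
        }
        return new long[] { cslmCompares.get(), flatCompares.get() };
    }

    public static void main(String[] args) {
        long[] c = countCompares(100_000);
        System.out.println("CSLM puts:          " + c[0] + " compares");
        System.out.println("flat binary search: " + c[1] + " compares");
    }
}
```

This contrasts insert-path compares against read-path compares on a flat snapshot, so it is not a like-for-like benchmark; it only sketches where more aggressive in-memory compaction could shift work out of the CSLM and into flat, binary-searchable segments.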