[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16234852#comment-16234852 ] Hudson commented on HBASE-16417:
FAILURE: Integrated in Jenkins build HBase-2.0 #782 (See [https://builds.apache.org/job/HBase-2.0/782/])
HBASE-16417: In-memory MemStore Policy for Flattening and Compactions (eshcar: rev 526d2826f5cce4f3b21326657d3c17a651aa6975)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CellArrayImmutableSegment.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CellChunkImmutableSegment.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/VersionedSegmentsList.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/MemoryCompactionPolicy.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRecoveredEdits.java
* (add) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BasicMemStoreCompactionStrategy.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SegmentFactory.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ImmutableSegment.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreCompactor.java
* (add) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/AdaptiveMemStoreCompactionStrategy.java
* (add) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/EagerMemStoreCompactionStrategy.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHStore.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactingMemStore.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWalAndCompactingMemStoreFlush.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CellSet.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactingMemStore.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionPipeline.java
* (add) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreCompactionStrategy.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/TestIOFencing.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactingToCellFlatMapMemStore.java

> In-Memory MemStore Policy for Flattening and Compactions
>
> Key: HBASE-16417
> URL: https://issues.apache.org/jira/browse/HBASE-16417
> Project: HBase
> Issue Type: Sub-task
> Reporter: Anastasia Braginsky
> Assignee: Eshcar Hillel
> Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-16417 - Adaptive Compaction Policy - 20171001.pdf, HBASE-16417 - parameter tuning - 20171001.pdf, HBASE-16417-V01.patch, HBASE-16417-benchmarkresults-20161101.pdf, HBASE-16417-benchmarkresults-20161110.pdf, HBASE-16417-benchmarkresults-20161123.pdf, HBASE-16417-benchmarkresults-20161205.pdf, HBASE-16417-benchmarkresults-20170309.pdf, HBASE-16417-benchmarkresults-20170317.pdf, HBASE-16417.01.patch, HBASE-16417.02.patch, HBASE-16417.03.patch, HBASE-16417.04.patch, HBASE-16417.05.patch, HBASE-16417.06.patch, HBASE-16417.07.patch, HBASE-16417.07.patch, HBASE-16417.08.patch, HBASE-16417.09.patch, HBASE-16417.10.patch, HBASE-16417.11.patch, HBASE-16417.12.patch, HBASE-16417.13.patch, HBASE-16417.14.patch, HBASE-16417.15.patch, HBASE-16417.16.patch, HBASE-16417.17.patch, HBASE-16417.17.patch, HBASE-16417.17.patch, HBASE-16417.17.patch, HBASE-16417.17.patch, HBASE-16417.18.patch, HBASE-16417.branch-2.17.patch
>
> This Jira explores the performance of different memstore compaction policies. It presents the results of a write-only workload evaluation as well as read performance in read-write workloads. We investigate several hardware configurations (SSD, HDD) and key distributions (Zipf, uniform) under multiple system settings, and compare measures such as write throughput, read latency, write volume, and total GC time. The submitted patch sets some system properties to the values yielding optimal performance. In addition, we suggest a new Adaptive memstore compaction policy that shows a good tradeoff between write throughput and write volume.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
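The description above contrasts the existing in-memory compaction policies (NONE, BASIC, EAGER) with the proposed ADAPTIVE one. As a rough sketch of how a policy is selected, assuming the property and attribute names documented in the HBase 2.0 reference guide (and assuming the ADAPTIVE value is only accepted once this patch is present), the default can be set cluster-wide in hbase-site.xml:

```xml
<!-- hbase-site.xml: cluster-wide default memstore compaction policy.
     Sketch only; the ADAPTIVE value assumes this patch is applied. -->
<property>
  <name>hbase.hregion.compacting.memstore.type</name>
  <value>ADAPTIVE</value> <!-- one of NONE | BASIC | EAGER | ADAPTIVE -->
</property>
```

Per column family, the equivalent HBase shell command would be something like `alter 'mytable', {NAME => 'cf', IN_MEMORY_COMPACTION => 'ADAPTIVE'}` (table and family names here are hypothetical).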
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16234101#comment-16234101 ] Eshcar Hillel commented on HBASE-16417:
Test failures are unrelated (in the replication package). Taking [~Apache9]'s advice and committing this to branch-2; I will take care of the release notes afterwards.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16234072#comment-16234072 ] Hadoop QA commented on HBASE-16417:
-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
|| Prechecks ||
| 0 | findbugs | 0m 0s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. |
|| branch-2 Compile Tests ||
| 0 | mvndep | 0m 27s | Maven dependency ordering for branch |
| +1 | mvninstall | 4m 40s | branch-2 passed |
| +1 | compile | 1m 15s | branch-2 passed |
| +1 | checkstyle | 1m 58s | branch-2 passed |
| +1 | shadedjars | 6m 31s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 1m 0s | branch-2 passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 30s | the patch passed |
| +1 | compile | 1m 22s | the patch passed |
| +1 | javac | 1m 22s | the patch passed |
| +1 | checkstyle | 0m 25s | The patch hbase-common passed checkstyle |
| +1 | checkstyle | 0m 28s | The patch hbase-client passed checkstyle |
| +1 | checkstyle | 1m 9s | hbase-server: The patch generated 0 new + 203 unchanged - 38 fixed = 203 total (was 241) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 43s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 51m 5s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. |
| +1 | javadoc | 1m 12s | the patch passed |
|| Other Tests ||
| +1 | unit | 2m 28s | hbase-common in the patch passed. |
| +1 | unit | 2m 47s | hbase-client in the patch passed. |
| -1 | unit | 101m 15s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 51s | The patch does not generate ASF License warnings. |
| | | 182m 20s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-16417 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12895156/HBASE-16417.branch-2.17.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars had
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16234041#comment-16234041 ] Duo Zhang commented on HBASE-16417:
+1 on commit to branch-2. Just go ahead. I think in-memory compaction is a very important feature for 2.0.0. Thanks.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16233729#comment-16233729 ] Eshcar Hillel commented on HBASE-16417:
QA passed; committing this patch. Thanks for all the reviews and comments. Should I commit this to branch-2 as well?
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227444#comment-16227444 ] Hadoop QA commented on HBASE-16417:
+1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 10s | Docker mode activated. |
|| Prechecks ||
| 0 | findbugs | 0m 0s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. |
|| master Compile Tests ||
| 0 | mvndep | 0m 11s | Maven dependency ordering for branch |
| +1 | mvninstall | 4m 44s | master passed |
| +1 | compile | 1m 21s | master passed |
| +1 | checkstyle | 1m 58s | master passed |
| +1 | shadedjars | 6m 46s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 1m 12s | master passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 42s | the patch passed |
| +1 | compile | 1m 21s | the patch passed |
| +1 | javac | 1m 21s | the patch passed |
| +1 | checkstyle | 0m 24s | The patch hbase-common passed checkstyle |
| +1 | checkstyle | 0m 28s | The patch hbase-client passed checkstyle |
| +1 | checkstyle | 1m 6s | hbase-server: The patch generated 0 new + 207 unchanged - 33 fixed = 207 total (was 240) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 43s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 51m 53s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. |
| +1 | javadoc | 1m 22s | the patch passed |
|| Other Tests ||
| +1 | unit | 2m 38s | hbase-common in the patch passed. |
| +1 | unit | 2m 49s | hbase-client in the patch passed. |
| +1 | unit | 103m 27s | hbase-server in the patch passed. |
| +1 | asflicense | 0m 48s | The patch does not generate ASF License warnings. |
| | | 186m 8s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-16417 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12895020/HBASE-16417.18.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227016#comment-16227016 ] Hadoop QA commented on HBASE-16417:
-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
|| Prechecks ||
| 0 | findbugs | 0m 0s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. |
|| master Compile Tests ||
| 0 | mvndep | 0m 14s | Maven dependency ordering for branch |
| +1 | mvninstall | 4m 53s | master passed |
| +1 | compile | 1m 29s | master passed |
| +1 | checkstyle | 2m 2s | master passed |
| +1 | shadedjars | 6m 43s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 1m 11s | master passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 49s | the patch passed |
| +1 | compile | 1m 26s | the patch passed |
| +1 | javac | 1m 26s | the patch passed |
| +1 | checkstyle | 0m 26s | The patch hbase-common passed checkstyle |
| +1 | checkstyle | 0m 28s | The patch hbase-client passed checkstyle |
| +1 | checkstyle | 1m 6s | hbase-server: The patch generated 0 new + 207 unchanged - 33 fixed = 207 total (was 240) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 41s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 49m 32s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. |
| +1 | javadoc | 1m 12s | the patch passed |
|| Other Tests ||
| +1 | unit | 2m 16s | hbase-common in the patch passed. |
| +1 | unit | 2m 38s | hbase-client in the patch passed. |
| -1 | unit | 109m 36s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 53s | The patch does not generate ASF License warnings. |
| | | 189m 50s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-16417 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894969/HBASE-16417.17.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16226338#comment-16226338 ] Hadoop QA commented on HBASE-16417:
-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 2m 53s | Docker mode activated. |
|| Prechecks ||
| 0 | findbugs | 0m 1s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. |
|| master Compile Tests ||
| 0 | mvndep | 0m 26s | Maven dependency ordering for branch |
| +1 | mvninstall | 4m 53s | master passed |
| +1 | compile | 1m 20s | master passed |
| +1 | checkstyle | 1m 59s | master passed |
| +1 | shadedjars | 6m 39s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 1m 4s | master passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 42s | the patch passed |
| +1 | compile | 1m 17s | the patch passed |
| +1 | javac | 1m 17s | the patch passed |
| +1 | checkstyle | 0m 25s | The patch hbase-common passed checkstyle |
| +1 | checkstyle | 0m 27s | The patch hbase-client passed checkstyle |
| +1 | checkstyle | 1m 6s | hbase-server: The patch generated 0 new + 207 unchanged - 33 fixed = 207 total (was 240) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 38s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 49m 30s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. |
| +1 | javadoc | 1m 8s | the patch passed |
|| Other Tests ||
| +1 | unit | 2m 19s | hbase-common in the patch passed. |
| +1 | unit | 2m 33s | hbase-client in the patch passed. |
| -1 | unit | 127m 2s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 53s | The patch does not generate ASF License warnings. |
| | | 209m 24s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-16417 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894905/HBASE-16417.17.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16225637#comment-16225637 ] Hadoop QA commented on HBASE-16417:
-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 22s | Docker mode activated. |
|| Prechecks ||
| 0 | findbugs | 0m 0s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. |
|| master Compile Tests ||
| 0 | mvndep | 0m 29s | Maven dependency ordering for branch |
| +1 | mvninstall | 5m 5s | master passed |
| +1 | compile | 1m 21s | master passed |
| +1 | checkstyle | 2m 5s | master passed |
| +1 | shadedjars | 6m 59s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 1m 10s | master passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 47s | the patch passed |
| +1 | compile | 1m 20s | the patch passed |
| +1 | javac | 1m 20s | the patch passed |
| +1 | checkstyle | 0m 25s | The patch hbase-common passed checkstyle |
| +1 | checkstyle | 0m 29s | The patch hbase-client passed checkstyle |
| +1 | checkstyle | 1m 9s | hbase-server: The patch generated 0 new + 207 unchanged - 33 fixed = 207 total (was 240) |
| +1 | whitespace | 0m 1s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 52s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 52m 30s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. |
| +1 | javadoc | 1m 4s | the patch passed |
|| Other Tests ||
| +1 | unit | 2m 16s | hbase-common in the patch passed. |
| +1 | unit | 2m 36s | hbase-client in the patch passed. |
| -1 | unit | 130m 2s | hbase-server in the patch failed. |
| +1 | asflicense | 1m 1s | The patch does not generate ASF License warnings. |
| | | 213m 51s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-16417 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894765/HBASE-16417.17.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16225032#comment-16225032 ] Hadoop QA commented on HBASE-16417: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 39s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 26s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 9m 0s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} The patch hbase-common passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} The patch hbase-client passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 22s{color} | {color:green} hbase-server: The patch generated 0 new + 207 unchanged - 33 fixed = 207 total (was 240) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 42s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 52m 47s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 22s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 34s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}137m 53s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 52s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}228m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-16417 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894719/HBASE-16417.17.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16224608#comment-16224608 ] Hadoop QA commented on HBASE-16417: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 3m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 33s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} The patch hbase-common passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} The patch hbase-client passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 6s{color} | {color:green} hbase-server: The patch generated 0 new + 207 unchanged - 33 fixed = 207 total (was 240) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 38s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 46m 40s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 44s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 19s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 55s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}199m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-16417 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894688/HBASE-16417.17.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16224232#comment-16224232 ] Hadoop QA commented on HBASE-16417: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 33s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 6s{color} | {color:red} hbase-server: The patch generated 4 new + 204 unchanged - 36 fixed = 208 total (was 240) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 35s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 46m 0s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 11s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 30s{color} | {color:green} hbase-client in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 94m 57s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 49s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}169m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-16417 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894644/HBASE-16417.16.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux b11ab558c4aa 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / e0a530e71
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16224116#comment-16224116 ] Hadoop QA commented on HBASE-16417: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 4m 0s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 53s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 9s{color} | {color:red} hbase-server: The patch generated 3 new + 210 unchanged - 30 fixed = 213 total (was 240) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 51s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 51m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 23s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 50s{color} | {color:green} hbase-client in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}101m 6s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}187m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-16417 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894633/HBASE-16417.15.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 98cb29a58054 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / e0a530e
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16224008#comment-16224008 ] Hadoop QA commented on HBASE-16417: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 49s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 22s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 46s{color} | {color:red} hbase-server: The patch generated 15 new + 114 unchanged - 16 fixed = 129 total (was 130) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 52s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 49m 17s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 13s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 31s{color} | {color:green} hbase-client in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}105m 24s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}183m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-16417 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894625/HBASE-16417.14.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 07905db45cea 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / 482d6b
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16221499#comment-16221499 ] Hadoop QA commented on HBASE-16417: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 7m 18s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 4s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 51m 30s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 44s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 52s{color} | {color:green} hbase-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}139m 40s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 53s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}224m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-16417 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894203/HBASE-16417.13.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux a981321d8fe1 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / 459202bab0 | | Default J
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16220869#comment-16220869 ] Ted Yu commented on HBASE-16417: Compilation failed due to unresolved conflict: {code} +<<<<<<< HEAD (CSLMImmutableSegment)s,idxType,newMemstoreAccounting); +======= + (CSLMImmutableSegment)s,idxType,newMemstoreSize,action); +>>>>>>> HBASE-16417: In-Memory MemStore Policy for Flattening and Compactions {code} Please check compilation before uploading the next patch. > In-Memory MemStore Policy for Flattening and Compactions > > > Key: HBASE-16417 > URL: https://issues.apache.org/jira/browse/HBASE-16417 > Project: HBase > Issue Type: Sub-task >Reporter: Anastasia Braginsky >Assignee: Eshcar Hillel > Fix For: 3.0.0 > > Attachments: HBASE-16417 - Adaptive Compaction Policy - 20171001.pdf, > HBASE-16417 - parameter tuning - 20171001.pdf, HBASE-16417-V01.patch, > HBASE-16417-benchmarkresults-20161101.pdf, > HBASE-16417-benchmarkresults-20161110.pdf, > HBASE-16417-benchmarkresults-20161123.pdf, > HBASE-16417-benchmarkresults-20161205.pdf, > HBASE-16417-benchmarkresults-20170309.pdf, > HBASE-16417-benchmarkresults-20170317.pdf, HBASE-16417.01.patch, > HBASE-16417.02.patch, HBASE-16417.03.patch, HBASE-16417.04.patch, > HBASE-16417.05.patch, HBASE-16417.06.patch, HBASE-16417.07.patch, > HBASE-16417.07.patch, HBASE-16417.08.patch, HBASE-16417.09.patch, > HBASE-16417.10.patch, HBASE-16417.11.patch, HBASE-16417.12.patch > > > This Jira explores the performance of different memstore compaction policies. > It presents the result of write-only workload evaluation as well as read > performance in read-write workloads. > We investigate several settings of hardware (SSD, HDD), key distribution > (Zipf, uniform), with multiple settings of the system, and compare measures > like write throughput, read latency, write volume, total gc time, etc. 
> The submitted patch sets some system properties at the values yielding > optimal performance. In addition we suggest a new Adaptive memstore > compaction policy that shows good tradeoffs between write throughput and > write volume. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
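The "check compilation before uploading" advice above can be partially automated by scanning a patch for leftover git conflict markers before submitting it. The sketch below is purely illustrative (the class and method names are not part of the HBase codebase); it assumes the standard seven-character git markers.

```java
import java.util.ArrayList;
import java.util.List;

public class ConflictMarkerCheck {
    // Returns the 1-based line numbers that still carry git conflict markers.
    static List<Integer> findConflictMarkers(List<String> lines) {
        List<Integer> hits = new ArrayList<>();
        for (int i = 0; i < lines.size(); i++) {
            String s = lines.get(i).trim();
            // A patch hunk prefixes added lines with '+', so strip it first.
            if (s.startsWith("+")) {
                s = s.substring(1);
            }
            if (s.startsWith("<<<<<<<") || s.startsWith("=======") || s.startsWith(">>>>>>>")) {
                hits.add(i + 1);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // The same shape of conflict that broke the build above.
        List<String> patch = List.of(
            "+<<<<<<< HEAD",
            "+    (CSLMImmutableSegment)s,idxType,newMemstoreAccounting);",
            "+=======",
            "+    (CSLMImmutableSegment)s,idxType,newMemstoreSize,action);",
            "+>>>>>>> HBASE-16417");
        System.out.println(findConflictMarkers(patch)); // lines 1, 3, and 5
    }
}
```

Running such a check (followed by a full `mvn clean install`) before attaching a patch catches exactly the failure Ted Yu reports here.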
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16220853#comment-16220853 ] Hadoop QA commented on HBASE-16417: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 27s{color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 49s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 37s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 18s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 18s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedjars {color} | {color:red} 2m 33s{color} | {color:red} patch has 80 errors when building our shaded downstream artifacts. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 48s{color} | {color:red} The patch causes 80 errors with Hadoop v2.6.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 11s{color} | {color:red} The patch causes 80 errors with Hadoop v2.6.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 32s{color} | {color:red} The patch causes 80 errors with Hadoop v2.6.3. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 56s{color} | {color:red} The patch causes 80 errors with Hadoop v2.6.4. 
{color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 9m 27s{color} | {color:red} The patch causes 80 errors with Hadoop v2.6.5. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 51s{color} | {color:red} The patch causes 80 errors with Hadoop v2.7.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 12m 15s{color} | {color:red} The patch causes 80 errors with Hadoop v2.7.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 13m 32s{color} | {color:red} The patch causes 80 errors with Hadoop v2.7.3. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 14m 57s{color} | {color:red} The patch causes 80 errors with Hadoop v3.0.0-alpha4. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 6s{color} | {color:green} hbase-common in the patch passed. {color} | | {colo
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16218747#comment-16218747 ] Hadoop QA commented on HBASE-16417: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HBASE-16417 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.4.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HBASE-16417 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893948/HBASE-16417.11.patch | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/9403/console | | Powered by | Apache Yetus 0.4.0 http://yetus.apache.org | This message was automatically generated. 
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16218183#comment-16218183 ] Eshcar Hillel commented on HBASE-16417: --- If there are no additional comments, I will commit this later today.
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16214297#comment-16214297 ] Eshcar Hillel commented on HBASE-16417: --- QA passed. Any more comments?
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16214277#comment-16214277 ] Hadoop QA commented on HBASE-16417: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 36s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 10s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 41m 48s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 32s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 51s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}110m 30s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}189m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:cb5c477 | | JIRA Issue | HBASE-16417 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893447/HBASE-16417.10.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 7cebf7e210b7 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-s
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16210737#comment-16210737 ] Hadoop QA commented on HBASE-16417: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 19s{color} | {color:red} Docker failed to build yetus/hbase:4a7b430. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HBASE-16417 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893008/HBASE-16417.09.patch | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/9214/console | | Powered by | Apache Yetus 0.4.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16208246#comment-16208246 ] Eshcar Hillel commented on HBASE-16417: --- bq. I do have concern over the 2% size threshold to make an in memory flush(flatten). That seems too less and very aggressive Anoop, I understand your concern. This number also came as a surprise to us :) However, experiments show that with this small active fraction, GC time decreases and throughput increases as a result of using a smaller skip list. The intention is that this becomes the new default. Note also that the pipeline is changed to carry more segments before the merge of their indices is performed.
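To make the 2% figure concrete, here is a back-of-the-envelope sketch of what the threshold implies. The constant names and the 128 MB flush size are illustrative assumptions for this example only; the real keys and defaults live in CompactingMemStore and hbase-site.xml.

```java
public class InMemoryFlushMath {
    static final double ACTIVE_FRACTION = 0.02;          // proposed 2% threshold
    static final long MEMSTORE_FLUSH_SIZE = 128L << 20;  // assumed 128 MB flush size

    // Size at which the active skip-list segment is flattened into the pipeline.
    static long inMemoryFlushThreshold() {
        return (long) (ACTIVE_FRACTION * MEMSTORE_FLUSH_SIZE);
    }

    static boolean shouldFlushInMemory(long activeSegmentSize) {
        return activeSegmentSize >= inMemoryFlushThreshold();
    }

    public static void main(String[] args) {
        long t = inMemoryFlushThreshold();
        // The active segment stays at ~2.6 MB, so there are roughly 50
        // in-memory flattenings per disk flush; the pipeline accumulates the
        // flattened segments before their indices are merged.
        System.out.println("threshold bytes = " + t);
        System.out.println("flattenings per flush ~ " + (MEMSTORE_FLUSH_SIZE / t));
    }
}
```

This is the tradeoff being debated in the thread: a small mutable skip list (cheaper inserts, less GC) in exchange for many more flatten actions, which the larger pipeline is meant to absorb.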
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16208232#comment-16208232 ] Hadoop QA commented on HBASE-16417: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 3m 24s{color} | {color:red} HBASE-16417 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.4.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HBASE-16417 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892671/HBASE-16417.06.patch | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/9166/console | | Powered by | Apache Yetus 0.4.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207872#comment-16207872 ] stack commented on HBASE-16417: --- Hey [~eshcar] ... May I have another look at it before commit? If I don't look at it by Friday, go ahead and commit. Thanks.
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207870#comment-16207870 ] Anoop Sam John commented on HBASE-16417: I have not reviewed the full patch, but I do have a concern over the 2% size threshold for triggering an in-memory flush (flatten). That seems too small and very aggressive. I believe you are proposing this 2% as the default; please correct me if I am wrong. This will cause many in-memory flush actions, and therefore many in-memory compactions!
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207679#comment-16207679 ] Eshcar Hillel commented on HBASE-16417: --- QA passed. Any additional comments or questions? If there are no objections, I will commit this tomorrow.
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207662#comment-16207662 ] Hadoop QA commented on HBASE-16417: ---

+1 overall. All subsystem checks passed on both the master branch and the patch: hbaseanti, @author, test4tests (the patch appears to include 6 new or modified test files), mvndep, mvninstall, compile, javac, checkstyle, mvneclipse, whitespace, shadedjars, findbugs, javadoc, and asflicense. hadoopcheck (38m 59s) found no errors with Hadoop 2.6.1, 2.6.2, 2.6.3, 2.6.4, 2.6.5, 2.7.1, 2.7.2, 2.7.3, or 3.0.0-alpha4. Unit tests passed in hbase-common (2m 19s), hbase-client (2m 37s), and hbase-server (93m 26s). Total runtime: 165m 22s.

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-16417 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892573/HBASE-16417.05.patch |
| Optional Tests | asflicense shadedjars javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 104b0a00e4d1 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-
[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205871#comment-16205871 ] Hadoop QA commented on HBASE-16417: ---

-1 overall. The patch does not apply to master. Rebase required? Wrong branch? See https://yetus.apache.org/documentation/0.4.0/precommit-patchnames for help.

|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-16417 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892381/HBASE-16417.04.patch |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/9137/console |
| Powered by | Apache Yetus 0.4.0 http://yetus.apache.org |

This message was automatically generated.

> In-Memory MemStore Policy for Flattening and Compactions
>
> Key: HBASE-16417
> URL: https://issues.apache.org/jira/browse/HBASE-16417
> Project: HBase
> Issue Type: Sub-task
> Reporter: Anastasia Braginsky
> Assignee: Eshcar Hillel
> Fix For: 3.0.0
>
> Attachments: HBASE-16417 - Adaptive Compaction Policy - 20171001.pdf, HBASE-16417 - parameter tuning - 20171001.pdf, HBASE-16417-V01.patch, HBASE-16417-benchmarkresults-20161101.pdf, HBASE-16417-benchmarkresults-20161110.pdf, HBASE-16417-benchmarkresults-20161123.pdf, HBASE-16417-benchmarkresults-20161205.pdf, HBASE-16417-benchmarkresults-20170309.pdf, HBASE-16417-benchmarkresults-20170317.pdf, HBASE-16417.01.patch, HBASE-16417.02.patch, HBASE-16417.03.patch, HBASE-16417.04.patch

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16199817#comment-16199817 ] ramkrishna.s.vasudevan commented on HBASE-16417:
bq. However, global pressure triggered many flushes, and there as you know it does check heap size and not data size
Yes, that is what I meant, though I phrased it as blocking updates.
bq. With fast SSD hardware this has a greater effect on throughput as memory management is more of a bottleneck.
There are two aspects here: one is how much the index is reduced, which changes the region's heap footprint and hence the frequency of flushes; the other is how the hardware makes those flushes faster or slower.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16198545#comment-16198545 ] Eshcar Hillel commented on HBASE-16417: ---
bq. So your tests are with the MSLAB feature turned OFF, right?
Yes, all benchmarks ran with MSLAB off. However, as part of the work [~anastas] is doing on the cell-chunk map, we are now running benchmarks with MSLAB and the chunk pool turned on (both on HDD and SSD). We'll report the results when the experiments are completed.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16196982#comment-16196982 ] Hadoop QA commented on HBASE-16417: ---

-1 overall. The branch and patch checks all passed (hbaseanti, @author, test4tests with 6 new or modified test files, mvndep, mvninstall, compile, javac, checkstyle, mvneclipse, whitespace, shadedjars, findbugs, javadoc, asflicense), and hadoopcheck (36m 45s) found no errors with Hadoop 2.6.1, 2.6.2, 2.6.3, 2.6.4, 2.6.5, 2.7.1, 2.7.2, 2.7.3, or 3.0.0-alpha4. Unit tests passed in hbase-common (2m 19s) and hbase-client (2m 31s), but hbase-server failed (96m 35s). Total runtime: 164m 34s.

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestCompactingToCellFlatMapMemStore |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-16417 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891029/HBASE-16417.03.patch |
| Optional Tests | asflicense shadedjars javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 416cc2e4f446 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16196813#comment-16196813 ] Anoop Sam John commented on HBASE-16417:
bq. Flat index not only takes less space but is also more friendly for memory management which is an advantage.
This is the point you make in the doc: "Not only is it bigger in size compared to a flat index, it is also fragmented whereas a static index is stored in a consecutive block of memory. Therefore, flat storage incurs smaller overhead in terms of allocation, GC, ". (?) So your tests are with the MSLAB feature turned OFF, right? Some time back it was mentioned so in a jira. So here you are speaking about the entry objects of the CSLM vs the consecutive reference slots in the CellArray?
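To make the CSLM-vs-flat-array comparison concrete, here is a back-of-envelope sketch. This is not HBase code; the per-entry byte counts are illustrative estimates for a 64-bit JVM with compressed oops (a ConcurrentSkipListMap node plus amortized skip-list index objects vs. a single reference slot in a Cell array), not measured values:

```java
public class IndexOverheadSketch {
    // Illustrative estimates (assumption, not measured): a CSLM entry costs
    // roughly one Node object (~24 bytes: header + key/value/next refs) plus
    // amortized skip-list Index objects (~8 bytes per entry on average).
    public static final long CSLM_BYTES_PER_CELL = 24 + 8;
    // A flat cell array holds one reference per cell.
    public static final long ARRAY_BYTES_PER_CELL = 8;

    // Total index bytes for a segment holding `cells` cells.
    public static long indexBytes(long cells, long perCell) {
        return cells * perCell;
    }

    public static void main(String[] args) {
        long cells = 1_000_000L;
        long cslm = indexBytes(cells, CSLM_BYTES_PER_CELL);
        long flat = indexBytes(cells, ARRAY_BYTES_PER_CELL);
        System.out.printf("CSLM index: ~%d MB, flat index: ~%d MB%n",
                cslm >> 20, flat >> 20);
        // Under these assumptions, with ~100-byte cells the index overhead
        // drops from roughly 32% of the data size to roughly 8% after
        // flattening, which is why small values show the largest gains.
    }
}
```

Under these assumptions the index shrinks by about 4x on flattening; with larger cells the index is a smaller fraction of the total, matching the "the more the data size, the lesser the gain" point above.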
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16196238#comment-16196238 ] Hadoop QA commented on HBASE-16417: ---

-1 overall. The branch and patch checks passed (hbaseanti, @author, test4tests with 6 new or modified test files, mvndep, mvninstall, compile, javac, checkstyle, mvneclipse, whitespace, shadedjars, javadoc, asflicense), and hadoopcheck (36m 26s) found no errors with Hadoop 2.6.1, 2.6.2, 2.6.3, 2.6.4, 2.6.5, 2.7.1, 2.7.2, 2.7.3, or 3.0.0-alpha4. However, findbugs flagged new warnings (hbase-server generated 2 new + 0 unchanged - 0 fixed = 2 total, was 0), and while the hbase-common (2m 28s) and hbase-client (2m 38s) unit tests passed, hbase-server failed (124m 50s). Total runtime: 206m 0s.

|| Reason || Tests ||
| FindBugs | module:hbase-server |
| | Integral division result cast to double or float in org.apache.hadoop.hbase.regionserver.AdaptiveMemStoreCompactionStrategy.updateStats(Segment) At AdaptiveMemStoreCompactionStrategy.java:[line 85] |
| | Dead store to count in org.apache.hadoop.hbase.regionserver.CompactionPipeline.swa
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195745#comment-16195745 ] Eshcar Hillel commented on HBASE-16417: --- Thanks all for your questions.
bq. Can this go into branch-2?
Sure, why not :)
bq. How long did the tests run for in each of the five cases?
Write-only runs started from an empty table and performed 500M puts. This took over an hour on SSD and less than 2 hours on HDD. Read-write runs first loaded 10GB of data and then ran 500K reads with heavy writes running in the background. These runs took 2-4 hours each.
bq. What would you recommend as default? Should we enable adaptive by default?
This is a good question. We performed rigorous benchmarks, but these are still only micro-benchmarks, namely they rely on synthetic workloads. I think it is best to have Basic as the default for 2.0 since its behavior is more predictable and it requires no configuration. Once we have user feedback we can suggest they also try Adaptive and see where it can further improve their performance. They can certainly configure it for specific column families that can benefit from data reduction.
bq. The effect of HDD/SSDs does it come from the fact as how fast these segments in the pipeline are released after flushes?
In the write-only workload we see that the improvement in throughput is highly correlated with the reduction in total GC time. With fast SSD hardware this has a greater effect on throughput, as memory management is more of a bottleneck.
bq. here we capture the throughput of writes and flushes are not in the hot path so does it mean that we get blocking updates and the throughput depends on how fast the blocking updates are cleared and that depends on the segment count?
You can see in the parameter tuning report that throughput increases as the number of segments in the pipeline increases (up to some point), so I don't think we get more blocking updates with more segments in the pipeline. Also note that the number of segments in the snapshot depends on the timing of the flush; it can be below the limit.
bq. So these tests were done with changing back to the old way of per region flush decision based on heap size NOT on data size?
No. I did not have time to apply these changes yet; I plan to do this next. However, global memory pressure triggered many flushes, and there, as you know, the check is on heap size and not data size.
bq. The more the data size, the lesser will be the gain. To have a fair eval what should be the val size to used?
I agree. With larger values the gain will be smaller, but I believe we will still see a gain. A flat index not only takes less space but is also more friendly to memory management, which is an advantage. Moreover, with Adaptive we will still see a reduction in space, flushes, disk compactions, etc. Also, a recent work claims that small values are typical in production workloads such as those at Facebook and Twitter (see "LSM-trie: An LSM-tree-based ultra-large key-value store for small data items"). We ran experiments with large values in the past; we can repeat some of the experiments with 500B values, which are also reported in that work. I need a rebase, plus I will implement the comments above and any other comments you put on RB. In any case, happy to answer any further questions or concerns you may have.
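For intuition, here is a minimal sketch of the adaptive idea discussed above: estimate the fraction of unique keys in a segment and only pay for a full in-memory (data) compaction when enough duplicate versions would be eliminated, otherwise just flatten the index. This is not the actual AdaptiveMemStoreCompactionStrategy code; the names and the threshold are illustrative. Note the cast before the division, exactly the kind of detail the FindBugs warning above flagged:

```java
public class AdaptiveChoiceSketch {
    public enum Action { FLATTEN, COMPACT }

    /**
     * Decide between index flattening and full data compaction.
     * uniqueKeys/totalCells estimates how much data a compaction could
     * actually remove; below the threshold, duplicates dominate and
     * a data compaction pays off.
     */
    public static Action choose(long uniqueKeys, long totalCells, double threshold) {
        if (totalCells == 0) {
            return Action.FLATTEN; // nothing to compact
        }
        // Cast BEFORE dividing: (double) (uniqueKeys / totalCells) would
        // silently truncate to 0.0 or 1.0 (an integral-division bug).
        double uniqueness = (double) uniqueKeys / totalCells;
        return uniqueness < threshold ? Action.COMPACT : Action.FLATTEN;
    }
}
```

With a threshold of 0.5, a segment where only 10% of the keys are unique is compacted, while a segment with mostly unique keys is merely flattened, keeping compaction cost off the write path when it would not reduce data.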
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194201#comment-16194201 ] Anoop Sam John commented on HBASE-16417:
bq. So since these are in-memory compactions the effect of HDD/SSDs does it come from the fact as how fast these segments in the pipeline are released after flushes?
It looks like it comes from how much we delay the flush of certain data, so that most of the data is read back from memory only and not from HDD/SSD files. The more eagerly we do the in-memory flush, the smaller the heap size of the memstore and hence of the flush. So were these tests done after changing back to the old way of making the per-region flush decision based on heap size, NOT on data size? 2% seems very eager. The large gain you are seeing is because the data size is so small: the key + data size is much smaller than the per-entry overhead of the CSLM, so in effect each flatten reduces the heap occupancy almost by half. The larger the data size, the smaller the gain. For a fair evaluation, what value size should be used? Any suggestions [~stack]? Nice tests and detailed report.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194147#comment-16194147 ] ramkrishna.s.vasudevan commented on HBASE-16417: I read the doc first; I have not seen the patch yet. Since these are in-memory compactions, does the effect of HDD/SSDs come from how fast the segments in the pipeline are released after flushes? As the doc says, scans are affected when there are more segments in the pipeline, and the same holds for flushes, which need to read the segments. Since we capture the throughput of writes here and flushes are not on the hot path, does it mean that we get blocking updates, and that the throughput depends on how fast the blocking updates are cleared, which in turn depends on the segment count?
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16193544#comment-16193544 ] stack commented on HBASE-16417: --- Skimmed the patch. Looks good to me. The AdaptiveCompactionStrategy class needs a class comment giving an overview of how it works; ditto for the rest. Should these policy classes be named AdaptiveInMemoryCompactionStrategy, etc., i.e. add the InMemory qualifier, since Compaction is already a loaded term in hbase?
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16193526#comment-16193526 ] stack commented on HBASE-16417: --- Thank you [~eshcar]. Fun read. Can this go into branch-2 [~eshcar]? How long did the tests run for in each of the five cases? What would you recommend as default? Should we enable adaptive by default? (I like the HDD percentile improvements...)
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191167#comment-16191167 ] Hadoop QA commented on HBASE-16417: ---

-1 overall. The patch does not apply to master. Rebase required? Wrong branch? See https://yetus.apache.org/documentation/0.4.0/precommit-patchnames for help.

|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-16417 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12890345/HBASE-16417.01.patch |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/8939/console |
| Powered by | Apache Yetus 0.4.0 http://yetus.apache.org |

This message was automatically generated.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191145#comment-16191145 ] Hadoop QA commented on HBASE-16417: ---

-1 overall. The patch does not apply to master. Rebase required? Wrong branch? See https://yetus.apache.org/documentation/0.4.0/precommit-patchnames for help.

|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-16417 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12890340/HBASE-16417-V01.patch |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/8938/console |
| Powered by | Apache Yetus 0.4.0 http://yetus.apache.org |

This message was automatically generated.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191144#comment-16191144 ] Eshcar Hillel commented on HBASE-16417: --- After a long period in which we experimented with different parameters and in-memory compaction policies, we would like to share our main results.

The first set of experiments aims at tuning the Basic (the default as of 2.0) in-memory compaction policy. We found that under a Zipfian distribution its optimal write performance is achieved with an active segment size of 2% of the memstore and with the pipeline size bounded to 5 segments on SSD and 4 on HDD. The experiments are summarized in the attached pdf file.

The second set of experiments compares a new adaptive compaction policy with the existing Basic and None policies. The bottom line is that there is a tradeoff between write performance and write volume. We also experimented with a mixed read-write workload, which shows that with in-memory compaction we are able to reduce the high percentiles of read latency on HDD machines; on SSD machines we see a minor degradation in read latency. The experiments and the main principle of the adaptive policy are also summarized in an attached pdf file.

I am attaching a patch for trunk which includes the new adaptive policy as well as the new optimal parameters for Basic. I would be happy to give more details on the experiment settings and results and to answer any questions you may have. Also waiting for your comments on reviewboard :) Thank you!
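The tuned triggers described above can be sketched in plain Java, with no HBase dependencies. The 2% active-segment factor and the 4/5 pipeline bounds are the values reported in the comment; the class and method names here are illustrative stand-ins, not the actual HBase configuration keys or APIs.

```java
// Sketch of the flush/merge triggers implied by the tuned Basic parameters.
// Names are hypothetical; only the numeric values come from the experiments.
class InMemoryFlushPolicy {
    static final double ACTIVE_SEGMENT_FACTOR = 0.02; // active segment = 2% of memstore
    static final int PIPELINE_LIMIT_SSD = 5;          // bound pipeline to 5 segments on SSD
    static final int PIPELINE_LIMIT_HDD = 4;          // and to 4 segments on HDD

    // Flush the active segment into the pipeline once it exceeds 2% of the
    // memstore flush size.
    static boolean shouldFlushInMemory(long activeSizeBytes, long memstoreFlushSizeBytes) {
        return activeSizeBytes > (long) (memstoreFlushSizeBytes * ACTIVE_SEGMENT_FACTOR);
    }

    // Merge (flatten) the pipeline segments once the bound is reached.
    static boolean shouldMergePipeline(int numSegments, boolean ssd) {
        return numSegments >= (ssd ? PIPELINE_LIMIT_SSD : PIPELINE_LIMIT_HDD);
    }
}
```

With a 128 MB memstore flush size, the in-memory flush would fire once the active segment passes roughly 2.5 MB.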
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15935785#comment-15935785 ] Anoop Sam John commented on HBASE-16417: Especially when we have CellChunkMap in place and are doing merge on those segments - I am more interested in, and worried about, that area specifically.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933509#comment-15933509 ] Edward Bortnikov commented on HBASE-16417: -- Okay, agreed - BASIC will include merge. We'll update the docs, too. Regarding parallelism - a promising direction, but we should be careful here. More threads might come at someone else's expense (write throughput, maybe), so it needs more scrutiny. If all we do is run a bunch of binary searches in parallel, it might not be worth the synchronization. Worth checking.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933096#comment-15933096 ] stack commented on HBASE-16417: --- Thank you [~eshcar] for digging in on my question. Sounds like we can simplify here if we get parallelism in (I'm not sure why it is not parallel already -- it may just be cruft), and if we do the simplification, then we might be able to do without the merge. I suggest we open a new perf issue to dig into why it is NOT parallel, but in the meantime keep this issue moving forward, now with BASIC including merge.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15932430#comment-15932430 ] Eshcar Hillel commented on HBASE-16417: --- I can use the code of HBASE-17655 to run the mixed workload benchmark to see if parallel seek fixes the high-percentile degradation. However, as I understand the code, despite the fact that parallelSeekEnabled is set to true when the StoreScanner is created, parallelSeek() is never invoked: isLazy = explicitColumnQuery && lazySeekEnabledGlobally, lazySeekEnabledGlobally is true by default, and explicitColumnQuery is numCol > 0, which is true in our case (I think it is always true). Am I missing anything? What was the original intention -- when would we not want to run parallel seek?
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15932319#comment-15932319 ] Eshcar Hillel commented on HBASE-16417: --- bq. I don't get why reading from multiple indices ups our latency, unless the lookups are serial (i.e. why we need the merge).

Yes, the lookups are serial. The constructor of StoreScanner invokes seekScanners. There are several options here, depending on two boolean flags:

{code}
if (isLazy) {
  for (KeyValueScanner scanner : scanners) {
    scanner.requestSeek(seekKey, false, true);
  }
} else {
  if (!isParallelSeek) {
    for (KeyValueScanner scanner : scanners) {
      ...
      scanner.seek(seekKey);
      ...
    }
  } else {
    parallelSeek(scanners, seekKey);
  }
}
{code}

So a parallel seek happens only if isLazy is off and isParallelSeek is on. However, since in the current master the memstore scanners are encapsulated in a single scanner (MemStoreScanner), even when the seek is parallel, the seek of the MemStoreScanner is translated into serial seeks (actually a series of peek() operations) of all the segment scanners when the KeyValueHeap inside the MemStoreScanner is initialized. This is one more reason to apply the refactoring of HBASE-17655, which removes the MemStoreScanner.
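The idea behind parallelSeek() - issuing each scanner's seek on its own thread and waiting for all of them before the scan proceeds - can be sketched as follows. This is an illustrative stand-in only: the real StoreScanner uses HBase's own scanner interfaces and thread handlers, so the types here are simplified assumptions.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified stand-in for KeyValueScanner; the real interface is richer.
interface SimpleScanner {
    void seek(long key);
}

class ParallelSeeker {
    // Seek every scanner concurrently, then block until all are positioned.
    static void parallelSeek(List<SimpleScanner> scanners, long seekKey)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(scanners.size());
        CountDownLatch done = new CountDownLatch(scanners.size());
        for (SimpleScanner s : scanners) {
            pool.execute(() -> {
                try {
                    s.seek(seekKey); // position this scanner at the seek key
                } finally {
                    done.countDown();
                }
            });
        }
        done.await(); // all seeks complete before the scan proceeds
        pool.shutdown();
    }
}
```

As the comment notes, this only helps if the per-segment scanners are visible individually; wrapping them in a single MemStoreScanner serializes the work again regardless of the thread pool.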
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15932000#comment-15932000 ] stack commented on HBASE-16417: --- Thanks for the nice writeup.

bq. Basic improves the 50th percentile by 7% but the performance of the 95th and 99th percentiles degrades by 15-30%.

Adding merge fixes the above 99th percentile degradation? You say it does at the end of the paragraph. I don't get why reading from multiple indices ups our latency, unless the lookups are serial (i.e. why we need the merge).

bq. Note that in sync wal mode all policies have the same number of wal files and the same volume of wal data. The number of wal files is smaller with async wal for all policies (in zipfian and uniform key distribution). When you get the answer to why this happens it might explain the number of wal files in eager policy.

That is cool that async is faster for you (we were finding otherwise in our tests... but that was without a regionserver context... maybe we need to look into this). Suggest you file an issue so we can answer the above, [~eshcar] - it is an interesting question. The charts show some nice, substantial dents in GC activity. That's sweet. So, enable BASIC+MERGE as default, with EAGER for the case where a user knows they have a lot of duplicate data?
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15931616#comment-15931616 ] Edward Bortnikov commented on HBASE-16417: -- [~eshcar], thanks for the thorough report - great stuff. Question to all: do these results suggest that we change the default to BASIC+MERGE? It seems that this method does not have any material overhead, even under the uniform workload. If the answer is "yes", we could take one of two ways: (1) say that BASIC+MERGE is the new BASIC (my favorite :)), or (2) introduce a new compaction level (MODERATE?). Let's converge fast - then we can update the documentation and finalize the code. This work notwithstanding, it is still appealing to come up with an automatic policy that tunes hands-free (which was the original intent behind this JIRA). With the 2.0 release on our heels, we might not be able to make it by then. But let's have all the building blocks in place, at least (smile).
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929565#comment-15929565 ] Eshcar Hillel commented on HBASE-16417: --- Updated benchmarks report. You can see in Figure 8 that merging the indices of the pipeline segments into a single index (depicted by the orange bars) eliminates the overhead of reading multiple segments. In addition, Figures 9 and 10 show that the performance of basic and basic-with-merge are equivalent in the write-only workload, both for the uniform distribution and for the Zipfian distribution. I also ran write-only benchmarks with big values. As we anticipated, the effect of reducing the metadata decreases as the size of the data itself increases. We still see the same trend with respect to write amplification with the eager policy under the Zipfian distribution. This means that the main performance benefit we see in the experiment comes from reducing GC; we see almost no gain in throughput from the reduction in compaction. This might be due to the hardware - an SSD, which handles disk compaction well - or because we were not able to saturate the server enough for compaction to become a problem. In any case, running with basic+merge or eager is as good as running with no compaction, and we are not seeing any degradation in performance.
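The "merge" discussed here is index compaction: the sorted cell indices of several pipeline segments are combined into one flat sorted index, without copying the cell data itself. A minimal sketch of that k-way merge, using arrays of longs as stand-ins for cell references (the real code works over HBase Cell indices, so everything here is a simplification):

```java
import java.util.PriorityQueue;

class IndexMerger {
    // segmentIndices: one already-sorted array of cell references per segment.
    // Returns a single flat sorted index covering all segments.
    static long[] merge(long[][] segmentIndices) {
        int total = 0;
        for (long[] idx : segmentIndices) total += idx.length;
        long[] out = new long[total];
        // k-way merge: heap entries are {segment, offset} pairs ordered by key
        PriorityQueue<int[]> heap = new PriorityQueue<>((a, b) ->
            Long.compare(segmentIndices[a[0]][a[1]], segmentIndices[b[0]][b[1]]));
        for (int i = 0; i < segmentIndices.length; i++) {
            if (segmentIndices[i].length > 0) heap.add(new int[] { i, 0 });
        }
        int pos = 0;
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            out[pos++] = segmentIndices[top[0]][top[1]];
            if (++top[1] < segmentIndices[top[0]].length) heap.add(top);
        }
        return out;
    }
}
```

After such a merge a scan consults one index instead of one per segment, which is why Figure 8 shows the multi-segment read overhead disappearing.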
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15906123#comment-15906123 ] Edward Bortnikov commented on HBASE-16417: -- .. So [~eshcar] answered nearly all of it here .. A couple of small remarks. The expected number of 2 segments in the pipeline follows from the fact that disk flush normally happens when there are 4; assuming we grow from 0, the expectation is 2. The varying WAL size with async WAL introduces much noise indeed. However, please note that the overall volume of WAL writes differs between sync and async without a single line of Accordion involved - why does this happen with the same workload? (Note that with sync, the WAL volume is the same no matter what type of in-memory compaction is used.) Looking forward to some help here :)
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904985#comment-15904985 ] Eshcar Hillel commented on HBASE-16417: --- We already ran some experiments with merge, with really good results for the write-only workload while avoiding the extra overhead in mixed workloads. We thought the right way to go was first to refresh the code, commit to master, and then re-run them and publish the results. You can review the code in HBASE-17765.

In the past we ran experiments with value=1KB (see the penultimate report), but since then the code has changed a lot. Indeed, the effect of reducing the metadata decreases as the size of the data itself increases. It's a good idea to run (at least some of) the experiments with 1KB values.

We were unable to get greater throughput in sync WAL mode (even with more than 12 threads), so we decided to test with async WAL, which by its nature helps simulate greater load. Batching at the client side is for the same reason -- it significantly increases the load on the servers and reduces the running time by an order of magnitude. Note that in sync WAL mode all policies have the same number of WAL files and the same volume of WAL data. The number of WAL files is smaller with async WAL for all policies (in both Zipfian and uniform key distributions). When we get the answer to why this happens, it might explain the number of WAL files in the eager policy.

Number of pipeline segments: while 4*4=16 would be the maximal number, 4/2=2 would be the number in expectation.

GC generally takes less than 1% of the running time. Since all experiments run with the same GC parameters, I don't think it is important which parameters we use. We are not trying to optimize performance here, just to have a fair comparison under high load and a high volume of data.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904884#comment-15904884 ] ramkrishna.s.vasudevan commented on HBASE-16417: I was about to read this doc but kept getting distracted. First question: just with the default memstore in place, we saw that async WAL was performing worse than sync WAL. Are you not seeing that? We saw that behaviour with the PE tool, but so far I have not tested with YCSB; maybe in throughput we don't see a hit. That is good to see. Next, regarding IHOP=60: for the 40% memstore and 40% block cache case (total 80%), I believe this would have been too low when you had both MSLAB and block cache, because you have already reserved 80% of your heap for these but still ask the GC to trigger at 60% occupancy. So we will immediately start getting GCs, and I think you will have mixed GCs. But when MSLAB is not enabled, I believe IHOP=60 is fine. I need to verify this. Regarding the GC pattern, I have seen differences when Xms is not equal to Xmx, but that is another story. Eager reducing the number of WALs is something we need to explore. YCSB does not check for correctness of data, so with async WAL we may need to run LoadTestTool, which has the option of checking correctness. And thanks for a great detailed report. Nice work.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904833#comment-15904833 ] Anoop Sam John commented on HBASE-16417: Thanks for the nice perf tests and summary.

bq. HBASE_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=60 -XX:G1HeapWastePercent=20 -XX:G1MixedGCCountTarget=8

I have a concern here: using G1GC with an InitiatingHeapOccupancyPercent (IHOP) of 60%, while in the mixed workload the working size itself is 80% (40% for memstore and 40% for block cache). The GC impact will be higher here. Can we have a test with better GC tuning? We already have load on the GC, and the merge needs more memory for the index copy, so we might not see better results? And yes, reading from more segments instead of 1 can be one reason for the 95th percentile perf degradation.

bq. Since previous experiments show consistently weaker performance with on-heap mslabs we focus on running experiments with no chunks pool and no mslabs

Could the above GC and HBase configs be an issue here also? The impact will be greater when you work with MSLAB, as MSLAB chunks, once created, stay fixed until processed. With G1GC we should revisit our defaults for block cache and memstore size: we need to keep the sum of both under the IHOP, or else raise the IHOP - otherwise the advantage of G1GC itself is lost. The default IHOP is 45%.

bq. We run in synchronous and asynch WAL modes.

How can the perf of the CompactingMemstore be affected by the WAL type? Any reason why you tested both types? Any analysis here?

bq. Batching writes at the client side; buffer size is 10KB

That seems like a small buffer size. Any specific consideration in this selection? Just asking. Still not sure how the Eager mode reduced the number of WAL files.
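The arithmetic behind this heap-budget concern can be sketched in a few lines (illustrative only; the actual trigger decision is made by the JVM's G1 collector, not by HBase):

```java
// With G1GC, the memstore and block cache fractions together should stay
// below the IHOP threshold; otherwise concurrent marking (and then mixed GCs)
// kicks in as soon as the working set fills up.
class G1HeapBudget {
    static boolean fitsUnderIhop(double memstoreFraction, double blockCacheFraction,
                                 double ihopFraction) {
        return memstoreFraction + blockCacheFraction < ihopFraction;
    }
}
```

With the configuration quoted above, 0.40 + 0.40 = 0.80 exceeds the 0.60 IHOP, which is exactly the mismatch being flagged.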
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904483#comment-15904483 ] Anoop Sam John commented on HBASE-16417: bq. how you think the numbers would change if 'default' 1k values were used? Would we see the same benefit do you think?

I guess the improvement will decrease as the value size increases. When the value size is low, like 100 bytes, more cells are added to the CSLM before a flush to disk, and there is more heap-size overhead. The in-memory flush (BASIC) reduces this overhead greatly and also avoids the situation of adding more and more cells to an already large CSLM. But when the value size is larger, we might be dealing with far fewer cells. That is my thinking.

bq. The current default active segment size cap for in-memory flush is 1/4 the memstore size cap for disk flush. Which means that the expected number of segments in the pipeline is 4/2=2.

Why 4/2? Sorry, I am not getting it. As 25% is the in-memory flush cap, ideally 4 segments can be there (it is the basic type with no merge). But we have a blocking factor of 4 by default for the memstore flush size, meaning we allow a max memstore size of 128 MB * 4 before a forceful flush to disk. So 4 * 4 = 16 can be the max number of segments? Is my math wrong somewhere? I was not getting the /2 part.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903831#comment-15903831 ] Edward Bortnikov commented on HBASE-16417: -- bq. On the 90th percentile degradation when BASIC, how many segments we talking 2 or 3 or more than this?

Taking the liberty of answering for [~eshcar]. The current default active segment size cap for in-memory flush is 1/4 of the memstore size cap for disk flush, which means that the expected number of segments in the pipeline is 4/2=2. However, since disk flush is not immediate, new segments can sometimes pile up, especially under a very high write rate as exercised in our test. We don't have easily trackable metrics installed (maybe we should), but probably we're speaking about many more segments here; the number can't exceed 30 - at that point, a forceful merge happens. We guess that looking up the key in every single segment (to initialize the scan) is what leads to the high tail latency. We're taking a closer look at merge (index compaction only, no data copy); hopefully we'll show there's no material damage from it .. even EAGER does not look too bad .. A matter of a few more days of experimentation. Thanks.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903755#comment-15903755 ] stack commented on HBASE-16417: --- bq. Any chance of using larger values (e.g. 2KB)?

Please be explicit about what you are asking for, [~ted_yu]. Asking for 'larger' values without a why or fit criteria you would see satisfied is like asking for values that are 'ethical' or values that smell sweet.

[~eshcar] Nice writeup. Those numbers are looking good (I like the write amplification improvements and the reduced GC...). (Trying to channel [~ted_yu]) How do you think the numbers would change if 'default' 1k values were used? Would we see the same benefit, do you think? On the 90th percentile degradation with BASIC, how many segments are we talking - 2, 3, or more than this? In another issue we might dig into why the degradation... I'd think reading from Segments in parallel should be nice and prompt; probably something dumb we are doing. For another issue. Are you thinking the numbers are good enough to enable BASIC as default? Merging means more memory churn and more CPU expended; it would be cool if we could do without the merge... Nice writeup.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903228#comment-15903228 ] Ted Yu commented on HBASE-16417: bq. small data (100B values)

Any chance of using larger values (e.g. 2KB)?
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903176#comment-15903176 ] Eshcar Hillel commented on HBASE-16417: --- New benchmark results are attached. We focus on small data (100B values), uniform and Zipfian key distributions, and async and sync WAL modes. We see excellent results for the uniform distribution and good results for the Zipfian distribution in write-only mode, and a fair improvement in read latencies in the mixed workload.

Summary of results:

Write-only. We ran write-only benchmarks with Zipfian and uniform key distributions.
* Uniform distribution: basic outperforms no compaction by 40%; eager improves throughput over no compaction by 10%, but the 99th latency percentile with eager is 18% slower than with no compaction, likely due to unneeded in-memory compactions occurring in the background. Basic and eager improve write amplification by 25% and 15%, respectively.
* Zipfian distribution: in async WAL mode, eager improves write amplification by 30%, and basic and eager outperform no compaction by 20% and 15%, respectively; this can be attributed in part to less GC and in part to less IO. In sync WAL mode, eager improves write amplification by 10%; other than that, all policies are comparable, with basic and eager slightly improving over no compaction. In sync WAL mode the throughput is much lower; async WAL mode represents a scenario where the system is loaded by many clients at much higher load.

Mixed workload. Eager improves over no compaction by 6-10%. Basic improves the 50th percentile by 7%, but the 95th and 99th percentiles degrade by 15-30%. This is a result of reading from multiple segments in the compaction pipeline; applying merge to pipeline segments more often should eliminate this overhead.
We experimented with a basic policy that merges the pipeline segments upon each in-memory flush, and this indeed solved the problem. We are currently working on bringing merge back to life, including solving some bugs we identified and adding tests to cover this path.
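The merge-on-every-in-memory-flush idea described above can be illustrated with a toy model (a sketch only; this is not the actual CompactingMemStore/pipeline code, and all names here are made up):

```ruby
# Toy pipeline model: plain flattening appends each flushed segment to the
# pipeline, while the merge-on-flush variant collapses the pipeline into a
# single sorted segment on every in-memory flush, so reads touch one segment.
class ToyPipeline
  attr_reader :segments

  def initialize(merge_on_flush:)
    @merge_on_flush = merge_on_flush
    @segments = []
  end

  # An in-memory flush moves an active segment (a sorted key=>value map)
  # into the pipeline.
  def in_memory_flush(segment)
    @segments << segment
    return unless @merge_on_flush
    merged = @segments.reduce({}) { |acc, s| acc.merge(s) }  # later segments win
    @segments = [merged.sort.to_h]
  end
end

plain  = ToyPipeline.new(merge_on_flush: false)
merged = ToyPipeline.new(merge_on_flush: true)
[{ 'b' => 1 }, { 'a' => 2 }, { 'b' => 3 }].each do |seg|
  plain.in_memory_flush(seg)
  merged.in_memory_flush(seg)
end
puts plain.segments.size    # 3 segments for a read to scan
puts merged.segments.size   # always 1
```

This is why reads suffer without merge: each extra pipeline segment is another sorted structure a scan must consult.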
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15890047#comment-15890047 ] ramkrishna.s.vasudevan commented on HBASE-16417: bq. Value size is set to 100B and number of columns is set to 50. As Duo Zhang mentioned it could be that the size of value plus size of key is overall >200B, especially given the number of columns. Yes, right. Same with the default too. So with 50 columns you will have each with a 100B value. bq. In NONE (which is the current default in master) I see a reduction in the overall wal size from 203GB with synchronous WAL to 189GB with async wal. It's more than a 5% reduction in WAL size. Need to check this.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889602#comment-15889602 ] Duo Zhang commented on HBASE-16417: --- I do not think we will do compression by default so it is strange... Let me take a look later also.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889596#comment-15889596 ] Eshcar Hillel commented on HBASE-16417: --- I am not running with the default values of YCSB. Value size is set to 100B and the number of columns is set to 50. As [~Apache9] mentioned, it could be that the size of the value plus the size of the key is overall >200B, especially given the number of columns. bq. And it is not expected to have a small WAL size for AsyncFSWAL. In NONE (which is the current default in master) I see a reduction in the overall WAL size from 203GB with synchronous WAL to 189GB. It's more than a 5% reduction in WAL size. But could this be attributed to writing to disk in batches instead of every entry being synced to disk? Could it be that compression is more effective with batched writing?
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889466#comment-15889466 ] Duo Zhang commented on HBASE-16417: --- KeyValue has extra fields other than the value, such as the rowkey, family, qualifier and timestamp etc., so a larger size for the WAL is expected. You can increase the value size to see if it is still 2 times larger; the overhead should be a fixed value if you only change the value size. And it is not expected to have a small WAL size for AsyncFSWAL. There must be something wrong. Need to check it. I think we need a UT to reproduce it first. Thanks.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889368#comment-15889368 ] ramkrishna.s.vasudevan commented on HBASE-16417: By default YCSB creates rows of 1K size, with 10 columns each having a 100B value, and with metadata each row will be >1K. Though in YCSB you specify 100G, what is the number of records? Maybe that is the reason. However, why EAGER reduces the size to 124G should be investigated; maybe the truncation is the reason. It is worth debugging.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888755#comment-15888755 ] Eshcar Hillel commented on HBASE-16417: --- I am running YCSB, and the size of values is set to 100B. I checked and there is no double logging, so this is the correct data size written to WAL. From inspecting the number of entries in each WAL file it turns out that each entry takes ~200B. It could be that the size of the metadata per entry is ~100B, so when the values are big this is insignificant, but with small values the amplification is significant.
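A quick back-of-the-envelope check of the hypothesis above (the ~100B overhead figure is the thread's estimate, not a measured constant):

```ruby
# Assumed numbers from the discussion: 100B values, ~100B of per-entry WAL
# metadata (rowkey, family, qualifier, timestamp, type), and 100GB of value
# data written. If the overhead estimate holds, WAL bytes roughly double.
value_bytes    = 100
overhead_bytes = 100    # rough estimate, from ~200B observed per WAL entry
payload_gb     = 100.0  # total value bytes written in the write-only run

wal_gb = payload_gb * (value_bytes + overhead_bytes) / value_bytes
puts "expected WAL size: ~#{wal_gb.round}GB"  # consistent with the ~200GB sync-WAL observation
```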
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888424#comment-15888424 ] ramkrishna.s.vasudevan commented on HBASE-16417: What is the size of each KV? Are you using YCSB or PE? bq. Could it be due to double logging of the same information? I don't think so, but I am not sure. I think it depends on your cell size, though the value is 100B only. bq. Here the sizes of the files vary, NONE/BASIC write roughly 850 files, EAGER roughly 480. Something to check. Async mode as per our experiments was a bit slower and had some bugs that [~Apache9] fixed. But I don't think we had any data loss there. Can check once.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888180#comment-15888180 ] Eshcar Hillel commented on HBASE-16417: --- To measure write amplification in our benchmark I'm trying to capture the total size of data that is written to the WAL during the experiment. I do so by grep-ing log lines containing both "filesize" and "wal" and adding up the values written after "filesize=". I need help in explaining the numbers I get. I run in both synchronous and asynchronous WAL modes, and recall that I write 100GB in the write-only experiments.
(1) In sync mode I get roughly 200GB (!) written to the WAL, under all in-memory compaction policies. In all cases we have 1673 files of 121MB. Is this reasonable? Could it be due to double logging of the same information? Should I expect only 100GB in the WAL? Could it be due to alignment (my values are small -- 100B)? Do you know of any duplication in WAL processing? Obviously I count only the sizes written to HDFS, not considering the 3-way replication done at the data-node level.
(2) In async mode I get different numbers: NONE/BASIC - 189GB, EAGER - 124GB. Here the sizes of the files vary; NONE/BASIC write roughly 850 files, EAGER roughly 480. Can you explain the difference in the data written to the WAL in sync mode vs async mode with no compaction? Could it be due to compression when writing batches of WAL entries? Can the reduced number of files written in EAGER mode be explained by WAL truncation done after in-memory compaction? I realize these are a lot of questions, any input can help here. Thanks!!
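The grep-and-sum methodology described above can be sketched as follows. The log-line layout here is illustrative only; real RegionServer log lines may differ, which is one possible source of error in such a measurement:

```ruby
# Sum the bytes reported after "filesize=" across WAL-related log lines,
# mirroring the grep-based write-amplification measurement in the comment.
def total_wal_bytes(lines)
  lines.grep(/wal/i)                    # keep only WAL-related lines
       .map { |l| l[/filesize=(\d+)/, 1] }
       .compact                          # drop lines without a filesize field
       .sum(&:to_i)
end

log = [
  'INFO  [regionserver] wal.FSHLog: Rolled WAL ... filesize=126877696',
  'INFO  [regionserver] wal.FSHLog: Rolled WAL ... filesize=126877696',
  'INFO  [regionserver] something unrelated',
]
puts total_wal_bytes(log)
```

One caveat this makes visible: the sum counts bytes per WAL roll, so files that are rolled but later truncated or archived still contribute their full size.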
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15863623#comment-15863623 ] Eshcar Hillel commented on HBASE-16417: --- Thanks anoop and ramkrishna -- the quotes were the solution (y)
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15863319#comment-15863319 ] Anoop Sam John commented on HBASE-16417: Try IN_MEMORY_COMPACTION=>'BASIC'
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15863364#comment-15863364 ] Eshcar Hillel commented on HBASE-16417: --- I realized the problem was that I didn't use quotes around the constants, so that's OK. Does anyone know what the difference is between a metadata property and a regular property? Running benchmarks -- will post the results soon.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15863387#comment-15863387 ] ramkrishna.s.vasudevan commented on HBASE-16417: Could you just add the BASIC and EAGER to hbase_constants.rb and try?
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15862759#comment-15862759 ] Eshcar Hillel commented on HBASE-16417: --- Back to running large scale benchmarks! After some changes to the code and some bug fixes, I am now trying to re-run the YCSB benchmarks with the most up-to-date code in master. However, I have a problem when my scripts try to initialize the table with different in-memory compaction policies in the hbase shell:
{code}
hbase(main):002:0> create 'usertable', {NAME=>'values', IN_MEMORY_COMPACTION=>NONE}
Created table usertable
Took 0.3080 seconds
hbase(main):003:0> describe 'usertable'
Table usertable is ENABLED
usertable
COLUMN FAMILIES DESCRIPTION
{NAME => 'values', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0', METADATA => {'IN_MEMORY_COMPACTION' => 'NONE'}}
1 row(s)
Took 0.1670 seconds
hbase(main):004:0> disable 'usertable'
Took 1.2640 seconds
hbase(main):005:0> drop 'usertable'
Took 0.2470 seconds
hbase(main):006:0> create 'usertable', {NAME=>'values', IN_MEMORY_COMPACTION=>BASIC}
NameError: uninitialized constant BASIC
hbase(main):007:0> create 'usertable', {NAME=>'values', IN_MEMORY_COMPACTION=>EAGER}
NameError: uninitialized constant EAGER
{code}
We have tried to make this work in HBASE-17492, but it seems we need some help to get this right once and for all. Questions: (1) Why is IN_MEMORY_COMPACTION classified as METADATA? (2) How come the table is initialized with NONE while EAGER and BASIC are uninitialized constants, when all three are defined in one enum structure? Can anyone help us understand what we are doing wrong in the ruby script?
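For reference, the form that resolves this elsewhere in the thread quotes the policy name as a string, so the shell passes a string instead of trying to resolve a bare Ruby constant (the session below is illustrative, not a captured transcript):
{code}
hbase(main):001:0> create 'usertable', {NAME => 'values', IN_MEMORY_COMPACTION => 'BASIC'}
hbase(main):002:0> create 'usertable', {NAME => 'values', IN_MEMORY_COMPACTION => 'EAGER'}
{code}
NONE presumably works unquoted only because the shell already defines a NONE constant, while BASIC and EAGER are not predefined -- hence the NameError and the suggestion in this thread to add them to hbase_constants.rb.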
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15761351#comment-15761351 ] Eshcar Hillel commented on HBASE-16417: --- Opened a new Jira with the patch for scan-memory-first optimization in HBASE-17339.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15728092#comment-15728092 ] Eshcar Hillel commented on HBASE-16417: --- I am not suggesting we set eager as the default. We can have basic as the default, and if users want they can set eager on a per-CF basis. Eager can be recommended for CFs that have high churn, or that have a small working set at any point in time, like our original sliding-window scenario. For example, we can recommend that all applications that used to set a CF to IN_MEMORY try setting it to eager. With respect to chunks, in my experiments running with MSLABs even with no chunk pool showed inferior results; however, I didn't go much deeper with these parameters and didn't do fully exhaustive tests. I think that you guys are much more familiar with all the details regarding L2 and off-heaping and which parameters can be played with and tuned, so it would be a good idea for you to run these tests.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725961#comment-15725961 ] Anoop Sam John commented on HBASE-16417: The reason I mentioned correcting the GC config and HBase config is that your tests on data compaction were not using MSLAB whereas the others were (the initial compares). So the GC is paying the cost, and it might be giving a false indication that data compaction is better. Yes, data compaction might be better depending on the key generation. More and more duplicated keys (rk+cf+q+ts+type) means in-memory compaction can get rid of many (default table versions = 1). But I don't think we should enable this by default. Yes, we need to consider that more users will go to G1GC. So as per your tests you say disable the memstore chunk pool by default, but enabling MSLAB is OK? This was/is on by default from a long time back. I strongly feel we should revisit our BC and memstore % default values, especially considering that we will have L2 off-heap on now. The data blocks will go to L2; L1 might have only index blocks, so we don't need much size there. Please also note that the off-heap-backed MSLAB pool is already in trunk. We will do tests similar to yours, working with off-heap.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725746#comment-15725746 ] Eshcar Hillel commented on HBASE-16417: --- Thanks [~ram_krish]. AFAIK 2.0 is not released yet. I advocate experimenting with the final default configuration before releasing it, and comparing it to the previous default configuration -- making sure performance remains the same or improves in *all* common workloads, including the mixed workload. If what is planned for 2.0 is 40% memstore, 40% BC, chunk pool on by default, and we assume G1GC will be used by many applications, then yes, I definitely advocate reverting. When running a 3-node cluster the system pays much more for each compaction, since the network is also involved in writing new files. Having fewer compactions/writing fewer MB to HDFS with eager (and also with basic) means avoiding part of this cost. This is emphasized with SSDs: just writing data to disk doesn't cost much, but once you also need to pay for network traffic the advantage is more pronounced. We are running a zipfian distribution via YCSB. This is pretty much the standard distribution for KV-store benchmarks, and it generates duplication. We do plan to play a bit with the alpha parameter to check performance under distributions with lighter or heavier heads. A workload that accesses 10-20% of the keys 80-90% of the time is considered valid. The amount of flushed data depends on the duplication ratio, and also on what the policy decides to flush. Currently basic flushes the entire pipeline; eager only flushes the tail. Other policies might decide differently.
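To get a feel for how much duplication a zipfian workload hands to eager compaction, here is a toy sampler. This is a plain Zipf sampler, not YCSB's scrambled-zipfian generator, and the key-space size and sample count are made-up illustration values:

```ruby
# Sample keys from Zipf(alpha) over a key space and measure the fraction of
# sampled keys that are repeats -- the entries eager in-memory compaction
# can drop before they ever reach disk (with table versions = 1).
ALPHA     = 0.99      # close to YCSB's default zipfian constant
KEYSPACE  = 100_000
N_SAMPLES = 10_000

# Cumulative distribution for P(key k) proportional to 1/k^alpha.
cum = []
total = 0.0
(1..KEYSPACE).each { |k| total += 1.0 / k**ALPHA; cum << total }
cum.map! { |c| c / total }

rng = Random.new(42)
samples = Array.new(N_SAMPLES) do
  r = rng.rand                       # hoist so bsearch sees one fixed target
  cum.bsearch_index { |c| c >= r }
end

dup_fraction = 1.0 - samples.uniq.size.fdiv(samples.size)
puts format('duplicate fraction: %.2f', dup_fraction)
```

Even with the key space 10x larger than the sample count, a sizable fraction of writes hit already-written keys, which is the duplication the thread attributes eager's gains to.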
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725126#comment-15725126 ] ramkrishna.s.vasudevan commented on HBASE-16417: Thanks for the detailed report. It is great and well written. bq. IMHO, these must be explored before setting chunk pool to on by default in 2.0. So do you advocate reverting this change for now? An interesting fact is that with eager compaction (that is, data compaction) you seem to perform better in the 3-node cluster and not so well in the single-node cluster. Why do you think that is happening? My main question is: in these results are you generating a lot of duplicates, such that eager compaction is reducing the duplicates? If there are not many duplicates then the amount of flushes/compactions should be the same, right?
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724919#comment-15724919 ] Eshcar Hillel commented on HBASE-16417: --- Thanks for the comments [~anoop.hbase]. The scope of the current Jira is to explore the performance of different in-memory compaction policies, and therefore all other parameters are used with their default/recommended values. Finding the sweet spot for MSLAB and chunk pool usage vs the size allocated for the memstore and BC vs the parameters for the GC should be explored in a different Jira; there are too many parameters to optimize in a single round, and finding the global maximum in terms of performance (total throughput, read and write latency, etc.) may require multiple experiments. IMHO, these must be explored before setting the chunk pool to on by default in 2.0. Changing some parameters without re-tuning other parameters can cause major degradation in performance. But it is not the ticket of the current Jira. In my experiments IHOP is set to 60%. I can change it to 45% or 50% for future experiments, but I don't think it will make much difference. The policy that is named none refers to using the default memstore, with no compaction whatsoever. Batching size at the client side: some applications don't use any batching. Optimally, to have a clean measure of the write latency the experiments should use zero batching, but this would make the experiment infinitely long. In striking the balance between valid measurements and reasonable run time, a buffer of size 10KB (which is approximately 100 put operations) was the optimal point for me. Scan memstore-only-first optimization: this is not a correct solution yet, since it does not handle all the issues that you raised regarding TS etc. It is only for demonstrating the potential gain of the optimization. I can publish the patch in a different Jira after we clean up some open Jiras we have here :).
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724769#comment-15724769 ] Anoop Sam John commented on HBASE-16417: I think some direct change is needed now. As per your doc, the latest tests are with no MSLAB at all, while 2.0 now defaults to even having the MSLAB pool on. As you also suspected, the G1GC initial heap occupancy factor and the BC + memstore size are the issues. We should set IHOP to, say, 50% and then make sure BC + memstore size is under this.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724760#comment-15724760 ] Anoop Sam John commented on HBASE-16417: Thanks for the great work. You tried different cases and captured all the results nicely. bq. This might be since now the block cache is used in full capacity, leaving the gc to struggle with less free space. The chunk pool uses more space than the application can afford. Your guess should be absolutely correct. Is the initial heap occupancy percent for G1GC configured? The default is 45%. In our write-only tests we put 42% for the global memstore size with this factor in mind. Now in your tests BC + memstore is 80%. In some other place also I have commented that we must revisit the defaults for BC and memstore size considering G1GC. A working-set size of at least 80% is too much for G1GC, against its spirit of predictable GC pauses. bq. and batching writes at the client side in buffer of size 10KB (vs no WAL and a buffer of size 12MB in previous experiments) Why was the batching size at the client reduced? 10KB? That is too small, no? bq. none no compaction. What does the none case mean here? Flattening the index when #segments in the pipeline is >2? So it clearly says the memstore-first, then HFiles, optimization is helping. So you will raise it and provide your internal patches there for initial reference?
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722264#comment-15722264 ] Eshcar Hillel commented on HBASE-16417: --- Attached new benchmark results. This time we focus on running experiments with no chunk pool and no MSLABs. We compare the three policies discussed in HBASE-16851: *none* - no compaction; *basic* - flattening the index, merging only upon flush to disk (HBASE-17081); and *eager* - data compaction. You may review the results of the cluster benchmarks again, as they include experiments with the new *basic* policy (Anastasia's code). In the write-only workload eager improves write amplification by 21%, while basic improves it by 15%; basic and eager outperform no compaction by 15% and 25%, respectively. In the read-write workload we show a reduction in the number of cache accesses and cache misses by basic and eager w.r.t. no compaction. We show a modest improvement in the average read latency of basic and eager over none. With slower disks (HDD), the reduction in cache misses will have a more positive effect on the average and read tail latencies. We experimented with a read-operation optimization that speculatively scans memory-only components first, and only if the result is incomplete scans both memory and disk. We ran the mixed-workload experiment for none, basic and eager with and without the optimization. The optimization improves the average read latency of none and basic by 10% and the average read latency of eager by 7%. Reductions in cache accesses (70% fewer accesses) and cache misses (40%-45% fewer misses) lead to this improvement. Finally, we ran a write-scan workload with the scan size chosen uniformly from 1-5000. The scan performance of all three policies is comparable, with a modest improvement of basic and eager over none (5%-7%).
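The speculative memory-first read described above can be sketched with a toy model. Names and structures here are illustrative, not HBase's internal API, and the sketch deliberately ignores the timestamp-correctness issues acknowledged later in the thread:

```ruby
# Toy memory-first read: consult only the in-memory segments; fall back to
# a combined memory + disk lookup only when memory cannot answer. The win
# is the avoided block-cache access / cache miss on memory-resident rows.
def speculative_get(key, memstore, hfiles)
  hit = memstore[key]
  return [hit, :memory_only] if hit    # complete answer from memory: skip disk
  [hfiles[key], :memory_and_disk]      # incomplete: take the full scan path
end

memstore = { 'row1' => 'v2' }                    # freshest version in memory
hfiles   = { 'row1' => 'v1', 'row2' => 'v0' }    # older data on disk

puts speculative_get('row1', memstore, hfiles).inspect  # served from memory only
puts speculative_get('row2', memstore, hfiles).inspect  # needs the disk component
```

A correct version would also have to decide completeness per column and per timestamp, which is why the thread treats this as a demonstration of potential gain rather than a finished solution.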
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15692865#comment-15692865 ] ramkrishna.s.vasudevan commented on HBASE-16417: In my opinion #2 has an impact due to xmx also. If you have 30G then you could actually have a better GC pattern even with more threads and more flushes and GCs. With 16G your heap is overloaded when flushes/compactions happen frequently, as the major garbage generators are those two operations.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15692806#comment-15692806 ] Anoop Sam John commented on HBASE-16417: 1. Ya pls turn ON WAL. The configs that were put before were copied from the off-heap write tests. We had some issues to solve there to use the default WAL along with off-heap cells.. That is why it was turned OFF then. Now even our tests use WAL writes. Sorry for the mistake. 2. Again somewhat related to WAL off.. The speed with which the writes were happening was higher, so 2 threads are not enough for flush. Ya, when the #flushThreads and/or #compactionThreads are more, the IO pressure will be more and the garbage creation rate also.. We will suffer with more GCs then.. My suggestion would be to turn ON WAL and play with diff #threads and see how it is.. The blocking store files change was also along similar lines. 3. We use G1GC in tests. And the workload is RW. Means the heap memory of the RS will always be above 80%. This will make more GCs happen. The initial heap occupancy percent you have is 45% I guess.. 80% is too high. So my suggestion was that HBase should think of bringing it down now. Or else we have to say, when using G1GC, how to tune IHOP. A large value for it dilutes the need for G1 itself, i.e. predictable GC pauses.. cc [~saint@gmail.com]
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15692791#comment-15692791 ] Anoop Sam John commented on HBASE-16417: G1GC - Initial Heap Occupancy percentage
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15692764#comment-15692764 ] Eshcar Hillel commented on HBASE-16417: --- Sorry, lost you with the acronyms ... what's IHOP? What configuration do you suggest to test exactly? Also wanted to re-consider some other settings: (1) WAL - is there a reason not to run with WAL? Obviously it is easier to saturate the servers with no WAL, but this is not realistic, as almost 100% of applications use the WAL, and as we've seen, different settings result in different results - let's make sure there are no surprises there. (2) #flush threads (2->10), #store files before blocking writes (10->25) - why not use the default values? If most applications use the defaults then we need to test with those values, and if these values are the recommended ones then why not change the defaults? (3) memstore and block cache - why not just use the defaults, 40%, 40%?
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15692575#comment-15692575 ] Anoop Sam John commented on HBASE-16417: Any chance for doing such a test? We can help with config tuning
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15692572#comment-15692572 ] Anoop Sam John commented on HBASE-16417: Makes sense.. Agree with the points raised by Eshcar and Ram. A 40% default size for both memstore and block cache seems not a good choice. We should change this, especially after G1GC. This makes the working memory size very high. The IHOP is very important for G1GC to get a more predictable GC pause. It defaults to 45% only. Maybe if you tune the memstore and block cache size and IHOP accordingly, you might see better results. And the best solution we propose would be, as Ram said, use an L2 off-heap BC. The latest results shared by Alibaba, after backporting the HBASE-11425 work, reveal that we do better with the new off-heap L2 cache compared to the L1 cache (on heap)
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15692398#comment-15692398 ] ramkrishna.s.vasudevan commented on HBASE-16417: I agree. So this again takes us back to the point that if we have the L2 cache off heap then we will benefit from MSLAB and the chunk pool. Just saying.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15692359#comment-15692359 ] Eshcar Hillel commented on HBASE-16417: --- My explanation is that with 100% writes the block cache is empty and does not take any memory from the heap. Whereas when there are even just 5% reads (it will be the same for even less) the block cache is full, taking 40% (!!!) of the heap space (or 38% to be precise in our settings). Only 62% of the heap space is left to be used by the memstore, chunk pool, compactions (both on disk and in memory) and also the GC, which is known to take a lot of space (G1GC). So at some point there is just not enough space for all of these to work well, or even to work at all.
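The arithmetic in this comment can be made concrete with a back-of-envelope sketch (the 16 GB heap size is an assumed example; the percentages are from the comment, and the helper class is made up):

```java
// Back-of-envelope heap budget for the setup described above (hypothetical
// helper; the 16 GB heap is an assumed example, percentages from the comment).
public class HeapBudget {
    public static void main(String[] args) {
        double heapGb = 16.0;               // example RS heap size
        double blockCache = 0.38;           // block cache share in these settings
        double memstore = 0.40;             // global memstore share
        double workingSet = blockCache + memstore;    // 0.78 of the heap
        double slackGb = (1.0 - workingSet) * heapGb; // left for GC, chunk pool, compactions
        // With G1's default InitiatingHeapOccupancyPercent of 45, a ~78-80%
        // resident working set keeps concurrent marking permanently triggered.
        System.out.printf("working set %.0f%%, slack %.1f GB%n",
                workingSet * 100, slackGb);
    }
}
```

On a 16 GB heap that leaves roughly 3.5 GB of slack for everything that churns, which matches the observation that the GC struggles once reads fill the block cache.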
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15692240#comment-15692240 ] ramkrishna.s.vasudevan commented on HBASE-16417: I am more interested in fig 8. Why do you think that with MSLAB and the chunk pool we get much poorer latency and throughput? We only have 5% reads.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15691259#comment-15691259 ] Eshcar Hillel commented on HBASE-16417: --- Evaluation results of benchmarks on a 3-machine cluster are attached. Main points: (1) in the write-only workload ic2 and dc outperform no compaction (with no MSLABs) by 25%. This can be attributed in part to running less GC and in part to executing less IO. dc improves write amplification by 25%. (2) in the mixed workload all three options with no MSLABs (no compaction, ic2, and dc) have comparable read latency and throughput. The avg read latency of no compaction with MSLABs is 2.5x that of the other options running with no MSLABs (one run in this setting even failed to complete).
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663137#comment-15663137 ] Anastasia Braginsky commented on HBASE-16417: - [~ram_krish], this fix is also included there.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15662732#comment-15662732 ] ramkrishna.s.vasudevan commented on HBASE-16417: Is this JIRA due to this? bq. This bug is about to be fixed in a new patch Anastasia is working on.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15662731#comment-15662731 ] ramkrishna.s.vasudevan commented on HBASE-16417: Actually we had a discussion around this. But as Anoop said we cannot go with first memstore and then HFiles, for the reasons stated above. For a given row - if there are 100 qualifiers (single CF), you could add them in 100 different puts one by one. So in this case when we say get that row - we are not sure if by this time any flush had happened moving some of the qualifiers to a file. The InternalScan is not exposed, but if you want a behaviour where you are sure that you do frequent updates and fetch only the recent one (the catch is that if the recent one is not in the memstore - you won't get any result) then we may have to expose a readOnlyMemstore type of API in Scan. But I am not sure how beneficial it will be where there are more chances of missing results.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15662618#comment-15662618 ] Anoop Sam John commented on HBASE-16417: It is not just a TS-being-applied-by-client issue. (Yes, that is also one, and when we are sure that all TS are applied by the server only and are strictly increasing, some other optimizations are also possible. There is some jira around that. Forgot the id. But LarsH raised that.) The main thing is that when we do a row get, how will you know that you are done with all the columns in that row? HBase being a CF-oriented NoSQL store, we don't know the columns within a CF and they can differ from row to row. But when we know the column qualifiers always and specify them in the Get, and we look for only one version, and we are sure about the TS increasing nature, ya the optimization is possible. The ColumnTracker always tracks and allows only the given columns in the Get to be selected. And if the other 2 also stand, then we can do this optimization I believe.. I did not do any code read or deep analysis.. I get your point also.. In a use case like you described, it is something to really think about {code} /** * Special scanner, currently used for increment operations to * allow additional server-side arguments for Scan operations. * * Rather than adding new options/parameters to the public Scan API, this new * class has been created. * * Supports adding an option to only read from the MemStore with * {@link #checkOnlyMemStore()} or to only read from StoreFiles with * {@link #checkOnlyStoreFiles()}. */ @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) public class InternalScan extends Scan { {code} This is not exposed on the public client side. It can only be used within CPs.
JFYI
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15662097#comment-15662097 ] Eshcar Hillel commented on HBASE-16417: --- bq. then also even if we read all qualifiers from memstore itself, how we know there are no other versions of this in other HFiles. I think I understand. So it is not enough just to check after the first (memstore-only) round that the result is not empty -- we need to go on and check that we retrieved data for all qualifiers in the get query, and if not, do a second round which also seeks the HFiles. Any problem with this? bq. U can see there is an InternalScan extension for Scan where one can say use memstore only or not. I am not familiar with this internal scanner. Where/when is it being used? bq. So this can be done with certain hard limitations. Only version should be present and/or ts on puts are always increasing. Yes, if the application manipulates the ts so that it is not always increasing you have a point. Is there a way to know this for sure? I don't think so. Bottom line, [~anoop.hbase] you raise some valid concerns. So it might be that we cannot apply this optimization in all cases; however, I am confident that 99.99% of applications can benefit from such an optimization, and it is highly unreasonable not to apply it just to allow corner cases like manipulating timestamps. Could we have the application "announce" that it is manipulating ts, so that we avoid this optimization then but apply it in all other common cases (?)
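The completeness check discussed here can be sketched as follows (hypothetical Java, not HBase code; all names are made up). It assumes the limitation Anoop states: in-memory versions must always be newer than on-disk ones, i.e. timestamps are not manipulated, and the Get lists explicit qualifiers:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of the speculative memstore-first get discussed above (hypothetical
// code, not HBase). The HFile round is skipped only when the memstore round
// answered *every* requested qualifier; this assumes in-memory versions are
// always newer (client-manipulated timestamps would break the assumption).
public class MemstoreFirstGet {
    static final Map<String, String> memstore = new HashMap<>(); // qualifier -> value
    static final Map<String, String> hfiles = new HashMap<>();

    static Map<String, String> get(Set<String> qualifiers) {
        Map<String, String> result = new HashMap<>();
        for (String q : qualifiers) {
            String v = memstore.get(q);
            if (v != null) result.put(q, v);
        }
        if (result.keySet().equals(qualifiers)) {
            return result; // complete: served from memory, no disk seek
        }
        // Incomplete: second round over memstore + HFiles; memstore versions win.
        Map<String, String> full = new HashMap<>(hfiles);
        full.putAll(memstore);
        full.keySet().retainAll(qualifiers);
        return full;
    }

    public static void main(String[] args) {
        memstore.put("cf:a", "a2");
        hfiles.put("cf:a", "a1"); // older version, shadowed by the memstore
        hfiles.put("cf:b", "b1"); // only on disk, forces the second round
        System.out.println(get(Set.of("cf:a")).equals(Map.of("cf:a", "a2")));
        System.out.println(get(Set.of("cf:a", "cf:b"))
                .equals(Map.of("cf:a", "a2", "cf:b", "b1")));
    }
}
```

Note the check is on the qualifier set, not on the result being non-empty: a get for cf:a alone never touches disk, while adding cf:b falls through to the combined round.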
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15661998#comment-15661998 ] Edward Bortnikov commented on HBASE-16417: -- [~anoop.hbase], we certainly appreciate the input, feel free to fire the first thoughts going fwd (smile). Yes, we thought about the multi-CF case. We are speaking of single-row gets only. The idea was to try to fetch from the set of memstore scanners first. If the data can be retrieved, there is no need to go look in the HFiles - isn't it? Am I missing something here?
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15661784#comment-15661784 ] Anoop Sam John commented on HBASE-16417: bq. 4. Change read (get) implementation to first seek for the key in memstore(s) only, and only if no matching entry is found seek in all memstore segments and all relevant store files. This could be a subject of another Jira. We believe this would be beneficial also with no compaction, and even more when index-/data-compaction is employed. Any thought on this direction? There are a few issues with this. When we do a get for a row, the memstore+HFiles read happens per CF. But how can we know all possible qualifiers in that? My question is, when we are able to find an entry for the rk in the memstore, how can we be sure that there are no other entries for this rk in the HFiles? So suppose qualifiers are also mentioned in the Get (Get#addColumn(byte [] family, byte [] qualifier)); then even if we read all qualifiers from the memstore itself, how do we know there are no other versions of these in other HFiles? You can see there is an InternalScan extension for Scan where one can say use memstore only or not. But what I am getting is that what you are saying is a bit different. So this can be done with certain hard limitations: only one version should be present and/or the ts on puts is always increasing, Gets are issued with columns, and/or while doing writes all columns of a CF are put together. Just noting down whatever comes at first thought.
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15660109#comment-15660109 ] Anastasia Braginsky commented on HBASE-16417: - Hi All, I just want you to pay attention that I have opened HBASE-17081, where we want to add the ability to flush the entire content of the CompactingMemStore to disk (the active segment and the entire compacting pipeline). [~anoop.hbase] and [~ram_krish], you have actually raised the preferability of this over the merge. The code will follow very soon. Thanks, Anastasia
[ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15659859#comment-15659859 ] Edward Bortnikov commented on HBASE-16417: -- Just emphasizing the #4 point raised by [~eshcar], it looks pretty important. Does anyone see a problem with the "try-to-read-from-the-memstore-first" approach for scans? It seems to be pretty important for in-memory compaction. Please speak up (smile).