[ https://issues.apache.org/jira/browse/HBASE-18526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131131#comment-16131131 ]
Hudson commented on HBASE-18526:
--------------------------------

FAILURE: Integrated in Jenkins build HBASE-14070.HLC #233 (See [https://builds.apache.org/job/HBASE-14070.HLC/233/])
HBASE-18526 FIFOCompactionPolicy pre-check uses wrong scope (Vladimir (tedyu: rev aa8f67a148cbefbfc4bfdc25b2dc48c7ed947212)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java

> FIFOCompactionPolicy pre-check uses wrong scope
> -----------------------------------------------
>
>                 Key: HBASE-18526
>                 URL: https://issues.apache.org/jira/browse/HBASE-18526
>             Project: HBase
>          Issue Type: Bug
>          Components: master
>    Affects Versions: 1.3.1
>            Reporter: Lars George
>            Assignee: Vladimir Rodionov
>             Fix For: 3.0.0, 1.4.0, 1.5.0, 2.0.0-alpha-2
>
>         Attachments: 18526.branch-1.txt, HBASE-18526-v1.patch
>
>
> See https://issues.apache.org/jira/browse/HBASE-14468
> It adds this check to {{HMaster.checkCompactionPolicy()}}:
> {code}
> // 1. Check TTL
> if (hcd.getTimeToLive() == HColumnDescriptor.DEFAULT_TTL) {
>   message = "Default TTL is not supported for FIFO compaction";
>   throw new IOException(message);
> }
> // 2. Check min versions
> if (hcd.getMinVersions() > 0) {
>   message = "MIN_VERSION > 0 is not supported for FIFO compaction";
>   throw new IOException(message);
> }
> // 3. blocking file count
> String sbfc = htd.getConfigurationValue(HStore.BLOCKING_STOREFILES_KEY);
> if (sbfc != null) {
>   blockingFileCount = Integer.parseInt(sbfc);
> }
> if (blockingFileCount < 1000) {
>   message =
>       "blocking file count '" + HStore.BLOCKING_STOREFILES_KEY + "' " + blockingFileCount
>           + " is below recommended minimum of 1000";
>   throw new IOException(message);
> }
> {code}
> Why does it only check the blocking file count on the HTD level, while the
> others are checked on the HCD level?
> For example, the following fails because of it:
> {noformat}
> hbase(main):008:0> create 'ttltable', { NAME => 'cf1', TTL => 300,
>   CONFIGURATION => { 'hbase.hstore.defaultengine.compactionpolicy.class' =>
>   'org.apache.hadoop.hbase.regionserver.compactions.FIFOCompactionPolicy',
>   'hbase.hstore.blockingStoreFiles' => 2000 } }
>
> ERROR: org.apache.hadoop.hbase.DoNotRetryIOException: blocking file
> count 'hbase.hstore.blockingStoreFiles' 10 is below recommended
> minimum of 1000 Set hbase.table.sanity.checks to false at conf or
> table descriptor if you want to bypass sanity checks
> 	at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1782)
> 	at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1663)
> 	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1545)
> 	at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:469)
> 	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58549)
> 	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
> 	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
> 	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> 	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: java.io.IOException: blocking file count
> 'hbase.hstore.blockingStoreFiles' 10 is below recommended minimum of
> 1000
> 	at org.apache.hadoop.hbase.master.HMaster.checkCompactionPolicy(HMaster.java:1773)
> 	at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1661)
> 	... 7 more
> {noformat}
> The check should be performed on the column-family level instead.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
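The mismatch described in the report can be sketched in isolation: the pre-check should prefer a column-family-level value of `hbase.hstore.blockingStoreFiles` over the table-level one. This is a minimal, self-contained model of that resolution order, not the actual HMaster fix: the real `HTableDescriptor`/`HColumnDescriptor` types are stood in for by plain maps, and `resolveBlockingStoreFiles` is a hypothetical helper name.

```java
import java.util.HashMap;
import java.util.Map;

public class FifoPrecheckSketch {

  static final String BLOCKING_STOREFILES_KEY = "hbase.hstore.blockingStoreFiles";
  // 10 is the default cited in the error message from the report.
  static final int DEFAULT_BLOCKING_STOREFILE_COUNT = 10;

  // Prefer the column-family-level setting, then the table-level one,
  // then the default -- the resolution order the reporter asks for.
  static int resolveBlockingStoreFiles(Map<String, String> tableConf,
                                       Map<String, String> familyConf) {
    String value = familyConf.get(BLOCKING_STOREFILES_KEY);
    if (value == null) {
      value = tableConf.get(BLOCKING_STOREFILES_KEY);
    }
    return value == null ? DEFAULT_BLOCKING_STOREFILE_COUNT : Integer.parseInt(value);
  }

  public static void main(String[] args) {
    Map<String, String> tableConf = new HashMap<>();
    Map<String, String> familyConf = new HashMap<>();
    // The shell example in the report sets the key in the CF CONFIGURATION map:
    familyConf.put(BLOCKING_STOREFILES_KEY, "2000");
    int count = resolveBlockingStoreFiles(tableConf, familyConf);
    if (count < 1000) {
      throw new IllegalStateException("blocking file count " + count
          + " is below recommended minimum of 1000");
    }
    System.out.println("pre-check passed with blocking file count " + count);
  }
}
```

With CF-level resolution, the reporter's `create 'ttltable', ...` example would pass the sanity check, since the family configuration sets the count to 2000.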