[
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15995774#comment-15995774
]
Mujtaba Chohan commented on PHOENIX-3797:
-----------------------------------------
[~rajeshbabu] The above was with HBase 0.98 and I'll try to get a clean repro.
Meanwhile, with HBase 1.3.1, if I try to split a table while a data load is in
progress, the table remains in the SPLITTING_NEW state and the index writer stays blocked.
The table splits fine if there are no active writes happening to it when the
split is requested.
{noformat}
Thread 163 (RpcServer.FifoWFPBQ.priority.handler=19,queue=1,port=48109):
State: WAITING
Blocked count: 100
Waited count: 463
Waiting on com.google.common.util.concurrent.AbstractFuture$Sync@16703eda
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:275)
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.write(ParallelWriterIndexCommitter.java:197)
org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:185)
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:146)
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:135)
org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:474)
org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:407)
org.apache.phoenix.hbase.index.Indexer.postPut(Indexer.java:375)
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$32.call(RegionCoprocessorHost.java:956)
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
{noformat}
The following schema was used, with data upserted in the background in batches of 1000 rows:
{noformat}
CREATE TABLE IF NOT EXISTS T (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT NULL,
PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, FID CHAR(15),
CREATED_BY_ID VARCHAR,
FH VARCHAR, DT VARCHAR, OS VARCHAR, NS VARCHAR, OFN VARCHAR CONSTRAINT PK
PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI ))
VERSIONS=1,MULTI_TENANT=true,IMMUTABLE_ROWS=true;
CREATE LOCAL INDEX IF NOT EXISTS TIDX ON T (PKF, CRD, PKP, EHI)
INCLUDE (FID, CREATED_BY_ID, FH, DT, OS, NS, OFN);
{noformat}
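For context, here is a minimal sketch of the kind of load-and-split driver that reproduces the scenario above: a background thread upserts rows into T in batches of 1000 through the Phoenix JDBC driver while the main thread requests a split via the HBase Admin API. The JDBC URL, generated column values, sleep duration, and the use of Admin.split are assumptions for illustration, not the exact harness used here.
{noformat}
// Hypothetical repro driver (placeholder values, not the exact harness used in this report).
import java.sql.Connection;
import java.sql.Date;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SplitUnderLoad {
  public static void main(String[] args) throws Exception {
    // Background loader: upsert rows into T, committing every 1000 rows.
    Thread loader = new Thread(() -> {
      try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
        conn.setAutoCommit(false);
        PreparedStatement stmt = conn.prepareStatement(
            "UPSERT INTO T (PKA, PKF, PKP, CRD, EHI) VALUES (?, ?, ?, ?, ?)");
        for (int i = 0; i < 1_000_000; i++) {
          stmt.setString(1, String.format("%015d", i));           // PKA CHAR(15)
          stmt.setString(2, "abc");                               // PKF CHAR(3)
          stmt.setString(3, String.format("%015d", i % 1000));    // PKP CHAR(15)
          stmt.setDate(4, new Date(System.currentTimeMillis()));  // CRD DATE
          stmt.setString(5, String.format("%015d", i));           // EHI CHAR(15)
          stmt.execute();
          if (i % 1000 == 0) {
            conn.commit();  // batch size of 1000, as in the report
          }
        }
        conn.commit();
      } catch (Exception e) {
        e.printStackTrace();
      }
    });
    loader.start();

    Thread.sleep(30_000);  // let some data accumulate before splitting

    // Request a split while upserts are still in flight; with HBase 1.3.1 the
    // region reportedly stays in SPLITTING_NEW and the index writer blocks.
    try (org.apache.hadoop.hbase.client.Connection hconn =
             ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = hconn.getAdmin()) {
      admin.split(TableName.valueOf("T"));
    }

    loader.join();
  }
}
{noformat}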
> Local Index - Compaction fails on table with local index due to non-increasing bloom keys
> -----------------------------------------------------------------------------------------
>
> Key: PHOENIX-3797
> URL: https://issues.apache.org/jira/browse/PHOENIX-3797
> Project: Phoenix
> Issue Type: Bug
> Environment: Head of 4.x-HBase-0.98 with PHOENIX-3796 patch applied.
> HBase 0.98.23-hadoop2
> Reporter: Mujtaba Chohan
>
> Compaction fails on table with local index.
> {noformat}
> 2017-04-19 16:37:56,521 ERROR [RS:0;host:59455-smallCompactions-1492644947594] regionserver.CompactSplitThread: Compaction failed Request = regionName=FHA,00Dxx0000001gES005001xx000003DGPd,1492644985470.92ec6436984981cdc8ef02388005a957., storeName=L#0, fileCount=3, fileSize=44.4 M (23.0 M, 10.7 M, 10.8 M), priority=7, time=7442973347247614
> java.io.IOException: Non-increasing Bloom keys: 00Dxx0000001gES005001xx000003DGPd\x00\x00\x80\x00\x01H+&\xA1(00Dxx0000001gER001001xx000003DGPb01739544DCtf after 00Dxx0000001gES005001xx000003DGPd\x00\x00\x80\x00\x01I+\xF4\x9Ax00Dxx0000001gER001001xx000003DGPa017115434KTM
>   at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.appendGeneralBloomfilter(StoreFile.java:960)
>   at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:996)
>   at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:428)
>   at org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:276)
>   at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:64)
>   at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:121)
>   at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1154)
>   at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1559)
>   at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:502)
>   at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:540)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:722)
> {noformat}