[jira] [Updated] (PHOENIX-1718) Unable to find cached index metadata during the stability test with Phoenix

2018-08-06 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-1718:
-
Description: 
I am running a stability test with Phoenix 4.2.1. The regionservers became very
slow after about 4 hours, and I found error logs in the regionserver log file.

In this scenario the cluster has 8 machines (128G RAM, 24 cores, 48T disk). I
set up 2 regionservers on each machine (16 RS in total).

1. Create 8 tables, TEST_USER0 through TEST_USER7, each with a local index:

create table TEST_USER0 (
    id varchar primary key,
    attr1 varchar, attr2 varchar, attr3 varchar, attr4 varchar, attr5 varchar,
    attr6 integer, attr7 integer, attr8 integer, attr9 integer, attr10 integer
) DATA_BLOCK_ENCODING='FAST_DIFF', VERSIONS=1, BLOOMFILTER='ROW',
  COMPRESSION='LZ4', BLOCKSIZE='65536', SALT_BUCKETS=32;

create local index TEST_USER_INDEX0 on TEST5.TEST_USER0
    (attr1, attr2, attr3, attr4, attr5, attr6, attr7, attr8, attr9, attr10);


2. Deploy a Phoenix client on each machine to upsert data into the tables
(client 1 upserts into TEST_USER0, client 2 into TEST_USER1, and so on).
Each client starts 6 threads; each thread upserts 10,000 rows per batch,
500,000,000 rows in total. All 8 clients ran at the same time.
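The load pattern above can be sketched as follows. The batching helper is plain Python; the table name, column list, and the DB-API style `executemany`/`commit` calls are assumptions for illustration (e.g. via a driver such as `phoenixdb`), not taken from this report:

```python
import itertools

BATCH_SIZE = 10_000           # rows committed per batch, as in the report
THREADS_PER_CLIENT = 6
ROWS_PER_THREAD = 500_000_000

def batches(rows, size=BATCH_SIZE):
    """Split an iterable of rows into lists of at most `size` rows."""
    it = iter(rows)
    while chunk := list(itertools.islice(it, size)):
        yield chunk

# Hypothetical upsert statement; column list mirrors the DDL above.
UPSERT = ("UPSERT INTO TEST5.TEST_USER0 "
          "(id, attr1, attr2, attr3, attr4, attr5, "
          "attr6, attr7, attr8, attr9, attr10) "
          "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)")

def upsert_worker(conn, rows):
    """One of the 6 threads: write and commit one 10,000-row batch at a time."""
    cursor = conn.cursor()
    for chunk in batches(rows):
        cursor.executemany(UPSERT, chunk)
        conn.commit()
```

The reported aggregate rate follows from this shape: 8 clients x 6 threads, each committing 10,000-row batches concurrently against the salted tables.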

The log is below. After about 4 hours there were roughly 1,000,000,000 rows in
HBase; at around 4 hours 50 minutes the error started occurring frequently and
the rps became very slow, less than 10,000 (7, in normal).

2015-03-09 19:15:13,337 ERROR [B.DefaultRpcServer.handler=2,queue=2,port=60022] 
parallel.BaseTaskRunner: Found a failed task because: 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
(INT10): Unable to find cached index metadata. key=-1715879467965695792 
region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
Index update failed
java.util.concurrent.ExecutionException: 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
(INT10): Unable to find cached index metadata. key=-1715879467965695792 
region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
Index update failed
at 
com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
at 
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
at 
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
at 
org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:140)
at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:274)
at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:203)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:881)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1522)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1597)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1554)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:877)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2476)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2263)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2215)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2219)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4376)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3580)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3469)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29931)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): 
ERROR 2008 (INT10): Unable to find cached index metadata. 
key=-1715879467965695792 
region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
Index update failed
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:76)
at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
at 
org.apache.phoenix.index.Phoe
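For context on the error itself: ERROR 2008 (INT10) means the region server could not find, in its server-side cache, the index metadata the client had shipped along with the mutation batch; entries in that cache have a TTL and can expire under long-running or heavily loaded commits. One commonly suggested mitigation (an assumption here, not a confirmed fix for this report) is to raise the TTL via `phoenix.coprocessor.maxServerCacheTimeToLiveMs` (default 30,000 ms in Phoenix 4.x) in hbase-site.xml on the region servers:

```xml
<!-- hbase-site.xml on the region servers; the value below is illustrative -->
<property>
  <!-- How long the server-side cache of index metadata is retained.
       Raising it can help when batches take longer than the TTL to
       reach the region server. -->
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>180000</value>
</property>
```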

[jira] [Updated] (PHOENIX-1718) Unable to find cached index metadata during the stability test with Phoenix

2015-03-12 Thread wuchengzhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wuchengzhi updated PHOENIX-1718:

Attachment: hbase-hadoop-regionserver-cluster-node134 .zip

The attachment is part of the regionserver log; the error starts at line 45179
of the log file.

> Unable to find cached index metadata during the stability test with Phoenix
> --
>
> Key: PHOENIX-1718
> URL: https://issues.apache.org/jira/browse/PHOENIX-1718
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2
> Environment: linux os ( 128G ram,48T disk,24 cores) * 8
> Hadoop 2.5.1
> HBase 0.98.7
> Phoenix 4.2.1
>Reporter: wuchengzhi
>Priority: Critical
> Attachments: hbase-hadoop-regionserver-cluster-node134 .zip
>