[ 
https://issues.apache.org/jira/browse/PHOENIX-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359811#comment-14359811
 ] 

wuchengzhi commented on PHOENIX-1718:
-------------------------------------

Hi [~rajeshbabu],

I am upserting data like this:

import java.sql.Connection;
import java.sql.Statement;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

int threads = 6;
final int rowBatch = 10000;
final int totalRows = 500000000;

// run the 6 upsert loops concurrently on the pool
ThreadPoolExecutor executor = new ThreadPoolExecutor(threads, threads, 30,
        TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(50));
for (int i = 0; i < threads; i++) {
    executor.execute(new Runnable() {
        public void run() {
            try {
                Connection conn = createConnection(server, port, db, user, passwd);
                conn.setAutoCommit(false);
                Statement stmt = conn.createStatement();
                for (int j = 0; j < totalRows; j++) {
                    String sql = createUpsertSql(db, table, ..., ...);
                    stmt.addBatch(sql);
                    // flush and commit every rowBatch statements
                    if (j > 0 && j % rowBatch == 0) {
                        stmt.executeBatch();
                        conn.commit();
                    }
                }
                // flush the final partial batch
                stmt.executeBatch();
                conn.commit();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
}
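A note on the flush condition above: `j > 0 && j % rowBatch == 0` skips a commit at j = 0 and relies on the trailing executeBatch()/commit() after the loop to flush the last partial batch. A minimal, self-contained sketch of just that batching arithmetic (countFlushes is a hypothetical helper for illustration; no Phoenix connection involved):

```java
public class BatchFlushSketch {
    // Simulates the loop above: returns how many executeBatch()/commit()
    // pairs happen for totalRows rows flushed every rowBatch statements.
    static int countFlushes(int totalRows, int rowBatch) {
        int flushes = 0;
        for (int j = 0; j < totalRows; j++) {
            if (j > 0 && j % rowBatch == 0) {
                flushes++;          // periodic flush inside the loop
            }
        }
        return flushes + 1;         // final flush after the loop
    }

    public static void main(String[] args) {
        // 25,000 rows at a 10,000-row batch: flushes at j=10000 and j=20000,
        // plus the final flush -> prints 3
        System.out.println(countFlushes(25000, 10000));
    }
}
```

So with rowBatch = 10,000 and totalRows = 500,000,000, each thread issues 50,000 commits in total.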

> Unable to find cached index metadata during the stability test with Phoenix
> --------------------------------------------------------------------------
>
>                 Key: PHOENIX-1718
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1718
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.2
>         Environment: linux os ( 128G ram,48T disk,24 cores) * 8
> Hadoop 2.5.1
> HBase 0.98.7
> Phoenix 4.2.1
>            Reporter: wuchengzhi
>            Priority: Critical
>         Attachments: hbase-hadoop-regionserver-cluster-node134 .zip
>
>
> I am running a stability test with Phoenix 4.2.1. The regionservers became 
> very slow after 4 hours, and I found some error logs in the regionserver log 
> file.
> In this scenario, the cluster has 8 machines (128G RAM, 24 cores, 48T disk). I 
> set up 2 regionservers on each machine (16 regionservers in total).
> 1. Create 8 tables, TEST_USER0 through TEST_USER7, each with a local index:
> create table TEST_USER0 (id varchar primary key , attr1 varchar, attr2 
> varchar,attr3 varchar,attr4 varchar,attr5 varchar,attr6 integer,attr7 
> integer,attr8 integer,attr9 integer,attr10 integer )  
> DATA_BLOCK_ENCODING='FAST_DIFF',VERSIONS=1,BLOOMFILTER='ROW',COMPRESSION='LZ4',BLOCKSIZE
>  = '65536',SALT_BUCKETS=32;
> create local index TEST_USER_INDEX0 on 
> TEST5.TEST_USER0(attr1,attr2,attr3,attr4,attr5,attr6,attr7,attr8,attr9,attr10);
> ........
> 2. Deploy a Phoenix client on each machine to upsert data into the tables 
> (client 1 upserts into TEST_USER0, client 2 into TEST_USER1, and so on).
>     Each Phoenix client starts 6 threads; each thread upserts 10,000 rows per 
> batch and 500,000,000 rows in total.
>     All 8 clients ran at the same time.
> The log is shown below. After running for about 4 hours there were roughly 
> 1,000,000,000 rows in HBase; the errors began to occur frequently at about 4 
> hours 50 minutes of running, and the upsert rate became very slow, dropping 
> below 10,000 rows/s (about 70,000 rows/s normally).
> 2015-03-09 19:15:13,337 ERROR 
> [B.DefaultRpcServer.handler=2,queue=2,port=60022] parallel.BaseTaskRunner: 
> Found a failed task because: org.apache.hadoop.hbase.DoNotRetryIOException: 
> ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata. 
>  key=-1715879467965695792 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
> Index update failed
> java.util.concurrent.ExecutionException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
> (INT10): Unable to find cached index metadata.  key=-1715879467965695792 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
> Index update failed
>         at 
> com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
>         at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
>         at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
>         at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
>         at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
>         at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:140)
>         at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:274)
>         at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:203)
>         at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:881)
>         at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1522)
>         at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1597)
>         at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1554)
>         at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:877)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2476)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2263)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2215)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2219)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4376)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3580)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3469)
>         at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29931)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>         at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
>         at java.lang.Thread.run(Thread.java:724)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): 
> ERROR 2008 (INT10): Unable to find cached index metadata.  
> key=-1715879467965695792 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
> Index update failed
>         at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:76)
>         at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>         at 
> org.apache.phoenix.index.PhoenixIndexCodec.getIndexMaintainers(PhoenixIndexCodec.java:104)
>         at 
> org.apache.phoenix.index.PhoenixIndexCodec.getIndexUpdates(PhoenixIndexCodec.java:130)
>         at 
> org.apache.phoenix.index.PhoenixIndexCodec.getIndexDeletes(PhoenixIndexCodec.java:119)
>         at 
> org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.addDeleteUpdatesToMap(CoveredColumnsIndexBuilder.java:403)
>         at 
> org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.addCleanupForCurrentBatch(CoveredColumnsIndexBuilder.java:287)
>         at 
> org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.addMutationsForBatch(CoveredColumnsIndexBuilder.java:239)
>         at 
> org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.batchMutationAndAddUpdates(CoveredColumnsIndexBuilder.java:136)
>         at 
> org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.getIndexUpdate(CoveredColumnsIndexBuilder.java:99)
>         at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:133)
>         at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:129)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         ... 1 more
> Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached 
> index metadata.  key=-1715879467965695792 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6.
>         at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:336)
>         at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
>         at 
> org.apache.phoenix.index.PhoenixIndexCodec.getIndexMaintainers(PhoenixIndexCodec.java:102)
>         ... 14 more
> 2015-03-09 19:15:13,338 INFO  
> [B.DefaultRpcServer.handler=2,queue=2,port=60022] parallel.TaskBatch: 
> Aborting batch of tasks because Found a failed task because: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
> (INT10): Unable to find cached index metadata.  key=-1715879467965695792 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
> Index update failed
> 2015-03-09 19:15:13,338 ERROR 
> [B.DefaultRpcServer.handler=2,queue=2,port=60022] builder.IndexBuildManager: 
> Found a failed index update!
> 2015-03-09 19:15:13,338 INFO  
> [B.DefaultRpcServer.handler=2,queue=2,port=60022] util.IndexManagementUtil: 
> Rethrowing org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): 
> ERROR 2008 (INT10): Unable to find cached index metadata.  
> key=-1715879467965695792 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
> Index update failed
> 2015-03-09 19:15:13,372 INFO  
> [B.DefaultRpcServer.handler=2,queue=2,port=60022] util.IndexManagementUtil: 
> Rethrowing org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): 
> ERROR 2008 (INT10): Unable to find cached index metadata.  
> key=-1715879467965695792 
> region=TEST5.TEST_USER6,\x16,1425881401238.c3827b5f890bfdff5d71eb96d1dbd62a. 
> Index update failed
> 2015-03-09 19:15:13,384 ERROR 
> [B.DefaultRpcServer.handler=22,queue=2,port=60022] parallel.BaseTaskRunner: 
> Found a failed task because: org.apache.hadoop.hbase.DoNotRetryIOException: 
> ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata. 
>  key=-2055417764840908969 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
> Index update failed
> java.util.concurrent.ExecutionException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
> (INT10): Unable to find cached index metadata.  key=-2055417764840908969 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
> Index update failed
>         at 
> com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
>         at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
>         at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
>         at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
>         at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
>         at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:140)
>         at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:274)
>         at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:203)
>         at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:881)
>         at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1522)
>         at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1597)
>         at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1554)
>         at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:877)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2476)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2263)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2215)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2219)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4376)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3580)
>         at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3469)
>         at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29931)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>         at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
>         at java.lang.Thread.run(Thread.java:724)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 
> (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.  
> key=-2055417764840908969 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
> Index update failed
>         at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:76)
>         at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>         at 
> org.apache.phoenix.index.PhoenixIndexCodec.getIndexMaintainers(PhoenixIndexCodec.java:104)
>         at 
> org.apache.phoenix.index.PhoenixIndexCodec.getIndexUpdates(PhoenixIndexCodec.java:130)
>         at 
> org.apache.phoenix.index.PhoenixIndexCodec.getIndexDeletes(PhoenixIndexCodec.java:119)
>         at 
> org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.addDeleteUpdatesToMap(CoveredColumnsIndexBuilder.java:403)
>         at 
> org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.addCleanupForCurrentBatch(CoveredColumnsIndexBuilder.java:287)
>         at 
> org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.addMutationsForBatch(CoveredColumnsIndexBuilder.java:239)
>         at 
> org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.batchMutationAndAddUpdates(CoveredColumnsIndexBuilder.java:136)
>         at 
> org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.getIndexUpdate(CoveredColumnsIndexBuilder.java:99)
>         at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:133)
>         at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:129)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         ... 1 more
> Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached 
> index metadata.  key=-2055417764840908969 
> region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6.
>         at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:336)
>         at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
>         at 
> org.apache.phoenix.index.PhoenixIndexCodec.getIndexMaintainers(PhoenixIndexCodec.java:102)
>         ... 14 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
