Hi,
We are using a Phoenix table whose structure we constantly evolve by
altering the table and adding new columns.
Almost every 3 days, the table becomes unusable via Phoenix after some
ALTER commands have run against it.
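For context, the statements we run look roughly like the following (an illustrative sketch only; DATAWAREHOUSE3 is the table name from the stack trace below, and the column names are made up):

    ALTER TABLE DATAWAREHOUSE3 ADD new_metric_col DECIMAL;
    ALTER TABLE DATAWAREHOUSE3 ADD IF NOT EXISTS another_col VARCHAR(64);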
Based on online discussions, we were earlier under the impression that a
duplicate column gets created, causing a metadata issue that Phoenix is
unable to handle at this stage.
There is also an unresolved JIRA issue about the same problem:
https://issues.apache.org/jira/browse/PHOENIX-3196
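To verify the duplicate-column theory, would a check like the one below against the SYSTEM.CATALOG metadata table be the right approach? (A sketch only; DATAWAREHOUSE3 is the table name taken from the stack trace.)

    -- Look for column names registered more than once for this table
    SELECT COLUMN_NAME, COUNT(*) AS occurrences
    FROM SYSTEM.CATALOG
    WHERE TABLE_NAME = 'DATAWAREHOUSE3'
      AND COLUMN_NAME IS NOT NULL
    GROUP BY COLUMN_NAME
    HAVING COUNT(*) > 1;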
Can anyone tell me whether you are facing the same issue, and what you have
done to prevent it from recurring?
Please find the stack trace for my problem below:
Error: org.apache.hadoop.hbase.DoNotRetryIOException: DATAWAREHOUSE3: null
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:546)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6001)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3510)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3492)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30950)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2109)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException
SQLState: 08000
ErrorCode: 101
Thanks,
Siddharth Ubale,