Hi Jaanai Zhang,

When you say migrate the data, do you mean somehow exporting the data from
the Phoenix tables (Phoenix 4.6) and bulk-inserting it into new Phoenix
tables (Phoenix 4.14)?
Do you have a data migration script or anything similar that I could use?
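
For instance, is this roughly what you have in mind? A sketch of what I am
imagining (the table names, file paths, and the choice of CsvBulkLoadTool
below are just my guesses, not something you suggested):

    -- on the phoenix-4.6 cluster, dump each table to CSV from sqlline:
    !outputformat csv
    !record /tmp/my_table.csv
    SELECT * FROM "my_table";
    !record

    # on the phoenix-4.14 cluster, after recreating the table with its DDL,
    # bulk-load the CSV back in:
    hadoop jar phoenix-<version>-client.jar \
        org.apache.phoenix.mapreduce.CsvBulkLoadTool \
        --table MY_TABLE --input /tmp/my_table.csv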

Thanks,
Tanvi


On Wed, Oct 17, 2018 at 5:41 PM Jaanai Zhang <cloud.pos...@gmail.com> wrote:

> It seems it is not possible to upgrade directly from Phoenix-4.6 to
> Phoenix-4.14; the schema of the SYSTEM tables has changed, and some
> features are incompatible. Maybe you can migrate the data from Phoenix-4.6
> to Phoenix-4.14 instead; that approach ensures everything will work
> correctly.
>
> ----------------------------------------
>    Jaanai Zhang
>    Best regards!
>
>
>
> Tanvi Bhandari <tanvi.bhand...@gmail.com> wrote on Wednesday, October 17, 2018 at 3:48 PM:
>
>> @Shamvenk
>>
>> Yes, I did check the STATS table from the hbase shell; it is not empty.
>>
>> After dropping all the SYSTEM tables and mapping the HBase tables to
>> Phoenix tables by executing all the DDLs, I am seeing a new issue.
>>
>> I have a table and an index on that table. The record counts in the index
>> table and the main table no longer match:
>> select count(*) from "my_index";
>> select count(COL) from "my_table"; -- where COL is not part of the index
>>
>> Can someone tell me what can be done here? Is there any easier way to
>> upgrade from Phoenix-4.6 to Phoenix-4.14?
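>>
>> If it helps: one thing I am considering is rebuilding the index so it is
>> repopulated from the data table, with something like
>>
>> ALTER INDEX "my_index" ON "my_table" REBUILD;
>>
>> but I am not sure that is the right fix after recreating the SYSTEM
>> tables by hand.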
>>
>>
>>
>> On Thu, Sep 13, 2018 at 8:55 PM venk sham <shamv...@gmail.com> wrote:
>>
>>> Did you check SYSTEM.STATS? If it is empty, it needs to be rebuilt by
>>> running a major compaction on HBase.
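>>>
>>> For example, from the hbase shell (assuming the table is named
>>> SYSTEM.STATS on your install; with namespace mapping enabled it may be
>>> SYSTEM:STATS instead):
>>>
>>> major_compact 'SYSTEM.STATS'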
>>>
>>> On Tue, Sep 11, 2018, 11:33 AM Tanvi Bhandari <tanvi.bhand...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>>
>>>>
>>>> I am trying to upgrade the Phoenix binaries in my setup from
>>>> phoenix-4.6 (where a schema was optional) to phoenix-4.14 (where a
>>>> schema is mandatory).
>>>>
>>>> Earlier, I had the phoenix-4.6-hbase-1.1 binaries. When I run
>>>> phoenix-4.14-hbase-1.3 on the same data, HBase comes up fine, but when I
>>>> try to connect to Phoenix using the sqlline client, I get the following
>>>> error on the *console*:
>>>>
>>>>
>>>>
>>>> 18/09/07 04:22:48 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
>>>> org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63
>>>>         at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
>>>>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3572)
>>>>         at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16422)
>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>>>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
>>>>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>>>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
>>>>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>>>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>>>         at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>>>         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>>>         at java.lang.Thread.run(Thread.java:745)
>>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 63
>>>>         at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
>>>>         at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
>>>>         at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
>>>>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1046)
>>>>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:587)
>>>>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1305)
>>>>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3568)
>>>>         ... 10 more
>>>>
>>>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>>>>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>>>>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>>>>         at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:326)
>>>>         at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1629)
>>>>         at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104)
>>>>         at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:94)
>>>>         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>>>>         at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:107)
>>>>         at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
>>>>         at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.getVersion(MetaDataProtos.java:16739)
>>>>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$5.call(ConnectionQueryServicesImpl.java:1271)
>>>>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$5.call(ConnectionQueryServicesImpl.java:1263)
>>>>         at org.apache.hadoop.hbase.client.HTable$15.call(HTable.java:1736)
>>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>>         at java.lang.Thread.run(Thread.java:745)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *Region-server logs are as follows:*
>>>>
>>>> 2018-09-07 03:23:36,170 ERROR [B.defaultRpcServer.handler=1,queue=1,port=29062] coprocessor.MetaDataEndpointImpl: loading system catalog table inside getVersion failed
>>>> java.lang.ArrayIndexOutOfBoundsException: 63
>>>>         at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
>>>>         at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
>>>>         at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
>>>>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1046)
>>>>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:587)
>>>>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1305)
>>>>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3568)
>>>>         at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16422)
>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>>>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
>>>>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>>>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
>>>>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>>>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>>>         at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>>>         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>>>         at java.lang.Thread.run(Thread.java:745)
>>>>
>>>>
>>>>
>>>> Suspecting that something was wrong with the SYSTEM tables, I went
>>>> ahead and dropped all the SYSTEM tables from the hbase shell and tried
>>>> connecting with the Phoenix sqlline client again. This time the
>>>> connection worked, but none of my tables were visible in the Phoenix
>>>> shell; only the SYSTEM tables were visible. So I mapped the HBase tables
>>>> to Phoenix by creating them explicitly from the sqlline client: I first
>>>> created a schema corresponding to each namespace, and then the tables.
>>>> After that my tables were visible in sqlline. A *select count(*)* query
>>>> on my table returns the 8 expected records, but a *select ** query does
>>>> not return any records. Can someone tell me what I can do next in this
>>>> case?
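>>>>
>>>> For reference, the recreation steps looked roughly like the following
>>>> (the names are placeholders for my actual namespace, table, and columns;
>>>> the real DDL matches the original table definition):
>>>>
>>>> # hbase shell: drop each old SYSTEM table
>>>> disable 'SYSTEM.CATALOG'
>>>> drop 'SYSTEM.CATALOG'
>>>>
>>>> -- phoenix sqlline: recreate the schema, then map the existing table
>>>> CREATE SCHEMA IF NOT EXISTS "my_ns";
>>>> CREATE TABLE "my_ns"."my_table" (ID VARCHAR PRIMARY KEY, COL VARCHAR);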
>>>>
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Tanvi
>>>>
>>>>
>>>>
>>>
