Re: Issue in upgrading Phoenix: java.lang.ArrayIndexOutOfBoundsException: SYSTEM:CATALOG 63
Did you check SYSTEM.STATS? If it is empty, it needs to be rebuilt by running a major compaction on HBase.

On Tue, Sep 11, 2018, 11:33 AM Tanvi Bhandari wrote:
> Hi,
>
> I am trying to upgrade the Phoenix binaries in my setup from phoenix-4.6
> (schema was an optional concept) to phoenix-4.14 (schema is a must).
>
> Earlier, I had the phoenix-4.6-hbase-1.1 binaries. When I run
> phoenix-4.14-hbase-1.3 on the same data, HBase comes up fine, but when I
> try to connect to Phoenix using the sqlline client, I get the following
> error on the console:
>
> 18/09/07 04:22:48 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
> org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63
>         at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3572)
>         at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16422)
>         at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 63
>         at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
>         at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
>         at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1046)
>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:587)
>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1305)
>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3568)
>         ... 10 more
>
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>         at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:326)
>         at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1629)
>         at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104)
>         at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:94)
>         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>         at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:107)
>         at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
>         at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.getVersion(MetaDataProtos.java:16739)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$5.call(ConnectionQueryServicesImpl.java:1271)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$5.call(ConnectionQueryServicesImpl.java:1263)
>         at org.apache.hadoop.hbase.client.HTable$15.call(HTable.java:1736)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
>
> Region-server logs are as follows:
> 2018-09-07 03:23:36,170 ERROR [B.defaultRpcServer.handler=1,queue=1,port=29062] coprocessor.M
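For reference, the check and rebuild suggested in the reply would look roughly like this from the HBase shell. This is only a sketch; 'MY_TABLE' is a placeholder, not a name from the thread:

    $ hbase shell
    hbase(main):001:0> count 'SYSTEM.STATS'        # 0 rows means the stats table is empty
    hbase(main):002:0> major_compact 'MY_TABLE'    # Phoenix regenerates guidepost stats during major compaction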
Phoenix 4.10 onwards: Phoenix view returning empty results
Hi,

From 4.10 onwards, a SELECT query on a Phoenix view returns empty results even though the underlying HBase table has the data. The same SELECT query with lower Phoenix versions (4.9 and below) is able to return the results. Please let me know if any configuration is needed to get the data from the view from 4.10 onwards.

Thanks
Venkat
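One 4.10 change worth ruling out, though nothing in this thread confirms it is the cause here: 4.10 enabled column mapping (encoded column qualifiers) by default, and Phoenix objects mapped over pre-existing HBase data have to opt out, since the raw rows were written with plain qualifiers. A minimal sketch, with placeholder names:

    CREATE TABLE "my_existing_table" (   -- placeholder; maps onto an existing HBase table
        pk VARCHAR PRIMARY KEY,
        "cf"."col1" VARCHAR              -- "cf" is a placeholder column family
    ) COLUMN_ENCODED_BYTES = 0;          -- store plain column qualifiers, as pre-4.10 Phoenix did

The cluster-wide default can be changed instead by setting phoenix.default.column.encoded.bytes.attrib to 0 in hbase-site.xml.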
Phoenix view returning empty results
Hi,

A Phoenix 4.10 client is returning empty results on a view, whereas a lower-version client is able to return proper results. I would appreciate a response.

Thanks
Venkat
Re: Phoenix shows incorrect count as compared to HBase count
Delete the SYSTEM.STATS entries. It should fix the issue.

On Mar 1, 2018 3:04 AM, "Azharuddin Shaikh" wrote:
> We are using Phoenix (version 4.8) to perform read operations on our
> HBase (version 1.2.3) table, but suddenly, 7 months after deployment, we
> are stuck with a serious issue: Phoenix is showing the table count as
> '16232555', but when we perform the count using the HBase shell it is
> reflected correctly as '1985950'. We tried to drop the table and reload
> the data, but again after some time the Phoenix count is incorrect.
>
> 1. We are unable to understand what is triggering this issue.
>
> 2. How can we solve this issue?
>
> 3. Is there any issue related to the Phoenix guidepost width? After
> increasing the guidepost width from the default (100 MB) to 5 GB it is
> reflecting the correct count, so we are not able to understand what the
> correct value for the guidepost should be, and what it is used for.
>
> 4. What measures should be implemented to avoid this issue in future?
>
> Any help would be greatly appreciated. Thanks
>
> --
> Sent from: http://apache-phoenix-user-list.1124778.n5.nabble.com/
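A sketch of the cleanup suggested above, run from sqlline; the PHYSICAL_NAME value is a placeholder, substitute the affected table's physical name:

    0: jdbc:phoenix:localhost> DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME = 'MY_TABLE';

On point 3: the guidepost width controls how finely Phoenix splits a table into parallel scan chunks; it can be tuned cluster-wide with phoenix.stats.guidepost.width (in bytes) in hbase-site.xml.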
Re: How to recover SYSTEM.STATS?
You can re-create it by running the stats-table CREATE statement; you can get the query from the ConnectionQueryServicesImpl class in the Phoenix code.

On Jul 22, 2017 10:09 PM, "venk sham" wrote:
> Running a major compaction will build the stats to some extent, and as you
> keep using the tables they will get populated.
>
> On Jul 22, 2017 7:40 PM, "Batyrshin Alexander" <0x62...@gmail.com> wrote:
>> Hello,
>> We accidentally lost SYSTEM.STATS. How to recover/recreate it?
Re: How to recover SYSTEM.STATS?
Running a major compaction will build the stats to some extent, and as you keep using the tables they will get populated.

On Jul 22, 2017 7:40 PM, "Batyrshin Alexander" <0x62...@gmail.com> wrote:
> Hello,
> We accidentally lost SYSTEM.STATS. How to recover/recreate it?
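Once SYSTEM.STATS exists again, stats for a table can also be refreshed on demand rather than waiting for a major compaction. A sketch from sqlline; MY_TABLE is a placeholder:

    0: jdbc:phoenix:localhost> UPDATE STATISTICS MY_TABLE ALL;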
Phoenix 4.6 to 4.8 upgrade SYSTEM.STATS issue
Hi,

During the Phoenix upgrade from 4.6 to 4.8, the SYSTEM.STATS metadata is deleted and recreated in SYSTEM.CATALOG, but the actual SYSTEM.STATS data is not cleared. In Phoenix 4.6, the SYSTEM.STATS row key is physical_table_name + column_family + region_name. Since the SYSTEM.STATS data is neither deleted nor corrected during the upgrade, after upgrading, the parallel scans are not proper (the start and stop keys are invalid) and queries return multiple records.

Example: in the plan below,

    "startRow": "CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS,3,1463423431278.7efe47a97f737de2ed34690f453d06bf."

is the physical name CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS followed by the region name 1463423431278.7efe47a97f737de2ed34690f453d06bf.

    }, {
      "loadColumnFamiliesOnDemand": null,
      "filter": "default.MESSAGE_ID = 'c2f8d7fe-6e29-4f34-bc38-b61bce8424cc'",
      "startRow": "CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS,3,1463423431278.7efe47a97f737de2ed34690f453d06bf.",
      "stopRow": "CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS,4,1463423431278.0a1ac472674298a1c3cb03fb48458665.",
      "batch": -1,
      "cacheBlocks": true,
      "totalColumns": 8,
      "maxResultSize": -1,
      "families": { "default": ["CREATED_BY", "JMS_MESSAGE_ID", "MESSAGE_BYTES", "MESSAGE_ID"] },
      "caching": 2147483647,
      "maxVersions": 1,
      "timeRange": [0, 1485535243351]
    }, {
      "loadColumnFamiliesOnDemand": null,
      "filter": "default.MESSAGE_ID = 'c2f8d7fe-6e29-4f34-bc38-b61bce8424cc'",
      "startRow": "CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS,4,1463423431278.0a1ac472674298a1c3cb03fb48458665.",
      "stopRow": "CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS,5,1463423431278.0f69b359c43fc1ce5f546ada80982ff7.",
      "batch": -1,
      "cacheBlocks": true,
      "totalColumns": 8,
      "maxResultSize": -1,
      "families": { "default": ["CREATED_BY", "JMS_MESSAGE_ID", "MESSAGE_BYTES", "MESSAGE_ID"] },
      "caching": 2147483647,
      "maxVersions": 1,
      "timeRange": [0, 1485535243351]
    }, {
      "loadColumnFamiliesOnDemand": null,
      "filter": "default.MESSAGE_ID = 'c2f8d7fe-6e29-4f34-bc38-b61bce8424cc'",
      "startRow": "CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS,5,1463423431278.0f69b359c43fc1ce5f546ada80982ff7.",
      "stopRow": "CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS,6,1463423431278.761ad4a881f40fb34027b80008550821.",
      "batch": -1,
      "cacheBlocks": true,
      "totalColumns": 8,
      "maxResultSize": -1,
      "families": { "default": ["CREATED_BY", "JMS_MESSAGE_ID", "MESSAGE_BYTES", "MESSAGE_ID"] },
      "caching": 2147483647,
      "maxVersions": 1,
      "timeRange": [0, 1485535243351]
    }, { ...

Thanks
Venkat
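A workaround consistent with the replies elsewhere in this digest would be to clear the stale stats rows and let a major compaction rewrite them under the new key format. The commands below are a sketch, not steps from the original thread:

    0: jdbc:phoenix:localhost> DELETE FROM SYSTEM.STATS;

    $ hbase shell
    hbase(main):001:0> major_compact 'CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS'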
Phoenix 4.8.1 client returns multiple records for a single record in table
Hi,

A Phoenix 4.8.1 client SELECT is returning multiple records when a non-primary-key column is used in the WHERE clause, whereas it returns only one record when the primary key is used in the WHERE clause. Please find the details below.

Table description:

0: jdbc:phoenix:localhost> !describe CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS;
+------------+--------------+----------------------------------+----------------------+------------+------------+--------------+----------------+-----------------+-----------------+-----------+
| TABLE_CAT  | TABLE_SCHEM  |            TABLE_NAME            |     COLUMN_NAME      | DATA_TYPE  | TYPE_NAME  | COLUMN_SIZE  | BUFFER_LENGTH  | DECIMAL_DIGITS  | NUM_PREC_RADIX  | NULLABLE  |
+------------+--------------+----------------------------------+----------------------+------------+------------+--------------+----------------+-----------------+-----------------+-----------+
|            | CDS_TEST     | MONITORED_QUEUE_MESSAGE_DETAILS  | ROW_KEY              | 12         | VARCHAR    | null         | null           | null            | null            | 0         |
|            | CDS_TEST     | MONITORED_QUEUE_MESSAGE_DETAILS  | JMS_MESSAGE_ID       | 12         | VARCHAR    | null         | null           | null            | null            | 1         |
|            | CDS_TEST     | MONITORED_QUEUE_MESSAGE_DETAILS  | QUEUE_NAME           | 12         | VARCHAR    | null         | null           | null            | null            | 1         |
|            | CDS_TEST     | MONITORED_QUEUE_MESSAGE_DETAILS  | ORGANIZATION_ID      | 12         | VARCHAR    | null         | null           | null            | null            | 1         |
|            | CDS_TEST     | MONITORED_QUEUE_MESSAGE_DETAILS  | SUB_ORGANIZATION_ID  | 12         | VARCHAR    | null         | null           | null            | null            | 1         |
|            | CDS_TEST     | MONITORED_QUEUE_MESSAGE_DETAILS  | USER                 | 12         | VARCHAR    | null         | null           | null            | null            | 1         |
|            | CDS_TEST     | MONITORED_QUEUE_MESSAGE_DETAILS  | MESSAGE_TEXT         | 12         | VARCHAR    | null         | null           | null            | null            | 1         |
|            | CDS_TEST     | MONITORED_QUEUE_MESSAGE_DETAILS  | MESSAGE_BYTES        | -3         | VARBINARY  | null         | null           | null            | null            | 1         |
|            | CDS_TEST     | MONITORED_QUEUE_MESSAGE_DETAILS  | CREATED_BY           | 12         | VARCHAR    | null         | null           | null            | null            | 1         |
|            | CDS_TEST     | MONITORED_QUEUE_MESSAGE_DETAILS  | CREATED_DATE         | 93         | TIMESTAMP  | null         | null           | null            | null            | 1         |
|            | CDS_TEST     | MONITORED_QUEUE_MESSAGE_DETAILS  | UPDATED_BY           | 12         | VARCHAR    | null         | null           | null            | null            | 1         |
|            | CDS_TEST     | MONITORED_QUEUE_MESSAGE_DETAILS  | UPDATED_DATE         | 93         | TIMESTAMP  | null         | null           | null            | null            | 1         |
+------------+--------------+----------------------------------+----------------------+------------+------------+--------------+----------------+-----------------+-----------------+-----------+

Query with the primary key returned a single record:

0: jdbc:phoenix:localhost> SELECT row_key,jms_message_id,created_by,message_text,message_bytes,organization_id,sub_organization_id,message_id FROM CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS (message_id VARCHAR) WHERE row_key = 'c8014083-f749-4caf-bee1-f12392fc0673';
+---------------------------------------+-----------------------------------------------+----------------+---------------+----------------+---------------------------------------+--------------+
|                ROW_KEY                |                JMS_MESSAGE_ID                 |   CREATED_BY   | MESSAGE_TEXT  | MESSAGE_BYTES  |            ORGANIZATION_ID            | SUB_ORGANIZ  |
+---------------------------------------+-----------------------------------------------+----------------+---------------+----------------+---------------------------------------+--------------+
| c8014083-f749-4caf-bee1-f12392fc0673  | ID:CDS-HDE-EMS-Server.9B8585353AB3F43D4:1243  | cds_test_user  |               | [B@77128dab    | f3f424f4-2fa0-446c-afc5-f0d6d6ac7f60  |              |
+---------------------------------------+-----------------------------------------------+----------------+---------------+----------------+---------------------------------------+--------------+
1 row selected (0.105 seconds)

Query with a non-primary key returned 4 records with the same primary key:

0: jdbc:phoenix:localhost> SELECT row_key,jms_message_id,created_by,message_text,message_bytes,organization_id,sub_organization_id,message_id FROM CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS (message_id VARCHAR) WHERE message_id = '0d4c5a22-7f89-4cc8-8727-2bc372
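When stale guideposts are suspected (compare the 4.6-to-4.8 SYSTEM.STATS post above), the chunking Phoenix chose for a query can be checked with EXPLAIN; a duplicate-row symptom like this one may show up as an unexpectedly large chunk count or overlapping scan ranges. A sketch against the table from this post, with a placeholder message_id value:

    0: jdbc:phoenix:localhost> EXPLAIN SELECT row_key FROM CDS_TEST.MONITORED_QUEUE_MESSAGE_DETAILS (message_id VARCHAR) WHERE message_id = 'some-message-id';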
Re: Phoenix upgrade issue
Thanks James. I tried to debug to find the newer timestamps (final PTable pdataTable = PhoenixRuntime.getTable(connection, "SYSTEM.CATALOG")); when I inspect the timestamp, it shows as 0. Can you please let me know how I can find the newer timestamps which are causing the issue?

Thanks
Venkat

On Tue, Dec 6, 2016 at 12:01 PM, James Taylor wrote:
> Hi Venkat,
> Did you ever update the SYSTEM.CATALOG table manually? If so, did you set
> the CURRENT_SCN property on the connection so that the timestamps of the
> modified rows continue to have the required timestamp
> (MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_6_0 or 9 in this case)? It
> sounds like you have cells with newer timestamps, which will cause the
> upgrade to not happen correctly.
>
> Not sure about (2), but maybe Rajeshbabu would know.
>
> Another alternative to the automatic upgrade is to just disable and drop
> the SYSTEM.CATALOG table, as well as any indexes, from the HBase shell and
> rerun all your DDL statements. When you create a Phoenix table, it will map
> to existing HBase table data, so you won't lose data. Phoenix will think it
> needs to add an empty key value cell to each row, but you can get around
> this by setting the CURRENT_SCN property to the long value that represents
> the current time on the connection you're using to run the DDL statements.
> See https://phoenix.apache.org/faq.html#Can_phoenix_work_on_tables_with_arbitrary_timestamp_as_flexible_as_HBase_API
> for more info on the CURRENT_SCN property.
>
> Thanks,
> James
>
> On Tue, Dec 6, 2016 at 9:50 AM, venk sham wrote:
>> While upgrading Phoenix from 4.6.1 to 4.8.1:
>>
>> While the connection is getting initialized, it tries to upgrade the
>> system tables with the missing columns related to 4.7 and 4.8. While
>> altering SYSTEM.CATALOG to add the columns related to 4.7 and 4.8, we
>> face the following issues:
>>
>> 1. It throws "NEWER_TABLE_FOUND" and the columns are not added.
>>
>> 2. It tries to disable indexes on views; during this process it adds
>> "_IDX_" to the physical table of the view, whereas on the HBase side the
>> table is "_LOCAL_IDX"; because of this it is not able to find the table
>> and throws an exception.
>>
>> Thanks
>> Venkat
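The CURRENT_SCN property James mentions is an ordinary connection property named "CurrentSCN". A sketch of the connection step for the drop-and-recreate workaround, assuming (not verified here) that sqlline passes semicolon-separated properties through to the JDBC URL; programmatically, the same property can be set via java.util.Properties before calling DriverManager.getConnection:

    $ ./sqlline.py "localhost;CurrentSCN=1481050000000"    # placeholder epoch-millis value representing "now" when the DDL is rerun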
Phoenix upgrade issue
While upgrading Phoenix from 4.6.1 to 4.8.1:

While the connection is getting initialized, it tries to upgrade the system tables with the missing columns related to 4.7 and 4.8. While altering SYSTEM.CATALOG to add the columns related to 4.7 and 4.8, we face the following issues:

1. It throws "NEWER_TABLE_FOUND" and the columns are not added.

2. It tries to disable indexes on views; during this process it adds "_IDX_" to the physical table of the view, whereas on the HBase side the table is "_LOCAL_IDX"; because of this it is not able to find the table and throws an exception.

Thanks
Venkat