Re: Unable to find cached index metadata

2018-09-02 Thread Batyrshin Alexander
Yes, it's longer. 
Thank you. We will try to decrease the batch size.

> On 3 Sep 2018, at 04:14, Thomas D'Silva  wrote:
> 
> Is your cluster under heavy write load when you see these exceptions? How 
> long does it take to write a batch of mutations?
> If it's longer than the config value of maxServerCacheTimeToLiveMs you will 
> see the exception because the index metadata expired from the cache.
> 
> 
> On Sun, Sep 2, 2018 at 4:02 PM, Batyrshin Alexander <0x62...@gmail.com> wrote:
>   Hello all,
> We use a mutable table with many indexes on it. On upserts we are getting this 
> error:
> 
> o.a.phoenix.execute.MutationState - Swallowing exception and retrying after 
> clearing meta cache on connection. java.sql.SQLException: ERROR 2008 (INT10): 
> Unable to find cached index metadata. ERROR 2008 (INT10): ERROR 2008 (INT10): 
> Unable to find cached index metadata. key=8283602185356160420 
> region=HISTORY,D\xEF\xBF\xBD\xEF\xBF\xBDNt\x1B\xEF\xBF\xBD\xEF\xBF\xBD\xEF\xBF\xBD5\x1E\x01W\x02\xEF\xBF\xBD$,1531781097243.95d19923178a7d80fa55428b97816e3f.host=cloud016,60020,1535926087741
>  Index update failed
> 
> 
> Current config:
> phoenix-4.14.0-HBase-1.4
> phoenix.coprocessor.maxServerCacheTimeToLiveMs = 6
> ALTER TABLE HISTORY SET UPDATE_CACHE_FREQUENCY=6
> 



Re: Unable to find cached index metadata

2018-09-02 Thread Thomas D'Silva
Is your cluster under heavy write load when you see these exceptions? How
long does it take to write a batch of mutations?
If it's longer than the config value of maxServerCacheTimeToLiveMs you will
see the exception because the index metadata expired from the cache.
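
As a rough sketch of this advice: a minimal JDBC loop that commits upserts in
small batches, so each batch is written well within
phoenix.coprocessor.maxServerCacheTimeToLiveMs. The column names on the HISTORY
table, the quorum host, and the batch size are hypothetical illustrations, not
values from this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SmallBatchUpsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host")) {
            conn.setAutoCommit(false);
            final int batchSize = 100;  // small enough that each commit finishes within the cache TTL
            int pending = 0;
            try (PreparedStatement ps =
                     conn.prepareStatement("UPSERT INTO HISTORY (ID, PAYLOAD) VALUES (?, ?)")) {
                for (int i = 0; i < 10_000; i++) {   // stand-in for the real row source
                    ps.setString(1, "row-" + i);
                    ps.setString(2, "payload-" + i);
                    ps.executeUpdate();              // buffered client-side until commit
                    if (++pending == batchSize) {
                        conn.commit();               // one small batch per server round trip
                        pending = 0;
                    }
                }
            }
            conn.commit();                           // flush the remainder
        }
    }
}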


On Sun, Sep 2, 2018 at 4:02 PM, Batyrshin Alexander <0x62...@gmail.com>
wrote:

>   Hello all,
> We use a mutable table with many indexes on it. On upserts we are getting this
> error:
>
> o.a.phoenix.execute.MutationState - Swallowing exception and retrying
> after clearing meta cache on connection. java.sql.SQLException: ERROR 2008
> (INT10): Unable to find cached index metadata. ERROR 2008 (INT10): ERROR
> 2008 (INT10): Unable to find cached index metadata. key=8283602185356160420
> region=HISTORY,D\xEF\xBF\xBD\xEF\xBF\xBDNt\x1B\xEF\xBF\xBD\xEF\xBF\xBD\xEF\xBF\xBD5\x1E\x01W\x02\xEF\xBF\xBD$,1531781097243.95d19923178a7d80fa55428b97816e3f.host=cloud016,60020,1535926087741 Index
> update failed
>
>
> Current config:
> phoenix-4.14.0-HBase-1.4
> phoenix.coprocessor.maxServerCacheTimeToLiveMs = 6
> ALTER TABLE HISTORY SET UPDATE_CACHE_FREQUENCY=6


Unable to find cached index metadata

2018-09-02 Thread Batyrshin Alexander
  Hello all,
We use a mutable table with many indexes on it. On upserts we are getting this error:

o.a.phoenix.execute.MutationState - Swallowing exception and retrying after 
clearing meta cache on connection. java.sql.SQLException: ERROR 2008 (INT10): 
Unable to find cached index metadata. ERROR 2008 (INT10): ERROR 2008 (INT10): 
Unable to find cached index metadata. key=8283602185356160420 
region=HISTORY,D\xEF\xBF\xBD\xEF\xBF\xBDNt\x1B\xEF\xBF\xBD\xEF\xBF\xBD\xEF\xBF\xBD5\x1E\x01W\x02\xEF\xBF\xBD$,1531781097243.95d19923178a7d80fa55428b97816e3f.host=cloud016,60020,1535926087741
 Index update failed


Current config:
phoenix-4.14.0-HBase-1.4
phoenix.coprocessor.maxServerCacheTimeToLiveMs = 6
ALTER TABLE HISTORY SET UPDATE_CACHE_FREQUENCY=6

Re: Unable to find cached index metadata

2016-11-26 Thread James Taylor
Not sure of the JIRA, but it sounds familiar. Try searching for it here:
https://issues.apache.org/jira/browse/PHOENIX

On Sat, Nov 26, 2016 at 1:22 PM Neelesh  wrote:

> Thanks James! Is there a jira ref for the fix?
>
> On Nov 26, 2016 11:50 AM, "James Taylor"  wrote:
>
> I believe that issue has been fixed. The 4.4 release is 1 1/2 years old
> and we've had five releases since that have fixed hundreds of bugs. Please
> encourage your vendor to provide a more recent release.
>
> Thanks,
> James
>
> On Sat, Nov 26, 2016 at 10:23 AM Neelesh  wrote:
>
> Hi All,
>   we are using phoenix 4.4 with HBase 1.1.2 (HortonWorks distribution).
> We're struggling with the following error on pretty much all our region
> servers. The indexes are global, and the data table has more than 100B rows
>
> 2016-11-26 12:15:41,250 INFO
>  [RW.default.writeRpcServer.handler=40,queue=6,port=16020]
> util.IndexManagementUtil: Rethrowing
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR
> 2008 (INT10): Unable to find cached index metadata.
>  key=7015231383024113337 region=,-056946674
>,1477336770695.07d70ebd63f737a62e24387cf0912af5. Index
> update failed
>
> I looked at https://issues.apache.org/jira/browse/PHOENIX-1718  and
> bumped up the settings mentioned there to 1 hour
>
> <property>
>   <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
>   <value>3600000</value>
> </property>
> <property>
>   <name>phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs</name>
>   <value>3600000</value>
> </property>
>
> but to no avail.
>
> Any help is appreciated!
>
> Thanks!
>
>


Re: Unable to find cached index metadata

2016-11-26 Thread Neelesh
Thanks James! Is there a jira ref for the fix?

On Nov 26, 2016 11:50 AM, "James Taylor"  wrote:

> I believe that issue has been fixed. The 4.4 release is 1 1/2 years old
> and we've had five releases since that have fixed hundreds of bugs. Please
> encourage your vendor to provide a more recent release.
>
> Thanks,
> James
>
> On Sat, Nov 26, 2016 at 10:23 AM Neelesh  wrote:
>
>> Hi All,
>>   we are using phoenix 4.4 with HBase 1.1.2 (HortonWorks distribution).
>> We're struggling with the following error on pretty much all our region
>> servers. The indexes are global, and the data table has more than 100B rows
>>
>> 2016-11-26 12:15:41,250 INFO  
>> [RW.default.writeRpcServer.handler=40,queue=6,port=16020]
>> util.IndexManagementUtil: Rethrowing 
>> org.apache.hadoop.hbase.DoNotRetryIOException:
>> ERROR 2008 (INT10): ERROR
>> 2008 (INT10): Unable to find cached index metadata.
>>  key=7015231383024113337 region=,-056946674
>>,1477336770695.07d70ebd63f737a62e24387cf0912af5. Index
>> update failed
>>
>> I looked at https://issues.apache.org/jira/browse/PHOENIX-1718  and
>> bumped up the settings mentioned there to 1 hour
>>
>> <property>
>>   <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
>>   <value>3600000</value>
>> </property>
>> <property>
>>   <name>phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs</name>
>>   <value>3600000</value>
>> </property>
>>
>> but to no avail.
>>
>> Any help is appreciated!
>>
>> Thanks!
>>
>>


Re: Unable to find cached index metadata

2016-11-26 Thread James Taylor
I believe that issue has been fixed. The 4.4 release is 1 1/2 years old and
we've had five releases since that have fixed hundreds of bugs. Please
encourage your vendor to provide a more recent release.

Thanks,
James

On Sat, Nov 26, 2016 at 10:23 AM Neelesh  wrote:

> Hi All,
>   we are using phoenix 4.4 with HBase 1.1.2 (HortonWorks distribution).
> We're struggling with the following error on pretty much all our region
> servers. The indexes are global, and the data table has more than 100B rows
>
> 2016-11-26 12:15:41,250 INFO
>  [RW.default.writeRpcServer.handler=40,queue=6,port=16020]
> util.IndexManagementUtil: Rethrowing
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR
> 2008 (INT10): Unable to find cached index metadata.
>  key=7015231383024113337 region=,-056946674
>,1477336770695.07d70ebd63f737a62e24387cf0912af5. Index
> update failed
>
> I looked at https://issues.apache.org/jira/browse/PHOENIX-1718  and
> bumped up the settings mentioned there to 1 hour
>
> <property>
>   <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
>   <value>3600000</value>
> </property>
> <property>
>   <name>phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs</name>
>   <value>3600000</value>
> </property>
>
> but to no avail.
>
> Any help is appreciated!
>
> Thanks!
>
>


Unable to find cached index metadata

2016-11-26 Thread Neelesh
Hi All,
  we are using phoenix 4.4 with HBase 1.1.2 (HortonWorks distribution).
We're struggling with the following error on pretty much all our region
servers. The indexes are global, and the data table has more than 100B rows

2016-11-26 12:15:41,250 INFO
 [RW.default.writeRpcServer.handler=40,queue=6,port=16020]
util.IndexManagementUtil: Rethrowing
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR
2008 (INT10): Unable to find cached index metadata.
 key=7015231383024113337 region=,-056946674
   ,1477336770695.07d70ebd63f737a62e24387cf0912af5. Index
update failed

I looked at https://issues.apache.org/jira/browse/PHOENIX-1718  and bumped
up the settings mentioned there to 1 hour


<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>3600000</value>
</property>
<property>
  <name>phoenix.coprocessor.maxMetaDataCacheTimeToLiveMs</name>
  <value>3600000</value>
</property>


but to no avail.

Any help is appreciated!

Thanks!


Unable to find cached index metadata

2016-03-21 Thread Pedro Gandola
Hi,

I'm using *Phoenix 4.6*, and in my use case I have a table that keeps a
sliding window of 7 days' worth of data. I have 3 local indexes on this
table, and in our use case we have approx. 150 producers that are inserting
data (in batches of 300-1500 events) in real time.

Some days ago I started to get a lot of errors like the ones below. The
number of errors was so large that cluster performance dropped a lot,
and my disks' read bandwidth was extremely high while the write bandwidth was
normal. I can confirm that during that period no readers were running, only
producers.

ERROR [B.defaultRpcServer.handler=25,queue=5,port=16020]
> parallel.BaseTaskRunner: Found a failed task because:
> org.apache.hadoop.hbase.DoNotRetryIOException: *ERROR 2008 (INT10): ERROR
> 2008 (INT10): Unable to find cached index metadata.*  key=4276342695061435086
> region=BIDDING_EVENTS,\xFEK\x17\xE4\xB1~K\x08,1458435680333.ee29454d68f5b679a8e8cc775dd0edfa.
>  *Index
> update failed*
> java.util.concurrent.ExecutionException:
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR
> 2008 (INT10): Unable to find cached index metadata.
>  key=4276342695061435086
> region=BIDDING_EVENTS,\xFEK\x17\xE4\xB1~K\x08,1458435680333.ee29454d68f5b679a8e8cc775dd0edfa.
> Index update failed
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008
> (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.
>  key=4276342695061435086
> region=BIDDING_EVENTS,\xFEK\x17\xE4\xB1~K\x08,1458435680333.ee29454d68f5b679a8e8cc775dd0edfa.
> Index update failed
> Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find
> cached index metadata.  key=4276342695061435086
> region=BIDDING_EVENTS,\xFEK\x17\xE4\xB1~K\x08,1458435680333.ee29454d68f5b679a8e8cc775dd0edfa.
> INFO  [B.defaultRpcServer.handler=25,queue=5,port=16020]
> parallel.TaskBatch: Aborting batch of tasks because Found a failed task
> because: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10):
> ERROR 2008 (INT10): Unable to find cached index metadata.
>  key=4276342695061435086
> region=BIDDING_EVENTS,\xFEK\x17\xE4\xB1~K\x08,1458435680333.ee29454d68f5b679a8e8cc775dd0edfa.
> Index update failed
> ERROR [B.defaultRpcServer.handler=25,queue=5,port=16020] 
> *builder.IndexBuildManager:
> Found a failed index update!*
> INFO  [B.defaultRpcServer.handler=25,queue=5,port=16020]
> util.IndexManagementUtil: Rethrowing
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR
> 2008 (INT10): Unable to find cached index metadata.
>  key=4276342695061435086
> region=BIDDING_EVENTS,\xFEK\x17\xE4\xB1~K\x08,1458435680333.ee29454d68f5b679a8e8cc775dd0edfa.
> Index update failed


I searched for the error and made the following changes on the server
side (see the hbase-site.xml sketch below):

   - *phoenix.coprocessor.maxServerCacheTimeToLiveMs* from 30s to 2min
   - *phoenix.coprocessor.maxMetaDataCacheSize* from 20MB to 40MB
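
For reference, a sketch of what those two changes look like in the region
server hbase-site.xml (2 min = 120000 ms; the 40 MB value is written in bytes,
assuming the cache size property is specified in bytes):

<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>120000</value>
</property>
<property>
  <name>phoenix.coprocessor.maxMetaDataCacheSize</name>
  <value>40960000</value>
</property>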

After I changed these properties I restarted the cluster, and the errors
were gone, but the disks' read bandwidth was still very high and I was getting
*responseTooSlow* warnings. As a quick solution I created fresh tables, and
then the problems were gone.

Now, after one day of running with the new tables, I have started to see the
problem again (I think this was during a major compaction), and I would like to
understand more about the reasons and consequences of these problems.

- What are the major consequences of these errors? I assume that index data
is not written to the index table, right? Then why was the read
bandwidth of my disks so high, even without readers and after changing those
properties?

- Is there any optimal or recommended value for the above properties, or am
I missing some tuning of other properties for the metadata cache?

Thank you,
Pedro


Re: ERROR 2008 (INT10): Unable to find cached index metadata.

2016-02-17 Thread anil gupta
phoenix.upsert.batch.size is a client-side property. We lowered it down to
20-50. YMMV as per your use case.

phoenix.coprocessor.maxServerCacheTimeToLiveMs is a server-side property.
You will need to restart your HBase cluster for this.
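
As a sketch, a client-side property such as phoenix.upsert.batch.size can be
passed through the JDBC connection properties (it can also go in the
client-side hbase-site.xml); the ZooKeeper host below is hypothetical, and the
value 50 is just an example from the range mentioned above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ClientBatchSize {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Client-side setting: no HBase cluster restart required.
        props.setProperty("phoenix.upsert.batch.size", "50");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host", props)) {
            // ... run UPSERTs as usual; the client groups rows per round trip.
        }
    }
}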

On Wed, Feb 17, 2016 at 3:01 PM, Neelesh  wrote:

> Also, was your change to phoenix.upsert.batch.size on the client or on
> the region server or both?
>
> On Wed, Feb 17, 2016 at 2:57 PM, Neelesh  wrote:
>
>> Thanks Anil. We've upped phoenix.coprocessor.maxServerCacheTimeToLiveMs,
>> but haven't tried playing with phoenix.upsert.batch.size. It's at the
>> default 1000.
>>
>> On Wed, Feb 17, 2016 at 12:48 PM, anil gupta 
>> wrote:
>>
>>> I think, this has been answered before:
>>> http://search-hadoop.com/m/9UY0h2FKuo8RfAPN
>>>
>>> Please let us know if the problem still persists.
>>>
>>> On Wed, Feb 17, 2016 at 12:02 PM, Neelesh  wrote:
>>>
>>>> We've been running phoenix 4.4 client for a while now with HBase
>>>> 1.1.2.  Once in a while while UPSERTing records (on a table with 2 global
>>>> indexes), we see the following error.  I found
>>>> https://issues.apache.org/jira/browse/PHOENIX-1718 and upped both
>>>> values in that JIRA to 3600000. This still does not help and we keep
>>>> seeing this once in a while. It's also not clear whether this setting is
>>>> relevant for the client or just the server.
>>>>
>>>> Any help is appreciated
>>>>
>>>> org.apache.phoenix.execute.CommitException: java.sql.SQLException: ERROR 
>>>> 2008 (INT10): Unable to find cached index metadata.  ERROR 2008 (INT10): 
>>>> ERROR 2008 (INT10): Unable to find cached index metadata.  
>>>> key=5115312427460709976 region=TEST_TABLE,111-222-950835849
>>>>   ,1455513914764.48b2157bcdac165898983437c1801ea7. Index 
>>>> update failed
>>>> at 
>>>> org.apache.phoenix.execute.MutationState.commit(MutationState.java:444) 
>>>> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>>>> at 
>>>> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:459)
>>>>  ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>>>> at 
>>>> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:456)
>>>>  ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>>>> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
>>>> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>>>> at 
>>>> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:456)
>>>>  ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks & Regards,
>>> Anil Gupta
>>>
>>
>>
>


-- 
Thanks & Regards,
Anil Gupta


Re: ERROR 2008 (INT10): Unable to find cached index metadata.

2016-02-17 Thread Neelesh
Also, was your change to phoenix.upsert.batch.size on the client or on the
region server or both?

On Wed, Feb 17, 2016 at 2:57 PM, Neelesh  wrote:

> Thanks Anil. We've upped phoenix.coprocessor.maxServerCacheTimeToLiveMs,
> but haven't tried playing with phoenix.upsert.batch.size. It's at the
> default 1000.
>
> On Wed, Feb 17, 2016 at 12:48 PM, anil gupta 
> wrote:
>
>> I think, this has been answered before:
>> http://search-hadoop.com/m/9UY0h2FKuo8RfAPN
>>
>> Please let us know if the problem still persists.
>>
>> On Wed, Feb 17, 2016 at 12:02 PM, Neelesh  wrote:
>>
>>> We've been running phoenix 4.4 client for a while now with HBase 1.1.2.
>>> Once in a while while UPSERTing records (on a table with 2 global indexes),
>>> we see the following error.  I found
>>> https://issues.apache.org/jira/browse/PHOENIX-1718 and upped both
>>> values in that JIRA to 3600000. This still does not help and we keep
>>> seeing this once in a while. It's also not clear whether this setting is
>>> relevant for the client or just the server.
>>>
>>> Any help is appreciated
>>>
>>> org.apache.phoenix.execute.CommitException: java.sql.SQLException: ERROR 
>>> 2008 (INT10): Unable to find cached index metadata.  ERROR 2008 (INT10): 
>>> ERROR 2008 (INT10): Unable to find cached index metadata.  
>>> key=5115312427460709976 region=TEST_TABLE,111-222-950835849 
>>>  ,1455513914764.48b2157bcdac165898983437c1801ea7. Index update 
>>> failed
>>> at 
>>> org.apache.phoenix.execute.MutationState.commit(MutationState.java:444) 
>>> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>>> at 
>>> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:459)
>>>  ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>>> at 
>>> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:456)
>>>  ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>>> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
>>> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>>> at 
>>> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:456)
>>>  ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>>>
>>>
>>
>>
>> --
>> Thanks & Regards,
>> Anil Gupta
>>
>
>


Re: ERROR 2008 (INT10): Unable to find cached index metadata.

2016-02-17 Thread Neelesh
Thanks Anil. We've upped phoenix.coprocessor.maxServerCacheTimeToLiveMs,
but haven't tried playing with phoenix.upsert.batch.size. It's at the
default 1000.

On Wed, Feb 17, 2016 at 12:48 PM, anil gupta  wrote:

> I think, this has been answered before:
> http://search-hadoop.com/m/9UY0h2FKuo8RfAPN
>
> Please let us know if the problem still persists.
>
> On Wed, Feb 17, 2016 at 12:02 PM, Neelesh  wrote:
>
>> We've been running phoenix 4.4 client for a while now with HBase 1.1.2.
>> Once in a while while UPSERTing records (on a table with 2 global indexes),
>> we see the following error.  I found
>> https://issues.apache.org/jira/browse/PHOENIX-1718 and upped both values
>> in that JIRA to 3600000. This still does not help and we keep seeing
>> this once in a while. It's also not clear whether this setting is relevant
>> for the client or just the server.
>>
>> Any help is appreciated
>>
>> org.apache.phoenix.execute.CommitException: java.sql.SQLException: ERROR 
>> 2008 (INT10): Unable to find cached index metadata.  ERROR 2008 (INT10): 
>> ERROR 2008 (INT10): Unable to find cached index metadata.  
>> key=5115312427460709976 region=TEST_TABLE,111-222-950835849  
>> ,1455513914764.48b2157bcdac165898983437c1801ea7. Index update 
>> failed
>> at 
>> org.apache.phoenix.execute.MutationState.commit(MutationState.java:444) 
>> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>> at 
>> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:459) 
>> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>> at 
>> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:456) 
>> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
>> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>> at 
>> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:456) 
>> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>>
>>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>


Re: ERROR 2008 (INT10): Unable to find cached index metadata.

2016-02-17 Thread anil gupta
I think, this has been answered before:
http://search-hadoop.com/m/9UY0h2FKuo8RfAPN

Please let us know if the problem still persists.

On Wed, Feb 17, 2016 at 12:02 PM, Neelesh  wrote:

> We've been running phoenix 4.4 client for a while now with HBase 1.1.2.
> Once in a while while UPSERTing records (on a table with 2 global indexes),
> we see the following error.  I found
> https://issues.apache.org/jira/browse/PHOENIX-1718 and upped both values
> in that JIRA to 3600000. This still does not help and we keep seeing this
> once in a while. It's also not clear whether this setting is relevant for
> the client or just the server.
>
> Any help is appreciated
>
> org.apache.phoenix.execute.CommitException: java.sql.SQLException: ERROR 2008 
> (INT10): Unable to find cached index metadata.  ERROR 2008 (INT10): ERROR 
> 2008 (INT10): Unable to find cached index metadata.  key=5115312427460709976 
> region=TEST_TABLE,111-222-950835849  
> ,1455513914764.48b2157bcdac165898983437c1801ea7. Index update failed
> at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:444) 
> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:459) 
> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:456) 
> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:456) 
> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>
>


-- 
Thanks & Regards,
Anil Gupta


ERROR 2008 (INT10): Unable to find cached index metadata.

2016-02-17 Thread Neelesh
We've been running phoenix 4.4 client for a while now with HBase 1.1.2.
Once in a while while UPSERTing records (on a table with 2 global indexes),
we see the following error.  I found
https://issues.apache.org/jira/browse/PHOENIX-1718 and upped both values in
that JIRA to 3600000. This still does not help and we keep seeing this once
in a while. It's also not clear whether this setting is relevant for the client
or just the server.

Any help is appreciated

org.apache.phoenix.execute.CommitException: java.sql.SQLException:
ERROR 2008 (INT10): Unable to find cached index metadata.  ERROR 2008
(INT10): ERROR 2008 (INT10): Unable to find cached index metadata.
key=5115312427460709976 region=TEST_TABLE,111-222-950835849
  ,1455513914764.48b2157bcdac165898983437c1801ea7.
Index update failed
at 
org.apache.phoenix.execute.MutationState.commit(MutationState.java:444)
~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at 
org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:459)
~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at 
org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:456)
~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at 
org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:456)
~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]


Re: Global Secondary Index: ERROR 2008 (INT10): Unable to find cached index metadata. (PHOENIX-1718)

2016-01-15 Thread anil gupta
Hi James,

Thanks for your reply. My problem was resolved by setting
phoenix.coprocessor.maxServerCacheTimeToLiveMs to 3 minutes and
phoenix.upsert.batch.size to 10. I think I can increase
phoenix.upsert.batch.size to a higher value but haven't had the opportunity to
try that out yet (a sketch of these settings follows below).

Thanks,
Anil Gupta
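
For concreteness, a sketch of that resolution: the TTL goes in the region
server hbase-site.xml (3 minutes = 180000 ms, followed by a region server
restart), while phoenix.upsert.batch.size=10 is set on the client, e.g. via
connection Properties as in the client-side sketch shown earlier in this
archive:

<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>180000</value>
</property>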


On Thu, Jan 14, 2016 at 6:28 PM, James Taylor 
wrote:

> Hi Anil,
> This error occurs if you're performing an update that takes a long time on
> a mutable table that has a secondary index. In this case, we make an RPC
> before the update which sends index metadata to the region server which
> it'll use for the duration of the update to generate the secondary index
> rows based on the data rows. In this case, the cache entry is expiring
> before the update (i.e. your MR job) completes. Try
> increasing phoenix.coprocessor.maxServerCacheTimeToLiveMs in the region
> server hbase-site.xml. See our Tuning page[1] for more info.
>
> FWIW, 500K rows would be much faster to insert via our standard UPSERT
> statement.
>
> Thanks,
> James
> [1] https://phoenix.apache.org/tuning.html
>
> On Sun, Jan 10, 2016 at 10:18 PM, Anil Gupta 
> wrote:
>
>> Bump..
>> Can secondary index committers/experts provide any insight into this? This
>> is one of the features that encouraged us to use Phoenix.
>> IMO, a global secondary index should be handled as an inverted index table.
>> So, I'm unable to understand why it's failing on region splits.
>>
>> Sent from my iPhone
>>
>> On Jan 6, 2016, at 11:14 PM, anil gupta  wrote:
>>
>> Hi All,
>>
>> I am using Phoenix 4.4; I have created a global secondary index on one table. I
>> am running a MapReduce job with 20 reducers to load data into this
>> table (maybe I'm doing 50 writes/second/reducer). The dataset is around 500K
>> rows only. My MapReduce job is failing due to this exception:
>> Caused by: org.apache.phoenix.execute.CommitException:
>> java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index
>> metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached
>> index metadata.  key=-413539871950113484
>> region=BI.TABLE,\x80M*\xBFr\xFF\x05\x1DW\x9A`\x00\x19\x0C\xC0\x00X8,1452147216490.83086e8ff78b30f6e6c49e2deba71d6d.
>> Index update failed
>> at
>> org.apache.phoenix.execute.MutationState.commit(MutationState.java:444)
>> at
>> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:459)
>> at
>> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:456)
>> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>> at
>> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:456)
>> at
>> org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:84)
>> ... 14 more
>>
>> It seems like I am hitting
>> https://issues.apache.org/jira/browse/PHOENIX-1718, but I don't have a
>> heavy write or read load like wuchengzhi. I haven't done any tweaking in the
>> Phoenix/HBase conf yet.
>>
>> What is the root cause of this error? What are the recommended changes in
>> conf for this?
>> --
>> Thanks & Regards,
>> Anil Gupta
>>
>>
>


-- 
Thanks & Regards,
Anil Gupta


Re: Global Secondary Index: ERROR 2008 (INT10): Unable to find cached index metadata. (PHOENIX-1718)

2016-01-14 Thread James Taylor
Hi Anil,
This error occurs if you're performing an update that takes a long time on
a mutable table that has a secondary index. In this case, we make an RPC
before the update which sends index metadata to the region server which
it'll use for the duration of the update to generate the secondary index
rows based on the data rows. In this case, the cache entry is expiring
before the update (i.e. your MR job) completes. Try
increasing phoenix.coprocessor.maxServerCacheTimeToLiveMs in the region
server hbase-site.xml. See our Tuning page[1] for more info.

FWIW, 500K rows would be much faster to insert via our standard UPSERT
statement.

Thanks,
James
[1] https://phoenix.apache.org/tuning.html
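
As a sketch of that suggestion, the TTL is raised in the region server
hbase-site.xml so that it exceeds the longest-running commit; the 10 minutes
(600000 ms) below is an illustrative value, not a recommendation:

<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>600000</value>
</property>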

On Sun, Jan 10, 2016 at 10:18 PM, Anil Gupta  wrote:

> Bump..
> Can secondary index committers/experts provide any insight into this? This
> is one of the features that encouraged us to use Phoenix.
> IMO, a global secondary index should be handled as an inverted index table.
> So, I'm unable to understand why it's failing on region splits.
>
> Sent from my iPhone
>
> On Jan 6, 2016, at 11:14 PM, anil gupta  wrote:
>
> Hi All,
>
> I am using Phoenix 4.4; I have created a global secondary index on one table. I
> am running a MapReduce job with 20 reducers to load data into this
> table (maybe I'm doing 50 writes/second/reducer). The dataset is around 500K
> rows only. My MapReduce job is failing due to this exception:
> Caused by: org.apache.phoenix.execute.CommitException:
> java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index
> metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached
> index metadata.  key=-413539871950113484
> region=BI.TABLE,\x80M*\xBFr\xFF\x05\x1DW\x9A`\x00\x19\x0C\xC0\x00X8,1452147216490.83086e8ff78b30f6e6c49e2deba71d6d.
> Index update failed
> at
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:444)
> at
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:459)
> at
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:456)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:456)
> at
> org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:84)
> ... 14 more
>
> It seems like I am hitting
> https://issues.apache.org/jira/browse/PHOENIX-1718, but I don't have a heavy
> write or read load like wuchengzhi. I haven't done any tweaking in the
> Phoenix/HBase conf yet.
>
> What is the root cause of this error? What are the recommended changes in
> conf for this?
> --
> Thanks & Regards,
> Anil Gupta
>
>


Re: Global Secondary Index: ERROR 2008 (INT10): Unable to find cached index metadata. (PHOENIX-1718)

2016-01-10 Thread Anil Gupta
Bump.. 
Can secondary index committers/experts provide any insight into this? This is
one of the features that encouraged us to use Phoenix.
IMO, a global secondary index should be handled as an inverted index table. So, I'm
unable to understand why it's failing on region splits.

Sent from my iPhone

> On Jan 6, 2016, at 11:14 PM, anil gupta  wrote:
> 
> Hi All,
> 
> I am using Phoenix 4.4; I have created a global secondary index on one table. I am
> running a MapReduce job with 20 reducers to load data into this table (maybe I'm
> doing 50 writes/second/reducer). The dataset is around 500K rows only. My
> MapReduce job is failing due to this exception:
> Caused by: org.apache.phoenix.execute.CommitException: java.sql.SQLException: 
> ERROR 2008 (INT10): Unable to find cached index metadata.  ERROR 2008 
> (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.  
> key=-413539871950113484 
> region=BI.TABLE,\x80M*\xBFr\xFF\x05\x1DW\x9A`\x00\x19\x0C\xC0\x00X8,1452147216490.83086e8ff78b30f6e6c49e2deba71d6d.
>  Index update failed
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:444)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:459)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:456)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:456)
> at 
> org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:84)
> ... 14 more
> 
> It seems like I am hitting
> https://issues.apache.org/jira/browse/PHOENIX-1718, but I don't have a heavy
> write or read load like wuchengzhi. I haven't done any tweaking in the
> Phoenix/HBase conf yet.
> 
> What is the root cause of this error? What are the recommended changes in 
> conf for this? 
> -- 
> Thanks & Regards,
> Anil Gupta


Re: RE: Unable to find cached index metadata

2014-09-02 Thread su...@certusnet.com.cn
Hi, Rajeshbabu,
  Really appreciate your suggestion. Actually I am running a Spark job to load
data into Phoenix, and the data are stored in HDFS as sequence files. I am
trying to do some tuning to optimize the data loading into Phoenix, as my
current project requires heavy data writes. Previous tests of data loading with
no index and with a global index worked fine, though with different loading
speeds. With the latest 4.1 release, I saw the local indexing feature with the
following use case suggestion:
   Local indexing targets write heavy, space constrained use cases.

  So I would like to test local indexing for my project. However, the data
loading speed got extremely slow compared with my previous data loading.
  Following is my Scala code snippet for loading the data into Phoenix:
 

iter1.grouped(5000).zipWithIndex foreach { case (batch, batchIndex) =>
  batch foreach { v =>                        // batch JDBC upsert
    // stmt.addBatch()
    hbaseUpsertExecutor.execute(v._2, true)   // upserts each record from the HDFS sequence
                                              // file into the Phoenix table with
                                              // "UPSERT INTO mytable VALUES (...)"
    // hbaseUpsertExecutor.executeBatch()
    // stmt.execute()
  }
  hbaseUpsertExecutor.executeBatch()
  // connection.setAutoCommit(false)
  conn.commit()                               // here I get the error: ERROR 2008 (INT10):
                                              // Unable to find cached index metadata.
  // logger.info(" inserted batch " + batchIndex + " with " + batch.size + " elements")
}
 
Not quite sure of the error cause from the stack trace; I would like to
understand whether local indexing needs some
   additional configuration or attention.
   Best regards, 
   Sun





CertusNet 

From: rajeshbabu chintaguntla
Date: 2014-09-02 16:47
To: user@phoenix.apache.org
Subject: RE: Re: Unable to find cached index metadata
bq. I am trying to load data into the phoenix table; as Phoenix may not support
index-related
   data bulkload, I am trying to upsert data into phoenix through JDBC
statements.

In the 4.1 release, CSVBulkLoadTool can be used to build indexes while loading data.
See [1].
And some more work is in progress on the same [2].

1. https://issues.apache.org/jira/browse/PHOENIX-1069
2. https://issues.apache.org/jira/browse/PHOENIX-1056

Are you getting the exception on the first attempt of upsert or in the middle of
loading the data?

Can you provide the code snippet (or statements) you are using to upsert
data?

Thanks,
Rajeshbabu.




From: su...@certusnet.com.cn [su...@certusnet.com.cn]
Sent: Tuesday, September 02, 2014 8:27 AM
To: user
Subject: Re: Re: Unable to find cached index metadata

Hi,
   Thanks for your reply. Sorry for not completely describing my job
information.
   I had configured the properties in hbase-site.xml on the HMaster node and ran
sqlline to
  create the table in Phoenix, then created a local index on the table.
   I am trying to load data into the phoenix table; as Phoenix may not support
index-related
   data bulkload, I am trying to upsert data into phoenix through JDBC
statements. Then I got the
   following error, and am not quite sure about the reason. BTW, upserting data
without a local index works fine.
   Hoping for your reply, and thanks.





CertusNet 
 
From: rajesh babu Chintaguntla
Date: 2014-09-02 10:55
To: user
Subject: Re: Unable to find cached index metadata
Hi Sun, 
Thanks for testing,

Have you configured the following properties on the master side, and restarted it before
creating local indexes?

<property>
  <name>hbase.master.loadbalancer.class</name>
  <value>org.apache.phoenix.hbase.index.balancer.IndexLoadBalancer</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
</property>




On Tue, Sep 2, 2014 at 7:35 AM, su...@certusnet.com.cn  
wrote:
Hi, everyone,
   I used the latest 4.1 release to run some tests on local indexing. When I
am trying to load data into a
   phoenix table with a local index, I got the following error. Not sure whether
it has some relation with the HBase
   local index table, since the HBase local index table is uniformly prefixed with
'_LOCAL_IDX_' + TableRef.
   Any available hints? Also correct me if I have misunderstood something.
   Best Regards, Sun.
 org.apache.phoenix.execute.CommitException: java.sql.SQLException: ERROR
2008 (INT10): Unable to find cached index metadata. ERROR 2008 (INT10): ERROR
2008 (INT10): Unable to find cached index metadata. key=-861467238479432
region=RANAPSIGNAL,\x0D\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1409566437551.9e47a9f579f7cf3865d1148480a3b1b9.
 Index update failed

RE: Re: Unable to find cached index metadata

2014-09-02 Thread rajeshbabu chintaguntla
bq. I am trying to load data into the phoenix table; as Phoenix may not support
index-related
   data bulkload, I am trying to upsert data into phoenix through JDBC
statements.

In the 4.1 release, CSVBulkLoadTool can be used to build indexes while loading data.
See [1].
And some more work is in progress on the same [2].

1. https://issues.apache.org/jira/browse/PHOENIX-1069
2. https://issues.apache.org/jira/browse/PHOENIX-1056

Are you getting the exception on the first attempt of upsert or in the middle of
loading the data?

Can you provide the code snippet (or statements) you are using to upsert
data?

Thanks,
Rajeshbabu.
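
For reference, a typical CsvBulkLoadTool invocation looks something like the
following; the client jar name, table name, input path, and quorum host are
hypothetical (see PHOENIX-1069 for the authoritative options):

hadoop jar phoenix-4.1.0-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
    --table MYTABLE \
    --input /data/mytable.csv \
    --zookeeper zk-host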


From: su...@certusnet.com.cn [su...@certusnet.com.cn]
Sent: Tuesday, September 02, 2014 8:27 AM
To: user
Subject: Re: Re: Unable to find cached index metadata

Hi,
   Thanks for your reply. Sorry for not completely describing my job
information.
   I had configured the properties in hbase-site.xml on the HMaster node and ran
sqlline to
  create the table in Phoenix, then created a local index on the table.
   I am trying to load data into the phoenix table; as Phoenix may not support
index-related
   data bulkload, I am trying to upsert data into phoenix through JDBC
statements. Then I got the
   following error, and am not quite sure about the reason. BTW, upserting data
without a local index works fine.
   Hoping for your reply, and thanks.




CertusNet


From: rajesh babu Chintaguntla
Date: 2014-09-02 10:55
To: user
Subject: Re: Unable to find cached index metadata
Hi Sun,
Thanks for testing,

Have you configured the following properties on the master side, and restarted it before
creating local indexes?

<property>
  <name>hbase.master.loadbalancer.class</name>
  <value>org.apache.phoenix.hbase.index.balancer.IndexLoadBalancer</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
</property>




On Tue, Sep 2, 2014 at 7:35 AM, su...@certusnet.com.cn wrote:
Hi, everyone,
   I used the latest 4.1 release to run some tests on local indexing. When I
am trying to load data into a
   phoenix table with a local index, I got the following error. Not sure whether
it has some relation with the HBase
   local index table, since the HBase local index table is uniformly prefixed with
'_LOCAL_IDX_' + TableRef.
   Any available hints? Also correct me if I have misunderstood something.
   Best Regards, Sun.
 org.apache.phoenix.execute.CommitException: java.sql.SQLException: ERROR 
2008 (INT10): Unable to find cached index metadata. ERROR 2008 (INT10): ERROR 
2008 (INT10): Unable to find cached index metadata. key=-861467238479432 
region=RANAPSIGNAL,\x0D\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1409566437551.9e47a9f579f7cf3865d1148480a3b1b9.
 Index update failed

org.apache.phoenix.execute.MutationState.commit(MutationState.java:433)

org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:384)

org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:381)
org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:381)

com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13$$anonfun$apply$1.apply(RanapSignalJdbcPhoenix.scala:113)

com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13$$anonfun$apply$1.apply(RanapSignalJdbcPhoenix.scala:104)
scala.collection.Iterator$class.foreach(Iterator.scala:727)
scala.collection.AbstractIterator.foreach(Iterator.scala:1157)

com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13.apply(RanapSignalJdbcPhoenix.scala:104)

com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13.apply(RanapSignalJdbcPhoenix.scala:89)
scala.collection.Iterator$class.foreach(Iterator.scala:727)
scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:759)
org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:759)

org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)

org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)

Re: Re: Unable to find cached index metadata

2014-09-01 Thread su...@certusnet.com.cn
Hi,
   Thanks for your reply. Sorry for not completely describing my job
information.
   I had configured the properties in hbase-site.xml on the HMaster node and ran
sqlline to
  create the table in Phoenix, then created a local index on the table.
   I am trying to load data into the phoenix table; as Phoenix may not support
index-related
   data bulkload, I am trying to upsert data into phoenix through JDBC
statements. Then I got the
   following error, and am not quite sure about the reason. BTW, upserting data
without a local index works fine.
   Hoping for your reply, and thanks.





CertusNet 
 
From: rajesh babu Chintaguntla
Date: 2014-09-02 10:55
To: user
Subject: Re: Unable to find cached index metadata
Hi Sun,
Thanks for testing,

Have you configured the following properties on the master side, and restarted it before
creating local indexes?

<property>
  <name>hbase.master.loadbalancer.class</name>
  <value>org.apache.phoenix.hbase.index.balancer.IndexLoadBalancer</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
</property>





On Tue, Sep 2, 2014 at 7:35 AM, su...@certusnet.com.cn  
wrote:
Hi, everyone,
   I used the latest 4.1 release to run some tests on local indexing. When I
am trying to load data into a
   phoenix table with a local index, I got the following error. Not sure whether
it has some relation with the HBase
   local index table, since the HBase local index table is uniformly prefixed with
'_LOCAL_IDX_' + TableRef.
   Any available hints? Also correct me if I have misunderstood something.
   Best Regards, Sun.
 org.apache.phoenix.execute.CommitException: java.sql.SQLException: ERROR 
2008 (INT10): Unable to find cached index metadata. ERROR 2008 (INT10): ERROR 
2008 (INT10): Unable to find cached index metadata. key=-861467238479432 
region=RANAPSIGNAL,\x0D\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1409566437551.9e47a9f579f7cf3865d1148480a3b1b9.
 Index update failed
org.apache.phoenix.execute.MutationState.commit(MutationState.java:433)

org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:384)

org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:381)
org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:381)

com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13$$anonfun$apply$1.apply(RanapSignalJdbcPhoenix.scala:113)

com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13$$anonfun$apply$1.apply(RanapSignalJdbcPhoenix.scala:104)
scala.collection.Iterator$class.foreach(Iterator.scala:727)
scala.collection.AbstractIterator.foreach(Iterator.scala:1157)

com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13.apply(RanapSignalJdbcPhoenix.scala:104)

com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13.apply(RanapSignalJdbcPhoenix.scala:89)
scala.collection.Iterator$class.foreach(Iterator.scala:727)
scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:759)
org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:759)

org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)

org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:744)





CertusNet 




Re: Unable to find cached index metadata

2014-09-01 Thread rajesh babu Chintaguntla
Hi Sun,
Thanks for testing,

Have you configured the following properties on the master side, and restarted it
before creating local indexes?

<property>
  <name>hbase.master.loadbalancer.class</name>
  <value>org.apache.phoenix.hbase.index.balancer.IndexLoadBalancer</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
</property>





On Tue, Sep 2, 2014 at 7:35 AM, su...@certusnet.com.cn <
su...@certusnet.com.cn> wrote:

> Hi, everyone,
>    I used the latest 4.1 release to run some tests on local indexing.
> When I am trying to load data into a
>    phoenix table with a local index, I got the following error. Not sure
> whether it has some relation with the HBase
>    local index table, since the HBase local index table is uniformly prefixed
> with '_LOCAL_IDX_' + TableRef.
>    Any available hints? Also correct me if I have misunderstood something.
>Best Regards, Sun.
>  org.apache.phoenix.execute.CommitException: java.sql.SQLException:
> ERROR 2008 (INT10): Unable to find cached index metadata. ERROR 2008
> (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.
> key=-861467238479432
> region=RANAPSIGNAL,\x0D\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1409566437551.9e47a9f579f7cf3865d1148480a3b1b9.
> Index update failed
>
> 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:433)
> 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:384)
> 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:381)
> org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:381)
> 
> com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13$$anonfun$apply$1.apply(RanapSignalJdbcPhoenix.scala:113)
> 
> com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13$$anonfun$apply$1.apply(RanapSignalJdbcPhoenix.scala:104)
> scala.collection.Iterator$class.foreach(Iterator.scala:727)
> scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> 
> com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13.apply(RanapSignalJdbcPhoenix.scala:104)
> 
> com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13.apply(RanapSignalJdbcPhoenix.scala:89)
> scala.collection.Iterator$class.foreach(Iterator.scala:727)
> scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:759)
> org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:759)
> 
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
> 
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> org.apache.spark.scheduler.Task.run(Task.scala:54)
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
> 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> java.lang.Thread.run(Thread.java:744)
>
>
> --
> --
>
> CertusNet
>
>
>


Unable to find cached index metadata

2014-09-01 Thread su...@certusnet.com.cn
Hi, everyone,
   I used the latest 4.1 release to run some tests on local indexing. When I
am trying to load data into a
   phoenix table with a local index, I got the following error. Not sure whether
it has some relation with the HBase
   local index table, since the HBase local index table is uniformly prefixed with
'_LOCAL_IDX_' + TableRef.
   Any available hints? Also correct me if I have misunderstood something.
   Best Regards, Sun.
 org.apache.phoenix.execute.CommitException: java.sql.SQLException: ERROR 
2008 (INT10): Unable to find cached index metadata. ERROR 2008 (INT10): ERROR 
2008 (INT10): Unable to find cached index metadata. key=-861467238479432 
region=RANAPSIGNAL,\x0D\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1409566437551.9e47a9f579f7cf3865d1148480a3b1b9.
 Index update failed
org.apache.phoenix.execute.MutationState.commit(MutationState.java:433)

org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:384)

org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:381)
org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:381)

com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13$$anonfun$apply$1.apply(RanapSignalJdbcPhoenix.scala:113)

com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13$$anonfun$apply$1.apply(RanapSignalJdbcPhoenix.scala:104)
scala.collection.Iterator$class.foreach(Iterator.scala:727)
scala.collection.AbstractIterator.foreach(Iterator.scala:1157)

com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13.apply(RanapSignalJdbcPhoenix.scala:104)

com.certusnet.spark.bulkload.ranap.RanapSignalJdbcPhoenix$$anonfun$13.apply(RanapSignalJdbcPhoenix.scala:89)
scala.collection.Iterator$class.foreach(Iterator.scala:727)
scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:759)
org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:759)

org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)

org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:744)





CertusNet