I have a table like this:

CREATE TABLE fma.er_keyed_gz_meterkey_split_custid (
    meter_key varchar not null,
    ...
    sample_point integer not null,
    ...
    endpoint_id integer,
    ...
    CONSTRAINT pk_rma_er_keyed_filtered PRIMARY KEY (meter_key, sample_point)
)
COMPRESSION='GZ'
SPLIT ON (.....)

I need a secondary index that allows me to query based on endpoint_id and 
sample_point. A row with a given meter_key will ALMOST always have the same 
endpoint_id, but there are exceptions.

So I did this:

ALTER TABLE fma.er_keyed_gz_meterkey_split_custid SET IMMUTABLE_ROWS=true;

And this:

create index fma_er_keyed_gz_endpoint_id_include_sample_point
on fma.er_keyed_gz_meterkey_split_custid (endpoint_id)
include (sample_point)
SALT_BUCKETS = 256;

After a very long time, the 'create index' command comes back with:
"Error: (state=08000,code=101)"
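
From what I can tell from the Phoenix source, code 101 / state 08000 is OPERATION_TIMED_OUT, so one thing I'm considering is raising the timeouts before retrying. The property names below are taken from the HBase/Phoenix docs (hbase.regionserver.lease.period is, I believe, the older name for hbase.client.scanner.timeout.period); the values are just guesses on my part:

```xml
<!-- hbase-site.xml: phoenix.query.timeoutMs is read on the client;
     the scanner timeout must be raised on the region servers too. -->
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>3600000</value> <!-- allow long-running statements: 1 hour -->
</property>
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>600000</value> <!-- scanner lease: 10 minutes -->
</property>
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>600000</value> <!-- same setting under its pre-0.98 name -->
</property>
```

I'd welcome corrections if those aren't the right knobs.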

However, if I query '!indexes fma.ER_KEYED_GZ_METERKEY_SPLIT_CUSTID;' it shows 
me my new index.
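
In case it's relevant: if I'm reading the docs right, Phoenix records the state of each index in SYSTEM.CATALOG, so something like the following should show whether the index is actually ACTIVE or was left in a building/disabled state (column names assumed from the docs):

```sql
SELECT TABLE_SCHEM, TABLE_NAME, INDEX_STATE
FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'FMA_ER_KEYED_GZ_ENDPOINT_ID_INCLUDE_SAMPLE_POINT'
  AND INDEX_STATE IS NOT NULL;
```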

But when I try to query against it:

select endpoint_id, sample_point from fma.er_keyed_gz_meterkey_split_custid
where endpoint_id = 49799898;

I end up getting exceptions like the one below.

I'm guessing that this has something to do with the many billions of rows that 
are already in my table.

Can anyone help? Am I doing something wrong? Is there a better way to make this 
table easy to query by meter_key and/or endpoint_id + sample_point?
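
One alternative I'm considering, if my version of Phoenix supports it (I believe asynchronous index builds are fairly new): create the index with ASYNC so the statement returns immediately, then populate it out of band, e.g.:

```sql
create index fma_er_keyed_gz_endpoint_id_include_sample_point
on fma.er_keyed_gz_meterkey_split_custid (endpoint_id)
include (sample_point)
SALT_BUCKETS = 256 ASYNC;
```

and then, if I understand the docs correctly, build it with the MapReduce IndexTool, something like `hbase org.apache.phoenix.mapreduce.index.IndexTool --schema FMA --data-table ER_KEYED_GZ_METERKEY_SPLIT_CUSTID --index-table FMA_ER_KEYED_GZ_ENDPOINT_ID_INCLUDE_SAMPLE_POINT --output-path <hdfs-path>`. Would that avoid the client-side timeout on a table this size?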


15/07/22 08:28:42 WARN client.ScannerCallable: Ignore, probably already closed
org.apache.hadoop.hbase.regionserver.LeaseException: org.apache.hadoop.hbase.regionserver.LeaseException: lease '16982' does not exist
        at org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:221)
        at org.apache.hadoop.hbase.regionserver.Leases.cancelLease(Leases.java:206)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3305)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
        at java.lang.Thread.run(Thread.java:744)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:306)
        at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:323)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:189)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:119)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:55)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:201)
        at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:288)
        at org.apache.hadoop.hbase.client.ClientScanner.close(ClientScanner.java:476)
        at org.apache.phoenix.iterate.ScanningResultIterator.close(ScanningResultIterator.java:41)
        at org.apache.phoenix.iterate.TableResultIterator.close(TableResultIterator.java:64)
        at org.apache.phoenix.iterate.ChunkedResultIterator$SingleChunkResultIterator.close(ChunkedResultIterator.java:173)
        at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:131)
        at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:74)
        at org.apache.phoenix.iterate.SpoolingResultIterator$SpoolingResultIteratorFactory.newIterator(SpoolingResultIterator.java:68)
        at org.apache.phoenix.iterate.ChunkedResultIterator.<init>(ChunkedResultIterator.java:90)
        at org.apache.phoenix.iterate.ChunkedResultIterator$ChunkedResultIteratorFactory.newIterator(ChunkedResultIterator.java:70)
        at org.apache.phoenix.iterate.ParallelIterators$2.call(ParallelIterators.java:631)
        at org.apache.phoenix.iterate.ParallelIterators$2.call(ParallelIterators.java:622)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.LeaseException): org.apache.hadoop.hbase.regionserver.LeaseException: lease '16982' does not exist
        at org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:221)
        at org.apache.hadoop.hbase.regionserver.Leases.cancelLease(Leases.java:206)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3305)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
        at java.lang.Thread.run(Thread.java:744)

        at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1538)
        at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1724)
        at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1777)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:30397)
        at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:321)
        ... 20 more
15/07/22 08:28:42 WARN client.ScannerCallable: Ignore, probably already closed
org.apache.hadoop.hbase.regionserver.LeaseException: org.apache.hadoop.hbase.regionserver.LeaseException: lease '6136' does not exist
        at org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:221)
        at org.apache.hadoop.hbase.regionserver.Leases.cancelLease(Leases.java:206)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3305)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
        at java.lang.Thread.run(Thread.java:744)
