Hi, I am running a query that takes a long time via sqlline.py, but I repeatedly get the following warning:

15/09/29 08:54:27 WARN client.ScannerCallable: Ignore, probably already closed
org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: 3085, already closed?
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2223)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:322)
    at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:357)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:195)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:142)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    at org.apache.hadoop.hbase.client.StatsTrackingRpcRetryingCaller.callWithoutRetries(StatsTrackingRpcRetryingCaller.java:56)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:258)
    at org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:241)
    at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:532)
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
    at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
    at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:107)
    at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:125)
    at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:83)
    at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:62)
    at org.apache.phoenix.iterate.SpoolingResultIterator$SpoolingResultIteratorFactory.newIterator(SpoolingResultIterator.java:78)
    at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:109)
    at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:100)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException): org.apache.hadoop.hbase.UnknownScannerException: Name: 3085, already closed?
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2223)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1196)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
    at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:355)
    ... 23 more

The scanner Name: ### varies from one occurrence to the next.
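
For reference, I am launching the query roughly like this (the ZooKeeper host and SQL file name below are placeholders, not my real values):

  # hypothetical invocation; zk-host:2181 and long_query.sql stand in for my actual quorum and script
  ./sqlline.py zk-host:2181 long_query.sql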

My hbase-site settings (also shown as hbase-site.xml entries below):
  phoenix.query.timeoutMs = 6000000
  # When the client-side thread pool executor has more threads than its core size, this is the maximum time in milliseconds that excess idle threads will wait for new tasks before terminating. The default is 60 sec.
  phoenix.query.keepAliveMs = 6000000
  hbase.client.operation.timeout = 1200000
  hbase.client.backpressure.enabled = true
  hbase.client.retries.number = 1
  hbase.rpc.timeout = 6000000
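
In case the exact form matters, the entries in my client-side hbase-site.xml look roughly like this (a sketch; only three of the properties above are spelled out, the rest follow the same <property> pattern with the values listed):

  <!-- sketch of the client-side hbase-site.xml entries listed above -->
  <property>
    <name>phoenix.query.timeoutMs</name>
    <value>6000000</value>
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>6000000</value>
  </property>
  <property>
    <name>hbase.client.retries.number</name>
    <value>1</value>
  </property>
  <!-- phoenix.query.keepAliveMs, hbase.client.operation.timeout and
       hbase.client.backpressure.enabled are set the same way -->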

Any ideas about what might be causing this?

Thanks
