[ https://issues.apache.org/jira/browse/PHOENIX-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428320#comment-15428320 ]

James Taylor commented on PHOENIX-2161:
---------------------------------------

You'd set phoenix.query.timeoutMs to make the query time out after that many 
milliseconds, and on older versions of Phoenix also raise hbase.rpc.timeout 
to prevent lease expirations. You need to work with your vendor to determine 
how to set these properties so they take effect.
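
For illustration, here is a minimal sketch of passing both properties to a 
Phoenix JDBC connection. The quorum address ("zk-host") and the 600000 ms 
value are placeholder assumptions, and whether client-side overrides take 
effect can depend on how the distribution propagates configuration, hence 
the vendor caveat:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TimeoutSettings {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Phoenix-level query timeout: allow 10 minutes (placeholder value)
        // instead of the 60 s limit visible in the stack trace.
        props.setProperty("phoenix.query.timeoutMs", "600000");
        // On older Phoenix versions, raise the HBase RPC timeout as well,
        // so individual scanner calls are not cut off before the query
        // timeout fires.
        props.setProperty("hbase.rpc.timeout", "600000");

        // "zk-host" is a hypothetical ZooKeeper quorum address.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:zk-host", props)) {
            // ... execute the long-running query here ...
        }
    }
}
{code}

The same keys can also go into the client-side hbase-site.xml on the 
classpath; the vendor caveat matters because managed distributions (Ambari, 
in this report) may regenerate or override that file.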

> Can't change timeout
> --------------------
>
>                 Key: PHOENIX-2161
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2161
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.4.0
>         Environment: Hadoop with Ambari 2.1.0
> Phoenix 4.4.0.2.3
> HBase 1.1.1.2.3
> HDFS 2.7.1.2.3
> Zookeeper 3.4.6.2.3
>            Reporter: Adrià V.
>              Labels: hbase, operation, phoenix, timeout
>
> Phoenix or HBase keeps throwing a timeout exception. I have tried every 
> configuration I could think of to increase it.
> Partial stack trace:
> {quote}
> Caused by: java.io.IOException: Call to hdp-w-1.c.dks-hadoop.internal/10.240.2.235:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=43, waitTime=60001, operationTimeout=60000 expired.
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1242)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1210)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:213)
> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> ... 4 more
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=43, waitTime=60001, operationTimeout=60000 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1184)
> ... 13 more
> {quote}
> The Phoenix (hbase-site.xml) properties:
> - phoenix.query.timeoutMs
> - phoenix.query.keepAliveMs
> I've tried editing the HBase config files and also setting the configuration 
> in Ambari with the following keys to increase the timeout, with no success:
> - hbase.rpc.timeout
> - dfs.socket.timeout
> - dfs.client.socket-timeout
> - zookeeper.session.timeout
> Full stack trace:
> {quote}
> Error: Encountered exception in sub plan [0] execution. (state=,code=0)
> java.sql.SQLException: Encountered exception in sub plan [0] execution.
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:157)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:251)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:241)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:240)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1250)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
> Mon Aug 03 16:47:06 UTC 2015, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60303: row '' on table 'hive_post_topics' at region=hive_post_topics,,1438084107396.cdbdc246ff0b7dfed31d481e0bccd2b5., hostname=hdp-w-1.c.dks-hadoop.internal,16020,1438619912282, seqNum=45322
> at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:542)
> at org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
> at org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
> at org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:106)
> at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:82)
> at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:339)
> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:136)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
> Mon Aug 03 16:47:06 UTC 2015, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60303: row '' on table 'hive_post_topics' at region=hive_post_topics,,1438084107396.cdbdc246ff0b7dfed31d481e0bccd2b5., hostname=hdp-w-1.c.dks-hadoop.internal,16020,1438619912282, seqNum=45322
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:202)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:538)
> ... 11 more
> Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
> Mon Aug 03 16:47:06 UTC 2015, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60303: row '' on table 'hive_post_topics' at region=hive_post_topics,,1438084107396.cdbdc246ff0b7dfed31d481e0bccd2b5., hostname=hdp-w-1.c.dks-hadoop.internal,16020,1438619912282, seqNum=45322
> at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
> at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:56)
> at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:104)
> at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
> at org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
> at org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:73)
> at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:97)
> at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:85)
> ... 5 more
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
> Mon Aug 03 16:47:06 UTC 2015, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60303: row '' on table 'hive_post_topics' at region=hive_post_topics,,1438084107396.cdbdc246ff0b7dfed31d481e0bccd2b5., hostname=hdp-w-1.c.dks-hadoop.internal,16020,1438619912282, seqNum=45322
> at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:223)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
> at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:403)
> at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:50)
> ... 11 more
> Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60303: row '' on table 'hive_post_topics' at region=hive_post_topics,,1438084107396.cdbdc246ff0b7dfed31d481e0bccd2b5., hostname=hdp-w-1.c.dks-hadoop.internal,16020,1438619912282, seqNum=45322
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
> at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
> ... 3 more
> Caused by: java.io.IOException: Call to hdp-w-1.c.dks-hadoop.internal/10.240.2.235:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=43, waitTime=60001, operationTimeout=60000 expired.
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1242)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1210)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:213)
> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> ... 4 more
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=43, waitTime=60001, operationTimeout=60000 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1184)
> ... 13 more
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
