Krunal created PHOENIX-2001:
--------------------------------
Summary: Join creates OOM with Java heap space on Phoenix client
Key: PHOENIX-2001
URL: https://issues.apache.org/jira/browse/PHOENIX-2001
Project: Phoenix
Issue Type: Bug
Affects Versions: 4.3.1
Reporter: Krunal
Hi,
I have two issues with the Phoenix client:
1. Heap memory is not cleaned up after each query finishes, so it keeps increasing every time we submit a new query (see the JDBC sketch after the sample queries below).
2. I am trying to do a normal join operation on two tables but am getting an exception.
Below are the details:
These are some sample queries I tried:
1. select p1.host, count(1) from PERFORMANCE_5000000 p1, PERFORMANCE_25000000 p2 where p1.host = p2.host group by p1.host;
2. select p1.host from PERFORMANCE_5000000 p1, PERFORMANCE_25000000 p2 where p1.host = p2.host group by p1.host;
3. select count(1) from PERFORMANCE_5000000 p1, PERFORMANCE_25000000 p2 where p1.host = p2.host group by p1.host;
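For context, these queries are submitted through plain JDBC, roughly along the lines of the sketch below (simplified; the JDBC URL, the loop, and the query text are placeholders rather than our actual application code). Even though each Statement and ResultSet is closed before the next query runs, the client heap keeps growing:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Simplified sketch of the client query loop; the URL and query are placeholders.
public class PhoenixQueryLoop {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:zookeeper-host:2181")) {
            for (int i = 0; i < 100; i++) {
                // Statement and ResultSet are closed after every query, yet heap
                // usage on the client increases from one query to the next.
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                             "select host, count(1) from PERFORMANCE_5000000 group by host")) {
                    while (rs.next()) {
                        rs.getString(1);
                    }
                }
            }
        }
    }
}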
Here is the explain plan:
explain select count(1) from PERFORMANCE_5000000 p1, PERFORMANCE_25000000 p2 where p1.host = p2.host group by p1.host;
+--------------------------------------------------------------------+
|                                PLAN                                |
+--------------------------------------------------------------------+
| CLIENT 9-CHUNK PARALLEL 1-WAY FULL SCAN OVER PERFORMANCE_5000000   |
| SERVER FILTER BY FIRST KEY ONLY                                    |
| SERVER AGGREGATE INTO ORDERED DISTINCT ROWS BY [HOST]              |
| CLIENT MERGE SORT                                                  |
| PARALLEL INNER-JOIN TABLE 0 (SKIP MERGE)                           |
| CLIENT 18-CHUNK PARALLEL 1-WAY FULL SCAN OVER PERFORMANCE_25000000 |
| SERVER FILTER BY FIRST KEY ONLY                                    |
| DYNAMIC SERVER FILTER BY HOST IN (P2.HOST)                         |
+--------------------------------------------------------------------+
8 rows selected (0.127 seconds)
The Phoenix client heap size is 16GB. (I noticed that the above queries are dumping data into the client's local heap; I see millions of instances of org.apache.phoenix.expression.LiteralExpression.)
Phoenix version: 4.3.1
HBase version: 0.98.1
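From the explain plan, it looks like one side of the join (PERFORMANCE_25000000, i.e. PARALLEL INNER-JOIN TABLE 0) is being built into a client-side hash cache and then shipped to the region servers, which would explain both the LiteralExpression instances on the heap and the OOM inside ServerCacheClient.addServerCache in the exceptions below. One possible workaround (which I have not tried yet) would be to force a sort-merge join and cap the hash-cache size; a rough sketch follows, assuming the USE_SORT_MERGE_JOIN hint and the phoenix.query.maxServerCacheBytes property apply to 4.3.1 (I have not verified either, and the JDBC URL and cache cap are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

// Sketch only: the JDBC URL and the cache cap are placeholders, not real settings.
public class JoinWorkaroundSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Cap the hash-cache size so an oversized join side fails fast instead of
        // exhausting the client heap (assumes the property is honored here).
        props.setProperty("phoenix.query.maxServerCacheBytes",
                String.valueOf(512L * 1024 * 1024));

        try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:zookeeper-host:2181", props);
             Statement stmt = conn.createStatement();
             // Hint a sort-merge join so no client-side hash cache is built at all.
             ResultSet rs = stmt.executeQuery(
                     "select /*+ USE_SORT_MERGE_JOIN */ p1.host, count(1) "
                     + "from PERFORMANCE_5000000 p1, PERFORMANCE_25000000 p2 "
                     + "where p1.host = p2.host group by p1.host")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}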
Here are the exceptions:
java.sql.SQLException: Encountered exception in sub plan [0] execution.
at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:156)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:235)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:226)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:225)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1066)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: java.sql.SQLException: java.util.concurrent.ExecutionException: java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:247)
at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:83)
at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:338)
at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:135)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:239)
... 7 more
Caused by: java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:212)
at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:182)
... 4 more
Caused by: java.lang.OutOfMemoryError: Java heap space
May 20, 2015 4:58:01 PM ServerCommunicatorAdmin reqIncoming
WARNING: The server has decided to close this client connection.
15/05/20 16:56:43 WARN client.HTable: Error calling coprocessor service org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService for row CSGoogle\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:188)
at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:182)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at com.google.protobuf.ByteString$CodedBuilder.<init>(ByteString.java:907)
at com.google.protobuf.ByteString$CodedBuilder.<init>(ByteString.java:902)
at com.google.protobuf.ByteString.newCodedBuilder(ByteString.java:898)
at com.google.protobuf.AbstractMessageLite.toByteString(AbstractMessageLite.java:49)
at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:81)
at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:57)
at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService$Stub.addServerCache(ServerCachingProtos.java:3270)
at org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:204)
at org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:189)
at org.apache.hadoop.hbase.client.HTable$17.call(HTable.java:1608)
... 4 more
0: jdbc:phoenix:prbhadoop004iad.io.askjeeves.> 15/05/20 16:56:43 WARN client.HTable: Error calling coprocessor service org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService for row EUGoogle\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:188)
at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:182)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
15/05/20 16:59:37 WARN client.HConnectionManager$HConnectionImplementation: This client just lost it's session with ZooKeeper, closing it. It will be recreated next time someone needs it
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:403)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:321)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
Error: (state=,code=0)
java.sql.SQLFeatureNotSupportedException
at org.apache.phoenix.jdbc.PhoenixStatement.cancel(PhoenixStatement.java:958)
at sqlline.DispatchCallback.forceKillSqlQuery(DispatchCallback.java:83)
at sqlline.SqlLine.begin(SqlLine.java:695)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Can someone please help?
Thanks!