Hi,
The join strategy we have is a hash join, so one side of the join (with the
filters applied) is expected to fit into memory. The order in which you join
the tables in the FROM clause is important: start with the table that will
return the most rows and end with the table that will return the fewest,
since the later tables are the ones built into the hash cache. The results
are cached in an in-memory table on the server side, and the amount of
memory you're allowed to consume is
phoenix.query.maxGlobalMemoryPercentage * Runtime.maxMemory() / 100. 10M
rows is going to be pushing this limit. Is there any filtering you could do
on this join? Memory on the client side is not a factor, as the results
from the table being cached will be spooled to disk if necessary.
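
As a sketch (the table, column, and filter names below are made up for
illustration), that ordering plus a filter on the cached side would look
like:

    SELECT l.id, s.name
    FROM large_table l           -- most rows: listed first, streamed rather than cached
    INNER JOIN small_table s     -- fewest rows: listed last, built into the server-side hash cache
        ON l.join_key = s.join_key
    WHERE s.status = 'ACTIVE';   -- a filter here shrinks what has to fit in memory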

Two good resources are: http://phoenix.incubator.apache.org/joins.html and
http://phoenix.incubator.apache.org/tuning.html
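
If filtering alone doesn't get you under the limit, the server-side budget
can be raised via hbase-site.xml on the region servers. A sketch (the
property is the one named above; the value 25 is only an example, and I
believe the default is 15):

    <property>
        <name>phoenix.query.maxGlobalMemoryPercentage</name>
        <value>25</value>
    </property>

To make the formula concrete: with the default of 15 and, say, a 10 GB
region server heap, the budget is 10 GB * 15 / 100 = 1.5 GB for all cached
hash tables combined.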

Thanks,
James


On Tue, Apr 22, 2014 at 10:33 PM, Saravanan A <[email protected]> wrote:

> Yes, I am using sqlline. The JVM heap size is 14 GB.
>
>
> On Wed, Apr 23, 2014 at 10:55 AM, Vladimir Rodionov <
> [email protected]> wrote:
>
>> Increase the JVM heap size. It seems that you are using sqlline?
>>
>>
>> On Tue, Apr 22, 2014 at 10:04 PM, Saravanan A <[email protected]> wrote:
>>
>>>
>>> Hi,
>>>
>>> I am trying to do a normal join operation on two tables, but I am
>>> getting an exception. My tables have 10 million and 23 million rows.
>>>
>>> This is my query:
>>>
>>> select "ga__dmne","wl__dmne" from "ff_ga_main" inner join "ff_wl_main"
>>> wl on("evdt");
>>>
>>> and this is the exception I get:
>>>
>>> java.sql.SQLException: Encountered exception in hash plan [0] execution.
>>>         at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:139)
>>>         at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:202)
>>>         at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:188)
>>>         at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:146)
>>>         at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:151)
>>>         at org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:207)
>>>         at org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:257)
>>>         at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:201)
>>> Caused by: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
>>>         at java.util.Arrays.copyOf(Arrays.java:2271)
>>>         at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
>>>         at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
>>>         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
>>>         at java.io.DataOutputStream.write(DataOutputStream.java:107)
>>>         at org.apache.phoenix.util.TupleUtil.write(TupleUtil.java:148)
>>>         at org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:101)
>>>         at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:77)
>>>         at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:112)
>>>         at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:107)
>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>         at java.lang.Thread.run(Thread.java:744)
>>>
>>> Can someone help?
>>>
>>> Thanks
