[ https://issues.apache.org/jira/browse/PHOENIX-34?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918620#comment-13918620 ]

Maryann Xue commented on PHOENIX-34:
------------------------------------

[~mujtabachohan] 
bq. java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException: 
Requested memory of 22886552 bytes is larger than global pool of 22111027 bytes.

Looks like your region server only allows Phoenix 22 MB of memory. What is your 
configuration?
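For context, Phoenix sizes its global memory pool as a percentage of the region server heap. A minimal hbase-site.xml sketch follows; the property name is taken from the Phoenix tuning docs, and the value shown is the documented default, so verify both against the version actually deployed:

```xml
<!-- hbase-site.xml on each region server (sketch; assumes the standard
     Phoenix property name phoenix.query.maxGlobalMemoryPercentage) -->
<property>
  <name>phoenix.query.maxGlobalMemoryPercentage</name>
  <!-- Percentage of the JVM heap Phoenix may use for its global pool.
       With a 4 GB heap, 15% would yield roughly 600 MB, far above the
       ~22 MB pool reported in the exception. -->
  <value>15</value>
</property>
```

A pool of only ~22 MB on a 4 GB heap suggests this percentage (or the heap seen by the memory manager) is much smaller than expected.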

> Insufficient memory exception on join when RHS rows count > 250K 
> -----------------------------------------------------------------
>
>                 Key: PHOENIX-34
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-34
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>         Environment: HBase 0.94.14, r1543222, Hadoop 1.0.4, r1393290, 2 RS + 
> 1 Master, Heap 4GB per RS
>            Reporter: Mujtaba Chohan
>             Fix For: 3.0.0
>
>
> Join fails when the row count of the RHS table is >250K. Details on the table 
> schema and performance numbers with different LHS/RHS row counts are at 
> http://phoenix-bin.github.io/client/performance/phoenix-20140210023154.htm.
> James comment:
> So that's with a 4GB heap allowing Phoenix to use 50% of it, and with a pretty 
> narrow table: 3 KV columns of 30 bytes each. Topping out at 250K is a bit low. 
> I wonder if our memory estimation matches reality.
> What do you think Maryann?
> How about filing a JIRA, Mujtaba. This is a good conversation to have on the 
> dev list. Can we move it there, please? 
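A back-of-envelope check using only the numbers quoted in this thread (hedged: it assumes the RHS hash cache stores roughly the raw KV bytes with negligible per-row overhead, which is not taken from Phoenix internals):

```python
# Rough estimate of the RHS hash-join cache size for the table described
# above: 250K rows, 3 KV columns of ~30 bytes each (assumed overhead-free).
rows = 250_000
kv_columns = 3
bytes_per_kv = 30

estimated_bytes = rows * kv_columns * bytes_per_kv
print(estimated_bytes)  # 22500000
```

The ~22.5 MB estimate is the same order of magnitude as the 22,886,552 bytes the exception reports, which is consistent with the requested allocation tracking the raw RHS data size and the global pool itself being undersized.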



--
This message was sent by Atlassian JIRA
(v6.2#6252)
