[ 
https://issues.apache.org/jira/browse/PHOENIX-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288998#comment-14288998
 ] 

sunnychen commented on PHOENIX-1179:
------------------------------------

Dear M,
       Thank you so much for solving my problem. The advice you gave was 
really helpful, and I appreciate it very much.
       I changed the hbase-site.xml config:
<property>
  <name>hbase.rpc.timeout</name>
  <value>9000000</value>
</property>
and ran the SQL I mentioned,
select BIG.id from MAX_CT_STANDARD_TEST_TABLE1 as BIG JOIN CT_4 AS SMALL ON 
BIG.ID=SMALL.ID;
and it works: Time: 2040.769 sec(s), records: 100000L
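
To confirm which side is built as the server-side hash cache, I guess I can 
also check the plan with EXPLAIN (a quick sketch; in a Phoenix hash join the 
RHS is the side that gets serialized and sent over to the region servers):

EXPLAIN select BIG.id from MAX_CT_STANDARD_TEST_TABLE1 as BIG JOIN CT_4 AS 
SMALL ON BIG.ID=SMALL.ID;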

It seems I have solved my problem with your kind help, but I ran another SQL 
with the same config:
select BIG.id from MAX_CT_STANDARD_TEST_TABLE1 as BIG JOIN TEST_YQW_1 AS SMALL 
ON BIG.ID=SMALL.ID;
MAX_CT_STANDARD_TEST_TABLE1 has 60 million rows (about 120G), and TEST_YQW_1 
has 10 million rows (about 20G), and it still fails with the RPC error. So is 
the next step still to increase the hbase.rpc.timeout value? The value I set 
is already very large, so I am wondering whether the real problem is that the 
RHS table is too large to fit in memory.
The Phoenix version I am using is 3.1, so SKIP_SCAN_HASH_JOIN is not supported 
in this version; I am willing to upgrade to 4.2 to test your advice, if 
Phoenix 3.1 indeed cannot handle an RHS table that exceeds memory.
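
In the meantime, could raising the client-side cap on the hash cache help? A 
rough sketch of what I would try in my client hbase-site.xml (assuming the 
documented Phoenix tuning properties below also apply to my version):

<property>
  <!-- assumed property: max raw size, in bytes, of the RHS relation the
       client may build and ship to the region servers -->
  <name>phoenix.query.maxServerCacheBytes</name>
  <value>209715200</value>
</property>
<property>
  <!-- assumed property: percentage of the heap that Phoenix's memory
       manager (which holds the hash caches server-side) may use -->
  <name>phoenix.query.maxGlobalMemoryPercentage</name>
  <value>25</value>
</property>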
Again, thank you for your help~ I really appreciate it~

The error message is a little bit different:
java.sql.SQLException: Encountered exception in hash plan [0] execution.
        at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
        at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:185)
        at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:164)
        at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:164)
        at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:153)
        at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
        at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
        at org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
        at org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
        at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
Caused by: java.sql.SQLException: java.util.concurrent.ExecutionException: java.lang.reflect.UndeclaredThrowableException
        at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
        at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
        at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
        at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.reflect.UndeclaredThrowableException
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:202)
        at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
        ... 7 more
Caused by: java.lang.reflect.UndeclaredThrowableException
        at com.sun.proxy.$Proxy10.addServerCache(Unknown Source)
        at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
        at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
        ... 4 more
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=2, exceptions:
Fri Jan 23 16:34:00 CST 2015, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@4f7be11c, java.io.IOException: Call to nobida122/10.60.1.122:60020 failed on local exception: java.io.EOFException
Fri Jan 23 16:34:00 CST 2015, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@4f7be11c, org.apache.hadoop.hbase.ipc.HBaseClient$FailedServerException: This server is in the failed servers list: nobida122/10.60.1.122:60020

        at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
        at org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
        ... 7 more

> Support many-to-many joins
> --------------------------
>
>                 Key: PHOENIX-1179
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1179
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: James Taylor
>            Assignee: Maryann Xue
>             Fix For: 4.3, 3.3
>
>         Attachments: 1179.patch
>
>
> Enhance our join capabilities to support many-to-many joins where both 
> sides of the join are too big to fit into memory (and thus cannot use our 
> hash join mechanism). One technique would be to order both sides of the 
> join by their join key and merge sort the results on the client.
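
For illustration, once this lands, the big-to-big join from the comment above 
could bypass the hash join entirely; a minimal sketch, assuming the feature is 
exposed through a USE_SORT_MERGE_JOIN query hint:

select /*+ USE_SORT_MERGE_JOIN */ BIG.id
from MAX_CT_STANDARD_TEST_TABLE1 as BIG
JOIN TEST_YQW_1 AS SMALL ON BIG.ID=SMALL.ID;

With this strategy neither side needs to fit in memory: both inputs are 
ordered by the join key and the results are merge sorted on the client.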


