[jira] [Resolved] (SPARK-18804) Join doesn't work in Spark on Bigger tables

2016-12-13 Thread Sean Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-18804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen resolved SPARK-18804.
---
Resolution: Not A Problem

(Please don't reopen if the discussion has not meaningfully changed. JIRA isn't 
for questions/discussion -- mailing lists are.) I already indicated that you 
need to investigate the basics of why the failure occurred. That detail is 
still not provided here.

> Join doesn't work in Spark on Bigger tables
> ---
>
> Key: SPARK-18804
> URL: https://issues.apache.org/jira/browse/SPARK-18804
> Project: Spark
>  Issue Type: Question
>  Components: Input/Output
>Affects Versions: 1.6.1
>Reporter: Gopal Nagar
>
> Hi All,
> Spark 1.6.1 has been installed on a 3-node AWS EMR cluster with 32 GB RAM and
> 80 GB storage per node. I am trying to join two tables (1.2 GB & 900 MB) with
> 4607818 & 14273378 rows respectively. It's running in client mode on the YARN
> cluster manager.
> If I put a limit of 100 in the select query, it works fine. But if I try to
> join the entire data set, the query runs for 3-4 hours and finally gets
> terminated. I can always see 18 GB free on each node.
> I have tried increasing the number of executors/cores/partitions, but it
> still doesn't work. This has been tried in PySpark and submitted using the
> spark-submit command, but it doesn't run. Please advise.
> Join Query
> --
> SELECT * FROM table1 AS t1 JOIN table2 AS t2 ON t1.col = t2.col LIMIT 100;
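For reference, the executor/core/partition tuning the reporter describes is
normally set on the SparkConf (or via the equivalent spark-submit flags)
before the context is created. A minimal PySpark 1.6 sketch under assumed
values: the resource numbers are illustrative for a 3-node, 32 GB/node
cluster, and table1/table2/col are the placeholder names from the query above.

    from pyspark import SparkConf, SparkContext
    from pyspark.sql import HiveContext

    # Illustrative settings only; real values depend on how much memory
    # YARN actually offers per node. Equivalent spark-submit flags:
    # --num-executors, --executor-cores, --executor-memory.
    conf = (SparkConf()
            .setAppName("join-job")
            .set("spark.executor.instances", "6")
            .set("spark.executor.cores", "4")
            .set("spark.executor.memory", "8g")
            # Default is 200; raising it gives smaller shuffle partitions.
            .set("spark.sql.shuffle.partitions", "400"))

    sc = SparkContext(conf=conf)
    sqlContext = HiveContext(sc)

    # The full join, without LIMIT; count() forces execution.
    result = sqlContext.sql(
        "SELECT * FROM table1 AS t1 JOIN table2 AS t2 ON t1.col = t2.col")
    print(result.count())

This would be run in client mode on YARN with something like
spark-submit --master yarn-client join_job.py (join_job.py being a
hypothetical script name).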






[jira] [Resolved] (SPARK-18804) Join doesn't work in Spark on Bigger tables

2016-12-12 Thread Sean Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-18804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen resolved SPARK-18804.
---
Resolution: Not A Problem

There are many things that could make this not work that are not bugs in 
Spark, and you haven't provided the key detail about what actually fails, so I 
would have to close this as unactionable. Most likely you have data skew or 
insufficient resources.
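A quick way to check the data-skew theory is to count rows per join key on
both sides: if a handful of key values carry millions of rows, a few shuffle
partitions end up doing nearly all the work, which matches a join that runs
for hours while memory appears free. A minimal PySpark 1.6 sketch, assuming
the hypothetical table1/table2/col names from the query quoted below:

    from pyspark import SparkContext
    from pyspark.sql import HiveContext
    from pyspark.sql.functions import desc

    sc = SparkContext(appName="skew-check")
    sqlContext = HiveContext(sc)

    # Show the heaviest join keys on each side of the join. A very
    # uneven distribution (or many NULL keys) points to skew.
    for name in ("table1", "table2"):
        (sqlContext.table(name)
         .groupBy("col")
         .count()
         .orderBy(desc("count"))
         .show(20))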

> Join doesn't work in Spark on Bigger tables
> ---
>
> Key: SPARK-18804
> URL: https://issues.apache.org/jira/browse/SPARK-18804
> Project: Spark
>  Issue Type: Bug
>  Components: Input/Output
>Affects Versions: 1.6.1
>Reporter: Gopal Nagar
>
> Hi All,
> Spark 1.6.1 has been installed on a 3-node AWS EMR cluster with 32 GB RAM and
> 80 GB storage per node. I am trying to join two tables (1.2 GB & 900 MB) with
> 4607818 & 14273378 rows respectively. It's running in client mode on the YARN
> cluster manager.
> If I put a limit of 100 in the select query, it works fine. But if I try to
> join the entire data set, the query runs for 3-4 hours and finally gets
> terminated. I can always see 18 GB free on each node.
> I have tried increasing the number of executors/cores/partitions, but it
> still doesn't work. This has been tried in PySpark and submitted using the
> spark-submit command, but it doesn't run. Please advise.
> Join Query
> --
> SELECT * FROM table1 AS t1 JOIN table2 AS t2 ON t1.col = t2.col LIMIT 100;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org