[ https://issues.apache.org/jira/browse/SPARK-19255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15830091#comment-15830091 ]

Takeshi Yamamuro commented on SPARK-19255:
------------------------------------------

I've never heard of data that large being used with Spark; is what you
described in this ticket only an issue when processing data at that scale?
ISTM plenty of other metadata also consumes a lot of driver memory.

> SQL Listener is causing out of memory in case of a large number of shuffle
> partitions
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-19255
>                 URL: https://issues.apache.org/jira/browse/SPARK-19255
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>         Environment: Linux
>            Reporter: Ashok Kumar
>            Priority: Minor
>         Attachments: spark_sqllistener_oom.png
>
>
> Test steps.
> 1. CREATE TABLE sample(imei string, age int, task bigint, num double, level
>    decimal(10,3), productdate timestamp, name string, point int) USING
>    com.databricks.spark.csv OPTIONS (path "data.csv", header "false",
>    inferSchema "false");
> 2. set spark.sql.shuffle.partitions=100000;
> 3. select count(*) from (select task,sum(age) from sample group by task) t;
> After running the above query, the number of objects held in the map variable
> _stageIdToStageMetrics grows very large; the growth is proportional to the
> number of shuffle partitions (see the sketch below the quoted description).
> Please have a look at the attached screenshot.
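
For what it's worth, the quoted steps can be driven end-to-end from a Scala
Spark session roughly as below. This is only a sketch of the reproduction in
the ticket: it assumes a local SparkSession (the app name and master are
placeholders), that "data.csv" exists, and that the spark-csv data source is
on the classpath.

    import org.apache.spark.sql.SparkSession

    // Sketch only: app name and master are illustrative, not from the ticket.
    val spark = SparkSession.builder()
      .appName("SPARK-19255-repro")
      .master("local[*]")
      .getOrCreate()

    // Step 1: register the CSV-backed table with the schema given in the ticket.
    spark.sql(
      """CREATE TABLE sample(imei string, age int, task bigint, num double,
        |level decimal(10,3), productdate timestamp, name string, point int)
        |USING com.databricks.spark.csv
        |OPTIONS (path "data.csv", header "false", inferSchema "false")""".stripMargin)

    // Step 2: force a very large number of shuffle partitions.
    spark.conf.set("spark.sql.shuffle.partitions", "100000")

    // Step 3: the aggregation; each of the 100000 shuffle partitions produces a
    // task whose metrics the SQL listener retains on the driver, which is where
    // the memory growth reported in this ticket shows up.
    spark.sql("select count(*) from (select task, sum(age) from sample group by task) t").show()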


