[ https://issues.apache.org/jira/browse/SPARK-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wenchen Fan resolved SPARK-22144.
---------------------------------
       Resolution: Fixed
    Fix Version/s: 2.4.0

Issue resolved by pull request 19364
[https://github.com/apache/spark/pull/19364]

> ExchangeCoordinator will not combine the partitions of a zero-sized pre-shuffle
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-22144
>                 URL: https://issues.apache.org/jira/browse/SPARK-22144
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.2.0
>         Environment: Spark version: 2.2
> master: yarn
> deploy-mode: cluster
>            Reporter: Lijia Liu
>            Assignee: Lijia Liu
>            Priority: Major
>             Fix For: 2.4.0
>
>
> A simple case:
>
> spark.conf.set("spark.sql.adaptive.enabled", "true")
> val df = spark.range(0, 0, 1, 10).selectExpr("id as key1").groupBy("key1").count()
> val exchange = df.queryExecution.executedPlan.collect {
>   case e: org.apache.spark.sql.execution.exchange.ShuffleExchange => e
> }(0)
> println(exchange.outputPartitioning.numPartitions)
>
> The printed value will be spark.sql.shuffle.partitions, which means the
> ExchangeCoordinator did not take effect. At the same time, a job with
> spark.sql.shuffle.partitions tasks will be submitted.
> In my opinion, when the input data is empty, this job is useless and
> superfluous: it wastes resources, especially when
> spark.sql.shuffle.partitions is set very large.
> So, in my view, when the number of pre-shuffle partitions is 0, the number
> of post-shuffle partitions should also be 0, instead of
> spark.sql.shuffle.partitions.
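>
> A minimal sketch of the guard this proposal implies (hypothetical; not the
> code merged in pull request 19364). It takes per-map-stage partition sizes
> in bytes as plain arrays, standing in for the MapOutputStatistics that
> ExchangeCoordinator actually consumes:
>
> // Hypothetical sketch of size-based partition coalescing with an
> // empty-input guard; the names and signature are illustrative only.
> def estimatePartitionStartIndices(
>     bytesByPartitionId: Seq[Array[Long]],
>     targetPostShuffleInputSize: Long): Array[Int] = {
>   // Total bytes written by all pre-shuffle map stages.
>   val totalInputSize = bytesByPartitionId.map(_.sum).sum
>   if (totalInputSize == 0L) {
>     // Empty pre-shuffle input: report zero post-shuffle partitions,
>     // so no tasks are submitted, rather than falling back to
>     // spark.sql.shuffle.partitions.
>     Array.empty[Int]
>   } else {
>     // Merge adjacent pre-shuffle partitions until each post-shuffle
>     // partition reaches the target size (simplified view).
>     val numPartitions = bytesByPartitionId.head.length
>     val startIndices = scala.collection.mutable.ArrayBuffer(0)
>     var currentSize = 0L
>     for (i <- 0 until numPartitions) {
>       val partitionSize = bytesByPartitionId.map(_(i)).sum
>       if (currentSize > 0 &&
>           currentSize + partitionSize > targetPostShuffleInputSize) {
>         startIndices += i
>         currentSize = 0L
>       }
>       currentSize += partitionSize
>     }
>     startIndices.toArray
>   }
> }
>
> With all-zero input sizes this returns an empty array (zero post-shuffle
> partitions), which matches the behavior the report asks for; with non-zero
> sizes it behaves like ordinary size-based coalescing.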



