[ https://issues.apache.org/jira/browse/SPARK-32184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuan Zhou updated SPARK-32184:
------------------------------
    Summary: Performance regression on TPCH Q18 in 3.0 - ~20% slower than 2.4  
(was: Performance regression on TPCH Q18)

> Performance regression on TPCH Q18 in 3.0 - ~20% slower than 2.4
> ----------------------------------------------------------------
>
>                 Key: SPARK-32184
>                 URL: https://issues.apache.org/jira/browse/SPARK-32184
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.0.0
>         Environment: spark 2.4 and spark 3.0 are using the same configurations
>  * spark.driver.memory 20g
>  * spark.executor.memory 20g
>  * spark.executor.cores 7
>  * spark.executor.memoryOverhead 3g
>  * spark.sql.shuffle.partitions 384
>            Reporter: Yuan Zhou
>            Priority: Major
>         Attachments: 2.4.png, 3.0.png
>
>
> Hi Spark developers,
> Testing with the new Spark 3.0.0 release, I found a performance regression on
> TPCH Q18: it runs roughly 20% slower than on Spark 2.4. Spark 2.4 reuses the
> HashAggregate result across the two sort-merge joins, while Spark 3.0.0
> computes it twice. I searched the documentation but found nothing that
> explains this change (see the reproduction sketch below).
> Here is the SQL diagram for 2.4; the 2nd exchange in the top-right corner is
> reused. !2.4.png!
> Here is the diagram for 3.0; only the 1st exchange in the top-right corner is
> reused. !3.0.png!
>  
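> To reproduce the plan comparison, here is a minimal sketch (assuming a
> SparkSession named {{spark}} with the TPCH tables registered as temp views;
> the query text is the standard TPCH Q18 with the default quantity threshold
> of 300):
> {code:scala}
> // Standard TPC-H Q18 ("Large Volume Customer"), assuming the customer,
> // orders and lineitem tables are already registered as temp views.
> val q18 = spark.sql("""
>   SELECT c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice,
>          SUM(l_quantity)
>   FROM customer, orders, lineitem
>   WHERE o_orderkey IN (
>           SELECT l_orderkey
>           FROM lineitem
>           GROUP BY l_orderkey
>           HAVING SUM(l_quantity) > 300)
>     AND c_custkey = o_custkey
>     AND o_orderkey = l_orderkey
>   GROUP BY c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice
>   ORDER BY o_totalprice DESC, o_orderdate
>   LIMIT 100""")
>
> // The extended physical plan shows which exchange gets reused: on 2.4 a
> // ReusedExchange node points at the exchange carrying the HashAggregate
> // output, so the aggregation runs once; on 3.0 only the earlier exchange
> // is reused and the HashAggregate appears twice in the plan.
> q18.explain(true)
> {code}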



