[ https://issues.apache.org/jira/browse/SPARK-37567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

junbiao chen updated SPARK-37567:
---------------------------------
    Description: 
Use case: query2 in TPC-DS. In the logical plan, three exchange subqueries scan the same 
table "store_sales", and these subqueries satisfy the exchange reuse rule. I confirmed that 
the exchange reuse rule is applied in the physical plan. But when Spark executes the 
physical plan, I found that exchange reuse failed: the reused exchange was executed twice.
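
For reference, a minimal sketch (not the full TPC-DS query2; it assumes a registered "store_sales" table with an ss_sold_date_sk column) that builds two identical exchanges and checks whether ReusedExchangeExec actually shows up in the executed plan:

{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.exchange.ReusedExchangeExec

val spark = SparkSession.builder().appName("reuse-check").getOrCreate()

// Exchange reuse is on by default; set it explicitly for clarity.
spark.conf.set("spark.sql.exchange.reuse", "true")

// Two identical aggregations over store_sales: the planner should build one
// shuffle exchange and wrap the second occurrence in ReusedExchangeExec.
val agg = spark.table("store_sales")
  .groupBy("ss_sold_date_sk")
  .count()
val unioned = agg.union(agg)

// Count ReusedExchangeExec nodes in the physical plan. Seeing the node here
// does not prove the shuffle ran only once; the reported failure is that the
// exchange still executed twice at runtime despite this node being present.
val reused = unioned.queryExecution.executedPlan.collect {
  case r: ReusedExchangeExec => r
}
println(s"ReusedExchangeExec nodes: ${reused.size}")
{code}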

 

  was:
Use case: query2 in TPC-DS. In the logical plan, three exchange subqueries scan the same 
table "store_sales", and these subqueries satisfy the exchange reuse rule. I confirmed that 
the exchange reuse rule is applied in the physical plan. But when Spark executes the 
physical plan, I found that exchange reuse failed: the reused exchange was executed twice.

physical plan:

!image-2021-12-07-17-39-10-548.png!

 

executed stage:

!image-2021-12-07-17-43-51-449.png!

 

!image-2021-12-07-17-43-19-401.png!


> reuse Exchange failed 
> ----------------------
>
>                 Key: SPARK-37567
>                 URL: https://issues.apache.org/jira/browse/SPARK-37567
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.1.1
>            Reporter: junbiao chen
>            Priority: Major
>              Labels: performance
>         Attachments: execution stage-query2.png
>
>
> Use case: query2 in TPC-DS. In the logical plan, three exchange subqueries scan the 
> same table "store_sales", and these subqueries satisfy the exchange reuse rule. I 
> confirmed that the exchange reuse rule is applied in the physical plan. But when 
> Spark executes the physical plan, I found that exchange reuse failed: the reused 
> exchange was executed twice.
>  


