[ https://issues.apache.org/jira/browse/SPARK-37652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
mcdull_zhang updated SPARK-37652:
---------------------------------
Description: 
The `OptimizeSkewedJoin` rule only takes effect when the plan is a join over two ShuffleQueryStageExec nodes. `Union` can break that assumption. For example, consider the following plans:

*scenario 1*
{noformat}
Union
  SMJ
    ShuffleQueryStage
    ShuffleQueryStage
  SMJ
    ShuffleQueryStage
    ShuffleQueryStage
{noformat}

*scenario 2*
{noformat}
Union
  SMJ
    ShuffleQueryStage
    ShuffleQueryStage
  HashAggregate
{noformat}

When the data in one or more of the SMJs above is skewed, it currently cannot be optimized. It would be better to support partial optimization under `Union`.

  was:
The `OptimizeSkewedJoin` rule only takes effect when the plan has two ShuffleQueryStageExec nodes. `Union` can break that assumption. For example, the following plan:
{code:none}
Union
  SMJ
  SMJ
{code}


> Support optimize skewed join through union
> ------------------------------------------
>
>                 Key: SPARK-37652
>                 URL: https://issues.apache.org/jira/browse/SPARK-37652
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 3.2.0
>            Reporter: mcdull_zhang
>            Priority: Minor
>
> The `OptimizeSkewedJoin` rule only takes effect when the plan is a join over
> two ShuffleQueryStageExec nodes. `Union` can break that assumption. For
> example, consider the following plans:
> *scenario 1*
> {noformat}
> Union
>   SMJ
>     ShuffleQueryStage
>     ShuffleQueryStage
>   SMJ
>     ShuffleQueryStage
>     ShuffleQueryStage
> {noformat}
> *scenario 2*
> {noformat}
> Union
>   SMJ
>     ShuffleQueryStage
>     ShuffleQueryStage
>   HashAggregate
> {noformat}
> When the data in one or more of the SMJs above is skewed, it currently
> cannot be optimized.
> It would be better to support partial optimization under `Union`.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
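The limitation described above can be illustrated with a small tree-matching sketch. This is not Spark's actual implementation (the real rule lives in `OptimizeSkewedJoin` and operates on physical plans); the `Plan`, `is_skew_join_candidate`, and `optimize` names are hypothetical, and the code only models the idea that a rule matching `SMJ(ShuffleQueryStage, ShuffleQueryStage)` at the root can still be applied per-child when the root is a `Union`:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Plan:
    """A toy physical-plan node: a name plus child nodes."""
    name: str
    children: List["Plan"] = field(default_factory=list)

def is_skew_join_candidate(p: Plan) -> bool:
    # Models the rule's assumption: a sort-merge join whose two
    # children are both materialized shuffle query stages.
    return (p.name == "SMJ"
            and len(p.children) == 2
            and all(c.name == "ShuffleQueryStage" for c in p.children))

def optimize(p: Plan) -> Plan:
    # Proposed behavior: when the root is a Union, try each child
    # independently, so a skewed SMJ under Union is still handled
    # even if a sibling (e.g. HashAggregate) does not match.
    if p.name == "Union":
        return Plan("Union", [optimize(c) for c in p.children])
    if is_skew_join_candidate(p):
        return Plan("OptimizedSMJ", p.children)
    return p

# Scenario 2 from the description: Union(SMJ, HashAggregate).
stage = Plan("ShuffleQueryStage")
plan = Plan("Union", [
    Plan("SMJ", [stage, stage]),
    Plan("HashAggregate"),
])
result = optimize(plan)
# The SMJ child is optimized; the HashAggregate child is left untouched.
```

Without the `Union` branch in `optimize`, the root node fails the pattern and neither SMJ would be touched, which is the behavior the issue asks to improve.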