[GitHub] [spark] cloud-fan commented on a change in pull request #32602: [SPARK-35455][SQL] Unify empty relation optimization between normal and AQE optimizer
cloud-fan commented on a change in pull request #32602:
URL: https://github.com/apache/spark/pull/32602#discussion_r780377367

File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/PropagateEmptyRelation.scala

```diff
@@ -137,3 +125,55 @@ object PropagateEmptyRelation extends Rule[LogicalPlan] with PredicateHelper wit
     }
   }
 }
+
+/**
+ * This rule runs in the normal optimizer and optimizes more cases
+ * compared to [[PropagateEmptyRelationBase]]:
+ * 1. Higher-node Logical Plans
+ *    - Union with all empty children.
+ * 2. Unary-node Logical Plans
+ *    - Project/Filter/Sample with all empty children.
+ *
+ * The reason why we don't apply this rule at the AQE optimizer side is: the benefit is not
+ * big enough, and it may introduce extra exchanges.
```

Review comment:

After more thought, I think this is a big performance issue if we can't propagate empty relations through Project/Filter, which are quite common. The risk of introducing new shuffles is relatively small compared to this.

@ulysses-you can we move all the logic to `PropagateEmptyRelationBase`? `PropagateEmptyRelation` should not have any extra logic.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
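The propagation the comment argues for can be illustrated with a small, self-contained sketch. Note this uses a hypothetical mini plan ADT (`Plan`, `Scan`, `propagateEmpty`, etc.), not Spark's actual `LogicalPlan` classes: an empty child collapses a Project/Filter, and a Union whose children are all empty collapses to an empty relation.

```scala
// Hypothetical mini logical-plan ADT, for illustration only -- these are not
// Spark's Catalyst classes.
sealed trait Plan { def isEmpty: Boolean = this == Empty }
case object Empty extends Plan
case class Scan(name: String) extends Plan
case class Project(child: Plan) extends Plan
case class Filter(child: Plan) extends Plan
case class Union(children: Seq[Plan]) extends Plan

// Bottom-up rewrite in the spirit of PropagateEmptyRelation: an empty child
// makes Project/Filter empty; a Union drops empty children, and becomes empty
// outright when all children are empty.
def propagateEmpty(plan: Plan): Plan = plan match {
  case Project(c) =>
    val child = propagateEmpty(c)
    if (child.isEmpty) Empty else Project(child)
  case Filter(c) =>
    val child = propagateEmpty(c)
    if (child.isEmpty) Empty else Filter(child)
  case Union(cs) =>
    val children = cs.map(propagateEmpty)
    if (children.forall(_.isEmpty)) Empty
    else Union(children.filterNot(_.isEmpty))
  case other => other
}
```

In the real rule the non-AQE optimizer can do this freely, which is why the thread leans toward moving the shared cases into `PropagateEmptyRelationBase`.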
[GitHub] [spark] cloud-fan commented on a change in pull request #32602: [SPARK-35455][SQL] Unify empty relation optimization between normal and AQE optimizer
cloud-fan commented on a change in pull request #32602:
URL: https://github.com/apache/spark/pull/32602#discussion_r642749109

File path: sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AQEOptimizer.scala

```diff
@@ -27,7 +28,9 @@ import org.apache.spark.util.Utils
  */
 class AQEOptimizer(conf: SQLConf) extends RuleExecutor[LogicalPlan] {
   private val defaultBatches = Seq(
-    Batch("Eliminate Unnecessary Join", Once, EliminateUnnecessaryJoin),
+    Batch("Propagate Empty Relations", Once,
+      AQEPropagateEmptyRelation,
+      UpdateAttributeNullability),
```

Review comment:

It's a bit different:

```
Project
  Shuffle Stage
```

For the above case, we don't want to optimize it, as the benefit is too small (removing a shuffle stage may cause regression).

```
Project
  Sort
    Shuffle Stage
```

For the above case, we will optimize Sort -> Shuffle Stage to an empty relation first. Then it makes sense to optimize further and optimize out the Project, as the shuffle stage is already gone. So adding `ConvertToLocalRelation` looks like the best solution here.
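The two plan shapes above can be sketched as a two-step rewrite. This is a toy model with hypothetical names (`Stage`, `propagate`, `convert`), not Spark's actual AQE rules: step 1 stands in for `AQEPropagateEmptyRelation` and only collapses an empty, already-materialized stage when a node like Sort sits directly on top of it; step 2 stands in for a `ConvertToLocalRelation`-style cleanup that then folds the remaining Project.

```scala
// Hypothetical mini plan ADT for the two shapes in the comment; these are not
// Spark's actual classes.
sealed trait Node
case class Stage(rows: Long) extends Node      // a materialized shuffle query stage
case class SortNode(child: Node) extends Node
case class ProjectNode(child: Node) extends Node
case object EmptyLocal extends Node            // stand-in for an empty LocalRelation

// Step 1: a Sort over an empty, already-materialized stage becomes an empty
// local relation. A bare Project directly over a stage is deliberately left
// alone, modelling the "benefit is too small" case from the comment.
def propagate(n: Node): Node = n match {
  case SortNode(Stage(0)) => EmptyLocal
  case SortNode(c)        => SortNode(propagate(c))
  case ProjectNode(c)     => ProjectNode(propagate(c))
  case other              => other
}

// Step 2: once the stage below is gone, fold the remaining Project into the
// empty local relation as well.
def convert(n: Node): Node = n match {
  case ProjectNode(EmptyLocal) => EmptyLocal
  case ProjectNode(c)          => ProjectNode(convert(c))
  case SortNode(c)             => SortNode(convert(c))
  case other                   => other
}
```

With both steps applied, `Project / Shuffle Stage` is untouched while `Project / Sort / Shuffle Stage` (with an empty stage) collapses fully, which is the ordering argument behind adding `ConvertToLocalRelation` to the batch.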
[GitHub] [spark] cloud-fan commented on a change in pull request #32602: [SPARK-35455][SQL] Unify empty relation optimization between normal and AQE optimizer
cloud-fan commented on a change in pull request #32602:
URL: https://github.com/apache/spark/pull/32602#discussion_r642596841

File path: sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AQEOptimizer.scala

```diff
@@ -27,7 +28,9 @@ import org.apache.spark.util.Utils
  */
 class AQEOptimizer(conf: SQLConf) extends RuleExecutor[LogicalPlan] {
   private val defaultBatches = Seq(
-    Batch("Eliminate Unnecessary Join", Once, EliminateUnnecessaryJoin),
+    Batch("Propagate Empty Relations", Once,
+      AQEPropagateEmptyRelation,
+      UpdateAttributeNullability),
```

Review comment:

shall we also put `ConvertToLocalRelation` in this batch?
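For readers unfamiliar with the `Batch(...)` structure being edited in the diff: a `RuleExecutor` groups rules into named batches and runs each batch either once or to a fixed point. The following is a toy generic executor sketching that idea, with hypothetical names (`Batch`, `execute`); it is not Spark's actual `RuleExecutor`.

```scala
// Toy sketch of a RuleExecutor-style batch: a named group of rewrite rules,
// run either a single time ("Once") or repeatedly until the plan stops
// changing (a fixed point).
case class Batch[T](name: String, once: Boolean, rules: Seq[T => T])

def execute[T](batches: Seq[Batch[T]], plan: T): T =
  batches.foldLeft(plan) { (p, batch) =>
    // One pass applies the batch's rules in order.
    def runOnce(cur: T): T = batch.rules.foldLeft(cur)((acc, r) => r(acc))
    if (batch.once) runOnce(p)
    else {
      // Iterate to a fixed point: stop when a full pass changes nothing.
      var prev = p
      var next = runOnce(prev)
      while (next != prev) { prev = next; next = runOnce(next) }
      next
    }
  }
```

Putting `ConvertToLocalRelation` into the same "Propagate Empty Relations" batch would simply add one more rule to that batch's sequence, so it runs alongside `AQEPropagateEmptyRelation` in the same pass.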