[GitHub] [spark] cloud-fan commented on a diff in pull request #38557: [SPARK-38959][SQL][FOLLOWUP] Optimizer batch `PartitionPruning` should optimize subqueries

2022-11-08 Thread GitBox


cloud-fan commented on code in PR #38557:
URL: https://github.com/apache/spark/pull/38557#discussion_r1017361159


##
sql/core/src/main/scala/org/apache/spark/sql/execution/dynamicpruning/RowLevelOperationRuntimeGroupFiltering.scala:
##
@@ -89,10 +88,8 @@ case class RowLevelOperationRuntimeGroupFiltering(optimizeSubqueries: Rule[Logic
   buildKeys: Seq[Attribute],
   pruningKeys: Seq[Attribute]): Expression = {
 
-val buildQuery = Project(buildKeys, matchingRowsPlan)
-val dynamicPruningSubqueries = pruningKeys.zipWithIndex.map { case (key, index) =>
-  DynamicPruningSubquery(key, buildQuery, buildKeys, index, onlyInBroadcast = false)
-}
-dynamicPruningSubqueries.reduce(And)
+val buildQuery = Aggregate(buildKeys, buildKeys, matchingRowsPlan)

Review Comment:
   My rationale is that what we really need here is a subquery; this is completely different from dynamic partition pruning. One limitation is that DS v2 runtime filter pushdown only applies to `DynamicPruningExpression`. We could probably fix that and accept normal non-correlated subqueries as well.
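To make the diff above concrete, here is a minimal self-contained sketch contrasting the two constructions. The case classes are simplified stand-ins for Spark's catalyst nodes (keys are plain strings instead of `Attribute`s), and `InSubquery` is a hypothetical wrapper standing in for however the single subquery gets consumed:

```scala
// Simplified stand-ins for catalyst plan/expression nodes (assumptions,
// not Spark's real classes).
sealed trait Plan
case class Relation(name: String) extends Plan
case class Project(projectList: Seq[String], child: Plan) extends Plan
case class Aggregate(grouping: Seq[String], output: Seq[String], child: Plan) extends Plan

sealed trait Expr
case class DynamicPruningSubquery(key: String, buildQuery: Plan,
    buildKeys: Seq[String], index: Int, onlyInBroadcast: Boolean) extends Expr
case class InSubquery(keys: Seq[String], query: Plan) extends Expr // hypothetical wrapper
case class And(left: Expr, right: Expr) extends Expr

// Old approach: one DPP subquery per pruning key, ANDed together.
def oldFilter(buildKeys: Seq[String], pruningKeys: Seq[String],
    matchingRowsPlan: Plan): Expr = {
  val buildQuery = Project(buildKeys, matchingRowsPlan)
  pruningKeys.zipWithIndex.map { case (key, index) =>
    DynamicPruningSubquery(key, buildQuery, buildKeys, index, onlyInBroadcast = false): Expr
  }.reduce(And(_, _))
}

// New approach: grouping by the build keys makes the build side produce
// distinct key tuples, wrapped as a single ordinary non-correlated subquery.
def newFilter(buildKeys: Seq[String], pruningKeys: Seq[String],
    matchingRowsPlan: Plan): Expr =
  InSubquery(pruningKeys, Aggregate(buildKeys, buildKeys, matchingRowsPlan))
```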



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] cloud-fan commented on a diff in pull request #38557: [SPARK-38959][SQL][FOLLOWUP] Optimizer batch `PartitionPruning` should optimize subqueries

2022-11-08 Thread GitBox


cloud-fan commented on code in PR #38557:
URL: https://github.com/apache/spark/pull/38557#discussion_r1017318193


##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala:
##
@@ -320,6 +320,9 @@ abstract class Optimizer(catalogManager: CatalogManager)
 }
 def apply(plan: LogicalPlan): LogicalPlan = plan.transformAllExpressionsWithPruning(
   _.containsPattern(PLAN_EXPRESSION), ruleId) {
+  // Do not optimize DPP subquery, as it was created from optimized plan and we should not
+  // optimize it again, to save optimization time and avoid breaking broadcast/subquery reuse.
+  case d: DynamicPruningSubquery => d

Review Comment:
   Yes, because this PR adds `OptimizeSubqueries` to the batch `PartitionPruning` and we should not break https://github.com/apache/spark/pull/33664
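A toy model of the skip rule above (hypothetical simplified types, not the real `Optimizer`): the subquery optimizer recurses into ordinary subquery plans but returns DPP subqueries untouched, since their plans were already optimized when they were created:

```scala
// Hypothetical miniature of OptimizeSubqueries: a "plan" is a list of
// operator names, and the only optimizer rule removes no-op Project nodes.
sealed trait SubqueryExpr { def plan: List[String] }
case class ScalarSubquery(plan: List[String]) extends SubqueryExpr
case class DynamicPruningSubquery(plan: List[String]) extends SubqueryExpr

def optimizePlan(p: List[String]): List[String] = p.filterNot(_ == "NoopProject")

def optimizeSubqueries(exprs: Seq[SubqueryExpr]): Seq[SubqueryExpr] = exprs.map {
  // DPP subqueries were built from an already-optimized plan; re-running the
  // optimizer would waste time and could make the plan diverge from the copy
  // it was derived from, breaking broadcast/subquery reuse.
  case d: DynamicPruningSubquery => d
  case s: ScalarSubquery => ScalarSubquery(optimizePlan(s.plan))
}
```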







[GitHub] [spark] cloud-fan commented on a diff in pull request #38557: [SPARK-38959][SQL][FOLLOWUP] Optimizer batch `PartitionPruning` should optimize subqueries

2022-11-08 Thread GitBox


cloud-fan commented on code in PR #38557:
URL: https://github.com/apache/spark/pull/38557#discussion_r1017300121


##
sql/core/src/main/scala/org/apache/spark/sql/execution/dynamicpruning/RowLevelOperationRuntimeGroupFiltering.scala:
##
@@ -89,10 +88,8 @@ case class RowLevelOperationRuntimeGroupFiltering(optimizeSubqueries: Rule[Logic
   buildKeys: Seq[Attribute],
   pruningKeys: Seq[Attribute]): Expression = {
 
-val buildQuery = Project(buildKeys, matchingRowsPlan)
-val dynamicPruningSubqueries = pruningKeys.zipWithIndex.map { case (key, index) =>
-  DynamicPruningSubquery(key, buildQuery, buildKeys, index, onlyInBroadcast = false)
-}
-dynamicPruningSubqueries.reduce(And)
+val buildQuery = Aggregate(buildKeys, buildKeys, matchingRowsPlan)

Review Comment:
   I don't see any downside. We can only reuse a broadcast if the DPP filter is derived from a join, which doesn't apply here. Better still, this is now a normal subquery, so we can trigger subquery reuse, which is not possible for DPP subqueries.
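A toy sketch of the reuse point (hypothetical types, not Spark's physical-plan machinery): subquery reuse deduplicates subqueries whose wrapped plans are structurally equal, so a shared build query only executes once.

```scala
// Hypothetical miniature of subquery reuse: subqueries are keyed by the
// (string-rendered) plan they wrap, and every later occurrence of an equal
// plan is replaced by the first one, so each distinct plan runs only once.
case class SubqueryExec(id: Int, plan: String)

def reuseSubqueries(subqueries: Seq[SubqueryExec]): Seq[SubqueryExec] = {
  val firstByPlan = scala.collection.mutable.Map.empty[String, SubqueryExec]
  subqueries.map(s => firstByPlan.getOrElseUpdate(s.plan, s))
}
```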


