cloud-fan commented on code in PR #45125:
URL: https://github.com/apache/spark/pull/45125#discussion_r1515561516


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/RewriteWithExpression.scala:
##########
@@ -34,7 +34,7 @@ import org.apache.spark.sql.catalyst.trees.TreePattern.{COMMON_EXPR_REF, WITH_EXPRESSION}
  */
 object RewriteWithExpression extends Rule[LogicalPlan] {
   override def apply(plan: LogicalPlan): LogicalPlan = {
-    plan.transformWithPruning(_.containsPattern(WITH_EXPRESSION)) {
+    plan.transformDownWithSubqueriesAndPruning(_.containsPattern(WITH_EXPRESSION)) {

Review Comment:
   Then this rule becomes O(n^2), since every level of subqueries re-runs it for all subqueries nested below that level. Thinking about it more, why do we handle the count bug in two places that are so far apart?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
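The O(n^2) concern can be illustrated with a toy model (all names here are hypothetical, not Spark APIs): if a subquery-aware traversal is effectively re-invoked once per subquery nesting level, and each invocation descends into everything below it, the total node visits grow quadratically with nesting depth.

```scala
// Toy sketch, NOT Spark code: models why re-running a subquery-descending
// rule at every subquery level yields O(n^2) total work.
object SubqueryTraversalSketch {
  // Minimal plan node: each node may host one nested subquery plan.
  final case class Plan(depth: Int, subquery: Option[Plan])

  // Visit a plan and recursively descend into its subquery,
  // mimicking a transform-with-subqueries style traversal.
  def countVisits(p: Plan): Int =
    1 + p.subquery.map(countVisits).getOrElse(0)

  // If the rule fires once per subquery level, and each firing re-traverses
  // all levels beneath it, total visits are n + (n-1) + ... + 1 = O(n^2).
  def runRuleAtEveryLevel(root: Plan): Int = {
    var total = 0
    var cur: Option[Plan] = Some(root)
    while (cur.isDefined) {
      total += countVisits(cur.get) // re-traverses everything below this level
      cur = cur.get.subquery
    }
    total
  }

  // Build a chain of n+1 plans, each nesting the next as its subquery.
  def nested(n: Int): Plan =
    (1 to n).foldLeft(Plan(0, None))((child, d) => Plan(d, Some(child)))

  def main(args: Array[String]): Unit = {
    // 10 levels of nesting -> 10 + 9 + ... + 1 = 55 visits for only 10 nodes.
    println(runRuleAtEveryLevel(nested(9))) // prints 55
  }
}
```

A single top-level invocation that handles all nested subqueries in one pass would visit each node once (10 visits here), which is the behavior the review is asking to preserve.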