cloud-fan commented on a change in pull request #26409: [SPARK-29655][SQL] Enable adaptive execution should not add more ShuffleExchange URL: https://github.com/apache/spark/pull/26409#discussion_r344796903
########## File path: sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/EnsureRequirements.scala ##########

```
@@ -83,7 +83,25 @@ case class EnsureRequirements(conf: SQLConf) extends Rule[SparkPlan] {
       numPartitionsSet.headOption
     }

-    val targetNumPartitions = requiredNumPartitions.getOrElse(childrenNumPartitions.max)
+    // maxNumPostShufflePartitions is usually larger than numShufflePartitions,
+    // which causes some bucket map joins to lose efficacy after enabling adaptive execution.
+    // Please see SPARK-29655 for more details.
+    val expectedChildrenNumPartitions = if (conf.adaptiveExecutionEnabled) {
```

Review comment:

The logic is convoluted here, as we've already added the shuffles with `maxNumPostShufflePartitions`, and we then need to revert it. Can we make the implementation clearer? Basically we are picking the `targetNumPartitions` as:

1. If there are no non-shuffle children, keep the previous behavior.
2. For non-shuffle children, get the max num partitions among them.
   1. If the max num partitions is larger than `conf.numShufflePartitions`, pick it as `targetNumPartitions`.
   2. Otherwise, pick `conf.numShufflePartitions` as `targetNumPartitions`.
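The selection rule described above could be sketched roughly as follows. This is only an illustration of the reviewer's suggestion, not the actual `EnsureRequirements` code; the function name `pickTargetNumPartitions` and its parameters are hypothetical placeholders for values the rule computes inside the planner.

```scala
// Hypothetical sketch of the suggested targetNumPartitions selection.
// All names here are illustrative; they do not appear in the real
// EnsureRequirements implementation.
def pickTargetNumPartitions(
    requiredNumPartitions: Option[Int],          // a required distribution may fix the count
    shuffleChildrenNumPartitions: Seq[Int],      // partition counts of shuffle children
    nonShuffleChildrenNumPartitions: Seq[Int],   // partition counts of non-shuffle children
    numShufflePartitions: Int                    // stands in for conf.numShufflePartitions
): Int = {
  requiredNumPartitions.getOrElse {
    if (nonShuffleChildrenNumPartitions.isEmpty) {
      // 1. No non-shuffle children: keep the previous behavior,
      //    i.e. the max partition count among all children.
      shuffleChildrenNumPartitions.max
    } else {
      // 2. Take the max partition count among the non-shuffle children.
      val maxNonShuffle = nonShuffleChildrenNumPartitions.max
      // 2.1 / 2.2: pick it if it exceeds numShufflePartitions,
      // otherwise fall back to numShufflePartitions.
      math.max(maxNonShuffle, numShufflePartitions)
    }
  }
}
```

Note that steps 2.1 and 2.2 collapse into a single `math.max`, which is arguably the clearer formulation the reviewer is asking for.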