rdblue commented on a change in pull request #3203:
URL: https://github.com/apache/iceberg/pull/3203#discussion_r720593233



##########
File path: spark3/src/main/scala/org/apache/spark/sql/catalyst/utils/PlanUtils.scala
##########
@@ -54,14 +55,28 @@ object PlanUtils {
         Integer.TYPE)
       .build()
 
+  def numShufflePartitions(conf: SQLConf): Option[Int] = {
+    if (Spark3VersionUtil.isSpark30) {
+      Some(conf.numShufflePartitions)
+    } else {
+      // SPARK-32056: Coalesce partitions for repartition by expressions when AQE is enabled
+      None
+    }
+  }
+
   def createRepartitionByExpression(
       partitionExpressions: Seq[Expression],
       child: LogicalPlan,
-      numPartitions: Int): RepartitionByExpression = {
+      numPartitions: Option[Int]): RepartitionByExpression = {
     if (Spark3VersionUtil.isSpark30) {
-      repartitionByExpressionCtor.newInstance(partitionExpressions, child, Integer.valueOf(numPartitions))
+      numPartitions match {
+        case Some(num) =>
+          repartitionByExpressionCtor.newInstance(partitionExpressions, child, Integer.valueOf(num))
+        case None =>
+          throw new IllegalArgumentException("numPartitions is required before SPARK-32056")

Review comment:
       It doesn't make sense to me that there is a case where we would allow passing an incorrect value here. Rather than creating the `numShufflePartitions` method, checking for Spark 3.0 in both places, and then adding this check, why not just change the behavior so that the number of shuffle partitions is an int, but is ignored unless this is Spark 3.0?
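
       Roughly something like the sketch below (untested, just to illustrate the shape; it assumes this file compiles against Spark 3.1+, where `RepartitionByExpression` takes an `Option[Int]` partition count, and it reuses the existing `repartitionByExpressionCtor` and `Spark3VersionUtil` from this file):

```scala
import org.apache.spark.sql.catalyst.expressions.Expression
import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, RepartitionByExpression}

def createRepartitionByExpression(
    partitionExpressions: Seq[Expression],
    child: LogicalPlan,
    numPartitions: Int): RepartitionByExpression = {
  if (Spark3VersionUtil.isSpark30) {
    // Spark 3.0: the constructor takes an explicit Int count, so pass the
    // caller-supplied value through the reflective ctor built with Integer.TYPE
    repartitionByExpressionCtor.newInstance(partitionExpressions, child, Integer.valueOf(numPartitions))
  } else {
    // Spark 3.1+ (SPARK-32056): ignore numPartitions and pass None so that
    // AQE can coalesce partitions for repartition-by-expression
    RepartitionByExpression(partitionExpressions, child, None)
  }
}
```

       That way callers always pass an int, there is only one version check, and no invalid state is representable.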



