HeartSaVioR commented on a change in pull request #31355:
URL: https://github.com/apache/spark/pull/31355#discussion_r576407822



##########
File path: sql/catalyst/src/main/java/org/apache/spark/sql/connector/distributions/ClusteredDistribution.java
##########
@@ -32,4 +32,13 @@
    * Returns clustering expressions.
    */
   Expression[] clustering();
+
+  /**
+   * Returns the number of partitions required by this write.
+   * <p>
+   * Implementations may want to override this if they require a specific number of partitions.
+   *
+   * @return the required number of partitions; non-positive values mean no requirement.
+   */
+  default int requiredNumPartitions() { return 0; }
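
For reference, here is a minimal sketch of how a connector could hook into the proposed default
(the class name and the `bucket` clustering column are hypothetical, and this assumes connectors
may implement `ClusteredDistribution` directly; returning the default `0` would mean "no
requirement"):

```java
import org.apache.spark.sql.connector.distributions.ClusteredDistribution;
import org.apache.spark.sql.connector.expressions.Expression;
import org.apache.spark.sql.connector.expressions.Expressions;

// Hypothetical distribution: cluster rows by a "bucket" column and also pin
// the write to exactly 10 partitions via the proposed default method.
public class FixedWidthClusteredDistribution implements ClusteredDistribution {
  @Override
  public Expression[] clustering() {
    return new Expression[] { Expressions.column("bucket") };
  }

  @Override
  public int requiredNumPartitions() {
    // Request exactly 10 write tasks; a non-positive value means no requirement.
    return 10;
  }
}
```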

Review comment:
       Yeah, I thought about this a bit more, and I agree that restricting the
"parallelism" is a valid use case regardless of distribution/ordering.
   
   "coalesce" vs "repartition" remains the question; when a distribution is
specified I expect "repartition" to take effect, so "repartition" is probably
more consistent. Also, from troubleshooting I have seen that coalesce can
introduce an edge case when used blindly: the sink ends up in the same stage
as the upstream (shuffled) work, which makes that whole stage quite slow (see
the sketch below).
   (That was `coalesce(1)`, and we simply fixed the problem by changing it to
`repartition(1)`.)
   
   I'd feel safer if the writer's requirement only affected the write
operation and did not affect other operations. What do you think?
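
   To make the `coalesce(1)` vs `repartition(1)` point concrete, here is a
minimal, self-contained sketch (the paths, sizes, and digest column are made
up for illustration, not taken from this PR): `coalesce(1)` is a narrow
dependency, so the expensive upstream work collapses into a single task,
whereas `repartition(1)` inserts a shuffle, so the upstream work keeps its
parallelism and only the final write runs in one task.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class CoalesceVsRepartition {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .master("local[8]")
        .appName("coalesce-vs-repartition")
        .getOrCreate();

    // Expensive upstream computation spread over 200 partitions.
    Dataset<Row> expensive = spark.range(0, 10_000_000L, 1, 200)
        .selectExpr("id", "sha2(cast(id as string), 256) as digest");

    // coalesce(1): narrow dependency, no shuffle boundary, so the whole
    // pipeline above (200 partitions' worth of work) runs as a single task.
    expensive.coalesce(1).write().mode("overwrite").parquet("/tmp/out-coalesce");

    // repartition(1): inserts a shuffle; the upstream work still runs with
    // 200 tasks and only the final write happens in a single task.
    expensive.repartition(1).write().mode("overwrite").parquet("/tmp/out-repartition");

    spark.stop();
  }
}
```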



