Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20091#discussion_r162549412
  
    --- Diff: core/src/main/scala/org/apache/spark/Partitioner.scala ---
    @@ -67,31 +69,32 @@ object Partitioner {
           None
         }
     
    -    if (isEligiblePartitioner(hasMaxPartitioner, rdds)) {
    +    val defaultNumPartitions = if (rdd.context.conf.contains("spark.default.parallelism")) {
    +      rdd.context.defaultParallelism
    +    } else {
    +      rdds.map(_.partitions.length).max
    +    }
    +
    +    // If the existing max partitioner is an eligible one, or its partitions number is larger
    +    // than the default number of partitions, use the existing partitioner.
    +    if (hasMaxPartitioner.nonEmpty && (isEligiblePartitioner(hasMaxPartitioner.get, rdds) ||
    +        defaultNumPartitions < hasMaxPartitioner.get.getNumPartitions)) {
    --- End diff ---
    
    It depends on how you define "default". In this case, if we can benefit
    from reusing an existing partitioner, we should pick that partitioner. If we
    want to respect `spark.default.parallelism` strictly, we should not reuse an
    existing partitioner at all.
    
    For this particular case, picking the existing partitioner is clearly the
    better choice, and it was the behavior before #20002, so I'm +1 on this change.

