Github user mridulm commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20002#discussion_r158108582
  
    --- Diff: core/src/main/scala/org/apache/spark/Partitioner.scala ---
    @@ -57,7 +60,8 @@ object Partitioner {
       def defaultPartitioner(rdd: RDD[_], others: RDD[_]*): Partitioner = {
         val rdds = (Seq(rdd) ++ others)
         val hasPartitioner = rdds.filter(_.partitioner.exists(_.numPartitions > 0))
    -    if (hasPartitioner.nonEmpty) {
    +    if (hasPartitioner.nonEmpty
     +      && isEligiblePartitioner(hasPartitioner.maxBy(_.partitions.length), rdds)) {
     --- End diff ---
    
    `hasPartitioner.maxBy(_.partitions.length)` is used repeatedly; pull that into a variable?
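    For illustration, a rough sketch of what that refactor could look like (the `hasMaxPartitioner` name and the simplified fallback branch are just illustrative, not the actual patch):

    ```scala
    // Sketch only: evaluate the maxBy once and reuse the result at every use site.
    val hasPartitioner = rdds.filter(_.partitioner.exists(_.numPartitions > 0))

    val hasMaxPartitioner: Option[RDD[_]] =
      if (hasPartitioner.nonEmpty) Some(hasPartitioner.maxBy(_.partitions.length)) else None

    if (hasMaxPartitioner.exists(isEligiblePartitioner(_, rdds))) {
      hasMaxPartitioner.get.partitioner.get
    } else {
      // Fallback simplified here; the real method also honors spark.default.parallelism.
      new HashPartitioner(rdds.map(_.partitions.length).max)
    }
    ```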

