cloud-fan commented on a change in pull request #26434: [SPARK-29544] [SQL] optimize skewed partition based on data size
URL: https://github.com/apache/spark/pull/26434#discussion_r362863789
##########
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##########
@@ -343,15 +343,17 @@ private[spark] abstract class MapOutputTracker(conf: SparkConf) extends Logging
   /**
    * Called from executors to get the server URIs and output sizes for each shuffle block that
    * needs to be read from a given range of map output partitions (startPartition is included but
-   * endPartition is excluded from the range) and is produced by a specific mapper.
+   * endPartition is excluded from the range) and is produced by
+   * a range of mappers (startMapId, endMapId, startMapId is included and the endMapId is excluded).
    *
    * @return A sequence of 2-item tuples, where the first item in the tuple is a BlockManagerId,
    *         and the second item is a sequence of (shuffle block id, shuffle block size, map index)
    *         tuples describing the shuffle blocks that are stored at that block manager.
    */
-  def getMapSizesByMapIndex(
+  def getMapSizesByRange(
       shuffleId: Int,
-      mapIndex: Int,

Review comment:
   `mapIndex` is the more correct term. We should use `startMapIndex` and `endMapIndex` as the parameter names.
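   For clarity, a minimal Scala sketch of the signature this suggestion would lead to. The enclosing trait name (`MapSizesByRange`), the trailing `startPartition`/`endPartition` parameters, and the return type are assumptions inferred from the Scaladoc shown in the diff, not the PR's final code.

```scala
// Illustrative sketch only: trait name, the partition-range parameters, and the
// return type are assumptions; only the startMapIndex/endMapIndex naming is the
// point of the review comment.
import org.apache.spark.storage.{BlockId, BlockManagerId}

trait MapSizesByRange {

  /**
   * Server URIs and output sizes for shuffle blocks produced by a range of mappers.
   * startMapIndex is included and endMapIndex is excluded, mirroring the existing
   * startPartition (inclusive) / endPartition (exclusive) convention.
   */
  def getMapSizesByRange(
      shuffleId: Int,
      startMapIndex: Int,  // suggested name instead of startMapId
      endMapIndex: Int,    // suggested name instead of endMapId
      startPartition: Int,
      endPartition: Int): Iterator[(BlockManagerId, Seq[(BlockId, Long, Int)])]
}
```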