mridulm commented on a change in pull request #31876:
URL: https://github.com/apache/spark/pull/31876#discussion_r617231423



##########
File path: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala
##########
@@ -52,8 +53,20 @@ private[spark] sealed trait MapStatus {
    * partitionId of the task or taskContext.taskAttemptId is used.
    */
   def mapId: Long
-}
 
+  protected def loadLocation(in: ObjectInput): Location = {
+    val conf = SparkEnv.get.conf
+    conf.get(config.SHUFFLE_LOCATION_PLUGIN_CLASS).map { locClass =>
+      val loc = Utils.loadExtensions(
+        classOf[Location],
+        Seq(locClass),
+        conf
+      ).head
+      loc.readExternal(in)
+      loc

Review comment:
       Why do we need a fresh Location instance here?
   
   To answer the query: something like
   `SparkEnv.get.getShuffleLocationFactory.deserialize(in)` (or some such, defaulting
   to BlockManagerId) would turn this into a plain method dispatch, and would let the
   LocationFactory cache the (immutable) Location objects instead of reflectively
   instantiating a new one for every deserialized MapStatus.
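   
   For concreteness, a rough sketch of what I mean (the `ShuffleLocationFactory` trait,
   its wiring into SparkEnv, and `BlockManagerIdLocationFactory` are hypothetical names,
   not existing APIs; `Location` is the trait this PR introduces, and I'm assuming
   `BlockManagerId` implements it as the default):
   
   ```scala
   import java.io.ObjectInput
   
   import org.apache.spark.storage.BlockManagerId
   
   // Hypothetical factory, constructed once from the SparkConf when SparkEnv is
   // created. Deserializing a location is then a plain method dispatch instead of
   // a reflective Utils.loadExtensions call per MapStatus.
   trait ShuffleLocationFactory {
     def deserialize(in: ObjectInput): Location
   }
   
   // Hypothetical default. BlockManagerId.apply(in) already returns a cached,
   // immutable instance, so no new object is allocated per deserialized status
   // (assuming BlockManagerId implements the Location trait added by this PR).
   class BlockManagerIdLocationFactory extends ShuffleLocationFactory {
     override def deserialize(in: ObjectInput): Location = BlockManagerId(in)
   }
   
   // The call site in MapStatus.loadLocation would then reduce to something like:
   //   val loc = SparkEnv.get.getShuffleLocationFactory.deserialize(in)
   ```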



