Github user liancheng commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12527#discussion_r60695719
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala ---
    @@ -131,4 +134,23 @@ class FileScanRDD(
       }
     
       override protected def getPartitions: Array[RDDPartition] = filePartitions.toArray
    +
    +  override protected def getPreferredLocations(split: RDDPartition): Seq[String] = {
    +    val files = split.asInstanceOf[FilePartition].files
    +
    +    // Computes the total number of bytes that can be retrieved from each host.
    +    val hostToNumBytes = mutable.HashMap.empty[String, Long]
    +    files.foreach { file =>
    +      file.locations.filter(_ != "localhost").foreach { host =>
    --- End diff --
    
    We should filter them out. A partition that doesn't have any preferred 
locations can be bundled with any other tasks and scheduled to any executor. 
But once it's marked with "localhost", delay scheduling may be triggered 
because its host name differs from those of the other tasks. Furthermore, 
"localhost" isn't a valid location for the `DAGScheduler` when it decides 
which executors should run the tasks.
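
    A minimal, self-contained sketch of the idea (this is not the PR's actual 
code: `FileBlock` is a hypothetical stand-in for Spark's `PartitionedFile`, 
and the top-3 cutoff is an assumption for illustration):

    ```scala
    // Hypothetical sketch: derive preferred hosts for a partition from the
    // block locations of its files, dropping "localhost" entries.
    case class FileBlock(locations: Seq[String], length: Long)

    object PreferredLocations {
      def compute(files: Seq[FileBlock]): Seq[String] = {
        val hostToNumBytes = scala.collection.mutable.HashMap.empty[String, Long]
        files.foreach { file =>
          // Skip "localhost": it never matches a real executor host, so keeping
          // it would only trigger delay scheduling for no benefit.
          file.locations.filter(_ != "localhost").foreach { host =>
            hostToNumBytes(host) = hostToNumBytes.getOrElse(host, 0L) + file.length
          }
        }
        // Prefer the hosts holding the most bytes of this partition
        // (top-3 cutoff is an assumption here).
        hostToNumBytes.toSeq.sortBy(-_._2).take(3).map(_._1)
      }
    }
    ```

    With this, a partition whose files live only on "localhost" reports no 
preferred locations at all, so the scheduler is free to place it anywhere.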

