[ https://issues.apache.org/jira/browse/SPARK-20560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993323#comment-15993323 ]
Steve Loughran commented on SPARK-20560:
----------------------------------------

To follow this up, I've now got a test which verifies that (a) s3a returns "localhost" and (b) Spark discards it. This will catch any regressions in the s3a client.

{code}
val source = CSV_TESTFILE.get
val fs = getFilesystem(source)
val blockLocations = fs.getFileBlockLocations(source, 0, 1)
assert(1 === blockLocations.length, s"block location array size wrong: ${blockLocations}")
val hosts = blockLocations(0).getHosts
assert(1 === hosts.length, s"wrong host size ${hosts}")
assert("localhost" === hosts(0), "hostname")

val path = source.toString
val rdd = sc.hadoopFile[LongWritable, Text, TextInputFormat](path, 1)
val input = rdd.asInstanceOf[HadoopRDD[_, _]]
val partitions = input.getPartitions
val locations = input.getPreferredLocations(partitions.head)
assert(locations.isEmpty, s"Location list not empty ${locations}")
{code}

> Review Spark's handling of filesystems returning "localhost" in
> getFileBlockLocations
> -------------------------------------------------------------------------------------
>
>                 Key: SPARK-20560
>                 URL: https://issues.apache.org/jira/browse/SPARK-20560
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 2.1.0
>            Reporter: Steve Loughran
>            Priority: Minor
>
> Some filesystems (S3a, Azure WASB) return "localhost" as the response to
> {{FileSystem.getFileBlockLocations(path)}}. If this is then used as the
> preferred host when scheduling work, there's a risk that work will be queued
> on one host rather than spread across the cluster.
> HIVE-14060 and TEZ-3291 have both seen this in their schedulers.
> I don't know whether Spark is affected; someone needs to look at the code and
> maybe write some tests.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
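For illustration only (not Spark's actual code), here is a minimal, self-contained sketch of the defensive filtering the test above verifies: any "localhost" entries reported by a filesystem client are dropped before the hosts are used as preferred locations, so the scheduler treats the split as having no locality preference. The object and method names (`PreferredLocations`, `filterLocalhost`) are hypothetical.

{code}
// Hypothetical sketch: drop "localhost" (and empty) hostnames so that
// object stores like s3a or WASB do not create a bogus locality
// preference that queues all work on one host.
object PreferredLocations {
  def filterLocalhost(hosts: Seq[String]): Seq[String] =
    hosts.filter(h => h.nonEmpty && h != "localhost")

  def main(args: Array[String]): Unit = {
    // What an object-store client typically reports:
    val fromS3a = Seq("localhost")
    // What a real HDFS cluster might report:
    val fromHdfs = Seq("node1.example.com", "node2.example.com")

    // No preference survives for the object store...
    assert(filterLocalhost(fromS3a).isEmpty)
    // ...while genuine block locations pass through unchanged.
    assert(filterLocalhost(fromHdfs) == fromHdfs)
  }
}
{code}

With this in place, a split whose only reported host is "localhost" yields an empty preferred-location list, matching the `locations.isEmpty` assertion in the test.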