rrusso2007 commented on a change in pull request #24672: [SPARK-27801] InMemoryFileIndex.listLeafFiles should use listLocatedStatus for DistributedFileSystem
URL: https://github.com/apache/spark/pull/24672#discussion_r286323325
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InMemoryFileIndex.scala
 ##########
 @@ -274,7 +275,21 @@ object InMemoryFileIndex extends Logging {
     // [SPARK-17599] Prevent InMemoryFileIndex from failing if path doesn't exist
     // Note that statuses only include FileStatus for the files and dirs directly under path,
     // and does not include anything else recursively.
-    val statuses = try fs.listStatus(path) catch {
+    val statuses: Array[FileStatus] = try {
+      fs match {
+        // DistributedFileSystem overrides listLocatedStatus to make 1 single call to namenode
+        // to retrieve the file status with the file block location. The reason to still fallback
+        // to listStatus is because the default implementation would potentially throw a
+        // FileNotFoundException which is better handled by doing the lookups manually below.
 
 Review comment:
   I believe that should not matter here: I tried to preserve the original logic (which would now be your new logic) for the listStatus path, which is where that issue exists. The only real change is using DistributedFileSystem.listLocatedStatus, and there should be no case where a FileNotFoundException happens on that path, since everything returned in the array is already a LocatedFileStatus.
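
   To make the reasoning concrete, here is a minimal sketch of the pattern under discussion, not the PR's exact diff: the object and method names (ListLeafFilesSketch, listTopLevelStatuses) are invented for illustration, and the catch block omits the warning the real code logs.

```scala
import java.io.FileNotFoundException

import org.apache.hadoop.fs.{FileStatus, FileSystem, LocatedFileStatus, Path}
import org.apache.hadoop.hdfs.DistributedFileSystem

object ListLeafFilesSketch {
  // Hypothetical helper: list the statuses directly under `path`, using the
  // single-RPC listLocatedStatus call when the filesystem is HDFS.
  def listTopLevelStatuses(fs: FileSystem, path: Path): Array[FileStatus] = {
    try {
      fs match {
        // DistributedFileSystem overrides listLocatedStatus, so one namenode
        // round trip returns LocatedFileStatus entries that already carry
        // block locations; nothing here triggers a per-file FileNotFoundException.
        case _: DistributedFileSystem =>
          val remoteIter = fs.listLocatedStatus(path)
          val iter = new Iterator[LocatedFileStatus] {
            override def hasNext: Boolean = remoteIter.hasNext
            override def next(): LocatedFileStatus = remoteIter.next()
          }
          iter.toArray[FileStatus]
        // Every other filesystem keeps the original listStatus call, whose
        // FileNotFoundException is handled by the catch below.
        case _ => fs.listStatus(path)
      }
    } catch {
      // [SPARK-17599] behaviour: a directory that vanished between discovery
      // and listing yields an empty result instead of failing the scan.
      case _: FileNotFoundException => Array.empty[FileStatus]
    }
  }
}
```

   The HDFS branch wraps the RemoteIterator in a Scala Iterator so it can be materialized as an Array[FileStatus]; because every element is already a LocatedFileStatus with block locations attached, no follow-up getFileBlockLocations lookup (and thus no per-file FileNotFoundException) is needed.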
