mridulm commented on code in PR #36775:
URL: https://github.com/apache/spark/pull/36775#discussion_r890765675


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala:
##########
@@ -253,6 +253,9 @@ class FileScanRDD(
                   // Throw FileNotFoundException even if `ignoreCorruptFiles` is true
                   case e: FileNotFoundException if !ignoreMissingFiles => throw e
                   case e @ (_: RuntimeException | _: IOException) if ignoreCorruptFiles =>
+                    if (e.getMessage.contains("Filesystem closed")) {

Review Comment:
   +1 to @JoshRosen's proposal here.
   Given that Hadoop throws a generic exception here, and given the lack of principled alternatives, walking the stack allows us to reasonably detect whether the cause is the Hadoop filesystem being closed.
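   To illustrate the idea being discussed (this is a hedged sketch, not the actual patch in the PR): since Hadoop surfaces a closed filesystem as a generic `IOException("Filesystem closed")`, one could combine a message check with a walk of the stack trace to confirm the frame originates from a Hadoop filesystem class. The helper name `ClosedFileSystemDetector` and the class-name prefixes below are illustrative assumptions.

   ```scala
   // Hypothetical sketch: detect whether a Throwable stems from a closed
   // Hadoop filesystem by checking both the message and the stack trace,
   // rather than trusting the message string alone.
   object ClosedFileSystemDetector {
     // Assumed Hadoop classes whose frames indicate a filesystem-level throw.
     private val HadoopFsClasses =
       Seq("org.apache.hadoop.hdfs.DFSClient", "org.apache.hadoop.fs.FileSystem")

     def isClosedFileSystemError(e: Throwable): Boolean = {
       // The message Hadoop uses when operating on a closed filesystem.
       val messageMatches =
         Option(e.getMessage).exists(_.contains("Filesystem closed"))
       // Walk the stack: require at least one frame from a Hadoop
       // filesystem class, so unrelated exceptions with a similar
       // message are not misclassified.
       val stackMatches = e.getStackTrace.exists { frame =>
         HadoopFsClasses.exists(prefix => frame.getClassName.startsWith(prefix))
       }
       messageMatches && stackMatches
     }
   }
   ```

   A caller in the `catch` block above could then rethrow (instead of swallowing under `ignoreCorruptFiles`) when this predicate holds, since a closed filesystem is a task-level failure rather than a corrupt file.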



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

