Github user steveloughran commented on the issue:

    https://github.com/apache/spark/pull/14038
  
    Path filtering in Hadoop FS calls on anything other than the filename is very 
suboptimal; in #14731 you can see where the filtering has been postponed until 
after the listing, when the full `FileStatus` entry list has been returned.
    
    As filtering is the last operation in the various listFiles calls, there's 
no penalty to doing the filtering after the results come in. In 
`FileSystem.globStatus()` the filtering takes place after the glob match but 
during the scan, so a larger list will be built and returned, but that is all.
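
    To illustrate the "filter after the listing" pattern, roughly (this is just 
a sketch of mine, not code from either PR; the path handling and the `_`-prefix 
exclusion are made up):

    ```scala
    import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}

    // List once, then filter the returned FileStatus entries on the client side.
    def listVisible(fs: FileSystem, dir: Path): Array[FileStatus] =
      fs.listStatus(dir)                                    // one listing call
        .filter(st => !st.getPath.getName.startsWith("_"))  // filter afterwards
    ```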
    
    I think a new filter should be executed after these operations, taking the 
`FileStatus` object. This provides a superset of the filtering possible within the 
Hadoop calls (timestamp, file type, ...), with no performance penalty. It's more 
flexible than the simple `accept(path)`, and it guarantees that nobody using 
the API will implement a suboptimal filter.
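
    Something like this, roughly (the trait name and the example predicate are 
mine, purely illustrative):

    ```scala
    import org.apache.hadoop.fs.FileStatus

    // Hypothetical status-based filter interface; names are illustrative.
    trait FileStatusFilter {
      def accept(status: FileStatus): Boolean
    }

    // Example: keep only plain files modified after some cutoff timestamp.
    class RecentFilesFilter(cutoffMillis: Long) extends FileStatusFilter {
      override def accept(status: FileStatus): Boolean =
        status.isFile && status.getModificationTime >= cutoffMillis
    }
    ```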
    
    Consider also taking a predicate `FileStatus => Boolean`, rather than 
requiring callers to implement new classes. It can be fed straight into 
`Iterator.filter()`.
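
    e.g. something along these lines (just a sketch; the helper name is made up):

    ```scala
    import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}

    // A plain FileStatus => Boolean can be applied directly with Iterator.filter;
    // callers pass a lambda instead of implementing a new class.
    def filteredListing(fs: FileSystem, dir: Path,
                        pred: FileStatus => Boolean): Iterator[FileStatus] =
      fs.listStatus(dir).iterator.filter(pred)

    // Usage, e.g. keep files over 1 MB:
    //   filteredListing(fs, new Path("/data"), _.getLen > 1024 * 1024)
    ```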
    
    I note you are making extensive use of `listLeafFiles`; that's a 
potentially inefficient implementation against object stores. Keep using it; 
I'll patch it to use `FileSystem.listFiles(path, true)` for in-FS recursion 
and O(files/5000) listing against S3A in Hadoop 2.8, and eventually Azure and Swift.
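
    For reference, the `listFiles`-based recursion looks roughly like this 
(sketch only; the helper name is mine):

    ```scala
    import org.apache.hadoop.fs.{FileSystem, LocatedFileStatus, Path}
    import scala.collection.mutable.ArrayBuffer

    // FileSystem.listFiles(path, true) hands recursion to the FS implementation,
    // so S3A (Hadoop 2.8+) can page through the listing rather than doing a
    // client-side treewalk.
    def listLeaves(fs: FileSystem, dir: Path): Seq[LocatedFileStatus] = {
      val it  = fs.listFiles(dir, true)   // RemoteIterator[LocatedFileStatus]
      val out = new ArrayBuffer[LocatedFileStatus]()
      while (it.hasNext) out += it.next()
      out
    }
    ```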
