Github user steveloughran commented on the issue:

    https://github.com/apache/spark/pull/22952
  
    Hadoop FS glob filtering is pathologically bad on object stores: the pattern
    is expanded one path element at a time, with a separate list call for every
    directory matched so far. I have tried in the past to do an ~O(1) impl for S3
    ([HADOOP-13371](https://issues.apache.org/jira/browse/HADOOP-13371)). While I
    could produce one that was efficient for test cases, it suffered in the use
    case "selective pattern match at the top of a very wide tree", where you
    really do want to filter down aggressively for the topmost directory or
    directories.
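
    As a rough worked example (all numbers invented for illustration): for a
    pattern like `s3a://bucket/logs/*/*/*.gz` over five years of daily
    directories, the level-by-level glob issues one LIST per matched directory:
    1 for `logs/`, 5 for the years, then ~1,825 for the day directories, so
    ~1,800+ round trips. A flat recursive listing of the same subtree is just
    the paged LIST of every key under `logs/`; at ~10 files per day that is
    ~18,250 keys, under 20 calls at S3's 1,000 keys per page.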
    
    I think there you'd want a threshold for how many path elements down the
    tree you switch from per-directory ls + match to the full deep
    listFiles(recursive) scan; something like the sketch below.
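
    A minimal sketch of that heuristic, in Scala against the stock Hadoop
    `FileSystem` API. `ThresholdGlob`, `glob` and `depthThreshold` are invented
    names, not anything in Hadoop or Spark; `GlobPattern` is the matcher that
    Hadoop's own `GlobFilter` uses internally.

    ```scala
    import org.apache.hadoop.fs.{FileStatus, FileSystem, GlobPattern, Path}

    import scala.collection.mutable.ArrayBuffer

    object ThresholdGlob {

      private def isGlob(elem: String): Boolean = elem.exists("*?[{".contains(_))

      // depthThreshold is a made-up tuning knob: wildcards shallower than it
      // keep the stock level-by-level glob, which prunes a wide tree early;
      // deeper wildcards switch to one flat recursive listing plus client-side
      // matching (the ~O(1)-round-trips idea from HADOOP-13371).
      def glob(fs: FileSystem, pattern: Path, depthThreshold: Int = 2): Seq[Path] = {
        val elems = pattern.toUri.getPath.split("/").filter(_.nonEmpty)
        val firstWild = elems.indexWhere(isGlob)

        if (firstWild < 0) {
          // No wildcard at all: a single existence probe.
          if (fs.exists(pattern)) Seq(pattern) else Nil
        } else if (firstWild < depthThreshold) {
          // Selective pattern near the top of a very wide tree: let the
          // directory-by-directory glob filter down aggressively.
          Option(fs.globStatus(pattern)).getOrElse(Array.empty[FileStatus])
            .map(_.getPath).toSeq
        } else {
          // Wildcard deep in the tree: one recursive listing (a few paged LIST
          // calls on S3), then match each path element client-side. Matching
          // per element keeps plain glob semantics: '*' never crosses a '/'.
          var base = pattern
          (firstWild until elems.length).foreach(_ => base = base.getParent)
          val compiled = elems.map(new GlobPattern(_))
          val out = ArrayBuffer.empty[Path]
          val it = fs.listFiles(base, true)
          while (it.hasNext) {
            val p = it.next().getPath
            val parts = p.toUri.getPath.split("/").filter(_.nonEmpty)
            if (parts.length == compiled.length &&
                parts.zip(compiled).forall { case (e, g) => g.matches(e) }) {
              out += p
            }
          }
          out.toSeq
        }
      }
    }
    ```

    The right crossover would need measuring against a real store: too deep a
    threshold and the top-of-tree case keeps paying one LIST per directory, too
    shallow and the recursive scan enumerates an enormous subtree.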
    
    Not looked at it for ages. If someone does want to play there, you're
    welcome to take it up.

