[ https://issues.apache.org/jira/browse/HADOOP-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran updated HADOOP-13208:
------------------------------------
    Attachment: HADOOP-13208-branch-2-007.patch

This is patch -007, which I'm numbering to keep it in sync with the HADOOP-13207 patch of the same version. This patch includes the filesystem.md doc changes from that patch and fixes the javadoc merge error that had crept in during rebasing.

Tested against: S3 Ireland.

> S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-13208
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13208
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.8.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>         Attachments: HADOOP-13208-branch-2-001.patch, HADOOP-13208-branch-2-007.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> A major cost in split calculation against object stores turns out to be listing the directory tree itself. That's because against S3, it takes S3A two HEAD requests and two LIST calls to list the contents of any directory path (2 HEADs + 1 LIST for getFileStatus(); a second LIST to query the contents).
> Listing a directory could be improved slightly by combining the final two listings. However, a listing of a directory tree will still be O(directories). In contrast, a recursive {{listFiles()}} operation should be implementable as a bulk listing of all descendant paths: one LIST operation per thousand descendants.
> As the result of this call is an iterator, the ongoing listing can be implemented within the iterator itself.
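For readers following along, below is a minimal sketch (not the attached patch) of how a flat, paged listing can back a {{RemoteIterator<LocatedFileStatus>}}: one listObjects request with a prefix and no delimiter covers the whole subtree, up to 1000 keys per page, and further pages are fetched lazily as the caller iterates. The class name {{FlatListingIterator}} and the {{toLocatedFileStatus()}} helper are assumptions made for illustration; the real S3AFileSystem wiring differs.

{code:java}
// Sketch only: page through a flat S3 listing (prefix with no delimiter),
// so one LIST request returns up to 1000 descendant objects regardless of
// how deep the directory tree is. Further pages are fetched lazily from
// inside the iterator as the caller consumes results.
import java.io.IOException;
import java.util.Iterator;
import java.util.NoSuchElementException;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class FlatListingIterator implements RemoteIterator<LocatedFileStatus> {

  private final AmazonS3 s3;
  private final String bucket;
  private final String prefix;              // directory key, ending in "/"
  private ObjectListing currentPage;
  private Iterator<S3ObjectSummary> currentBatch;

  public FlatListingIterator(AmazonS3 s3, String bucket, String prefix) {
    this.s3 = s3;
    this.bucket = bucket;
    this.prefix = prefix;
    // First LIST: no delimiter, so every key under the prefix (the whole
    // subtree) is eligible; S3 returns at most 1000 keys per page.
    this.currentPage = s3.listObjects(new ListObjectsRequest()
        .withBucketName(bucket)
        .withPrefix(prefix)
        .withMaxKeys(1000));
    this.currentBatch = currentPage.getObjectSummaries().iterator();
  }

  @Override
  public boolean hasNext() throws IOException {
    // Pull the next page only when the current one is exhausted; this is
    // the "ongoing listing implemented within the iterator itself".
    while (!currentBatch.hasNext() && currentPage.isTruncated()) {
      currentPage = s3.listNextBatchOfObjects(currentPage);
      currentBatch = currentPage.getObjectSummaries().iterator();
    }
    return currentBatch.hasNext();
  }

  @Override
  public LocatedFileStatus next() throws IOException {
    if (!hasNext()) {
      throw new NoSuchElementException("End of listing under " + prefix);
    }
    return toLocatedFileStatus(currentBatch.next());
  }

  // Hypothetical conversion helper: the real S3AFileSystem builds statuses
  // with its configured block size, owner, etc.; values are hardcoded here
  // purely for illustration.
  private LocatedFileStatus toLocatedFileStatus(S3ObjectSummary summary) {
    Path path = new Path("s3a://" + bucket + "/" + summary.getKey());
    FileStatus status = new FileStatus(summary.getSize(), false, 1,
        32 * 1024 * 1024, summary.getLastModified().getTime(), path);
    return new LocatedFileStatus(status, null);
  }
}
{code}

The point of this shape is that the number of LIST requests scales with the number of objects (one per thousand keys), not with the number of directories in the tree, which is where the split-calculation cost goes today.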