[ https://issues.apache.org/jira/browse/HDFS-202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890916#action_12890916 ]
Hairong Kuang commented on HDFS-202:
------------------------------------

Sorry, the new FileSystem/FileContext API should be:
{code}
public Iterator<LocatedFileStatus> listLocatedFileStatus(Path path, boolean isRecursive);
{code}

> Add a bulk FileSystem.getFileBlockLocations
> -------------------------------------------
>
>                 Key: HDFS-202
>                 URL: https://issues.apache.org/jira/browse/HDFS-202
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Arun C Murthy
>            Assignee: Hairong Kuang
>             Fix For: 0.22.0
>
>
> Currently, map-reduce applications (specifically file-based input formats) use FileSystem.getFileBlockLocations to compute splits. However, they are forced to call it once per file.
> The downsides are multiple:
> # Even with only a few thousand files to process, the number of RPCs quickly becomes noticeable.
> # The current implementation of getFileBlockLocations is too slow, since each call results in a 'search' in the namesystem. With a few thousand input files, that means as many RPCs and 'searches'.
> It would be nice to have a FileSystem.getFileBlockLocations that can take in a directory and return the block locations for all files in that directory. We could eliminate both the per-file RPC and the 'search' by replacing it with a single 'scan'.
> When I tested this for terasort, a moderate job with 8000 input files, the runtime halved from the current 8s to 4s. Clearly this is even more important for latency-sensitive applications...

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
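To illustrate the intent of the proposed API, here is a minimal, self-contained Java sketch of how an input format could compute splits from a single bulk listing instead of one getFileBlockLocations RPC per file. Path, BlockLocation, LocatedFileStatus, and listLocatedFileStatus below are simplified stand-ins for the Hadoop types, not the real implementations; the stub directory contents are made up for illustration.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class BulkListingSketch {

    // Simplified stand-ins for the Hadoop types (hypothetical, not the real classes).
    record Path(String name) {}
    record BlockLocation(String[] hosts, long offset, long length) {}

    // A LocatedFileStatus bundles a file's status with its block locations,
    // so no follow-up per-file getFileBlockLocations RPC is needed.
    record LocatedFileStatus(Path path, long len, BlockLocation[] locations) {}

    // Stand-in for the proposed FileSystem/FileContext method: one listing
    // call (a namesystem 'scan') yields statuses plus block locations for
    // every file under the directory.
    static Iterator<LocatedFileStatus> listLocatedFileStatus(Path dir, boolean isRecursive) {
        BlockLocation loc = new BlockLocation(new String[] {"host1", "host2"}, 0L, 128L);
        List<LocatedFileStatus> files = List.of(
            new LocatedFileStatus(new Path(dir.name() + "/part-00000"), 128L,
                                  new BlockLocation[] {loc}),
            new LocatedFileStatus(new Path(dir.name() + "/part-00001"), 128L,
                                  new BlockLocation[] {loc}));
        return files.iterator();
    }

    // Compute one split per block, with no per-file RPC: the iterator already
    // carries the locations gathered by the single bulk call.
    static List<String> computeSplits(Path inputDir) {
        List<String> splits = new ArrayList<>();
        Iterator<LocatedFileStatus> it = listLocatedFileStatus(inputDir, true);
        while (it.hasNext()) {
            LocatedFileStatus status = it.next();
            for (BlockLocation block : status.locations()) {
                splits.add(status.path().name() + "@" + block.offset());
            }
        }
        return splits;
    }

    public static void main(String[] args) {
        System.out.println(computeSplits(new Path("/input")));
    }
}
```

The design point is that the split loop touches the namenode once per directory listing rather than once per file, which is where the RPC and 'search' savings described in the issue come from.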