[ https://issues.apache.org/jira/browse/HADOOP-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-4565:
-------------------------------------

    Attachment: CombineMultiFile8.patch

Incorporates all review comments.

@Zheng: I removed the recursion. Can you please review this method once again? 
Thanks.
@Joydeep: I changed the definition of the constructor. I also implemented code 
to first create splits that are node-local (subject to maxSplitSize). Once that 
is done, all remaining blocks are combined to create rack-local splits; see the 
sketch below. The idea is that if you set maxSplitSize to the block size, you 
practically get the existing default behaviour for all node-local data.
@Enis: I moved the new files to mapred.lib and added JavaDocs.
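
For reference, here is a minimal, self-contained sketch of the two-pass idea 
described above. It is not the patch itself; the Block class, the host/rack 
fields and the packFull helper are illustrative stand-ins for the real 
block-location metadata, used only to show the node-local-then-rack-local 
grouping:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class LocalitySplitSketch {

  // Illustrative stand-in for a file block plus its location metadata.
  static class Block {
    final String path;
    final long length;
    final String host;
    final String rack;
    Block(String path, long length, String host, String rack) {
      this.path = path; this.length = length; this.host = host; this.rack = rack;
    }
  }

  static Map<String, List<Block>> groupBy(List<Block> blocks, Function<Block, String> key) {
    Map<String, List<Block>> m = new HashMap<>();
    for (Block b : blocks) {
      m.computeIfAbsent(key.apply(b), k -> new ArrayList<>()).add(b);
    }
    return m;
  }

  // Pack a group of co-located blocks, emitting a split every time the
  // accumulated size reaches maxSplitSize; blocks that do not fill a
  // complete split are returned so the caller can combine them later.
  static List<Block> packFull(List<Block> blocks, long maxSplitSize, List<List<Block>> out) {
    List<Block> current = new ArrayList<>();
    long size = 0;
    for (Block b : blocks) {
      current.add(b);
      size += b.length;
      if (size >= maxSplitSize) {
        out.add(current);
        current = new ArrayList<>();
        size = 0;
      }
    }
    return current;
  }

  static List<List<Block>> createSplits(List<Block> blocks, long maxSplitSize) {
    List<List<Block>> splits = new ArrayList<>();
    List<Block> leftovers = new ArrayList<>();

    // Pass 1: node-local splits. If maxSplitSize equals the block size,
    // most blocks fill a split on their own, which roughly mimics the
    // default one-split-per-block behaviour for node-local data.
    for (List<Block> hostBlocks : groupBy(blocks, b -> b.host).values()) {
      leftovers.addAll(packFull(hostBlocks, maxSplitSize, splits));
    }

    // Pass 2: combine whatever is left over into rack-local splits.
    for (List<Block> rackBlocks : groupBy(leftovers, b -> b.rack).values()) {
      List<Block> tail = packFull(rackBlocks, maxSplitSize, splits);
      if (!tail.isEmpty()) {
        splits.add(tail); // final partial split for this rack
      }
    }
    return splits;
  }
}

Under this sketch, only blocks that do not fill a node-local split in the first 
pass fall through to the rack-local pass, which is the behaviour the comment 
above is aiming for.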

> MultiFileInputSplit can use data locality information to create splits
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-4565
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4565
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: CombineMultiFile.patch, CombineMultiFile2.patch, 
> CombineMultiFile3.patch, CombineMultiFile4.patch, CombineMultiFile5.patch, 
> CombineMultiFile7.patch, CombineMultiFile8.patch
>
>
> The MultiFileInputFormat takes a set of paths and creates splits based on 
> file sizes. Each split contains a few files, and the splits are roughly equal 
> in size. It would be efficient if we could extend this InputFormat to create 
> splits such that all the blocks in one split are either node-local or 
> rack-local.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
