[ https://issues.apache.org/jira/browse/HADOOP-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-4565:
-------------------------------------

    Attachment: CombineMultiFile7.patch

1. Made the setXXXsize methods override the parameters specified in the config
file. (The earlier version took the more stringent of the two constraints when
a value was set both in the config file and via the setXXXsize methods.) A
sketch of the intended precedence follows this list.
2. I did not change OneFileInfo because it seems to encapsulate the
information for each file; however, I cleaned up one unused field in it.
OneFileInfo is used only by the client at the time of creating splits.
3. Removed the doIterate variable and replaced it with a loop that runs as
long as there are blocks left to process.
4. CombineFileSplit is now a superclass of MultiFileSplit. This allows
CombineFileRecordReader to operate on a MultiFileSplit as well; see the upcast
sketch after this list.
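
As a minimal sketch of the precedence in (1): the method name setMaxSplitSize
and the property name "mapred.max.split.size" below are my assumptions for the
sake of illustration, not names taken from the patch.

    import org.apache.hadoop.mapred.JobConf;

    public class SplitSizeConfigSketch {
      // 0 means "not set through the API".
      private long maxSplitSize = 0;

      // A value set through the API now takes precedence over the config file.
      public void setMaxSplitSize(long maxSplitSize) {
        this.maxSplitSize = maxSplitSize;
      }

      // Falls back to the (assumed) config property only when the setter was
      // never called; the earlier behaviour took the more stringent of the two.
      long getMaxSplitSize(JobConf job) {
        return (maxSplitSize != 0)
            ? maxSplitSize
            : job.getLong("mapred.max.split.size", 0);
      }
    }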
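
And a sketch for (4). The package names below reflect where these classes seem
to live after the patch and are assumptions on my part; the point is simply
that the new inheritance relationship lets existing MultiFileSplit objects flow
into code written against CombineFileSplit, such as CombineFileRecordReader.

    import org.apache.hadoop.mapred.MultiFileSplit;
    import org.apache.hadoop.mapred.lib.CombineFileSplit;

    public class SplitCompatibilitySketch {
      // After this patch MultiFileSplit is-a CombineFileSplit, so anything
      // that consumes a CombineFileSplit (CombineFileRecordReader included)
      // also accepts a MultiFileSplit.
      static CombineFileSplit asCombineSplit(MultiFileSplit split) {
        return split;  // plain upcast; no conversion needed
      }
    }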

> MultiFileInputSplit can use data locality information to create splits
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-4565
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4565
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: CombineMultiFile.patch, CombineMultiFile2.patch, 
> CombineMultiFile3.patch, CombineMultiFile4.patch, CombineMultiFile5.patch, 
> CombineMultiFile7.patch
>
>
> The MultiFileInputFormat takes a set of paths and creates splits based on 
> file sizes. Each split contains a few files, and the splits are roughly equal 
> in size. It would be more efficient if we could extend this InputFormat to 
> create splits such that all the blocks in one split are either node-local or 
> rack-local.
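
To make the goal in the description concrete, here is a toy sketch of
locality-aware grouping: bucket blocks by the host that stores a replica, then
cut a split whenever a host's bucket reaches a size threshold. Every name in
it is illustrative; it is not the algorithm used in the patch.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class LocalityGroupingSketch {

      // Minimal stand-in for a file block: where it lives and how big it is.
      static class Block {
        final String path;
        final long offset;
        final long length;
        final String host;   // one replica location, for simplicity
        Block(String path, long offset, long length, String host) {
          this.path = path; this.offset = offset;
          this.length = length; this.host = host;
        }
      }

      // Groups blocks by host, then packs each host's blocks into splits of
      // at most maxSplitBytes, so every block in a split is node-local.
      static List<List<Block>> nodeLocalSplits(List<Block> blocks,
                                               long maxSplitBytes) {
        Map<String, List<Block>> perHost = new HashMap<String, List<Block>>();
        for (Block b : blocks) {
          List<Block> bucket = perHost.get(b.host);
          if (bucket == null) {
            bucket = new ArrayList<Block>();
            perHost.put(b.host, bucket);
          }
          bucket.add(b);
        }

        List<List<Block>> splits = new ArrayList<List<Block>>();
        for (List<Block> bucket : perHost.values()) {
          List<Block> current = new ArrayList<Block>();
          long bytes = 0;
          for (Block b : bucket) {
            current.add(b);
            bytes += b.length;
            if (bytes >= maxSplitBytes) {   // split is full; every block in it
              splits.add(current);          // lives on the same host
              current = new ArrayList<Block>();
              bytes = 0;
            }
          }
          if (!current.isEmpty()) {
            splits.add(current);            // leftover blocks for this host
          }
        }
        return splits;
      }
    }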

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
