[ http://issues.apache.org/jira/browse/HADOOP-331?page=comments#action_12456153 ]
            
Owen O'Malley commented on HADOOP-331:
--------------------------------------

Change the name of the class SorterBase to BufferSorter

Instead of "initialize(JobConf)"  in the file BasicTypeSorterBase.java, it 
should implement JobConfigurable.

BasicTypeSorterBase should be an abstract class, since it does not itself
implement the sort algorithm.
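
Taken together, a rough sketch of how those pieces could fit (addKeyValue() and the
buffer details are made-up placeholders, not the actual patch, and I'm assuming the
iterator type is SequenceFile.Sorter.RawKeyValueIterator):

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobConfigurable;
    import org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator;

    // Renamed interface (was SorterBase).
    interface BufferSorter extends JobConfigurable {
      // Placeholder for whatever method the patch uses to buffer a record.
      void addKeyValue(int recordOffset, int keyLength, int valueLength);
      // sort() hands back an iterator; the sorter itself does not derive from it.
      RawKeyValueIterator sort();
    }

    // Abstract: it owns the buffers and the configuration, but the sort
    // algorithm lives in the concrete subclasses.
    abstract class BasicTypeSorterBase implements BufferSorter {
      protected JobConf job;

      public void configure(JobConf conf) {   // replaces initialize(JobConf)
        this.job = conf;
      }

      public abstract RawKeyValueIterator sort();
    }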

The sorter should not be concerned with partitions (it makes the code hard to
read). So have one sorter per partition instead of one sorter for all partitions.
With one sorter per partition, we don't require the callback mechanism (the
OutputWriter interface in MapTask.java), and the ListOfArrays class is no longer
required either.
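
A minimal sketch of what that could look like in MapTask (all names here except
getPartition() and configure() are illustrative):

    // One BufferSorter per partition instead of one sorter plus OutputWriter callbacks.
    BufferSorter[] sorters = new BufferSorter[numPartitions];
    for (int i = 0; i < numPartitions; i++) {
      sorters[i] = new MergeSorter();       // or whichever concrete sorter the job selects
      sorters[i].configure(job);
    }

    // In collect(): route the record straight to its partition's sorter, so there is
    // no callback and no ListOfArrays to remember which record belongs where.
    int part = partitioner.getPartition(key, value, numPartitions);
    sorters[part].addKeyValue(bufferOffset, keyLength, valueLength);

    // At spill time each partition is sorted and written out independently:
    for (int i = 0; i < numPartitions; i++) {
      RawKeyValueIterator rIter = sorters[i].sort();
      // ... append the records from rIter to the single output file and note the byte range
    }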

The sort method in the SorterBase interface should return a RawKeyValueIterator
object rather than deriving from it.

The import of DataInputBuffer is not used in BasicTypeSorterBase.java.
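
To illustrate the sort-method change: the caller just iterates the returned object.
Assuming the iterator has the SequenceFile.Sorter.RawKeyValueIterator shape (next(),
getKey(), getValue(), close()) and 'writer' is the SequenceFile.Writer for the spill,
the loop is roughly:

    RawKeyValueIterator rIter = sorter.sort();
    while (rIter.next()) {
      // getKey() is a DataOutputBuffer and getValue() a ValueBytes, so the writer can
      // append them as raw bytes without deserializing the records.
      writer.appendRaw(rIter.getKey().getData(), 0, rIter.getKey().getLength(),
                       rIter.getValue());
    }
    rIter.close();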

SequenceFile.java has some whitespace-only changes

Use readLong/writeLong instead of LongWritables for reading/writing longs
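
For example, for the (start, length) pairs in the index file (stream and variable
names are illustrative):

    // Writing: raw longs instead of wrapping each value in a LongWritable.
    indexOut.writeLong(segmentStart);
    indexOut.writeLong(segmentLength);

    // Reading them back:
    long segmentStart = indexIn.readLong();
    long segmentLength = indexIn.readLong();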

Remove the empty finally block from MapTask.run

Synchronize the collect methods in MapTask.java (move the 'synchronized' keyword
from the method declaration into the method body as a 'synchronized(this)' block).
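
i.e., roughly (the body is just a placeholder):

    // Before:  public synchronized void collect(WritableComparable key, Writable value) ...
    // After:
    public void collect(WritableComparable key, Writable value) throws IOException {
      synchronized (this) {
        // buffer the record / hand it to the right partition's sorter
      }
    }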

Close the combiner after iterating over a sorted partition while spilling to disk.
This ensures that in the streaming case, where the combiner is a separate process,
it has processed the input key/value pairs and written its output key/value pairs
before the spill file is updated with the partition information.
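
Roughly, in the spill path (the combiner-feeding helper and the stream names here
are hypothetical):

    long segmentStart = finalOut.getPos();       // position in the single spill/output file
    RawKeyValueIterator rIter = sorters[part].sort();
    feedCombiner(rIter, combineCollector);       // hypothetical helper that drives the combiner
    combiner.close();                            // a streaming combiner flushes its output here
    // Only after close() returns is the segment complete, so write the index entry now:
    indexOut.writeLong(segmentStart);
    indexOut.writeLong(finalOut.getPos() - segmentStart);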

The occurrences of the string "sort" in ReduceTask.java may remain; they need not
be replaced with "merge" (although "merge" would be the more accurate word, we are
effectively sorting, and there may be code somewhere that relies on the status
strings being "sort").

In TestMapRed.java, the check of whether the map output is a compressed file or not
is commented out. We need to do the compression check.

> map outputs should be written to a single output file with an index
> -------------------------------------------------------------------
>
>                 Key: HADOOP-331
>                 URL: http://issues.apache.org/jira/browse/HADOOP-331
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.3.2
>            Reporter: eric baldeschwieler
>         Assigned To: Devaraj Das
>         Attachments: 331-design.txt, 331-initial3.patch, 331.txt
>
>
> The current strategy of writing a file per target map is consuming a lot of 
> unused buffer space (causing out of memory crashes) and puts a lot of burden 
> on the FS (many opens, inodes used, etc).  
> I propose that we write a single file containing all output and also write an 
> index file IDing which byte range in the file goes to each reduce.  This will 
> remove the issue of buffer waste, address scaling issues with number of open 
> files and generally set us up better for scaling.  It will also have 
> advantages with very small inputs, since the buffer cache will reduce the 
> number of seeks needed and the data serving node can open a single file and 
> just keep it open rather than needing to do directory and open ops on every 
> request.
> The only issue I see is that in cases where the task output is substantially 
> larger than its input, we may need to spill multiple times.  In this case, we 
> can do a merge after all spills are complete (or during the final spill).
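
(For context, the layout the description proposes amounts to one data file per map
task plus a small fixed-size index entry per reduce, along these lines; the field
names are illustrative only:)

    // <map output file> : reduce 0's records | reduce 1's records | ... | reduce R-1's records
    // <index file>      : R fixed-size entries, one per reduce
    class IndexRecord {          // illustrative only
      long startOffset;          // where this reduce's byte range begins in the output file
      long rawLength;            // how many bytes belong to it
    }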

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
