[ 
http://issues.apache.org/jira/browse/HADOOP-331?page=comments#action_12443086 ] 
            
eric baldeschwieler commented on HADOOP-331:
--------------------------------------------

Sounds good.  I think we are converging on something.

A couple of points:

1) When we spill, we don't need to keep any record of the spilled data.  
Devaraj's proposal kept operating on these arrays after spills; we should not 
do that.  Everything should be cleared out (except possibly the index of 
partitions to file offsets).

2) I like the idea of extending the block compressed sequence file to directly 
support flushes at partition boundaries and merges of ranges.  This will be 
reused in reduce, as Doug observed.

3) We should not spill based on a record count; I don't see any value in 
that.  We should spill based only on RAM used.

4) Related, we need to track total RAM used, not just for values, but also for 
keys and arrays.  We don't want the system to blow up in the degenerate cases 
of huge keys or null values and many, many keys.
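Points 3) and 4) can be sketched roughly as follows. This is a hypothetical illustration, not Hadoop's actual collector code; all class and constant names here are made up. The idea is a single RAM budget that charges keys, values, and per-record array overhead, so huge keys, null values, or many tiny records all trigger a spill, and per point 1) everything is cleared once we spill:

```java
import java.util.ArrayList;
import java.util.List;

public class SpillAccounting {
  // Assumed per-record bookkeeping cost (offsets, partition ids, etc.).
  static final int RECORD_OVERHEAD_BYTES = 16;

  private final long spillThresholdBytes;
  private long bytesUsed = 0;
  private final List<byte[]> keys = new ArrayList<>();
  private final List<byte[]> values = new ArrayList<>();
  private int spills = 0;

  public SpillAccounting(long spillThresholdBytes) {
    this.spillThresholdBytes = spillThresholdBytes;
  }

  // Charge keys, values, AND array overhead against one budget, so the
  // degenerate cases (huge keys, null values, many tiny records) all
  // eventually cross the threshold.
  public void collect(byte[] key, byte[] value) {
    keys.add(key);
    values.add(value);
    bytesUsed += key.length
               + (value == null ? 0 : value.length)
               + RECORD_OVERHEAD_BYTES;
    if (bytesUsed >= spillThresholdBytes) {
      spill();
    }
  }

  // Per point 1): after a spill, clear everything in memory; nothing
  // keeps operating on the spilled arrays.
  private void spill() {
    // (A real implementation would sort by partition and append records
    // to the output file here, recording byte offsets per partition.)
    keys.clear();
    values.clear();
    bytesUsed = 0;
    spills++;
  }

  public int getSpillCount() { return spills; }
  public long getBytesUsed() { return bytesUsed; }
}
```

Note that the threshold test is on total bytes, not on keys.size(), which is the whole point of 3).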


> map outputs should be written to a single output file with an index
> -------------------------------------------------------------------
>
>                 Key: HADOOP-331
>                 URL: http://issues.apache.org/jira/browse/HADOOP-331
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.3.2
>            Reporter: eric baldeschwieler
>         Assigned To: Devaraj Das
>
> The current strategy of writing a file per target map is consuming a lot of 
> unused buffer space (causing out of memory crashes) and puts a lot of burden 
> on the FS (many opens, inodes used, etc).  
> I propose that we write a single file containing all output and also write an 
> index file IDing which byte range in the file goes to each reduce.  This will 
> remove the issue of buffer waste, address scaling issues with number of open 
> files and generally set us up better for scaling.  It will also have 
> advantages with very small inputs, since the buffer cache will reduce the 
> number of seeks needed and the data serving node can open a single file and 
> just keep it open rather than needing to do directory and open ops on every 
> request.
> The only issue I see is that in cases where the task output is substantially 
> larger than its input, we may need to spill multiple times.  In this case, we 
> can do a merge after all spills are complete (or during the final spill).
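The single-file-plus-index proposal quoted above can be sketched as below. This is an illustrative toy (names are hypothetical, and it buffers in memory rather than writing to the FS): all partitions' records go into one contiguous output, and the index records which (offset, length) byte range belongs to each reduce:

```java
import java.io.ByteArrayOutputStream;

public class IndexedMapOutput {
  public static class IndexRecord {
    public final long offset;
    public final long length;
    public IndexRecord(long offset, long length) {
      this.offset = offset;
      this.length = length;
    }
  }

  // Stand-in for the single output file; a real version would stream to disk.
  private final ByteArrayOutputStream data = new ByteArrayOutputStream();
  private final IndexRecord[] index;

  public IndexedMapOutput(int numReduces) {
    index = new IndexRecord[numReduces];
  }

  // Append one partition's records contiguously and note its byte range,
  // so the serving node can answer a reduce's request with a single seek
  // into one open file instead of opening a file per target.
  public void writePartition(int reduce, byte[] records) {
    long start = data.size();
    data.writeBytes(records);
    index[reduce] = new IndexRecord(start, data.size() - start);
  }

  public IndexRecord getIndex(int reduce) { return index[reduce]; }
  public byte[] getData() { return data.toByteArray(); }
}
```

The index itself would be written as a small side file next to the data file, so serving a request is just a lookup plus a ranged read.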

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
