My understanding is that the mapper caches output in an in-memory buffer; when the buffer fills up, it sorts the data and spills it to disk. Once a given number of spill files have been created, they are merged together into a larger spill file. When the mapper finishes, the output is merged into a single file that can be served to the reducer through the TaskTracker, or the NodeManager under YARN. The reducer does a similar thing as it merges the output from all of the mappers. I don't understand all of the reasons behind this, but I think much of it is to optimize the time it takes to sort the data. If you try to merge too many files at once, you waste a lot of time doing seeks and less time reading data. But I was not involved in developing it, so I don't know for sure.
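To make the multi-pass part concrete, here is a minimal sketch (not Hadoop's actual code) of merging sorted spill runs a fixed number at a time, the way the sort factor (io.sort.factor in Hadoop) bounds how many files are merged in one pass. The function name and structure are illustrative only:

```python
import heapq

def multipass_merge(runs, factor):
    """Merge sorted runs `factor` at a time until one run remains,
    loosely mimicking how spill files are merged under a sort factor."""
    passes = 0
    while len(runs) > 1:
        passes += 1
        merged = []
        # Each pass merges groups of at most `factor` runs.
        for i in range(0, len(runs), factor):
            group = runs[i:i + factor]
            merged.append(list(heapq.merge(*group)))
        runs = merged
    return runs[0], passes

# Four sorted "spill files"
spills = [[1, 5], [2, 6], [3, 7], [4, 8]]
out, passes = multipass_merge(spills, factor=2)
# With factor=2 this takes 2 passes; with factor=4, a single pass.
```

A lower factor means fewer files open (and fewer seeks interleaved) per pass, at the cost of reading and rewriting the data in more passes, which matches the trade-off described above.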
--Bobby Evans

On 1/12/12 10:27 AM, "Bai Shen" <baishen.li...@gmail.com> wrote:

> Can someone explain how the MapReduce merge is done? As far as I can tell, it appears to pull all of the spill files into one giant file to send to the reducer. Is this correct?
>
> Even if you set smaller spill files and a lower sort factor, the eventual merge is still the same; it just takes more passes to get there.
>
> Thanks.