No, but if you want "reducer-like" functionality on the same node, have a
look at combiners. To get the exact behavior you might need to tweak things a
little w.r.t. buffers, flushing, etc.
Cheers!
Amogh
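[A combiner runs the reducer-style aggregation locally on each map task's output before the shuffle. As a minimal self-contained sketch of that idea (plain Java, not the Hadoop API; the class and method names here are illustrative):]

```java
import java.util.*;

// Sketch of combiner-style local aggregation: each map task sums its own
// (word, 1) pairs before anything is shuffled to a reducer, which is why
// a combiner conventionally has the same signature as the reducer.
public class CombinerSketch {
    // "Map" phase output for one task: raw (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String w : line.split("\\s+")) {
            out.add(new AbstractMap.SimpleEntry<>(w, 1));
        }
        return out;
    }

    // "Combine" phase: collapse one task's pairs into per-key partial sums,
    // so only one record per key leaves this node.
    static Map<String, Integer> combine(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> sums = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            sums.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return sums;
    }

    public static void main(String[] args) {
        Map<String, Integer> local = combine(map("a b a c b a"));
        System.out.println(local); // {a=3, b=2, c=1}
    }
}
```

[In an actual Hadoop job the equivalent wiring is `job.setCombinerClass(MyReducer.class)`; note Hadoop treats the combiner as an optimization it may run zero or more times, so it must be associative and commutative.]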
From: fan wei fang [mailto:eagleeye8...@gmail.com]
Sent: Monda
Hi Amogh,
I appreciate your quick response.
Please correct me if I'm wrong. If the workload of the reducers is transferred
to combiners, does it mean every map node must hold a copy of my config
data? If that is the case, it is completely unacceptable for my app.
Let me further explain the situation
…, where you would be writing twice to the HDFS.
Hope this helps, just the first thing that came to my mind.
Thanks,
Amogh
From: fan wei fang [mailto:eagleeye8...@gmail.com]
Sent: Monday, August 24, 2009 12:03 PM
To: mapreduce-user@hadoop.apache.org
Subject: Re: Locat
apred jobs