See the paper at: http://labs.google.com/papers/mapreduce.html 

"MapReduce is a programming model and an associated implementation for
processing and generating large data sets. Users specify a map
function that processes a key/value pair to generate a set of
intermediate key/value pairs, and a reduce function that merges all
intermediate values associated with the same intermediate key. Many
real world tasks are expressible in this model, as shown in the paper.
"
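
To make the abstract concrete, here is a minimal, hypothetical sketch of the model in Python (names like map_fn/reduce_fn are my own, not from the paper): the map function emits intermediate key/value pairs, the framework groups values by intermediate key, and the reduce function merges each group. Word counting is the canonical example.

```python
from collections import defaultdict

def map_fn(doc_id, text):
    # Map: process one input key/value pair (doc_id, text) and
    # emit a set of intermediate key/value pairs (word, 1).
    for word in text.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # Reduce: merge all intermediate values for one intermediate key.
    return (word, sum(counts))

def mapreduce(inputs, map_fn, reduce_fn):
    # Shuffle/group step: collect intermediate values by key,
    # then apply reduce per key. A real implementation (e.g. the
    # one in Nutch) distributes this across machines.
    groups = defaultdict(list)
    for key, value in inputs:
        for ikey, ivalue in map_fn(key, value):
            groups[ikey].append(ivalue)
    return dict(reduce_fn(k, vs) for k, vs in sorted(groups.items()))

result = mapreduce([("d1", "a b a"), ("d2", "b")], map_fn, reduce_fn)
print(result)  # {'a': 2, 'b': 2}
```

This is just the single-process skeleton; the point of the paper is that the same two user-supplied functions run unchanged when the grouping and scheduling are distributed.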

cool stuff

On 7/18/05, Peter Gelderbloem <[EMAIL PROTECTED]> wrote:
> I am thinking of having a cluster of one indexer and a few searchers, 1
> to n.
> The indexer will consist of a number of stages as defined in SEDA; I
> must still do this decomposition. The resulting index will be published
> via a message queue to the searchers, which will stop doing searches long
> enough to update the local index.
> What is the purpose behind the nutch mapreduce component?
> Would it be useful for me to look at it in order to better decompose the
> indexer component?
> Cheers,
> Peter Gelderbloem
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
>

