Thank you very much for your response :)
-Suzanne

On Tue, Sep 9, 2008 at 12:13 PM, Owen O'Malley <[EMAIL PROTECTED]> wrote:

> On Mon, Sep 8, 2008 at 4:26 PM, Sandy <[EMAIL PROTECTED]> wrote:
>
> > In all seriousness though, why is this not possible? Is there something
> > about the MapReduce model of parallel computation that I am not
> > understanding? Or is this more of an arbitrary implementation choice made
> > by the Hadoop framework? If so, I am curious why this is the case. What
> > are the benefits?
>
>
> It is possible to do with changes to Hadoop. There was a jira filed for it
> (HADOOP-2573), but I don't think anyone has worked on it. For Map/Reduce it
> is a design goal that the number of tasks, not the number of nodes, is the
> important metric: you want a job to be able to run on any given cluster
> size. For scalability testing, you could just remove task trackers...
>
> -- Owen
>
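For anyone reading this in the archives: the task counts Owen refers to are ordinary job configuration properties. A minimal sketch using the property names from Hadoop's 0.x-era configuration (note these semantics: `mapred.map.tasks` is only a hint, since the actual number of map tasks is driven by the input splits, while `mapred.reduce.tasks` is honored exactly):

```xml
<!-- Illustrative job configuration fragment (Hadoop 0.x property names).
     mapred.map.tasks is a hint to the framework; mapred.reduce.tasks
     is taken literally. The values here are arbitrary examples. -->
<configuration>
  <property>
    <name>mapred.map.tasks</name>
    <value>100</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>10</value>
  </property>
</configuration>
```

Because the job is expressed in tasks rather than nodes, the same configuration runs unchanged whether the cluster has 5 task trackers or 500; the scheduler just assigns tasks to whatever trackers are present.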
