That means you are having Hadoop run at most 1 reducer at a time across the
whole cluster. For any Hadoop job, this should be set to roughly the number
of open reduce slots in the cluster.
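
As a quick illustration, the reducer count can be sized from the cluster's reduce slots. This is only a sketch: the node count, slots-per-node value, paths, and k value below are assumptions, not values from this thread.

```shell
# Assumption: a 10-node cluster with 2 reduce slots per node
# (i.e. mapred.tasktracker.reduce.tasks.maximum=2).
NODES=10
SLOTS_PER_NODE=2
REDUCERS=$((NODES * SLOTS_PER_NODE))
echo "$REDUCERS"    # prints 20

# Then pass the generic Hadoop option when launching the job, e.g.
# (placeholder paths and k value):
#   mahout kmeans -Dmapred.reduce.tasks=$REDUCERS \
#     -i /data/vectors -o /data/clusters -c /data/initial-centroids -k 10
```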


On Sat, Nov 10, 2012 at 7:28 PM, pricila rr <pricila...@gmail.com> wrote:

> No, it is still at the default
>
> 2012/11/10 Sean Owen <sro...@gmail.com>
>
> > Did you set -Dmapred.reduce.tasks ? it defaults to 1.
> >
> >
> > > On Sat, Nov 10, 2012 at 7:22 PM, pricila rr <pricila...@gmail.com>
> > > wrote:
> >
> > > I am running the k-means algorithm.
> > > Would increasing the number of tasktrackers and datanodes increase the
> > > speed?
> > >
> > > Thank you
> > >
> > > 2012/11/10 Dmitriy Lyubimov <dlie...@gmail.com>
> > >
> > > > I would imagine optimizing Mahout jobs is not fundamentally different
> > > > from optimizing any Hadoop job. Make sure you have an optimal number
> > > > of tasks per node configured, as well as enough memory to prevent GC
> > > > thrashing. (Iterative Mahout batches tend to create GC churn at a
> > > > somewhat respectable rate.) When optimized correctly, individual
> > > > Mahout tasks tend to be CPU bound.
> > > >
> > > > Could you tell which Mahout method specifically you are talking
> > > > about?
> > > >
> > > >
> > > > On Sat, Nov 10, 2012 at 11:11 AM, pricila rr <pricila...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hello,
> > > > > How can I run Hadoop-Mahout jobs using the processors' full
> > > > > capacity? I have 10 slaves and 1 master, each with an i5 CPU, but
> > > > > the Hadoop-Mahout jobs do not use all of this capacity.
> > > > >
> > > > > Thank you,
> > > > > Pricila
> > > > >
> > > >
> > >
> >
>
