Thanks for the help!  That bug looks like the one I'm running into, and the
code in my src/ directory matches the older, pre-patch version shown in the
svn diff attached to that issue.  Here's a beginner question: how do I apply
just that patch to my installation?  Thanks. Mac
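
(For anyone searching the archives later: applying an svn-diff patch from a JIRA
issue to a source checkout generally looks something like the sketch below.  The
patch file name is a placeholder for whatever attachment you download from
HADOOP-5269, and the ant target assumes the stock 0.19 build.xml.)

  # run from the root of the Hadoop 0.19.1 source tree
  cd /path/to/hadoop-0.19.1
  # apply the svn diff downloaded from the JIRA (placeholder file name)
  patch -p0 < HADOOP-5269.patch
  # rebuild the core jar
  ant jar
  # copy the rebuilt hadoop-*-core.jar from build/ to every node, then restart the daemons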


On Tue, Jul 14, 2009 at 10:03 AM, Tamir Kamara <tamirkam...@gmail.com> wrote:

> Hi,
>
> To restart Hadoop on a specific node, use a command like this:
> "hadoop-daemon.sh stop tasktracker" and after that the same command with
> start. You can also do the same with the datanode, but it doesn't look like
> there's a problem there.
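> For example, on the affected slave (assuming the script is run from the bin
> directory of your Hadoop install, or is on your PATH):
>
>   hadoop-daemon.sh stop tasktracker
>   hadoop-daemon.sh start tasktracker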
>
> I had the same problem with missing slots but I don't remember if it
> happened on maps. The fix in my case was this patch:
> https://issues.apache.org/jira/browse/HADOOP-5269
>
>
> Tamir
>
> On Tue, Jul 14, 2009 at 7:25 PM, KTM <mcmcay...@gmail.com> wrote:
>
> > Hi, I'm running Hadoop 0.19.1 on a cluster with 8 machines, 7 of which
> > are used as slaves and the other the master, each with 2 dual-core AMD
> > CPUs and generous amounts of RAM.  I am running map-only jobs and have
> > the slaves set up to have 4 mappers each, for a total of 28 available
> > mappers.  When I first start up my cluster, I am able to use all 28
> > mappers.  However, after a short bit of time (~12 hours), jobs that I
> > submit start using fewer mappers.  I restarted my cluster last night,
> > and currently only 19 mappers are running tasks even though more tasks
> > are pending, with at least 2 tasks running per machine - so no machine
> > has gone down.  I have checked that the unused cores are actually
> > sitting idle.  Any ideas for why this is happening?  Is there a way to
> > restart Hadoop on the individual slaves?  Thanks! Mac
> >
>
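
For reference, the "4 mappers each" setup described above corresponds to the
per-tasktracker map slot limit; a minimal sketch of the relevant entry, assuming
the default 0.19 config layout (conf/hadoop-site.xml):

  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
    <description>Run at most 4 map tasks simultaneously on this tasktracker.</description>
  </property>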
