Most likely the third mapper ran as a speculative execution (a duplicate
launched for a task the framework thought was slow), and it is likely that all
of your keys hashed to a single partition. Also, if you don't specify
otherwise, the default is to run a single reduce task.

From JobConf:

  /**
   * Get configured the number of reduce tasks for this job. Defaults to
   * <code>1</code>.
   *
   * @return the number of reduce tasks for this job.
   */
  public int getNumReduceTasks() { return getInt("mapred.reduce.tasks", 1); }
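To see why even and odd hashcodes didn't split across your two slaves: the
partition is chosen per key by the partitioner, modulo the number of reduce
tasks. The default hash partitioner does roughly the following (this is a
sketch against the old org.apache.hadoop.mapred API, not a verbatim copy of
the Hadoop source), so with the default of one reducer every key lands in
partition 0:

  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.Partitioner;

  // Sketch of the default hash partitioning, old (JobConf-era) API.
  // The shipped class is org.apache.hadoop.mapred.lib.HashPartitioner.
  public class HashPartitionerSketch<K, V> implements Partitioner<K, V> {

    public void configure(JobConf job) {}

    public int getPartition(K key, V value, int numReduceTasks) {
      // Mask the sign bit so a negative hashCode() can't give a negative
      // partition index, then take it modulo the number of reducers.
      // With numReduceTasks == 1 this is always 0.
      return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
  }

So to get keys spread over both slaves you need to ask for more than one
reducer, e.g. conf.setNumReduceTasks(2); when you build the JobConf, or
-D mapred.reduce.tasks=2 on the command line if your job goes through
ToolRunner/GenericOptionsParser. If you also want to rule out the speculative
third map while experimenting, you can disable map-side speculation with
conf.setBoolean("mapred.map.tasks.speculative.execution", false); (that is the
old-style property name, so double-check it against your Hadoop version).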


On Thu, May 7, 2009 at 3:54 AM, Miles Osborne <mi...@inf.ed.ac.uk> wrote:

> with such a small data set, who knows what will happen: you are
> probably hitting minimum limits of some kind
>
> repeat this with more data
>
> Miles
>
> 2009/5/7 Foss User <foss...@gmail.com>:
> > I have two reducers running on two different machines. I ran the
> > example word count program with some of my own System.out.println()
> > statements to see what is going on.
> >
> > There were two slaves, each running a datanode as well as a tasktracker.
> > There was one namenode and one jobtracker. I know this is a very
> > elaborate setup for such a small cluster, but I did it only to learn.
> >
> > I gave two input files, a.txt and b.txt with a few lines of english
> > text. Now, here are my questions.
> >
> > (1) I found that three mapper tasks ran, all on the first slave. The
> > first task processed the first file. The second task processed the
> > second file. The third task didn't process anything. Why is it that
> > the third task did not process anything? Why was this task created in
> > the first place?
> >
> > (2) I found only one reducer task, on the second slave. It processed
> > all the values for all keys. The keys were words (of Text type in this
> > case). I tried printing out key.hashCode() for each key, and some of
> > them were even and some of them were odd. I was expecting the keys with
> > even hashcodes to go to one slave and the others to go to the other
> > slave. Why didn't this happen?
> >
>
>
>
> --
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>



-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422
www.prohadoopbook.com a community for Hadoop Professionals
