If you still have it available via the job tracker web interface, please
attach the per-job XML configuration.

On Thu, May 7, 2009 at 8:39 AM, Foss User <foss...@gmail.com> wrote:

> On Thu, May 7, 2009 at 8:51 PM, jason hadoop <jason.had...@gmail.com>
> wrote:
> > Most likely the 3rd mapper ran as a speculative execution, and it is
> > possible that all of your keys hashed to a single partition. Also, if
> > you don't specify otherwise, the default is to run a single reduce task.
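> >
> > For context, the default partitioner derives the partition from the
> > key's hashCode. From memory, org.apache.hadoop.mapred.lib.HashPartitioner
> > is essentially:
> >
> >   public int getPartition(K2 key, V2 value, int numReduceTasks) {
> >     // Mask off the sign bit so negative hash codes still yield a
> >     // valid partition index.
> >     return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
> >   }
> >
> > With a single reduce task, every key maps to partition 0 regardless of
> > its hash code.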
>
> As I mentioned in my first mail, I tried printing out the hashCode()
> of the keys myself, like this:
>
> System.out.println(key.hashCode());
>
> The key was of type Text. Some of the printed hash codes were even and
> some were odd, so I was expecting the odd ones to go to one slave and
> the even ones to the other. Is my expectation correct? Could you throw
> some light on what else might have happened?
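>
> One quick way I could check this (assuming the job uses the default
> HashPartitioner; PartitionCheck is just a throwaway name):
>
>   import org.apache.hadoop.io.Text;
>   import org.apache.hadoop.mapred.lib.HashPartitioner;
>
>   public class PartitionCheck {
>     public static void main(String[] args) {
>       HashPartitioner<Text, Text> p = new HashPartitioner<Text, Text>();
>       // With 2 reduce tasks the partition is hashCode % 2, so even
>       // hash codes land in partition 0 and odd ones in partition 1.
>       System.out.println(p.getPartition(new Text("some key"), null, 2));
>     }
>   }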
>
> >
> > From JobConf,
> > /**
> >  * Get configured the number of reduce tasks for this job. Defaults to
> >  * <code>1</code>.
> >  *
> >  * @return the number of reduce tasks for this job.
> >  */
> > public int getNumReduceTasks() {
> >   return getInt("mapred.reduce.tasks", 1);
> > }
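> >
> > A job can also set this explicitly in its driver instead of relying on
> > the configured default; a minimal sketch (MyJob is a placeholder for the
> > job's main class):
> >
> >   JobConf conf = new JobConf(MyJob.class);
> >   conf.setNumReduceTasks(2); // equivalent to mapred.reduce.tasks=2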
> >
>
> I configured mapred.reduce.tasks as 2 in the hadoop-site.xml on the
> job tracker. Is this fine?
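>
> For reference, the entry I added looks like this:
>
>   <property>
>     <name>mapred.reduce.tasks</name>
>     <value>2</value>
>   </property>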
>



-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422
www.prohadoopbook.com a community for Hadoop Professionals
