Map tasks are generated based on InputSplits. An InputSplit is a logical
description of the input that a single map task should process. The array
of InputSplit objects is created on the client by the InputFormat.
org.apache.hadoop.mapreduce.InputSplit has an abstract method:

  /**
   * Get the list of nodes by name where the data for the split would be
   * local. The locations do not need to be serialized.
   * @return a new array of the node names.
   * @throws IOException
   * @throws InterruptedException
   */
  public abstract String[] getLocations()
      throws IOException, InterruptedException;
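
For instance, a custom split just has to carry the node names and hand
them back from getLocations(). Here's a minimal sketch (HintedSplit is a
made-up name, not a Hadoop class; a real split would carry whatever else
its record reader needs):

  import java.io.DataInput;
  import java.io.DataOutput;
  import java.io.IOException;
  import org.apache.hadoop.io.Writable;
  import org.apache.hadoop.mapreduce.InputSplit;

  // Sketch only: a split carrying the node hints its InputFormat computed.
  public class HintedSplit extends InputSplit implements Writable {
    private String[] hosts = new String[0];  // node name hints
    private long length;                     // split size in bytes

    public HintedSplit() {}                  // needed for deserialization
    public HintedSplit(long length, String[] hosts) {
      this.length = length;
      this.hosts = hosts;
    }

    @Override
    public String[] getLocations() {
      return hosts;       // used only as scheduling hints on the client
    }

    @Override
    public long getLength() { return length; }

    // Only the length crosses the wire -- per the javadoc above, the
    // locations are not serialized.
    public void write(DataOutput out) throws IOException {
      out.writeLong(length);
    }
    public void readFields(DataInput in) throws IOException {
      length = in.readLong();
    }
  }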


So the InputFormat needs to do something when it creates its list of work
items to hint where each one should go. If you take a look at
FileInputFormat, you can see how it does this: it stats each input file,
determines the block locations for each one, and uses those as the node
hints. Other InputFormats may ignore this entirely, in which case there is
no locality.
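
The per-file logic amounts to roughly this (a simplified sketch, not
FileInputFormat's actual code; the real version also honors min/max split
sizes and matches blocks to offsets more carefully):

  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;
  import org.apache.hadoop.fs.BlockLocation;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.mapreduce.InputSplit;
  import org.apache.hadoop.mapreduce.lib.input.FileSplit;

  public class SplitSketch {
    // One split per block; each split is hinted at the hosts that hold
    // the corresponding block.
    static List<InputSplit> splitsFor(FileSystem fs, FileStatus file)
        throws IOException {
      List<InputSplit> splits = new ArrayList<InputSplit>();
      long blockSize = file.getBlockSize();
      BlockLocation[] blocks =
          fs.getFileBlockLocations(file, 0, file.getLen());
      for (long off = 0; off < file.getLen(); off += blockSize) {
        long len = Math.min(blockSize, file.getLen() - off);
        // Assumes blocks come back in order and uniformly sized; the
        // real code searches by offset.
        String[] hosts = blocks[(int) (off / blockSize)].getHosts();
        splits.add(new FileSplit(file.getPath(), off, len, hosts));
      }
      return splits;
    }
  }

Note the hints are just hostnames: if the underlying filesystem can't
report block locations, or reports names that don't match any tasktracker,
the hints are useless and you get no locality.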

The scheduler itself then does its "best job" of lining up tasks with
nodes, but it's usually pretty naive. Basically, tasktrackers send
heartbeats back to the JobTracker, in which they may request another task.
The scheduler then responds with a task: if there's a data-local task
available, it'll send that one; if not, it'll send a non-local task
instead.
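
In pseudocode, the decision on each heartbeat looks something like this
(an illustrative toy, not the real JobTracker internals; Task and
assignTask are stand-ins):

  import java.util.Arrays;
  import java.util.List;

  // Toy model of the FIFO scheduler's per-heartbeat choice.
  class Task {
    final String id;
    final List<String> splitHosts;  // from InputSplit.getLocations()
    Task(String id, String... hosts) {
      this.id = id;
      this.splitHosts = Arrays.asList(hosts);
    }
  }

  class FifoSketch {
    // Called when the tasktracker on trackerHost asks for work.
    static Task assignTask(String trackerHost, List<Task> pending) {
      // First preference: a task whose split data lives on this node.
      for (Task t : pending) {
        if (t.splitHosts.contains(trackerHost)) {
          return t;                 // data-local assignment
        }
      }
      // Nothing local: hand out any task rather than idle the slot.
      return pending.isEmpty() ? null : pending.get(0);
    }
  }

(The real code also tries rack-local tasks before truly remote ones, but
node-local-first is the core idea.)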

- Aaron

On Fri, Oct 2, 2009 at 12:24 PM, Esteban Molina-Estolano <
eesto...@cs.ucsc.edu> wrote:

> Hi,
>
> I'm running Hadoop 0.19.1 on 19 nodes. I've been benchmarking a Hadoop
> workload with 115 Map tasks, on two different distributed filesystems (KFS
> and PVFS); in some tests, I also have a write-intensive non-Hadoop job
> running in the background (an HPC checkpointing benchmark). I've found that
> Hadoop sometimes makes most of the Map tasks data-local, and sometimes makes
> none of the Map tasks data-local; this depends both on which filesystem I
> use, and on whether the background task is running. (I never run multiple
> Hadoop jobs concurrently in these tests.)
>
> I'd like to learn how the Hadoop scheduler places Map tasks, and how
> locality is taken into account, so I can figure out why this is happening.
> (I'm using the default FIFO scheduler.) Is there some documentation
> available that would explain this?
>
> Thanks!
>
