I don't think job configuration alone allows such a setup, but maybe I
missed something..

 I would probably tackle this problem from the scheduler side. The
default scheduler is JobQueueTaskScheduler, which maintains a FIFO-based
queue. When a tasktracker (your slave) tells the jobtracker that it has
some free slots to run tasks, the JT calls the scheduler's assignTasks
method from its heartbeat method, and tasks are assigned with locality
in mind: the scheduler first tries to find a map task whose input data
resides on that tasktracker, and only if no local task is available does
it fall back to a non-local one. That is probably the point where you
should intervene, holding back your jobs until the tasktracker you want
sends its heartbeat. Instead of waiting for the TT heartbeat, there
might be a way to force a heartbeatResponse even though the TT has not
sent a heartbeat, but I am not aware of one.
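
As a rough sketch of that idea (written against the 0.20.2 API from
memory, so treat it as untested: assignTasks took a TaskTrackerStatus in
0.20.2 but changed in later releases, and JobQueueTaskScheduler is, if I
recall correctly, package-private, which is why the class below sits in
org.apache.hadoop.mapred), you could starve every tasktracker except one
"chosen" one, so all tasks of the current job land on a single slave:

    package org.apache.hadoop.mapred; // JobQueueTaskScheduler is package-private

    import java.io.IOException;
    import java.util.Collections;
    import java.util.List;

    /**
     * Sketch: hand out tasks only to one chosen tracker and report "no
     * work" to everyone else. Rotating chosenTracker between jobs would
     * give the round-robin behaviour described below.
     */
    public class SingleTrackerScheduler extends JobQueueTaskScheduler {

      private String chosenTracker; // tracker allowed to get tasks this round

      @Override
      public synchronized List<Task> assignTasks(TaskTrackerStatus tracker)
          throws IOException {
        if (chosenTracker == null) {
          // First heartbeat claims the slot; a real implementation would
          // rotate over all known trackers instead of taking the first.
          chosenTracker = tracker.getTrackerName();
        }
        if (!tracker.getTrackerName().equals(chosenTracker)) {
          // Not the chosen slave: return no tasks so its slots stay idle.
          return Collections.emptyList();
        }
        // Chosen slave: let the normal FIFO logic hand out tasks.
        return super.assignTasks(tracker);
      }
    }

You would then point the JT at it via the mapred.jobtracker.taskScheduler
property in mapred-site.xml and restart the jobtracker. Note that
starving the other trackers like this blocks them for all jobs at once,
so it is only a starting point, not a complete solution.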


On 21 February 2012 19:27, theta <glynisdso...@email.arizona.edu> wrote:

>
> Hi,
>
> I am working on a project which requires a setup as follows:
>
> One master with four slaves. However, when a map-only program is run, the
> master dynamically selects the slave to run the map. For example, when the
> program is run for the first time, slave 2 is selected to run the map and
> reduce programs, and the output is stored on dfs. When the program is run
> the second time, slave 3 is selected and so on.
>
> I am currently using Hadoop 0.20.2 with Ubuntu 11.10.
>
> Any ideas on creating the setup as described above?
>
> Regards
>
