These configurations cannot be changed dynamically. They must be set before the
TaskTrackers start and cannot be changed after that; if we want to change them,
the TTs need to be restarted. You can configure the cluster based on the
resources available, and tune your job configuration according to your cluster
configuration.

Thanks
Devaraj k

From: Shekhar Sharma [mailto:shekhar2...@gmail.com]
Sent: 15 July 2013 07:32
To: user@hadoop.apache.org
Subject: Re: Map slots and Reduce slots

Sorry for the wrong property names, I meant the same ones.
I understand what the properties do. Can I add slots to a particular task
tracker at run time depending on the load? As you suggested, we can determine
the slots depending on the load, and since the load can be dynamic, can I
allocate slots to a task tracker dynamically, say depending on the availability
of resources on the task tracker machine?

Regards,
Som Shekhar Sharma
+91-8197243810

On Mon, Jul 15, 2013 at 7:27 AM, Devaraj k <devara...@huawei.com> wrote:
Hi Shekhar,

   I assume you are using Hadoop-1. There are no properties with the names
'mapred.map.max.tasks' and 'mapred.reduce.max.tasks'.

We have these configurations to control the maximum number of map/reduce tasks
run simultaneously:
mapred.tasktracker.map.tasks.maximum - The maximum number of map tasks that 
will be run simultaneously by a task tracker.
mapred.tasktracker.reduce.tasks.maximum - The maximum number of reduce tasks 
that will be run simultaneously by a task tracker.

For example, if we set mapred.tasktracker.map.tasks.maximum=3 and
mapred.tasktracker.reduce.tasks.maximum=4 for a task tracker, that TT has
3 map slots and 4 reduce slots.
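In Hadoop-1 these properties live in mapred-site.xml on each TaskTracker node. A minimal sketch using the example values above (3 and 4 are just the figures from the example, not recommendations):

```xml
<!-- mapred-site.xml on the TaskTracker node -->
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>3</value>
    <description>Maximum number of map tasks run simultaneously by this TT.</description>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
    <description>Maximum number of reduce tasks run simultaneously by this TT.</description>
  </property>
</configuration>
```

The TaskTracker must be restarted for a change here to take effect, as noted above.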

> Let's say I have a machine with 8 GB RAM and a dual-core CPU. How can I
> determine the optimal number of map and reduce slots for this machine?
It depends purely on the type of tasks you are going to run and their load.
Normally each task requires one core to execute, so the number of concurrent
tasks can be configured based on the core count. The memory required per task
depends on how much data it is going to process.


Thanks
Devaraj k

From: Shekhar Sharma [mailto:shekhar2...@gmail.com]
Sent: 14 July 2013 23:15
To: user@hadoop.apache.org
Subject: Map slots and Reduce slots

Do the properties mapred.map.max.tasks=3 and mapred.reduce.max.tasks=4 mean
that the machine has 3 map slots and 4 reduce slots?

Or is there any way I can determine the number of map slots and reduce slots
that I can allocate for a machine?

Let's say I have a machine with 8 GB RAM and a dual-core CPU. How can I
determine the optimal number of map and reduce slots for this machine?




Regards,
Som Shekhar Sharma
+91-8197243810
