Hi Marcos,

You need to work out the number of slots from the memory available on each node:

Available Memory = Total RAM - (Memory for OS + Memory for Hadoop daemons
such as DN, TT + Memory for any other services running on that node)

Next, consider the typical MR jobs planned for your cluster. Say each task
needs a 1 GB JVM to run gracefully; then

Possible number of slots = Available Memory / JVM size of each task

Now divide those slots between map slots and reduce slots.
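
To make that concrete, here is a minimal sketch of the arithmetic in Java.
All the figures (48 GB node, 1 GB per task JVM, the 2:1 map/reduce split)
are just example assumptions; plug in your own numbers:

// Rough slot-count estimate; every size below is an example assumption.
public class SlotEstimate {
    public static void main(String[] args) {
        int totalRamGb = 48;        // total RAM on the node (assumed)
        int osGb = 4;               // reserved for the OS (assumed)
        int daemonsGb = 2;          // DataNode + TaskTracker heaps (assumed)
        int otherServicesGb = 2;    // any other services on the node (assumed)
        int jvmPerTaskGb = 1;       // heap per task JVM (mapred.child.java.opts)

        int availableGb = totalRamGb - (osGb + daemonsGb + otherServicesGb);
        int totalSlots = availableGb / jvmPerTaskGb;   // 40 slots in this example

        // Example split of roughly 2:1 in favour of maps
        int mapSlots = totalSlots * 2 / 3;     // mapred.tasktracker.map.tasks.maximum
        int reduceSlots = totalSlots - mapSlots; // mapred.tasktracker.reduce.tasks.maximum

        // Total memory the task JVMs can use, per Amal's formula below
        int taskJvmMemoryGb = (mapSlots + reduceSlots) * jvmPerTaskGb;

        System.out.println("map slots = " + mapSlots
                + ", reduce slots = " + reduceSlots
                + ", task JVM memory = " + taskJvmMemoryGb + " GB");
    }
}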



On Mon, Apr 15, 2013 at 11:38 PM, Amal G Jose <amalg...@gmail.com> wrote:

> It depends on the type of job that is frequently submitted and the RAM
> size of the machine.
> Heap size of tasktracker = (map slots + reduce slots) * JVM size
> We can adjust this according to our requirements to fine-tune the cluster.
> This is my thought.
>
>
> On Mon, Apr 15, 2013 at 4:40 PM, MARCOS MEDRADO RUBINELLI <
> marc...@buscapecompany.com> wrote:
>
>> Hi,
>>
>> I am currently tuning a cluster, and I haven't found much information on
>> what factors to consider while adjusting the heap size of tasktrackers.
>> Is it a direct multiple of the number of map+reduce slots? Is there
>> anything else I should consider?
>>
>> Thank you,
>> Marcos
>
>
>
