Hi Rahul,
Can you please be more specific? Do you want to control the number of mappers
running simultaneously for your job (I guess), or for the cluster as a whole?
If it's for your job and you want to control it on a per-node basis, one way is
to allocate more memory to each of your mappers so that each one occupies more
than one slot. If a slot is free, a task will be scheduled on it, and that's
more or less out of your control, especially so in Pig.
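If you go that route, here is a rough sketch, assuming your cluster runs the
Capacity Scheduler with memory-based scheduling enabled (property names are
from the 0.20-era docs; the values and script name are made up for
illustration). If one map slot represents, say, 1 GB, a job that requests 2 GB
per map task should occupy two slots per mapper:

    <!-- cluster side: mapred-site.xml on the tasktrackers (illustrative) -->
    <property>
      <name>mapred.cluster.map.memory.mb</name>
      <value>1024</value>  <!-- memory represented by one map slot -->
    </property>

    # job side: request more memory per map task when invoking Pig
    pig -Dmapred.job.map.memory.mb=2048 myscript.pig

That roughly halves how many of your mappers run concurrently on each node, at
the cost of the extra slots sitting idle.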
In case you want a global cap on simultaneous mappers, it's a little more
complicated, and inefficient too. A little more detail on your use case should
get you a better response on the list.
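That said, one rough sketch of such a global cap, assuming your cluster has
the Fair Scheduler and a version that supports per-pool task limits (the pool
name and the limit below are made up for illustration): cap the pool's
concurrent map tasks in the scheduler's allocation file,

    <?xml version="1.0"?>
    <allocations>
      <pool name="capped">
        <maxMaps>10</maxMaps>  <!-- at most 10 map tasks from this pool run at once -->
      </pool>
    </allocations>

and submit the Pig job into that pool, e.g.

    pig -Dmapred.fairscheduler.pool=capped myscript.pig

The inefficiency I mentioned is that slots beyond the cap stay idle for your
job even when the rest of the cluster is free.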
Sorry if I misunderstood your question.

Amogh


On 9/15/10 3:02 AM, "Rahul Malviya" <rmalv...@apple.com> wrote:

Hi,

I want to control the number of mapper tasks running simultaneously. Is there
a way to do that if I run Pig jobs on Hadoop?

Any input is helpful.

Thanks,
Rahul
