This functionality may not be readily available with Hadoop.

I would appreciate it if anyone could help me understand how to go about
developing this feature.
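In the absence of a built-in CPU cap, one common workaround is for a task to throttle itself: interleave work with sleeps so that busy time divided by wall time stays near a target fraction of one core. The sketch below is purely illustrative and uses no Hadoop API; the class and method names (`CpuThrottle`, `sleepFor`, `maybeSleep`) are hypothetical, and the helper would have to be called from inside the map/reduce loop.

```java
// Hypothetical self-throttling helper: Hadoop exposes no per-task CPU cap,
// so a task can approximate one by sleeping in proportion to its busy time.
// All names here are illustrative, not part of any Hadoop API.
public class CpuThrottle {
    private final double targetShare;       // desired fraction of one core, 0 < share <= 1
    private long lastCheck = System.nanoTime();

    public CpuThrottle(double targetShare) {
        this.targetShare = targetShare;
    }

    /** How long to sleep after busyNanos of work to hold CPU use near share. */
    static long sleepFor(long busyNanos, double share) {
        return (long) (busyNanos * (1.0 / share - 1.0));
    }

    /** Call periodically from the map/reduce loop; sleeps enough that
     *  busy time / wall time stays near targetShare. */
    public void maybeSleep() throws InterruptedException {
        long now = System.nanoTime();
        long sleepNanos = sleepFor(now - lastCheck, targetShare);
        if (sleepNanos > 0) {
            Thread.sleep(sleepNanos / 1_000_000, (int) (sleepNanos % 1_000_000));
        }
        lastCheck = System.nanoTime();
    }
}
```

For example, with `targetShare = 0.5` the task sleeps roughly as long as it worked, so it consumes about half a core. This only limits the task's own JVM; an OS-level mechanism (e.g. `nice` or Linux cgroups applied to the task JVMs) would be needed to enforce a limit the task cannot bypass.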


Regards,
Naveen Kumar
HUAWEI TECHNOLOGIES CO., LTD.


Address: Huawei Industrial Base
Bantian Longgang
Shenzhen 518129, P.R.China
www.huawei.com
---------------------------------------------------------------------------------------------------------------
This e-mail and its attachments contain confidential information from
HUAWEI, which is intended only for the person or entity whose address is
listed above. Any use of the information contained herein in any way
(including, but not limited to, total or partial disclosure, reproduction,
or dissemination) by persons other than the intended recipient(s) is
prohibited. If you receive this e-mail in error, please notify the sender by
phone or email immediately and delete it!

-----Original Message-----
From: Allen Wittenauer [mailto:awittena...@linkedin.com] 
Sent: Monday, January 25, 2010 12:44 PM
To: general@hadoop.apache.org
Subject: Re: set how much CPU to be utilised by a MapReduce job

On 1/24/10 10:33 PM, "Naveen Kumar Prasad" <naveenkum...@huawei.com> wrote:
> If many jobs are running concurrently in Hadoop, how can we set CPU
> usage for individual tasks?

That functionality does not exist.
