…containers per 8-core node?

John
From: Sandy Ryza [mailto:sandy.r...@cloudera.com]
Sent: Tuesday, July 02, 2013 1:26 PM
To: user@hadoop.apache.org
Subject: Re: Containers and CPU

Use of cgroups for controlling CPU is off by default, but can be turned on as a nodemanager configuration with yarn.nodemanager.linux-container-executor.resources-handler.class. So it is site-wide. If you want tasks to purely have access to all CPU cores and simply fight it out in the OS thread scheduler […]
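For reference, turning on cgroup-based CPU enforcement is a nodemanager-side change in yarn-site.xml. A minimal sketch, assuming Hadoop 2.x on Linux (the LinuxContainerExecutor prerequisite and class names are from the Hadoop 2.x defaults; verify them against your distribution):

```xml
<!-- yarn-site.xml (nodemanager) — sketch, assuming Hadoop 2.x on Linux -->
<property>
  <!-- cgroups require the Linux container executor -->
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <!-- the property Sandy mentions: swap in the cgroups resource handler -->
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
```

Because these are nodemanager properties, the choice applies to every container on that node, which is why it is site-wide rather than per-application.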
Subject: RE: Containers and CPU

Sandy,

Sorry, I don't completely follow. When you say "with cgroups on", is that an attribute of the AM, the Scheduler, or the Site/RM? In other words, is it site-wide or something that my application can control? With cgroups on, is there still a […] or? I'd really like all tasks to have access to all CPU cores and simply fight it out in the OS thread scheduler.

Thanks,
john
From: Sandy Ryza [mailto:sandy.r...@cloudera.com]
Sent: Tuesday, July 02, 2013 11:56 AM
To: user@hadoop.apache.org
Subject: Re: Containers and CPU

CPU limits are only enforced if cgroups is turned on. With cgroups on, they are only limited when there is contention, in which case tasks are given CPU time in proportion to the number of cores requested for/allocated to them. Does that make sense?

-Sandy
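Sandy's proportionality rule can be sketched numerically. The helper below is illustrative only (the function and names are not YARN API): it divides a node's CPU among contending containers in proportion to their allocated vcores, which is the effect of cgroups' proportional CPU shares under contention.

```python
def cpu_fractions(vcores_by_container, node_vcores):
    """Divide CPU in proportion to allocated vcores, as cgroup CPU shares do
    under contention. Without contention, no container is throttled."""
    total_requested = sum(vcores_by_container.values())
    if total_requested <= node_vcores:
        # No contention: every container can use as much CPU as it wants.
        return {c: 1.0 for c in vcores_by_container}
    # Contention: CPU time is split proportionally to allocated vcores.
    return {c: v / total_requested for c, v in vcores_by_container.items()}

# Two containers allocated 6 and 2 vcores on a 4-core node: under
# contention they get 75% and 25% of the node's CPU time respectively.
shares = cpu_fractions({"c1": 6, "c2": 2}, node_vcores=4)
```

Note the asymmetry Sandy describes: the proportion only matters when the node is oversubscribed; an idle node lets any container burst to all cores.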
On Tue, Jul 2, 2013 at 9:50 AM, Chuan […]

I believe this is the default behavior. By default, only the memory limit on resources is enforced. The capacity scheduler will use DefaultResourceCalculator to compute resource allocation for containers by default, which also does not take CPU into account.

-Chuan
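To make the capacity scheduler account for CPU as well as memory, the resource calculator can be swapped. A sketch, assuming Hadoop 2.x (the property and class name are from the stock capacity-scheduler.xml; confirm against your version):

```xml
<!-- capacity-scheduler.xml — sketch: consider CPU, not just memory -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <!-- replaces the memory-only DefaultResourceCalculator -->
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```

This changes only how the scheduler counts resources when placing containers; actually enforcing CPU limits on running tasks still requires the cgroups setup discussed above.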
From: John Lilley [mailto:john.lil