>> maximum-capacity
>>
>> Maximum queue capacity in percentage (%) as a float. This limits the
>> *elasticity* for applications in the queue. Defaults to -1 which
>> disables it.
>>
>> 2. Preemption of containers.
Regards
Bibin
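The two options Bibin points at (maximum-capacity and preemption) live in capacity-scheduler.xml. A sketch for the 70/30 "long"/"short" split described in this thread — queue names and percentages come from the discussion, but the file as a whole is illustrative, not a verified configuration:

```xml
<!-- Illustrative capacity-scheduler.xml fragment for a 70/30 two-queue setup. -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>long,short</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.long.capacity</name>
    <value>70</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.short.capacity</name>
    <value>30</value>
  </property>
  <!-- Cap elasticity: without maximum-capacity, "short" may grow beyond
       its 30% share whenever "long" is idle. -->
  <property>
    <name>yarn.scheduler.capacity.root.short.maximum-capacity</name>
    <value>30</value>
  </property>
</configuration>
```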
From: Rafał Radecki [mailto:radecki.ra...@gmail.com]
Sent: 10 November 2016 17:26
To: Bibinchundatt
Cc: Ravi Prakash; user
Subject: Re: Yarn 2.7.3 - capacity scheduler container allocation to nodes?
We have 4 nodes and 4 large tasks (~30GB each); additionally we have a use
case specifically for per-node allocation based on percentage?
From: Rafał Radecki [mailto:radecki.ra...@gmail.com]
Sent: 10 November 2016 14:59
To: Ravi Prakash
Cc: user
Subject: Re: Yarn 2.7.3 - capacity scheduler container allocation to nodes?
Hi Ravi.
I did not specify labels this time ;) I just created two queues as it is
visible in the configuration.
Overall the queues work, but the allocation of jobs is different than I
expected, as I wrote at the beginning.
BR,
Rafal.
2016-11-10 2:48 GMT+01:00 Ravi Prakash :
Hi Rafal!
Have you been able to launch the job successfully first without configuring
node-labels? Do you really need node-labels? How much total memory do you
have on the cluster? Node labels are usually for specifying special
capabilities of the nodes (e.g. some nodes could have GPUs and your
application
Hi All.
I have a 4-node cluster on which I run YARN. I created 2 queues, "long" and
"short", the first with 70% resource allocation and the second with 30%.
Both queues are configured on all available nodes by default.
My memory for YARN per node is ~50GB. Initially I thought that when I will

The AM sends heartbeats to the RM via the allocate() call. Container
allocation happens when the NodeManager sends heartbeats to the RM. This is
the reason your allocation time was reduced when you decreased
heartbeat-interval-ms.

Why is the application not provided with all requested containers in the
first allocate call? I changed
"yarn.resourcemanager.nodemanagers.heartbeat-interval-ms" from 1000ms to
100ms. Now at a 100ms heartbeat interval the container allocation time has
been reduced, but the AM still has to make the same number of allocate calls
as it did before, when the heartbeat interval was 1000ms.
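The knob being discussed is a ResourceManager-side setting in yarn-site.xml. A sketch — the 100ms value mirrors the experiment above, it is not a recommendation:

```xml
<!-- yarn-site.xml: how often each NodeManager heartbeats the RM.
     Container assignment happens on these heartbeats, so a smaller
     interval shortens allocation latency but increases RM load. -->
<property>
  <name>yarn.resourcemanager.nodemanagers.heartbeat-interval-ms</name>
  <value>100</value> <!-- default is 1000 -->
</property>
```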
Could you try applying the patch from
https://issues.apache.org/jira/browse/YARN-1053 and check if you can see any
message?

Thanks,
Omkar Joshi
*Hortonworks Inc.* <http://www.hortonworks.com>

On Thu, Sep 12, 2013 at 6:15 AM, Krishna Kishore Bonagiri <
write2kish...@gmail.com> wrote:
Hi,
I am using 2.1.0-beta and have seen container allocation failing randomly
even when running the same application in a loop. I know that the cluster
has enough resources to give, because it gave the resources for the same
application all the other times in the loop and ran it successfully
Maybe I am going about this all wrong... perhaps I should ask for containers,
see what nodes they are on, and then assign the data splits to them once I see
the set of available containers?
john
From: Arun C Murthy [mailto:a...@hortonworks.com]
Sent: Thursday, June 13, 2013 12:27 AM
To: user@hadoop.apache.org
Subject: Re: container allocation
By default, the ResourceManager will try to give you a container on that
node, rack, or anywhere (in that order).
We recently added ability to whitelist or blacklist nodes to allow for more
control.
Arun
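Arun's node → rack → anywhere order can be sketched with a toy model. This is not ResourceManager code — the class, method, and names like `nodeToRack` are invented purely to illustrate the fallback ordering:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class LocalityFallback {
    // Toy model of the scheduler's node -> rack -> anywhere fallback.
    static String allocate(String wantedNode, String wantedRack,
                           Map<String, String> nodeToRack,
                           Set<String> busyNodes) {
        // 1. Prefer the requested node if it has room.
        if (nodeToRack.containsKey(wantedNode) && !busyNodes.contains(wantedNode)) {
            return wantedNode;
        }
        // 2. Otherwise any free node on the requested rack.
        for (Map.Entry<String, String> e : nodeToRack.entrySet()) {
            if (e.getValue().equals(wantedRack) && !busyNodes.contains(e.getKey())) {
                return e.getKey();
            }
        }
        // 3. Otherwise any free node anywhere in the cluster.
        for (String node : nodeToRack.keySet()) {
            if (!busyNodes.contains(node)) {
                return node;
            }
        }
        return null; // nothing free right now
    }

    public static void main(String[] args) {
        Map<String, String> topo = new LinkedHashMap<>();
        topo.put("node1", "rack1");
        topo.put("node2", "rack1");
        topo.put("node3", "rack2");
        Set<String> busy = new HashSet<>(Arrays.asList("node1"));
        // node1 is busy, so the request falls back to its rack-mate node2.
        System.out.println(allocate("node1", "rack1", topo, busy));
    }
}
```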
On Jun 12, 2013, at 8:03 AM, John Lilley wrote:
If I request a container on a node, and that node is busy, will the request
fail, or will it give me a container on a different node? In other words is
the node name a requirement or a hint?
Thanks
John
Hi Harsh,
What will happen when I specify local host as the required host? Doesn't
the resource manager give me all the containers on the local host? I don't
want to constrain myself to the local host, which might be busy while other
nodes in the cluster have enough resources available for me.
You can request containers with the local host name as the required
host, and perhaps reject and re-request if they aren't designated to
be on that one until you have sufficient. This may take a while
though.
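Harsh's reject-and-re-request approach, as rough pseudocode — purely illustrative; a real AM would drive this through successive allocate() calls:

```
needed = N                        # containers still wanted on myhost
request(needed, host = myhost)
while needed > 0:
    granted = next allocate() response
    for container in granted:
        if container.node == myhost:
            keep(container); needed -= 1
        else:
            release(container)        # reject...
            request(1, host = myhost) # ...and re-request
```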
On Wed, Jun 12, 2013 at 6:25 PM, Krishna Kishore Bonagiri
wrote:
Hi,
I want to get some containers for my application on the same node; is
there a way to make such a request?
For example, I have an application which needs 10 containers, but with a
constraint that a set of those containers needs to be running on the same
node. Can I ask my resource manager to do that?