[ https://issues.apache.org/jira/browse/YARN-6197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985045#comment-15985045 ]

Varun Saxena commented on YARN-6197:
------------------------------------

Sorry, I had missed this.

[~leftnoteasy] makes a fair point about the AM limit doubling up as a way to
control the number of concurrent applications launched in each queue.
As for avoiding all resources in a queue being consumed by AMs: even the
current approach of charging a default minimum value is not quite correct,
since an unmanaged AM may actually be using more resources than that. The most
accurate way of accounting for those resources would be to deduct the maximum
resources required by the unmanaged AM from the resources of the node where it
is launched, while configuring the NM capability.
Assuming that is done, the main remaining issue would be controlling the
number of apps. However, I wonder whether a configurable running-app limit at
the queue level would be a better way to control the number of concurrent apps
than the AM limit?

> CS Leaf queue am usage gets updated for unmanaged AM
> ----------------------------------------------------
>
>                 Key: YARN-6197
>                 URL: https://issues.apache.org/jira/browse/YARN-6197
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Bibin A Chundatt
>            Assignee: Bibin A Chundatt
>
> In {{LeafQueue#activateApplication()}}, the am_usage for an unmanaged AM is
> updated with the scheduler minimum allocation size, so the cluster resource /
> AM-limit headroom for other apps in the queue gets reduced.
> Solution: the FicaScheduler unManagedAM flag can be used to check the AM type.
> Based on the flag, the queue usage should be updated during activation and removal.
> Thoughts?
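As a standalone sketch (not actual Hadoop code) of the proposed fix: queue AM usage is charged the scheduler minimum allocation only when the application's unmanaged-AM flag is false, with activation and removal kept symmetric. The class and method names below are illustrative assumptions.

```java
public class AmUsageSketch {
    static class App {
        final boolean unmanagedAM; // stands in for the FicaScheduler flag
        App(boolean unmanagedAM) { this.unmanagedAM = unmanagedAM; }
    }

    static class LeafQueueUsage {
        static final int MIN_ALLOCATION_MB = 1024; // scheduler minimum allocation
        private int amUsedMb = 0;

        // On activation, charge AM usage only for managed AMs.
        void activateApplication(App app) {
            if (!app.unmanagedAM) {
                amUsedMb += MIN_ALLOCATION_MB;
            }
        }

        // On removal, decrement symmetrically so the accounting stays consistent.
        void removeApplication(App app) {
            if (!app.unmanagedAM) {
                amUsedMb -= MIN_ALLOCATION_MB;
            }
        }

        int getAmUsedMb() { return amUsedMb; }
    }

    public static void main(String[] args) {
        LeafQueueUsage queue = new LeafQueueUsage();
        queue.activateApplication(new App(true));  // unmanaged AM: no charge
        queue.activateApplication(new App(false)); // managed AM: charged
        System.out.println(queue.getAmUsedMb());   // prints 1024
    }
}
```

With this guard in place, an unmanaged AM no longer reduces the AM-limit headroom available to other apps in the queue.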



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
