[ https://issues.apache.org/jira/browse/YARN-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16324014#comment-16324014 ]

Jason Lowe commented on YARN-7739:
----------------------------------

I'm not a fan of silently capping the app's request.  If the app says it needs 
12GB then it needs 12GB.  I think it is unhelpful more often than not to assume 
the app can work with less.  If an app wants "the biggest you can offer up to 
this amount" kind of allocation, it should be able to query the RM for the 
current maximum capability.  The app is already told the max allocation during 
registration, but that value can currently change dynamically (e.g. via a queue 
refresh) without the app's knowledge.
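
For reference, a minimal sketch of that registration-time handshake with the 
2.8+/3.x AMRMClient (the host/port/tracking-URL arguments and the 12 GB ask are 
placeholders):

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.exceptions.YarnException;

public class RegisterTimeMaxCheck {
  public static void main(String[] args) throws IOException, YarnException {
    AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
    rmClient.init(new Configuration());
    rmClient.start();

    // The RM reports the current max allocation in the registration response.
    RegisterApplicationMasterResponse reg =
        rmClient.registerApplicationMaster("am-host", 0, "");
    Resource max = reg.getMaximumResourceCapability();

    // Fail fast instead of letting the scheduler silently cap the ask.
    Resource needed = Resource.newInstance(12 * 1024, 1); // 12 GB, 1 vcore
    if (needed.getMemorySize() > max.getMemorySize()
        || needed.getVirtualCores() > max.getVirtualCores()) {
      throw new IllegalStateException(
          "Request " + needed + " exceeds max allocation " + max);
    }
    rmClient.addContainerRequest(
        new ContainerRequest(needed, null, null, Priority.newInstance(0)));
  }
}
{code}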

As far as dynamically setting the maximum allocation, some of that stems from 
the desire to keep apps from hanging forever if they ask for a container that 
is bigger than any node can satisfy.  See YARN-2604.  I'm personally torn on 
the behavior.  In many cases it could be very useful to proactively tell an app 
that its container request cannot be satisfied by any node in the cluster.  On 
the other hand, we don't know whether such a request would be satisfied just a 
little bit later, when a large node that was temporarily offline rejoins the 
cluster.
If we do allow the maximum allocation to fluctuate based on current node 
capabilities then I think there needs to be a way for the AM to either query 
for the current max allocation or be proactively told about the max allocation 
change in the allocation response.
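
To make the second option concrete, here is one hypothetical shape for it. The 
getMaximumResourceCapability() getter on AllocateResponse and the 
handleMaxAllocationShrink() helper below do not exist today; only the 
allocate() call itself is real API:

{code:java}
// Inside the AM heartbeat loop; rmClient/pendingAsk as set up elsewhere.
AllocateResponse response = rmClient.allocate(0.1f);

// HYPOTHETICAL: a proposed getter, not an existing AllocateResponse method.
Resource updatedMax = response.getMaximumResourceCapability();
if (updatedMax != null
    && pendingAsk.getMemorySize() > updatedMax.getMemorySize()) {
  // The max shrank below the outstanding ask: re-plan or fail the request
  // instead of waiting forever on an unsatisfiable allocation.
  handleMaxAllocationShrink(updatedMax); // hypothetical helper
}
{code}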


> Revisit scheduler resource normalization behavior for max allocation
> --------------------------------------------------------------------
>
>                 Key: YARN-7739
>                 URL: https://issues.apache.org/jira/browse/YARN-7739
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Wangda Tan
>            Priority: Critical
>
> Currently, the YARN scheduler normalizes a requested resource based on a 
> maximum allocation derived from the configured maximum allocation and the 
> maximum registered node resources. In effect, the scheduler silently caps the 
> requested resource at the maximum allocation.
> This can cause problems for applications. For example, a Spark job needs 12 GB 
> of memory to run, but the registered NMs in the cluster have at most 8 GB of 
> memory per node, so the scheduler allocates an 8 GB memory container to the 
> application.
> Once the app receives containers from the RM, if it doesn't double-check the 
> allocated resources (see the sketch at the end of this description), it will 
> hit OOMs that are hard to debug, because the scheduler silently capped the 
> request.
> When non-mandatory resources are introduced, this becomes worse. For resources 
> like GPU we typically set the minimum allocation to 0, since not all nodes 
> have GPU devices. So it is possible for an application to ask for 4 GPUs but 
> get 0 GPUs, which would be a big problem.
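>
> A defensive AM can verify each allocation instead of assuming the RM granted 
> exactly what was asked. A minimal sketch of a fragment inside an 
> AMRMClientAsync.CallbackHandler implementation (the {{requested}}, 
> {{amRMClient}}, and {{LOG}} fields are assumed to exist on the handler class):
> {code:java}
> import java.util.List;
> 
> import org.apache.hadoop.yarn.api.records.Container;
> import org.apache.hadoop.yarn.api.records.Resource;
> 
> // AMRMClientAsync.CallbackHandler: double-check each allocated container.
> @Override
> public void onContainersAllocated(List<Container> containers) {
>   for (Container c : containers) {
>     Resource got = c.getResource();
>     // If the scheduler silently capped the ask (e.g. 8 GB granted for a
>     // 12 GB request), release the container with a clear error instead of
>     // launching it and hitting an OOM later.
>     if (got.getMemorySize() < requested.getMemorySize()) {
>       LOG.error("Allocated " + got + " < requested " + requested
>           + "; releasing " + c.getId());
>       amRMClient.releaseAssignedContainer(c.getId());
>     }
>   }
> }
> {code}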


