[ https://issues.apache.org/jira/browse/YARN-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13784906#comment-13784906 ]

Wangda Tan commented on YARN-1197:
----------------------------------

{quote}
For decreasing resources, if the RM is to consider the free resource available 
only after the AM informs the NM and the NM heartbeats with the RM, then this 
change may become more complicated, since the current schedulers don't expect 
any lag in their allocations. It would also delay the allocation of the free 
space to others, and that delay would be determined by when the AM syncs with 
the NM. That's not a good property. We should probably assume the decrease to 
be effective immediately and have the RM-NM sync enforce it. The downside is 
that for the duration of the heartbeat interval the node may get overbooked, 
but that should not be a problem in practice, since the container would already 
be using a lower amount of resources before the AM asked for its capacity to be 
decreased.
{quote}

I think that makes sense: having the AM tell the NM first would prevent the RM 
from leveraging the freed resources, which is bad for a heavily loaded cluster. 
I'll update the document based on our discussion and start breaking the work 
down into tasks. Please let me know if you have any other comments.
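
As a rough illustration of the flow described above (a minimal sketch only; the 
class and method names below are hypothetical and are not actual YARN APIs), the 
RM would account for a container decrease immediately and push the enforcement 
to the NM on the next node heartbeat:

{code:java}
// Hypothetical sketch of the "decrease is effective immediately on the RM,
// RM-NM sync enforces it" idea. Names here are illustrative, not YARN code.
import java.util.ArrayDeque;
import java.util.Queue;

class ResourceDecreaseSketch {

    /** RM-side view of one node; resources simplified to memory in MB. */
    static class NodeState {
        final int capacityMb;
        int usedMb;
        // Decreases already accounted for by the RM but not yet delivered to the NM.
        final Queue<String> pendingDecreases = new ArrayDeque<>();
        NodeState(int capacityMb) { this.capacityMb = capacityMb; }
        int availableMb() { return capacityMb - usedMb; }
    }

    /**
     * RM approves a decrease: the freed delta is released immediately so the
     * scheduler can hand it to other requests, without waiting for the AM to
     * inform the NM. Until the next heartbeat the node may briefly look
     * overbooked from the NM's point of view.
     */
    static void approveDecrease(NodeState node, String containerId, int deltaMb) {
        node.usedMb -= deltaMb;                  // effective immediately on the RM
        node.pendingDecreases.add(containerId);  // enforced via the RM-NM sync
    }

    /** On the node heartbeat, the RM tells the NM to enforce the new sizes. */
    static void onNodeHeartbeat(NodeState node) {
        while (!node.pendingDecreases.isEmpty()) {
            String id = node.pendingDecreases.poll();
            System.out.println("heartbeat response: enforce decreased size for " + id);
        }
    }

    public static void main(String[] args) {
        NodeState node = new NodeState(8192);
        node.usedMb = 6144;
        approveDecrease(node, "container_01", 2048);
        System.out.println("available right after decrease: " + node.availableMb() + " MB");
        onNodeHeartbeat(node); // NM learns about the decrease on the next sync
    }
}
{code}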

> Support changing resources of an allocated container
> ----------------------------------------------------
>
>                 Key: YARN-1197
>                 URL: https://issues.apache.org/jira/browse/YARN-1197
>             Project: Hadoop YARN
>          Issue Type: Task
>          Components: api, nodemanager, resourcemanager
>    Affects Versions: 2.1.0-beta
>            Reporter: Wangda Tan
>         Attachments: yarn-1197.pdf
>
>
> Currently, YARN does not support merging several containers on one node into a 
> bigger container, which would let us incrementally request resources, merge 
> them into a bigger one, and launch our processes. The user scenario is 
> described in the comments.



