[ 
https://issues.apache.org/jira/browse/YARN-110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15049301#comment-15049301
 ] 

Arun Suresh commented on YARN-110:
----------------------------------

[~ka...@cloudera.com], [~vinodkv], I understand from MAPREDUCE-4671 that the 
accounting burden for this has been pushed to the AM, and that it does not pose 
a latency issue for the AM requesting the resources. However, it looks like it 
increases latency for competing AMs (they might have to wait for a subsequent 
allocate call to get the resources). Custom AMs would also need to be cognizant 
of this.
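As a rough illustration of the AM-side accounting that MAPREDUCE-4671 pushes onto application masters, the sketch below (names and structure are illustrative, not Hadoop APIs) shows an AM decrementing its outstanding per-host ask as containers arrive, so that the next allocate call carries a corrected number rather than a raw running total:

```python
# Hypothetical sketch of AM-side request accounting. The AM tracks how
# many containers it still wants per host and subtracts grants as they
# arrive, before sending its next ask to the RM.

outstanding = {"H1": 4}   # containers still wanted, per host
received = {"H1": 0}      # containers granted so far, per host

def on_containers_allocated(host, count):
    """The RM granted `count` containers on `host`; update the books."""
    received[host] = received.get(host, 0) + count
    outstanding[host] = max(0, outstanding.get(host, 0) - count)

def add_demand(host, count):
    """The AM now needs `count` more containers on `host`."""
    outstanding[host] = outstanding.get(host, 0) + count

# The AM asked for 4 on H1, the RM granted all 4, then demand grows by 3.
on_containers_allocated("H1", 4)
add_demand("H1", 3)

# The next ask against H1 is 3, not the raw running total of 7.
print(outstanding["H1"])  # 3
```

This is exactly the bookkeeping burden the comment refers to: every custom AM has to replicate it, or it will over-request.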

It also looks like [~giovanni.fumarola] is hitting this on some of the clusters 
he is working on. If [~acmurthy] is not actively looking into this, 
[~giovanni.fumarola] would like to volunteer a patch.

Thoughts?

> AM releases too many containers due to the protocol
> ---------------------------------------------------
>
>                 Key: YARN-110
>                 URL: https://issues.apache.org/jira/browse/YARN-110
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: resourcemanager, scheduler
>            Reporter: Arun C Murthy
>            Assignee: Arun C Murthy
>         Attachments: YARN-110.patch
>
>
> - The AM sends a request asking for 4 containers on host H1.
> - Asynchronously, host H1's heartbeat reaches the RM and the AM gets assigned 
> 4 containers. At this point the RM sets the value against H1 to zero in its 
> aggregate request-table for all apps.
> - Meanwhile, the AM comes to need 3 more containers, for a total of 7 
> including the 4 from the previous request.
> - Today, the AM sends the absolute number 7 against H1 to the RM as part of 
> its request table.
> - The RM overrides its earlier value of zero against H1 with 7, and thus 
> allocates 7 more containers.
> - The AM already got 4 in this scheduling iteration, but gets 7 more: a total 
> of 11 instead of the required 7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
