[ https://issues.apache.org/jira/browse/FLINK-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736940#comment-16736940 ]

Till Rohrmann commented on FLINK-10848:
---------------------------------------

Some more information: It seems that the problem is connected to Yarn's 
capacity scheduler. With the {{DefaultResourceCalculator}}, the scheduler ignores the 
requested vCores and simply returns a container with a single vCore. 
See 
[here|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html]
 and YARN-2413 for more information. 
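
A minimal sketch (not Flink code; it assumes Hadoop's {{ResourceCalculator}} API and uses made-up 
resource values) of the normalization the capacity scheduler applies when the 
{{DefaultResourceCalculator}} is configured: only memory is taken into account, so the 
normalized resource comes back with a single vCore regardless of what was requested.

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
import org.apache.hadoop.yarn.util.resource.ResourceCalculator;

public class VCoreNormalizationSketch {

  public static void main(String[] args) {
    ResourceCalculator calculator = new DefaultResourceCalculator();

    Resource requested = Resource.newInstance(2048, 4); // <memory: 2048, vCores: 4>
    Resource minimum   = Resource.newInstance(1024, 1);
    Resource maximum   = Resource.newInstance(8192, 8);
    Resource step      = Resource.newInstance(1024, 1);

    // DefaultResourceCalculator normalizes memory only; the vCores of the request
    // are ignored and the result carries a single vCore.
    Resource normalized = calculator.normalize(requested, minimum, maximum, step);
    System.out.println(normalized); // prints something like <memory:2048, vCores:1>
  }
}
{code}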

When a container with capacity {{<memory: X, vCores: Y>}} is requested, the request is 
registered internally with exactly that capacity. If we now call {{removeContainerRequest}} 
with a different capacity {{<memory: X, vCores: 1>}}, we run into a 
{{NullPointerException}} since there is no entry with vCores equal to 1. The problem can 
be reproduced locally by removing {{YarnTestBase.java:168-169}}.
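
The call pattern behind that mismatch, as a hedged sketch (assuming Hadoop's {{AMRMClient}} 
API; the client is expected to be initialized and started elsewhere, and the concrete 
memory/vCore values are made up, not taken from Flink):

{code:java}
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

final class RemoveRequestMismatchSketch {

  static void illustrate(AMRMClient<ContainerRequest> client) {
    Priority priority = Priority.newInstance(1);

    // The request is registered internally under <memory: 1024, vCores: 4>.
    Resource requested = Resource.newInstance(1024, 4);
    client.addContainerRequest(new ContainerRequest(requested, null, null, priority));

    // With the DefaultResourceCalculator the allocated container reports
    // <memory: 1024, vCores: 1>. Removing a request built from that capability looks
    // for a bookkeeping entry that was never created, which is where the
    // NullPointerException described above comes from.
    Resource allocated = Resource.newInstance(1024, 1);
    client.removeContainerRequest(new ContainerRequest(allocated, null, null, priority));
  }
}
{code}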

Since this change breaks existing Flink setups, I have to revert the commits 
and reopen this issue.

> Flink's Yarn ResourceManager can allocate too many excess containers
> --------------------------------------------------------------------
>
>                 Key: FLINK-10848
>                 URL: https://issues.apache.org/jira/browse/FLINK-10848
>             Project: Flink
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.3.3, 1.4.2, 1.5.5, 1.6.2
>            Reporter: Shuyi Chen
>            Assignee: Shuyi Chen
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, neither the YarnFlinkResourceManager nor the YarnResourceManager 
> calls removeContainerRequest() on successful container allocation. Because the 
> YARN AM-RM protocol is not a delta protocol (please see YARN-1902), the 
> AMRMClient keeps all added ContainerRequests and sends them to the RM.
> In production, we observed the following behavior, which confirms this theory: 16 
> containers are allocated and used upon cluster startup; when a TM is killed, 
> 17 containers are allocated, 1 container is used, and 16 excess containers 
> are returned; when another TM is killed, 18 containers are allocated, 1 
> container is used, and 17 excess containers are returned.
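
To make the description concrete, here is a hedged sketch (not Flink's actual change, just an 
illustration assuming Hadoop's {{AMRMClient}} API) of the bookkeeping the description asks 
for: remove one matching pending request whenever a container is allocated, so the 
AMRMClient stops re-sending already-satisfied requests on every heartbeat. Note that, as 
explained in the comment above, matching by the allocated container's capability is exactly 
where the {{DefaultResourceCalculator}} gets in the way.

{code:java}
import java.util.Collection;
import java.util.List;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ResourceRequest;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

final class PendingRequestBookkeepingSketch {

  private final AMRMClient<ContainerRequest> client;

  PendingRequestBookkeepingSketch(AMRMClient<ContainerRequest> client) {
    this.client = client;
  }

  /** Invoked once per container reported as allocated by the RM. */
  void onContainerAllocated(Container container) {
    List<? extends Collection<ContainerRequest>> matching =
        client.getMatchingRequests(
            container.getPriority(), ResourceRequest.ANY, container.getResource());
    if (!matching.isEmpty() && !matching.get(0).isEmpty()) {
      // Remove exactly one pending request per allocated container so it is not
      // sent to the RM again on the next heartbeat.
      client.removeContainerRequest(matching.get(0).iterator().next());
    }
  }
}
{code}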


