[ https://issues.apache.org/jira/browse/YARN-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16263756#comment-16263756 ]

Yufei Gu commented on YARN-7541:
--------------------------------

Thanks for working on this, [~templedf]. The logic looks good to me. A few 
nits:
- Better to add a unit test for the max allowed allocation resource calculation 
if there isn't one already.
- "maxAllowedAllocation" may be a better name than "maxResources". In that 
case, the "// Max allocation" comment isn't necessary.
- Why not use a {{Resource}} object for "maxResources"? Is that for 
performance reasons?
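To illustrate the last point, here is a minimal, YARN-independent sketch of tracking the maximum allowed allocation across all resource types (not just CPU and memory), with resources modeled as a plain map rather than the actual {{Resource}} / {{ClusterNodeTracker}} classes. All names here are illustrative, not the real YARN API:

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch: track the largest allocatable amount of every
 *  resource type across registered nodes, so requests for resources
 *  other than CPU and memory can also be validated. */
public class MaxAllocationTracker {
    // Maximum allocatable amount seen so far, per resource type.
    private final Map<String, Long> maxAllowedAllocation = new HashMap<>();

    /** Called when a node registers or its capability changes. */
    public void updateMaxAllocation(Map<String, Long> nodeCapability) {
        for (Map.Entry<String, Long> e : nodeCapability.entrySet()) {
            // Keep the larger of the stored maximum and this node's capability.
            maxAllowedAllocation.merge(e.getKey(), e.getValue(), Math::max);
        }
    }

    /** True if every resource in the request fits on the largest node. */
    public boolean fits(Map<String, Long> request) {
        for (Map.Entry<String, Long> e : request.entrySet()) {
            long max = maxAllowedAllocation.getOrDefault(e.getKey(), 0L);
            if (e.getValue() > max) {
                return false;  // request exceeds any node's capability
            }
        }
        return true;
    }

    public static void main(String[] args) {
        MaxAllocationTracker tracker = new MaxAllocationTracker();

        Map<String, Long> node = new HashMap<>();
        node.put("memory-mb", 8192L);
        node.put("vcores", 4L);
        node.put("gpu", 2L);  // a resource other than CPU/memory
        tracker.updateMaxAllocation(node);

        Map<String, Long> req = new HashMap<>();
        req.put("gpu", 4L);   // exceeds the largest node's GPU count
        System.out.println(tracker.fits(req));  // prints "false"
    }
}
```

Updating a single map keyed by resource name is what makes the check uniform across resource types, which is the behavior the patch restores for {{ClusterNodeTracker}}.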

> Node updates don't update the maximum cluster capability for resources other 
> than CPU and memory
> ------------------------------------------------------------------------------------------------
>
>                 Key: YARN-7541
>                 URL: https://issues.apache.org/jira/browse/YARN-7541
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: resourcemanager
>    Affects Versions: 3.0.0-beta1, 3.1.0
>            Reporter: Daniel Templeton
>            Assignee: Daniel Templeton
>            Priority: Critical
>         Attachments: YARN-7541.001.patch, YARN-7541.002.patch, 
> YARN-7541.003.patch
>
>
> When I submit an MR job that asks for too much memory or CPU for the map or 
> reduce, the AM will fail because it recognizes that the request is too large. 
>  With any other resources, however, the resource requests will instead be 
> made and remain pending forever.  Looks like we forgot to update the code 
> that tracks the maximum container allocation in {{ClusterNodeTracker}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
