[ 
https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129138#comment-16129138
 ] 

Wangda Tan commented on YARN-5764:
----------------------------------

[~devaraj.k],

bq. I think it would be useful when the user uses default container executor 
with DominantResourceCalculator, please correct me if I am wrong. Thanks
Prior to this feature, the only way to assign CPU shares was to use 
LinuxContainerExecutor: 
https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html.
I'm fine with continuing to support this case; however, since the 
ResourceHandler API is not wired into DefaultContainerExecutor, it would take 
some extra effort to bring the ResourceHandlerModule API into 
DefaultContainerExecutor, and I'm not sure that is worth it.
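
For reference, the LinuxContainerExecutor/cgroups path mentioned above is 
enabled via yarn-site.xml roughly as follows (a minimal sketch based on the 
NodeManagerCgroups docs linked above; the hierarchy value is illustrative):

```xml
<configuration>
  <!-- Use LinuxContainerExecutor instead of DefaultContainerExecutor -->
  <property>
    <name>yarn.nodemanager.container-executor.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
  </property>
  <!-- Enforce CPU shares through cgroups -->
  <property>
    <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
  </property>
  <!-- Illustrative cgroup hierarchy for YARN containers -->
  <property>
    <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
    <value>/hadoop-yarn</value>
  </property>
</configuration>
```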

And I forgot to mention: the common libraries for NM recovery of assigned 
resources were added to the same patch as YARN-6620, which still needs some 
time to finish GPU testing, etc. If you plan to work on this feature in the 
short term (say, within a month), we may need to split the common libraries 
into a separate JIRA and commit them to trunk first to unblock this one. I can 
do that in about two weeks; if you want to speed it up, please feel free to 
take it over.

> NUMA awareness support for launching containers
> -----------------------------------------------
>
>                 Key: YARN-5764
>                 URL: https://issues.apache.org/jira/browse/YARN-5764
>             Project: Hadoop YARN
>          Issue Type: New Feature
>          Components: nodemanager, yarn
>            Reporter: Olasoji
>            Assignee: Devaraj K
>         Attachments: NUMA Awareness for YARN Containers.pdf, NUMA Performance 
> Results.pdf, YARN-5764-v0.patch, YARN-5764-v1.patch, YARN-5764-v2.patch, 
> YARN-5764-v3.patch
>
>
> The purpose of this feature is to improve Hadoop performance by minimizing 
> costly remote memory accesses on non-SMP (NUMA) systems. On launch, YARN 
> containers will be pinned to a specific NUMA node, and all subsequent memory 
> allocations will be served by that node, reducing remote memory accesses. The 
> current default behavior is to spread memory across all NUMA nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
