[ 
https://issues.apache.org/jira/browse/SPARK-17984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15584968#comment-15584968
 ] 

quanfuwang commented on SPARK-17984:
------------------------------------

NUMA is a model for organizing cores and memory. Modern CPUs have many cores, and 
to avoid heavy contention on the memory bus, vendors usually divide these cores and 
memory into groups; these groups are the so-called NUMA nodes. Memory inside a node 
is faster to access than memory outside it, and local accesses do not contend with 
other nodes. (There is more information in the PR: 
https://github.com/apache/spark/pull/15524)
Most modern servers are NUMA machines.
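As an illustration (not part of the patch itself), on Linux the numactl utility can 
pin a process to one node's cores and memory; this is the kind of binding applied to 
executor processes:

    # run a JVM bound to the cores and memory of NUMA node 0
    numactl --cpunodebind=0 --membind=0 java -version

A process bound this way avoids the slower cross-node memory accesses described above.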
This JIRA plans to make the NUMA feature configurable, so users can disable it when 
the hardware does not support it, or simply when they do not want to enable it.
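For example, a user might toggle the behavior per application at submit time. The 
property name below is only a placeholder for illustration; the actual configuration 
key is defined in the PR:

    # hypothetical configuration key, for illustration only
    spark-submit --master yarn --conf spark.yarn.numa.enabled=true ...

When the flag is off, or the hardware is not NUMA, executors would be launched 
exactly as they are today.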

Thanks,
Quanfu

> Add support for numa aware feature
> ----------------------------------
>
>                 Key: SPARK-17984
>                 URL: https://issues.apache.org/jira/browse/SPARK-17984
>             Project: Spark
>          Issue Type: New Feature
>          Components: Deploy, Mesos, YARN
>    Affects Versions: 2.0.1
>         Environment: Cluster Topo: 1 Master + 4 Slaves
> CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz(72 Cores)
> Memory: 128GB(2 NUMA Nodes)
> SW Version: Hadoop-5.7.0 + Spark-2.0.0
>            Reporter: quanfuwang
>             Fix For: 2.0.1
>
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> This Jira targets adding a NUMA-aware feature, which can help improve 
> performance by making cores access local memory rather than remote memory. 
>  A patch is being developed, see https://github.com/apache/spark/pull/15524.
> The whole task includes 3 subtasks and will be developed iteratively:
> NUMA-aware support for YARN-based deployment mode
> NUMA-aware support for Mesos-based deployment mode
> NUMA-aware support for Standalone-based deployment mode


