Hi all,

I also noticed this problem. The reason is that YARN counts each executor
as using only one vcore, no matter how many cores you configured, because
by default YARN uses memory as the only metric for resource allocation.
This means YARN will pack as many executors onto each node as will fit in
the node's free memory.

If you want vcores to be taken into account during resource allocation,
you can configure the resource calculator to be DominantResourceCalculator,
as follows:

Property:    yarn.scheduler.capacity.resource-calculator
Description: The ResourceCalculator implementation to be used to compare
Resources in the scheduler. The default,
org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, only uses
memory, while DominantResourceCalculator uses dominant-resource fairness
to compare multi-dimensional resources such as memory, CPU, etc. A Java
ResourceCalculator class name is expected.
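For example, this would look roughly like the following in
capacity-scheduler.xml (assuming you are on the CapacityScheduler; you may
need to restart the ResourceManager for the change to take effect):

```xml
<!-- capacity-scheduler.xml: use dominant-resource fairness so that
     vcores are considered alongside memory when placing containers -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```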


Please also refer to this article:
https://hortonworks.com/blog/managing-cpu-resources-in-your-hadoop-yarn-clusters/


Thanks!

Wei Chen


