If you are using the capacity scheduler in YARN: by default, the YARN capacity scheduler uses DefaultResourceCalculator, which considers only memory when allocating containers.
You can use DominantResourceCalculator instead; it considers both memory and CPU.
In capacity-scheduler.xml, set
yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
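For reference, a minimal sketch of that property as an entry in capacity-scheduler.xml (the value shown assumes you want the DRF-based calculator):
{code}
<property>
  <!-- Use the Dominant Resource Fairness calculator so both memory
       and vcores are taken into account when sizing containers. -->
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
{code}
The ResourceManager typically needs a restart (or a queue refresh via yarn rmadmin -refreshQueues) before the scheduler picks up the new calculator.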
On 04/11/14 3:03 am, Gen gen.tan...@gmail.com wrote:
Hi,
Well, I couldn't find the original documentation, but according to
http://qnalist.com/questions/2791828/about-the-cpu-cores-and-cpu-usage,
vcores stand for virtual cores, not physical CPU cores.
I also used the top command to monitor CPU utilization while the Spark job was running: Spark can use all the CPUs even when --executor-cores is left at its default (1).
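As an aside, the number of vcores a NodeManager advertises is itself just a configured value; a minimal sketch of the relevant yarn-site.xml entry (the value 8 here is hypothetical) would be:
{code}
<property>
  <!-- Virtual cores this NodeManager advertises to the ResourceManager;
       this need not match the number of physical cores on the machine. -->
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
{code}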
Hope this helps.
Cheers
Gen
Gen wrote:
Hi,
Maybe it is a stupid question, but I am running Spark on YARN. I request resources with the following command:
{code}
./spark-submit --master yarn-client --num-executors <number of executors> --executor-cores <number of cores> ...
{code}
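For example, a concrete invocation (the executor count, core count, memory size, class, and jar path here are all illustrative) would look like:
{code}
./spark-submit --master yarn-client \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 2g \
  --class org.apache.spark.examples.SparkPi \
  lib/spark-examples.jar
{code}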
However, after launching the job, I use yarn node -status ID to monitor the state of the cluster. It shows that the number of vcores used by each container is always 1, no matter what number I pass via --executor-cores.
Any ideas how to solve this problem? Thanks a lot in advance for your
help.
Cheers
Gen
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org