Hi,

This may be a basic question, but I am running Spark on YARN. I request
resources with the following command:
{code}
./spark-submit --master yarn-client --num-executors <number of executors> \
  --executor-cores <number of cores per executor> ...
{code}
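For reference, a concrete invocation looks roughly like this; the executor
count, core count, class name, and jar are hypothetical values chosen just
for illustration:
{code}
# hypothetical example: ask YARN for 4 executors with 2 cores each
# (com.example.MyApp and my-app.jar are placeholders)
./spark-submit --master yarn-client \
  --num-executors 4 \
  --executor-cores 2 \
  --class com.example.MyApp \
  my-app.jar
{code}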
However, after launching the job, I use yarn node -status <node ID> to
monitor the state of the cluster. It shows that the number of VCores used
for each container is always 1, no matter what value I pass to
--executor-cores.
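The monitoring commands look roughly like this; the node ID below is a
hypothetical placeholder taken from the output of yarn node -list:
{code}
# list the node managers to get their IDs, then inspect one of them
yarn node -list
# hypothetical node ID from the list above
yarn node -status worker01.example.com:45454
{code}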
Any ideas how to solve this problem? Thanks a lot in advance for your help.

Cheers
Gen
