Hi all,

I am running Spark with the default settings in yarn-client mode. For some
reason YARN always allocates three containers to the application (where is
that number set?), and only two of them are actually used.
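
For what it's worth, I am not passing any executor-related flags; the launch
is roughly the following (the class and jar names below are just placeholders):

    spark-submit --master yarn-client \
        --class com.example.MyApp \
        my-app.jar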

Also, CPU usage on the cluster never goes over 50%. I turned off the fair
scheduler and set spark.cores.max high. Is there some additional setting I
am missing?
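
In case it helps, spark.cores.max is the only property I changed; I set it in
conf/spark-defaults.conf, roughly like this (the value is just an example of
"high"):

    # conf/spark-defaults.conf
    spark.cores.max    64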

thanks,



