I may be wrong, but how many resources a Spark application can use depends on
the deployment mode (i.e. the type of resource manager). You can take a look
at https://spark.apache.org/docs/latest/job-scheduling.html .

For your case, I think Mesos is the better fit, since it can share CPU cores
dynamically between applications.
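
In case it helps, here is a minimal sketch in Scala of what that could look
like, assuming a placeholder Mesos master URL (mesos://host:5050) and
spark.mesos.coarse set to false, so cores are requested and released per task
instead of being held for the whole application:

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: run against a Mesos master in fine-grained mode so that
    // CPU cores are shared dynamically between frameworks rather than being
    // reserved for the lifetime of the Spark application.
    val conf = new SparkConf()
      .setAppName("dynamic-core-sharing")
      .setMaster("mesos://host:5050")     // placeholder Mesos master URL
      .set("spark.mesos.coarse", "false") // fine-grained mode: share cores per task

    val sc = new SparkContext(conf)
    // ... submit jobs here; idle cores go back to Mesos between tasks ...
    sc.stop()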

Best


