I think Mesos is a better fit here, since it can dynamically share CPU cores
between jobs.
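
A minimal sketch of what that might look like, assuming a Mesos master at
mesos://host:5050 (a placeholder URL). Fine-grained mode
(spark.mesos.coarse=false) is what lets Mesos hand cores back and forth
between applications as tasks start and finish:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("shared-cluster-job")
  .setMaster("mesos://host:5050")      // placeholder master URL
  .set("spark.mesos.coarse", "false")  // fine-grained: dynamic core sharing
val sc = new SparkContext(conf)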
Best
I use Spark in a cluster shared with other applications. The number of
nodes (and cores) assigned to my job varies depending on how many unrelated
jobs are running in the same cluster.
Is there any way for me to determine at runtime how many cores have been
allocated to my job, so I can select an appropriate number of partitions?
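
One possibility, as a sketch rather than a definitive answer: on a
standalone or coarse-grained Mesos deployment, sc.defaultParallelism
reflects the total cores the application has been granted, so it can seed
the partition count. The 3x factor below is an arbitrary illustration, not
a recommendation:

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("partition-sizing"))

// Total cores currently granted to this application (standalone /
// coarse-grained Mesos); in local mode this is the machine's core count.
val cores = sc.defaultParallelism

// Hypothetical heuristic: a few partitions per core keeps all cores busy.
val numPartitions = cores * 3
val data = sc.parallelize(1 to 1000000, numPartitions)
println(s"cores=$cores, partitions=${data.partitions.length}")

Note that this reads the allocation once at startup; if cores come and go
while the job runs, the value would need to be re-derived before each stage.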