Hi all, I can't seem to find a clear answer in the documentation.
Does the standalone cluster support dynamic assignment of the number of allocated cores to an application once another app stops? I'm aware that with Mesos we can have core sharing between active applications, depending on the number of parallel tasks, but I believe my question is slightly simpler. For example:

1 - There are 12 cores available in the cluster.
2 - I start app A with 2 cores - it gets 2.
3 - I start app B - it gets the remaining 10.
4 - If I stop app A, app B *does not* get the now-available 2 cores.

Should I expect Mesos to handle this scenario?

The same question also applies when we add more cores to a cluster. Say I ideally want 12 cores for my app, but only 10 are available. As I add more workers, they should be assigned to my app dynamically. I haven't tested this in a while, but I think the app will not even start and will complain about not having enough resources...

Would very much appreciate any knowledge share on this!

Thanks,
Rod

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Dynamically-switching-Nr-of-allocated-core-tp17955.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
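For reference, in standalone mode the per-application core cap in the scenario above would be set via `spark.cores.max`. A minimal sketch of launching app A with a 2-core cap (the master URL, application class, and jar path are placeholders, not from the original question):

```
# Cap app A at 2 cores on a standalone cluster.
# spark://master-host:7077, com.example.AppA, and app-a.jar are placeholders.
spark-submit \
  --master spark://master-host:7077 \
  --conf spark.cores.max=2 \
  --class com.example.AppA \
  app-a.jar
```

If `spark.cores.max` is left unset, the app requests all cores the standalone master makes available (subject to `spark.deploy.defaultCores`), which is why app B grabs the remaining 10 in the example.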