Hello list,

We are running Apache Spark on a Mesos cluster and we are seeing some odd 
executor behavior. When we submit an application with, e.g., 10 cores per 
executor, 2GB of executor memory, and a maximum of 30 cores, we expect to see 
3 executors running on the cluster. However, sometimes there are only 2. 
Spark applications are not the only ones running on the cluster. Our guess is 
that Spark launches executors on whatever resource offers are available, even 
if they do not cover the requested resources. Is there any configuration we 
can use to prevent Spark from starting when there are not enough resource 
offers for the total number of executors?
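
For reference, here is a minimal sketch (in Scala) of the settings described 
above. The property names are the standard Spark configuration keys; whether 
spark.executor.cores is honored by the Mesos coarse-grained backend depends on 
the Spark version, so treat that part as an assumption. The app name is just a 
placeholder.

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch of the submission settings described above.
    // The Mesos master URL is supplied separately via spark-submit --master.
    val conf = new SparkConf()
      .setAppName("example-app")          // hypothetical app name
      .set("spark.executor.cores", "10")  // 10 cores per executor (assumed key for Mesos)
      .set("spark.executor.memory", "2g") // 2GB per executor
      .set("spark.cores.max", "30")       // cap total cores at 30,
                                          // so we expect 30 / 10 = 3 executors

    val sc = new SparkContext(conf)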

Thank you 
- Thodoris 

