Hello Thodoris!
Have you checked the following:
 - Does the Mesos cluster have available resources?
 - Does Spark have tasks waiting in the queue for longer than the
spark.dynamicAllocation.schedulerBacklogTimeout configuration value?
 - And have you verified that Mesos sends offers to the Spark application's
Mesos framework with at least 10 cores and 2 GB of RAM?

If Mesos has no available offers with 10 cores, but does have offers with
8 or 9, you can use smaller executors (for example, 4 cores and 1 GB of
RAM) so they fit better into the resources available on the nodes.
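As a rough sketch (the exact values are illustrative, not taken from your
setup), the smaller executors could be requested in spark-defaults.conf or
via --conf on spark-submit:

```
# Illustrative values only - tune to your cluster's offer sizes
spark.executor.cores                              4
spark.executor.memory                             1g
spark.cores.max                                   30
spark.dynamicAllocation.schedulerBacklogTimeout   1s
```

With 4-core executors, an 8- or 9-core offer can still fit two executors
instead of being rejected outright.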

Cheers,
Pavel

On Mon, Jul 9, 2018 at 9:05 PM Thodoris Zois <z...@ics.forth.gr> wrote:

> Hello list,
>
> We are running Apache Spark on a Mesos cluster and we face weird
> behavior from executors. When we submit an app with e.g. 10 cores and 2GB
> of memory and max cores 30, we expect to see 3 executors running on the
> cluster. However, sometimes there are only 2... Spark applications are not
> the only ones that run on the cluster. I guess that Spark starts executors
> on the available offers even if they do not satisfy our needs. Is there any
> configuration that we can use in order to prevent Spark from starting when
> there are no resource offers for the total number of executors?
>
> Thank you
> - Thodoris
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
