We see users run both in the dispatcher and Marathon. I generally prefer
Marathon, because there's a higher likelihood it will have some feature you
need that the dispatcher lacks (as in this case).
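If you go the Marathon route, the driver runs in client mode and the app's "mem" can be padded above the JVM heap. A sketch of what such an app definition might look like (the id, paths, master URL, and memory figures here are illustrative, not from this thread):

```json
{
  "id": "/streaming-driver",
  "cmd": "/opt/spark/bin/spark-submit --master mesos://zk://... --deploy-mode client --driver-memory 4g /opt/jobs/streaming-job.jar",
  "cpus": 1,
  "mem": 4608,
  "instances": 1
}
```

Here "mem" (4608 MB) leaves roughly 512 MB of headroom above the 4 GB heap, which is the overhead the dispatcher can't currently reserve for you.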

It doesn't look like we support memory overhead for the driver.
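For reference, YARN's spark.yarn.driver.memoryOverhead defaults to 10% of driver memory with a 384 MB floor. A minimal sketch of the same budgeting, which you'd apply by hand when sizing a Marathon app (the function names are mine, not Spark's):

```python
# Sketch of YARN-style driver memory overhead budgeting
# (Spark's YARN default: max(10% of driver memory, 384 MB)).
# Mesos cluster mode has no equivalent setting, so the total
# would need to be sized by hand, e.g. in a Marathon app's "mem".

def driver_overhead_mb(driver_memory_mb, factor=0.10, floor_mb=384):
    """Off-heap overhead to budget on top of the JVM heap (-Xmx)."""
    return max(int(driver_memory_mb * factor), floor_mb)

def total_container_mb(driver_memory_mb):
    """Memory quota to request from the scheduler: heap + overhead."""
    return driver_memory_mb + driver_overhead_mb(driver_memory_mb)

print(driver_overhead_mb(4096))   # 10% of 4 GB heap
print(driver_overhead_mb(1024))   # small heap: the 384 MB floor applies
print(total_container_mb(4096))   # total quota to request
```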

On Thu, Oct 13, 2016 at 10:42 AM, drewrobb <drewr...@gmail.com> wrote:

> When using spark on mesos and deploying a job in cluster mode using the
> dispatcher, there appears to be no memory overhead configuration for the
> launched driver processes ("--driver-memory" is the same as Xmx, which is
> the same as the memory quota). This makes it almost a guarantee that a
> long-running driver will be OOM-killed by Mesos. YARN cluster mode has an
> equivalent option -- spark.yarn.driver.memoryOverhead. Is there some way
> to configure driver memory overhead that I'm missing?
>
> Bigger picture question: is it even best practice to deploy long-running
> spark streaming jobs using the dispatcher? I could alternatively launch
> the driver by itself using Marathon, for example, where it would be
> trivial to grant the process additional memory.
>
> Thanks!
>
>
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/No-way-to-set-mesos-cluster-driver-memory-overhead-tp27897.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


-- 
Michael Gummelt
Software Engineer
Mesosphere
