That really depends on what you're doing.
I've been running Spark in production on Mesos since Spark was first open
sourced.
Earlier this year, we added Cassandra to the mix by running it through
Docker and Marathon in host network mode with volumes. Nothing fancy, since
it was for non-critical…
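For reference, a Marathon app definition along these lines (host networking plus a host volume) might look like the sketch below. The Marathon hostname, image tag, paths, and resource sizes are all illustrative assumptions, not the actual setup described above.

```shell
# Illustrative only: submit a Cassandra app to Marathon with Docker host
# networking and a host-mounted volume. Hostname, image, paths, and sizes
# are placeholders.
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "/cassandra",
    "instances": 3,
    "cpus": 2,
    "mem": 8192,
    "container": {
      "type": "DOCKER",
      "docker": {
        "image": "cassandra:2.2",
        "network": "HOST"
      },
      "volumes": [
        {
          "containerPath": "/var/lib/cassandra",
          "hostPath": "/data/cassandra",
          "mode": "RW"
        }
      ]
    }
  }'
```

Host networking avoids Docker's bridge NAT, which matters for Cassandra's gossip and client ports, and the host volume keeps the SSTables on local disk across container restarts.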
Make sure to check out Cook -- this is the exact reason we built it! I gave a
talk on it at MesosCon Europe, so that'll be available online soon :-)
On Sat, Oct 17, 2015 at 1:40 PM Bharath Ravi Kumar wrote:
To be precise, the MesosExecutorBackend's Xms & Xmx equal
spark.executor.memory. So there's no question of expanding or contracting
the memory held by the executor.
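Concretely, with a submit command like the hedged sketch below (the master URL, heap size, class, and jar are placeholders), the Mesos executor JVM starts with both -Xms and -Xmx set to spark.executor.memory, so the heap neither grows nor shrinks afterwards:

```shell
# Placeholder master URL, sizes, and job names; Spark 1.x on Mesos.
spark-submit \
  --master mesos://zk://zk1.example.com:2181/mesos \
  --conf spark.executor.memory=8g \
  --class com.example.MyJob \
  my-job.jar
# The MesosExecutorBackend JVM is launched with -Xms8g -Xmx8g,
# i.e. a fixed heap equal to spark.executor.memory.
```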
On Sat, Oct 17, 2015 at 5:38 PM, Bharath Ravi Kumar wrote:
David, Tom,
Thanks for the explanation. This confirms my suspicion that the executor
was holding on to memory regardless of whether tasks were executing, once it
had expanded to occupy memory in keeping with spark.executor.memory. There
certainly is scope for improvement here, though I realize there will be
substantial…
Hi Bharath,
When running jobs in fine-grained mode, each Spark task is sent to Mesos as a
task, which allows the offer system to maintain fairness between different
Spark applications (as you've described). Having said that, unless your memory
per node is hugely undersubscribed when running these…
Spark doesn't automatically cooperate with other frameworks on the cluster.
Have a look at Cook (github.com/twosigma/cook), a Spark scheduler for
Mesos that is able to react to changing cluster conditions and will scale
down low-priority jobs as more high-priority ones appear.