Hey,

Not sure whether it's best to ask this on the Spark mailing list or the
Mesos one, so I'll try here first :-)

I'm having a bit of trouble with out-of-memory errors in my Spark jobs.
It seems fairly odd to me that memory resources can only be set at the
executor level, and not also at the task level; as far as I can tell, the
only relevant setting is the *spark.executor.memory* config option.
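For context, my setup looks roughly like the sketch below (the app name,
master and the 4g figure are just placeholders for illustration, not my
actual values):

import org.apache.spark.{SparkConf, SparkContext}

// Rough sketch: memory is fixed once per executor JVM and shared by
// every task that runs inside it. The 4g value and the local master
// are illustrative placeholders; in practice I point at a mesos:// master.
val conf = new SparkConf()
  .setAppName("memory-sizing-sketch")
  .setMaster("local[4]")
  .set("spark.executor.memory", "4g")

val sc = new SparkContext(conf)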

Surely the memory requirements of a single executor are heavily
influenced by the number of tasks running concurrently inside it? On a
shared cluster I have no idea what fraction of an individual slave my
executor is going to get, so I basically have to size the executor
memory for the worst case, where my executor ends up with the whole
machine to itself...
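To make the concern concrete, this is the back-of-envelope arithmetic I
keep doing (the numbers are made up, just to show the scaling):

// The per-task share of a fixed executor heap shrinks as concurrency
// grows, and concurrency is exactly what I can't predict on a shared
// Mesos cluster. Values below are illustrative only.
val executorMemoryMb = 4096
val concurrentTasks  = Seq(1, 4, 8, 16)

concurrentTasks.foreach { n =>
  println(f"$n%2d concurrent tasks -> ~${executorMemoryMb / n} MB per task")
}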

Has anyone else running Spark on Mesos come across this, or maybe someone
could correct my understanding of the config options?

Thanks!

Tom.
