Hello,
when I set spark.executor.cores to e.g. 8 cores and spark.executor.memory
to 8GB, Spark can allocate several executors with fewer cores each for my
app, but each executor still gets the full 8GB of RAM.
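
For reference, the configuration I mean looks roughly like this (Scala
sketch, master URL and values only illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("my-app")
      .setMaster("spark://master:7077")       // standalone mode, hypothetical URL
      .set("spark.cores.max", "8")            // total cores for the app
      .set("spark.executor.cores", "8")       // cores per executor
      .set("spark.executor.memory", "8g")     // memory per executor, not per app
    val sc = new SparkContext(conf)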

This is a problem because the app can allocate more memory across the
cluster than expected; the worst case is 8x 1-core executors, each with
8GB => 64GB RAM, instead of the roughly 8GB the app actually needs.

If I set spark.executor.memory to some lower amount, I can end up with
fewer executors, possibly even a single one (if the other nodes are full),
which wouldn't have enough memory. I don't see how to configure executor
memory in a predictable way.

The only predictable way we have found is to set spark.executor.cores to 1
and divide the memory the app needs by spark.cores.max. But having many
JVMs for small executors doesn't look optimal to me.
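
Roughly what we do now (sketch only, the totals are just an example):

    import org.apache.spark.SparkConf

    val totalCores  = 8    // becomes spark.cores.max for the app
    val appMemoryGb = 8    // total memory the app needs across the cluster

    val conf = new SparkConf()
      .set("spark.cores.max", totalCores.toString)
      .set("spark.executor.cores", "1")                            // 1-core executors
      .set("spark.executor.memory", s"${appMemoryGb / totalCores}g") // here 1g each

This keeps the cluster-wide memory bounded, but at the cost of one JVM
per core.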

Is this a known issue, or am I missing something?

Many thanks,
Petr
