Hello,

In a 2-worker cluster where one worker has 6 cores / 30 GB RAM and the other has 24 cores / 60 GB RAM,

how can I tell my executors to use all 90 GB of available memory?

In the configuration you can set, e.g., "spark.cores.max" to 30 (24 + 6),

but you cannot set "spark.executor.memory" to 90g (30 + 60).
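
To make the setup concrete, here is a minimal sketch of the configuration I am describing (standalone mode assumed; the app name and master URL are just placeholders):

import org.apache.spark.{SparkConf, SparkContext}

object MemoryQuestion {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("memory-question")       // placeholder app name
      .setMaster("spark://master:7077")    // placeholder master URL
      .set("spark.cores.max", "30")        // cluster-wide total: 24 + 6 cores
      .set("spark.executor.memory", "30g") // per executor JVM, so it must fit
                                           // on a single worker; the cluster-wide
                                           // sum of 90g (30 + 60) cannot be used here
    val sc = new SparkContext(conf)
    // ... job code ...
    sc.stop()
  }
}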

Kind regards,
George
