Hi Zsolt,
spark.executor.memory, spark.executor.cores, and spark.executor.instances
are only honored when launching through spark-submit. Marcelo is working
on a Spark launcher (SPARK-4924) that will enable using these
programmatically.
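For illustration, a rough sketch of what programmatic submission could look like with the launcher from SPARK-4924 (this assumes the org.apache.spark.launcher.SparkLauncher API; the Spark home, jar path, main class, and resource values below are placeholders):

import org.apache.spark.launcher.SparkLauncher;

public class LauncherExample {
  public static void main(String[] args) throws Exception {
    // The launcher hands these settings to spark-submit, so they are
    // honored in yarn-cluster mode, unlike values set only on SparkConf
    // in the client JVM.
    Process spark = new SparkLauncher()
        .setSparkHome("/opt/spark")                // placeholder path
        .setAppResource("/path/to/my-app.jar")     // placeholder jar
        .setMainClass("com.example.MyJob")         // placeholder class
        .setMaster("yarn-cluster")
        .setConf(SparkLauncher.EXECUTOR_MEMORY, "2g")
        .setConf(SparkLauncher.EXECUTOR_CORES, "2")
        .setConf("spark.executor.instances", "4")
        .launch();
    spark.waitFor();
  }
}

launch() starts spark-submit as a child process, so the resource settings take the same path as command-line arguments would.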
That's correct, the error comes up when the requested executor memory is higher than yarn.scheduler.maximum-allocation-mb.
One more question: Is there a reason why Spark throws an error when
requesting too much memory instead of capping it to the maximum value (as
YARN would do by default)?
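For reference, a rough sketch of the fail-fast check that seems to produce the error (this is a paraphrase, not Spark's actual code; the class name, method name, and message are illustrative):

class MemoryCheckSketch {
  // The requested executor memory (plus overhead) is compared to the
  // cluster's maximum container size up front, instead of being silently
  // capped by YARN at allocation time.
  static void verifyExecutorMemory(int executorMemoryMb, int overheadMb,
                                   int yarnMaxAllocationMb) {
    int requested = executorMemoryMb + overheadMb;
    if (requested > yarnMaxAllocationMb) {
      throw new IllegalArgumentException("Requested executor memory ("
          + requested + " MB) is above yarn.scheduler.maximum-allocation-mb ("
          + yarnMaxAllocationMb + " MB)");
    }
  }
}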
Thanks!
2015-02-10 17:32 GMT+01:00 Zsolt Tóth :
> Hi,
>
> I'm using Spark in yarn-cluster mode and submit the jobs programmatically
Hi,
I'm using Spark in yarn-cluster mode and submit the jobs programmatically
from the client in Java. I ran into a few issues when I tried to set the
resource allocation properties.
1. It looks like setting spark.executor.memory, spark.executor.cores and
spark.executor.instances has no effect because ...
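For context, a minimal sketch of the kind of client-side setup meant in point 1 (the app name and resource values are placeholders):

import org.apache.spark.SparkConf;

public class ProgrammaticResourceSettings {
  public static void main(String[] args) {
    // Resource settings applied on SparkConf in the client JVM. Per the
    // reply earlier in the thread, these are not honored in yarn-cluster
    // mode unless the job goes through spark-submit.
    SparkConf conf = new SparkConf()
        .setAppName("resource-allocation-test")    // placeholder name
        .set("spark.executor.memory", "2g")
        .set("spark.executor.cores", "2")
        .set("spark.executor.instances", "4");
    // ... the actual job submission from Java would follow here ...
  }
}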