If you are running Spark in local mode, executor parameters are not used,
as there is no separate executor process. Set the corresponding driver
parameters instead to get the same effect.
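
For example (a minimal sketch, assuming the job is launched with
spark-submit; the application jar name here is a placeholder):

    # In local mode the driver and "executors" share a single JVM, so
    # size the driver heap instead of the executor heap:
    spark-submit \
      --master local[*] \
      --driver-memory 24g \
      your-app.jar

The same can be achieved by setting 'spark.driver.memory=24g' in
/etc/spark/conf/spark-defaults.conf.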

On Mon, Jan 19, 2015, 00:21 Sean Owen <so...@cloudera.com> wrote:

> OK. Are you sure the executor has the memory you think, i.e. is
> "-Xmx24g" in its command line? It may be that for some reason your job
> is reserving an exceptionally large amount of non-heap memory, though I
> am not sure that's to be expected with the ALS job. Even if the
> settings work, consider using explicit command-line configuration.
>
> On Sat, Jan 17, 2015 at 12:49 PM, Antony Mayi <antonym...@yahoo.com> wrote:
> > The values are for sure applied as expected - confirmed using the
> > Spark UI environment page...
> >
> > It comes from my defaults, configured with
> > 'spark.yarn.executor.memoryOverhead=8192' (yes, now increased even
> > more) in /etc/spark/conf/spark-defaults.conf and
> > 'export SPARK_EXECUTOR_MEMORY=24G' in /etc/spark/conf/spark-env.sh
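For the explicit command-line configuration suggested above, a rough
sketch (assuming a YARN client-mode deployment and the values quoted in
this thread; the application jar name is a placeholder):

    # Pass the memory settings explicitly rather than relying on
    # spark-env.sh / spark-defaults.conf:
    spark-submit \
      --master yarn-client \
      --executor-memory 24g \
      --conf spark.yarn.executor.memoryOverhead=8192 \
      your-app.jar

    # To verify the executor JVM really got -Xmx24g, inspect its
    # command line on a worker node:
    ps aux | grep CoarseGrainedExecutorBackend

Note that spark.yarn.executor.memoryOverhead is given in megabytes, so
8192 corresponds to 8 GB of non-heap headroom per executor.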
