Hi Antony,

If you look in the YARN NodeManager logs, do you see that it's killing the
executors?  Or are they crashing for a different reason?

-Sandy

On Tue, Jan 27, 2015 at 12:43 PM, Antony Mayi <antonym...@yahoo.com.invalid>
wrote:

> Hi,
>
> I am using spark.yarn.executor.memoryOverhead=8192, yet my executors are
> still crashing with this error.
>
> does that mean I genuinely don't have enough RAM, or is this a matter of
> config tuning?
>
> other config options used:
> spark.storage.memoryFraction=0.3
> SPARK_EXECUTOR_MEMORY=14G
>
> running Spark 1.2.0 as yarn-client on a cluster of 10 nodes (the workload
> is ALS trainImplicit on a ~15GB dataset)
>
> thanks for any ideas,
> Antony.
>
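
For reference, with the settings quoted above each executor container asks
YARN for roughly 14 GB of heap plus 8 GB of memoryOverhead, i.e. about 22 GB
per executor. Below is a minimal Scala sketch of how that configuration and
an implicit ALS job might be wired together in yarn-client mode; the input
path, rating format, rank, iteration count, lambda, and alpha are
placeholders, not values taken from the original message:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.recommendation.{ALS, Rating}

    val conf = new SparkConf()
      .setAppName("als-implicit")
      // executor heap; equivalent to SPARK_EXECUTOR_MEMORY=14G
      .set("spark.executor.memory", "14g")
      // extra off-heap headroom YARN adds on top of the heap, in MB
      .set("spark.yarn.executor.memoryOverhead", "8192")
      // fraction of the heap reserved for cached RDD blocks
      .set("spark.storage.memoryFraction", "0.3")

    val sc = new SparkContext(conf)

    // Hypothetical input: "user,item,count" lines (the real dataset is ~15GB)
    val ratings = sc.textFile("hdfs:///data/implicit-ratings").map { line =>
      val Array(user, item, count) = line.split(',')
      Rating(user.toInt, item.toInt, count.toDouble)
    }

    // trainImplicit(ratings, rank, iterations, lambda, alpha)
    val model = ALS.trainImplicit(ratings, 50, 10, 0.01, 40.0)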
