Can you attach the logs where this is failing?

From:  Sven Krasser <kras...@gmail.com>
Date:  Tuesday, January 27, 2015 at 4:50 PM
To:  Guru Medasani <gdm...@outlook.com>
Cc:  Sandy Ryza <sandy.r...@cloudera.com>, Antony Mayi 
<antonym...@yahoo.com>, "user@spark.apache.org" <user@spark.apache.org>
Subject:  Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

Since it's the executor itself running out of memory, it doesn't look like a container being killed by YARN to me. As a starting point, can you repartition your job into smaller tasks?
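
For illustration, something along these lines, assuming the input is an RDD[Rating] fed to MLlib's ALS (the partition count, rank, and iterations below are just placeholders to tune for your data and cluster):

import org.apache.spark.mllib.recommendation.{ALS, MatrixFactorizationModel, Rating}
import org.apache.spark.rdd.RDD

// Sketch only: more partitions means smaller tasks, so each task holds
// less data in memory at once during the ALS iterations.
def trainWithSmallerTasks(ratings: RDD[Rating]): MatrixFactorizationModel = {
  val repartitioned = ratings.repartition(400) // example count, adjust as needed
  ALS.trainImplicit(repartitioned, 10, 10)     // rank and iterations are placeholders
}
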
-Sven

On Tue, Jan 27, 2015 at 2:34 PM, Guru Medasani <gdm...@outlook.com> wrote:
Hi Antony,

What is the total amount of memory, in MB, that can be allocated to containers on your NodeManagers?

yarn.nodemanager.resource.memory-mb

Can you check the above configuration in the yarn-site.xml used by the NodeManager process?
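
For reference, the property looks like this in yarn-site.xml (the value below is only an example; it should reflect how much RAM you can actually give to containers on each node):

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value>
</property>

With SPARK_EXECUTOR_MEMORY=14G plus spark.yarn.executor.memoryOverhead=8192, each executor container asks YARN for roughly 14 GB + 8 GB = 22 GB, so this limit needs to be at least that much.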

-Guru Medasani

From:  Sandy Ryza <sandy.r...@cloudera.com>
Date:  Tuesday, January 27, 2015 at 3:33 PM
To:  Antony Mayi <antonym...@yahoo.com>
Cc:  "user@spark.apache.org" <user@spark.apache.org>
Subject:  Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

Hi Antony,

If you look in the YARN NodeManager logs, do you see that it's killing the 
executors?  Or are they crashing for a different reason?

-Sandy

On Tue, Jan 27, 2015 at 12:43 PM, Antony Mayi 
<antonym...@yahoo.com.invalid> wrote:
Hi,

I am using spark.yarn.executor.memoryOverhead=8192, yet executors keep crashing with this error.

Does that mean I genuinely don't have enough RAM, or is this a matter of config tuning?

Other config options used:
spark.storage.memoryFraction=0.3
SPARK_EXECUTOR_MEMORY=14G

I'm running Spark 1.2.0 in yarn-client mode on a cluster of 10 nodes (the workload is ALS trainImplicit on a ~15GB dataset).

thanks for any ideas,
Antony.




-- 
http://sites.google.com/site/krasser/?utm_source=sig
