From: nancy henry <nancyhenry6...@gmail.com>
Date: Tuesday, February 14, 2017 at 1:04 AM
To: Conversant <jthak...@conversantmedia.com>
Cc: Jon Gregg <coble...@gmail.com>, "user @spark" <user@spark.apache.org>
Subject: Re: Lost executor 4 Container killed by YARN for exceeding memory
limits.
>
> You may want to consult with your DevOps/Operations/Spark Admin team first.
>
> *From: *Jon Gregg <coble...@gmail.com>
> *Date: *Monday, February 13, 2017 at 8:58 AM
> *To: *nancy henry <nancyhenry6...@gmail.com>
> *Cc: *"user @spark" <user@spark.apache.org>

From: Jon Gregg <coble...@gmail.com>
Date: Monday, February 13, 2017 at 8:58 AM
To: nancy henry <nancyhenry6...@gmail.com>
Cc: "user @spark" <user@spark.apache.org>
Subject: Re: Lost executor 4 Container killed by YARN for exceeding memory
limits.
Setting Spark's memoryOverhead configuration variable is recommended in
your logs, and has helped me with these issues in the past. Search for
"memoryOverhead" here:
http://spark.apache.org/docs/latest/running-on-yarn.html
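For example, a minimal sketch of one way to set it when building the
context (the 2048 MB figure and the app name are illustrative, not values
from this thread):

  import org.apache.spark.{SparkConf, SparkContext}

  // spark.yarn.executor.memoryOverhead is the off-heap headroom YARN
  // grants on top of the executor heap; the unit is MB. 2048 here is
  // purely illustrative.
  val conf = new SparkConf()
    .setAppName("orc-join") // hypothetical app name
    .set("spark.yarn.executor.memoryOverhead", "2048")
  val sc = new SparkContext(conf)

The same property can also be passed at submit time via spark-submit's
--conf flag instead of being hardcoded.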
That said, you're running on a huge cluster as it is. If it's possible ...

From: nancy henry <nancyhenry6...@gmail.com>
Subject: Lost executor 4 Container killed by YARN for exceeding memory
limits.

Hi All,

I am getting the error below while trying to join 3 tables, which are in
ORC format in Hive, out of 5 tables of roughly 10 GB each, through
HiveContext in Spark:
Container killed by YARN for exceeding memory limits. 11.1 GB of 11 GB
physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
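
For reference, the job is along these lines (a hypothetical sketch; table,
column, and app names are invented, and only the HiveContext-over-ORC shape
matches the description above):

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.hive.HiveContext

  val sc = new SparkContext(new SparkConf().setAppName("three-way-orc-join"))
  val hc = new HiveContext(sc)

  // Three-way join over ORC-backed Hive tables; all names are invented.
  val joined = hc.sql("""
    SELECT a.key, b.col1, c.col2
    FROM db.table_a a
    JOIN db.table_b b ON a.key = b.key
    JOIN db.table_c c ON a.key = c.key
  """)
  joined.write.saveAsTable("db.joined_output") // hypothetical output table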