Thank you for your reply.
Sorry for the slow reply.
I'll try tuning 'spark.yarn.executor.memoryOverhead'.
Thanks,
Yuichiro Sakamoto
On 2015/03/25 0:56, Sandy Ryza wrote:
Hi Yuichiro,
The way to avoid this is to boost spark.yarn.executor.memoryOverhead until
the executors have enough off-heap memory to avoid going over their limits.
-Sandy
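[For reference, a minimal sketch of how the overhead can be raised at submit time; the 2048 MB value and the jar name are illustrative, not from this thread:]

```shell
# Sketch: raise the per-executor off-heap headroom (value in MB is illustrative).
# YARN kills containers that exceed executor memory plus overhead, so boosting
# spark.yarn.executor.memoryOverhead gives off-heap allocations more room.
spark-submit \
  --master yarn \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  your-application.jar
```

The same setting can also be placed in spark-defaults.conf instead of being passed per job.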
On Tue, Mar 24, 2015 at 11:49 AM, Yuichiro Sakamoto ks...@muc.biglobe.ne.jp
wrote:
Hello.
We use ALS(Collaborative