Hi all,

We recently upgraded our jobs from Spark 2.3 to 2.4.4 and noticed that
some jobs are failing due to lack of resources — in particular, executors
failing with out-of-memory errors. No code changes were made other than
the upgrade. Does Spark 2.4.4 require more executor memory than previous
versions of Spark? I'd be interested to know if anyone else has seen this
issue. We are on Scala 2.11.12 and Java 8.
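In case it helps, these are the memory-related settings we are comparing between the two versions. The class name, jar, and values below are placeholders, not our actual configuration:

```shell
# Placeholder spark-submit invocation showing the memory knobs in question.
# Note: spark.yarn.executor.memoryOverhead was renamed to
# spark.executor.memoryOverhead as of Spark 2.3.
spark-submit \
  --master yarn \
  --conf spark.executor.memory=8g \
  --conf spark.executor.memoryOverhead=2g \
  --conf spark.executor.cores=4 \
  --class com.example.MyJob \
  my-job.jar
```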
-- 
Cheers,
Ruijing Li
