Hi,
Please try changing the worker memory so that worker memory > executor memory.
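A minimal sketch of what that looks like in standalone mode (the 14g/12g values are illustrative, not from the thread — pick sizes that fit your 16GB machines):

```shell
# conf/spark-env.sh on each worker node:
# total memory the worker may hand out to executors;
# leave headroom below the machine's 16GB for the OS and daemons
SPARK_WORKER_MEMORY=14g

# at submit time, keep per-executor memory below the worker's limit:
spark-submit --master spark://<master>:7077 \
  --executor-memory 12g \
  your_app.py
```

If executor memory is set at or above the worker's limit, the worker cannot launch (or keep) executors of that size, which can show up as the executor loss you are seeing.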
Thanks & Regards,
Meethu M
On Friday, 22 August 2014 5:18 PM, Yadid Ayzenberg ya...@media.mit.edu wrote:
Hi all,
I have a spark cluster of 30 machines, 16GB / 8 cores on each running in
standalone mode. Previously my application was working well ( several
RDDs the largest being around 50G).
When I started processing larger amounts of data (RDDs of 100G) my app
is losing executors. I'm currently