Hi everyone,

I am using Zeppelin on AWS EMR (Zeppelin 0.6.1, Spark 2.0 on YARN).
The Spark interpreter's YARN application does not finish after a notebook
has executed; it keeps running and continues to hold a large amount of
memory in my YARN cluster.
Is there a way to restart the Spark interpreter automatically (or
programmatically) every time I run a notebook, so that the memory is
released back to the YARN cluster?
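
For reference, one approach I was considering is scripting Zeppelin's
interpreter-restart REST API after each run. Below is a minimal sketch in
Python; the host/port and the setting ID are placeholders I'd need to fill
in, and I'm assuming the PUT /api/interpreter/setting/restart/{settingId}
endpoint described in the Zeppelin docs:

import urllib.request

ZEPPELIN_URL = "http://localhost:8080"  # assumed Zeppelin host/port
SETTING_ID = "2BUDQXH2R"                # hypothetical Spark interpreter setting ID

# Restarting the interpreter setting should stop its YARN application
# and release the memory it was holding.
req = urllib.request.Request(
    "%s/api/interpreter/setting/restart/%s" % (ZEPPELIN_URL, SETTING_ID),
    method="PUT",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())

Would calling something like this at the end of each notebook run be a
reasonable workaround, or is there a built-in way to do it?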

Regards,
Soonoh
