It looks like one of our big problems is that Zeppelin doesn’t always kill all
completed processes. Is there an accepted way to kill a Spark instance? Most of
the time, executing sys.exit in a paragraph will kill the Spark instance in YARN
and, I believe, also kill the corresponding Zeppelin interpreter process.
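
For reference, this is the pattern I mean, a minimal sketch assuming the sc
SparkContext that Zeppelin injects into %pyspark paragraphs:

    %pyspark
    import sys

    # Stop the SparkContext first so the YARN application is released cleanly,
    # then exit the interpreter's Python process.
    sc.stop()
    sys.exit(0)

Calling sc.stop() before sys.exit is just my guess at a cleaner shutdown, not
something I’ve confirmed is the accepted way.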
We are using Zeppelin with multiple users on the same server, and it isn’t
uncommon to run out of memory on the dedicated VM Zeppelin runs on when just
3-4 users are active. Is this normal behavior? Does each user’s Spark
interpreter consume memory on that VM?
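
If I understand correctly, in yarn-client mode each user’s interpreter runs its
Spark driver on the Zeppelin host itself, so driver memory adds up per user. A
quick way to check what a paragraph’s driver was launched with (again assuming
the sc Zeppelin provides):

    %pyspark
    # Show where the driver runs and how much memory it was given;
    # in yarn-client mode that memory lives on the Zeppelin VM.
    print(sc.getConf().get("spark.master"))
    print(sc.getConf().get("spark.driver.memory", "1g (default)"))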