Hitesh,
Each Toree kernel is its own JVM process. As you state... it is not easy
to have multiple SparkContexts in one JVM. It has been done before by the
Ooyala Job Server, but there was quite a bit of hackery involved to get that
to work.
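To make that concrete, here is a rough sketch of what happens if you try to
stand up a second context in the same JVM (this is Spark 1.x/2.x behavior,
using the standard Spark API):

  import org.apache.spark.{SparkConf, SparkContext}

  // The first context comes up fine.
  val sc1 = new SparkContext(
    new SparkConf().setAppName("first").setMaster("local[*]"))

  // The second one throws SparkException:
  //   "Only one SparkContext may be running in this JVM (see SPARK-2243)"
  // unless spark.driver.allowMultipleContexts=true is set, which is
  // unsupported and exactly the kind of hackery mentioned above.
  val sc2 = new SparkContext(
    new SparkConf().setAppName("second").setMaster("local[*]"))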
So in summary:
Each kernel is a standalone process. Each kernel holds …
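(A quick way to see this process-level separation, assuming the standard `sc`
binding that Toree provides in each notebook, is to compare application IDs
across two notebooks:)

  // In notebook A's kernel (its own JVM, its own Spark application):
  sc.applicationId  // e.g. "app-20160317...-0007"

  // In notebook B's kernel (a different JVM and application):
  sc.applicationId  // e.g. "app-20160317...-0008"
  // Different application IDs: executors, caches, and RDDs are not
  // shared between the two notebooks.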
Thanks Corey.
I understand that quite a few folks are looking to share the Spark context in
order to share RDDs across different notebooks. I was looking at this more
from an isolation/security point of view, to ensure that each user has their
own Spark context, especially in the case where Spark is being …
Hello Hitesh,
In regards to one Spark Context per kernel:
In general, the rule of thumb is: One notebook/application -> One Kernel ->
One Spark Context
There is no isolation of Spark contexts within the kernel. You are, as you
said, only able to create one context per kernel. We do have APIs and com…
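As a small illustration of what "no isolation within the kernel" means in
practice (this assumes Toree's usual pre-bound `sc` variable):

  // Cell 1: every cell in this notebook runs in the same JVM and
  // sees the same SparkContext.
  val cached = sc.parallelize(1 to 100).cache()

  // Cell 2: state from Cell 1, including the cached RDD, is directly
  // visible here; all cells share the kernel process.
  cached.count()  // 100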
Hello,
As far as I can find from various searches of the old docs on the spark-kernel
wiki and issues on JIRA, at least for Jupyter, it seems like each notebook has
its own spark-kernel, which in turn wraps its own Spark context. I am curious
about how this isolation works, considering that folks have …