Somewhat related - What's the correct implementation when you have a single
cluster supporting multiple jobs that are unrelated and NOT sharing data? I
was directed to look into supporting "multiple contexts" via the job server,
and was told that multiple contexts per JVM is not really supported. So, via
the job server, how does one support multiple contexts in DIFFERENT JVMs?
When I specify multiple contexts in the conf file, initialization of the
subsequent contexts fails.



On Fri, Dec 4, 2015 at 3:37 PM, Michael Armbrust <mich...@databricks.com>
wrote:

> On Fri, Dec 4, 2015 at 11:24 AM, Anfernee Xu <anfernee...@gmail.com>
> wrote:
>
>> If multiple users are looking at the same data set, then it's a good
>> choice to share the SparkContext.
>>
>> But my use cases are different: users are looking at different data (I
>> use a custom Hadoop InputFormat to load data from my data source based
>> on the user input), and the data might not have any overlap. For now
>> I'm taking the approach below
>>
>
> Still, if you want fine-grained sharing of compute resources as well, you
> want to use a single SparkContext.
>
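
For completeness, this is my understanding of the shared-context approach
being described, as a rough sketch only (TextInputFormat stands in for the
custom InputFormat mentioned above, and the pool names and paths are made
up; in practice each user request would run on its own thread):

    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
    import org.apache.spark.{SparkConf, SparkContext}

    object SharedContext {
      def main(args: Array[String]): Unit = {
        // One long-lived context shared by all users' jobs.
        val conf = new SparkConf()
          .setAppName("shared-context")
          .set("spark.scheduler.mode", "FAIR") // fine-grained executor sharing
        val sc = new SparkContext(conf)

        // Each request loads its own data through an InputFormat and runs
        // in its own fair-scheduler pool.
        def runForUser(pool: String, path: String): Long = {
          sc.setLocalProperty("spark.scheduler.pool", pool)
          val rdd = sc.newAPIHadoopFile(
            path, classOf[TextInputFormat], classOf[LongWritable], classOf[Text])
          rdd.count()
        }

        runForUser("user-a", "hdfs:///data/a")
        runForUser("user-b", "hdfs:///data/b")
        sc.stop()
      }
    }

That covers the case where one long-lived context is acceptable, but not
the case I'm asking about, where the unrelated jobs need their contexts in
separate JVMs.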
