Hi, Mark, sorry for the confusion.

Let me clarify: when an application is submitted, the master tells each 
Spark worker to spawn an executor JVM process. All the task sets of the 
application are executed by those executors, and once the application runs to 
completion, the executor processes are killed.
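
To make that lifecycle concrete, here is a minimal standalone-mode sketch (the 
object name and master URL are just placeholders, not anything from our setup): 
the executors launched for this application live only as long as its SparkContext.

import org.apache.spark.{SparkConf, SparkContext}

// One application: the executors the master asks each worker to launch
// exist only for the lifetime of this SparkContext.
object SingleAppLifecycle {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("single-app-lifecycle")      // one application == one set of executors
      .setMaster("spark://master-host:7077")   // placeholder master URL
    val sc = new SparkContext(conf)

    // All task sets of this application run on those executors.
    val sum = sc.parallelize(1 to 100).map(_ * 2).reduce(_ + _)
    println(s"sum = $sum")

    // When the application finishes (sc.stop() or JVM exit),
    // the executor JVMs are torn down.
    sc.stop()
  }
}
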
But I hope that all submitted applications can run in the same executor. Can 
JobServer do that? If so, that's really good news!

Best Regards,
Jia

On Jan 17, 2016, at 3:09 PM, Mark Hamstra <m...@clearstorydata.com> wrote:

> You've still got me confused.  The SparkContext exists at the Driver, not on 
> an Executor.
> 
> Many Jobs can be run by a SparkContext -- it is a common pattern to use 
> something like the Spark Jobserver where all Jobs are run through a shared 
> SparkContext.
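> 
> As a rough illustration (the object name and local master URL below are 
> placeholders, not Jobserver's actual API), several jobs submitted through one 
> SparkContext reuse the same executors:
> 
> import org.apache.spark.{SparkConf, SparkContext}
> 
> // One long-lived SparkContext serving several independent jobs, in the
> // spirit of a shared-context job server (a sketch, not Jobserver code).
> object SharedContextSketch {
>   def main(args: Array[String]): Unit = {
>     val sc = new SparkContext(
>       new SparkConf().setAppName("shared-context-sketch").setMaster("local[*]"))
> 
>     // Each action below is a separate Spark job, but all of them reuse the
>     // same executors because they run through the same SparkContext.
>     val data = sc.parallelize(1 to 1000).cache()
>     val job1 = data.map(_ * 2).count()
>     val job2 = data.filter(_ % 3 == 0).count()
>     println(s"job1 = $job1, job2 = $job2")
> 
>     sc.stop()  // executors are released only when the shared context stops
>   }
> }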
> 
> On Sun, Jan 17, 2016 at 12:57 PM, Jia Zou <jacqueline...@gmail.com> wrote:
> Hi, Mark, sorry, I mean SparkContext.
> I mean changing Spark so that all submitted jobs (SparkContexts) run in one 
> executor JVM.
> 
> Best Regards,
> Jia
> 
> On Sun, Jan 17, 2016 at 2:21 PM, Mark Hamstra <m...@clearstorydata.com> wrote:
> -dev
> 
> What do you mean by JobContext?  That is a Hadoop mapreduce concept, not 
> Spark.
> 
> On Sun, Jan 17, 2016 at 7:29 AM, Jia Zou <jacqueline...@gmail.com> wrote:
> Dear all,
> 
> Is there a way to reuse executor JVM across different JobContexts? Thanks.
> 
> Best Regards,
> Jia
> 
> 
> 
