Looking at the code of JobScheduler, I found the following parameter:

 

  private val numConcurrentJobs = ssc.conf.getInt("spark.streaming.concurrentJobs", 1)

  private val jobExecutor = Executors.newFixedThreadPool(numConcurrentJobs)

 

Does that mean each app can have only one active stage at a time?
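If I read that right, the default could be raised explicitly. Below is a minimal sketch; the app name and the 10-second batch interval are just placeholders I made up, not values from my real job:

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  // Sketch only: raise the property read by JobScheduler above.
  val conf = new SparkConf()
    .setAppName("concurrent-jobs-test")            // placeholder app name
    .set("spark.streaming.concurrentJobs", "2")    // allow 2 concurrent streaming jobs
  val ssc = new StreamingContext(conf, Seconds(10)) // assumed batch interval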

 

In my pseudo-code below:

 

         viewDStream.foreachRDD(rdd => rdd.collect())   // S1
         viewDStream.foreachRDD(rdd => rdd.collect())   // S2
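Spelled out as a self-contained sketch (the socket source, port, and batch interval are only assumptions to make the fragment complete; viewDStream stands in for my real DStream):

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  val conf = new SparkConf().setAppName("two-output-ops")  // placeholder name
  val ssc  = new StreamingContext(conf, Seconds(10))       // assumed batch interval

  // Assumed input source; my real viewDStream comes from elsewhere.
  val viewDStream = ssc.socketTextStream("localhost", 9999)

  // Two output operations, so two jobs should be generated per batch.
  viewDStream.foreachRDD(rdd => rdd.collect())   // S1
  viewDStream.foreachRDD(rdd => rdd.collect())   // S2

  ssc.start()
  ssc.awaitTermination()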

 

There should be two “collect()” jobs for each batch interval, right? Are they 
running in parallel?

 

Thank you!
