If you are running in Local mode, then you can submit many jobs. As long as
your hardware has the resources to run multiple jobs, there won't be any
dependency between them. In other words, each app (spark-submit) will run in
its own JVM, unaware of the others. Local mode is good for testing.
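
For illustration, here is a minimal sketch of a self-contained app pinned to
local mode (the object name and the trivial job are just placeholders). Each
instance launched with spark-submit gets its own JVM and its own
SparkContext, so two such submissions run independently of each other:

  import org.apache.spark.{SparkConf, SparkContext}

  object LocalModeApp {
    def main(args: Array[String]): Unit = {
      // local[*] runs the whole app in a single JVM, using all available cores
      val conf = new SparkConf().setAppName("LocalModeApp").setMaster("local[*]")
      val sc = new SparkContext(conf)
      // a trivial job, just to have something to run
      println(sc.parallelize(1 to 100).sum())
      sc.stop()
    }
  }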

HTH



Dr Mich Talebzadeh

LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 8 July 2016 at 14:09, Jacek Laskowski <ja...@japila.pl> wrote:

> On 8 Jul 2016 2:03 p.m., "Mazen" <mazen.ezzedd...@gmail.com> wrote:
> >
> > Does Spark handle simultaneous execution of jobs within an application
>
> Yes. Run as many Spark jobs as you want and Spark will queue them given
> the CPU and RAM available to you in the cluster.
>
> > job execution is blocking, i.e. a new job cannot be initiated until the
> > previous one commits.
>
> That's how people usually code their apps. The more advanced approach is
> to use the SparkContext from multiple threads and execute actions (each of
> which will submit a job).
>
> > What does it mean that "Spark’s scheduler is fully thread-safe"?
>
> You can use a single SparkContext from multiple threads.
>
> Jacek
>
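
To illustrate the multi-threaded approach Jacek describes, here is a minimal
sketch, assuming the Scala API and scala.concurrent Futures (the object name
and the two sample jobs are just placeholders). Each action submits its own
job, and because the scheduler is thread-safe, both jobs can be in flight on
the same SparkContext at once:

  import org.apache.spark.{SparkConf, SparkContext}
  import scala.concurrent.{Await, Future}
  import scala.concurrent.ExecutionContext.Implicits.global
  import scala.concurrent.duration._

  object ConcurrentActions {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("ConcurrentActions"))

      // each Future runs an action on its own thread, submitting a separate job
      val evens = Future { sc.parallelize(1L to 1000000L).filter(_ % 2 == 0).count() }
      val total = Future { sc.parallelize(1 to 1000000).map(_.toDouble).sum() }

      println(Await.result(evens, 10.minutes))
      println(Await.result(total, 10.minutes))
      sc.stop()
    }
  }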
