Hi all,
As far as I know, a SparkContext instance takes charge of the cluster resources that the master assigns to it. Those resources can hardly be shared with other SparkContexts, and scheduling between applications is not easy either.
To address this without introducing an extra resource-scheduling system such as YARN/Mesos, I propose creating a special SparkContext that can be shared across nodes/drivers, that is, jobs could be submitted from different nodes while sharing the same RDD definitions and task scheduler.
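To make the idea a bit more concrete, here is a rough sketch of what usage might look like. This is purely hypothetical: SharedSparkContext, register() and attach() are made-up names for illustration and do not exist in Spark today; only the RDD operations are the real API.

    // On the "owner" driver: create a context and expose it under a name.
    // SharedSparkContext.register is a hypothetical API, not real Spark.
    val sc = new SparkContext(conf)
    SharedSparkContext.register("shared-sc", sc)

    // On another node/driver: attach to the already-running context and
    // submit jobs that reuse its executors, RDD definitions and scheduler.
    // SharedSparkContext.attach is also hypothetical.
    val sharedSc = SharedSparkContext.attach("spark://master:7077", "shared-sc")

    // From here on, ordinary RDD code runs against the shared context.
    val counts = sharedSc.textFile("hdfs:///data/input")
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.saveAsTextFile("hdfs:///data/output")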
Is this idea valuable? Would it be possible to implement, or is it not worth pursuing?
 
Thanks for any advice.
 
lin wukang
