Thanks a lot for the suggestion! This approach makes perfect sense. I think
this is what the spark-jobserver project is addressing:
https://github.com/ooyala/spark-jobserver. Do you know of any other similar
production-ready implementations?
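
For context, a jobserver job implements the project's SparkJob trait and
runs inside a long-lived SparkContext owned by the server, which is what
lets a cache outlive any single request. A minimal sketch based on my
reading of the project's README (the RDD name "big-cache", the input path
and the "query" config key are made up for illustration):

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{NamedRddSupport, SparkJob, SparkJobValid, SparkJobValidation}

// Runs inside the jobserver's persistent SparkContext; NamedRddSupport
// lets separate jobs find and reuse one cached RDD by name.
object SearchJob extends SparkJob with NamedRddSupport {

  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any = {
    // Built on the first request, then looked up by name on later ones.
    val data = namedRdds.getOrElseCreate("big-cache",
      sc.textFile("hdfs:///data/big-input").cache())
    data.filter(_.contains(config.getString("query"))).count()
  }
}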

On Thu, Jan 8, 2015 at 1:47 PM, Silvio Fiorito <
silvio.fior...@granturing.com> wrote:

>   Rather than running duplicate Spark apps, with the web app holding a
> direct reference to the SparkContext, why not use a queue or message bus
> to submit your requests? That way you're not wasting resources caching the
> same data twice in Spark, and you can scale your web tier independently of
> the Spark tier.
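>
> For example, a single long-running driver could own the SparkContext and
> the cache, and serve requests arriving on the bus. A rough sketch using
> Kafka's consumer API (the topic name, broker address, input path and
> query logic are all placeholders):
>
> import java.util.{Collections, Properties}
> import scala.collection.JavaConverters._
> import org.apache.kafka.clients.consumer.KafkaConsumer
> import org.apache.spark.{SparkConf, SparkContext}
>
> object SharedCacheDriver {
>   def main(args: Array[String]): Unit = {
>     val sc = new SparkContext(new SparkConf().setAppName("shared-cache-driver"))
>
>     // Build and materialize the big cache exactly once.
>     val data = sc.textFile("hdfs:///data/big-input").cache()
>     data.count()
>
>     val props = new Properties()
>     props.put("bootstrap.servers", "broker:9092")
>     props.put("group.id", "spark-requests")
>     props.put("key.deserializer",
>       "org.apache.kafka.common.serialization.StringDeserializer")
>     props.put("value.deserializer",
>       "org.apache.kafka.common.serialization.StringDeserializer")
>
>     val consumer = new KafkaConsumer[String, String](props)
>     consumer.subscribe(Collections.singletonList("spark-requests"))
>
>     // Every web app instance publishes to the same topic, so one driver
>     // and one cache serve them all.
>     while (true) {
>       for (record <- consumer.poll(1000).asScala) {
>         val hits = data.filter(_.contains(record.value)).count()
>         println(s"request ${record.key} -> $hits matching lines")
>         // A real system would publish the result to a reply topic.
>       }
>     }
>   }
> }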
>  ------------------------------
> From: preeze <etan...@gmail.com>
> Sent: 1/8/2015 5:59 AM
> To: user@spark.apache.org
> Subject: Several applications share the same Spark executors (or their
> cache)
>
>   Hi all,
>
> We have a web application that connects to a Spark cluster to trigger some
> calculations there. It also caches a large amount of data in the Spark
> executors' cache.
>
> To meet high-availability requirements we need to run two instances of our
> web application on different hosts. Doing this the straightforward way
> means that the second application fires up another set of executors, which
> initialize their own huge cache, identical to the first application's.
>
> Ideally we would like to reuse the Spark cache across all instances of our
> application.
>
> I am aware of the possibility of using Tachyon to externalize the
> executors' cache, and I am currently exploring other options.
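>
> For reference, my understanding is that the Tachyon route looks roughly
> like the sketch below: persisting with StorageLevel.OFF_HEAP stores the
> cached blocks in Tachyon, outside the executor JVMs, so the cache no
> longer lives and dies with one application's executors (the Tachyon master
> address and input path are placeholders, and spark.tachyonStore.url is the
> Spark 1.x setting as far as I can tell):
>
> import org.apache.spark.{SparkConf, SparkContext}
> import org.apache.spark.storage.StorageLevel
>
> val conf = new SparkConf()
>   .setAppName("tachyon-backed-cache")
>   .set("spark.tachyonStore.url", "tachyon://tachyon-master:19998")
> val sc = new SparkContext(conf)
>
> // OFF_HEAP keeps the cached blocks in Tachyon rather than in the
> // executor JVMs.
> val data = sc.textFile("hdfs:///data/big-input") // placeholder path
> data.persist(StorageLevel.OFF_HEAP)
> data.count() // materialize the cache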
>
> Is there any way to allow several instances of the same application to
> connect to the same set of Spark executors?
>
