Another option is to use Tachyon to cache the RDD; the cached data can then be shared by different applications. See how to use Spark with Tachyon: http://tachyon-project.org/Running-Spark-on-Tachyon.html
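A minimal sketch of that approach, assuming a Tachyon master at `tachyon://master:19998` (a hypothetical address — substitute your own) and the Spark 1.x-era API described in the linked guide:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Application A: compute the RDD once and write it into Tachyon,
// where it lives in memory outside any single application's JVM.
val conf = new SparkConf().setAppName("writer")
val sc   = new SparkContext(conf)
val rdd  = sc.parallelize(1 to 1000000).map(_ * 2)
rdd.saveAsTextFile("tachyon://master:19998/shared/doubled")

// Application B (a separate SparkContext, possibly a different JVM)
// can then read the same data back out of Tachyon:
//   val shared = sc.textFile("tachyon://master:19998/shared/doubled")
```

Because the data sits in Tachyon rather than in a Spark executor's block store, it survives the first application's shutdown and is readable by any later application that can reach the Tachyon cluster.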
Davies

On Sun, Aug 17, 2014 at 4:48 PM, ryaminal <tacmot...@gmail.com> wrote:
> You can also look into using Ooyala's job server at
> https://github.com/ooyala/spark-jobserver
>
> This already has a Spray server built in that allows you to do what has
> already been explained above. Sounds like it should solve your problem.
>
> Enjoy!
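For reference, the basic spark-jobserver workflow looks roughly like the following, assuming a jobserver running on localhost:8090 (its default port); the jar name, app name, and job class are illustrative:

```shell
# Upload a jar containing your Spark job under the app name "test":
curl --data-binary @job.jar localhost:8090/jars/test

# Submit a job. The jobserver keeps a long-lived SparkContext, so
# repeated submissions can reuse cached RDDs instead of starting a
# fresh Spark application each time:
curl -d "input.string = a b c a b" \
  'localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample'

# Poll for the result using the job ID returned by the call above:
#   curl localhost:8090/jobs/<job-id>
```

This is what makes it a fit for the "application as a service" use case discussed above: the service holds the SparkContext, and clients only POST jobs to it.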