Hi there,

I’m pretty new to Spark, and so far I’ve written my jobs the same way I wrote 
Scalding jobs - as one-off batch jobs: read data from HDFS, count words, write 
the counts back to HDFS.

Now I want to display these counts in a dashboard. Since Spark lets you cache 
RDDs in memory and an app keeps running until you explicitly terminate it (and 
1.1 even ships a JDBC server), I’m assuming it’s possible to keep an app 
running indefinitely and query an in-memory RDD from the outside (via Spark 
SQL, for example).
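
To make that concrete, here’s a rough sketch of what I have in mind (untested, 
Spark 1.1-style API; the HDFS path and table name are just placeholders):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    // Placeholder schema for the word counts
    case class WordCount(word: String, count: Long)

    object CountServer {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("CountServer"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.createSchemaRDD

        // Same one-off computation as my batch jobs
        val counts = sc.textFile("hdfs:///data/input")
          .flatMap(_.split("\\s+"))
          .map(w => (w, 1L))
          .reduceByKey(_ + _)
          .map { case (w, c) => WordCount(w, c) }

        counts.cache()
        counts.registerTempTable("word_counts")

        // Queries against the cached table work from inside the app, e.g.
        // sqlContext.sql("SELECT * FROM word_counts WHERE count > 100")

        // Keep the app (and the cached RDD) alive so it stays queryable;
        // whether an external client (e.g. via the JDBC server) can see
        // this temp table is exactly what I'm unsure about.
        Thread.sleep(Long.MaxValue)
      }
    }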

Is this how others are using Spark? Or are you just dumping job results into 
message queues or databases?


Thanks
- Marius

