Yes, you can serve queries over your RDD data and return results to the
user/client, as long as your driver is alive.
For example, I have built a Play! application that acts as a driver
(creating a SparkContext), loads data from my database, organizes it, and
subsequently receives and processes queries based on Spark.
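To make the pattern above concrete, here is a minimal sketch of a long-lived driver that caches an RDD once and answers point lookups from it. All names (`ItemServer`, the HDFS path, the `lookup` helper) are illustrative assumptions, not details from this thread; the API is the Spark 1.x `SparkConf`/`SparkContext` style current at the time of the discussion.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical long-lived driver: load and cache data once, then serve
// queries against the cached RDD for as long as the driver process lives.
object ItemServer {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("item-server")
    val sc = new SparkContext(conf)

    // Load once and cache in memory; later queries avoid re-reading the source.
    // The path and record layout here are assumptions for illustration.
    val features = sc.textFile("hdfs:///data/items.csv")
      .map(_.split(","))
      .map(cols => (cols(0), cols.drop(1).map(_.toDouble)))
      .cache()

    // A query against the cached RDD; a Play! controller could call this
    // on each incoming request while the SparkContext stays alive.
    def lookup(itemId: String): Option[Array[Double]] =
      features.filter(_._1 == itemId).map(_._2).collect().headOption

    // ... wire `lookup` into the web framework's request handlers ...
    sc.stop()
  }
}
```

Note that each `lookup` still scans the cached partitions; for low-latency point queries you would typically also partition or index the data (e.g. a key-partitioned RDD with `lookup` on a `PairRDD`), but that is beyond this sketch.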
From: mich...@databricks.com
Date: Mon, 27 Oct 2014 14:35:46 -0700
Subject: Re: Spark to eliminate full-table scan latency
To: ronalday...@live.com
CC: user@spark.apache.org
You can access cached data in Spark through the JDBC server:
http://spark.apache.org/docs/latest/sql-programming-guide.html#running-the-thrift-jdbc-server
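For reference, the linked guide boils down to starting the Thrift JDBC/ODBC server (which wraps a long-lived SQLContext) and connecting with beeline. The master URL and the table name `features` below are placeholder assumptions; the commands themselves are from the Spark 1.x distribution layout.

```shell
# Run from SPARK_HOME. Start the Thrift JDBC/ODBC server:
./sbin/start-thriftserver.sh --master spark://master:7077

# Connect with the bundled beeline client:
./bin/beeline -u jdbc:hive2://localhost:10000

# Then, inside beeline, cache a table so queries are served from memory
# rather than a full scan of the underlying store ("features" is hypothetical):
#   CACHE TABLE features;
#   SELECT * FROM features WHERE item_id = 42;
```

Any JDBC client (not just beeline) can then query the cached tables over the same `hive2` endpoint while the server stays up.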
On Mon, Oct 27, 2014 at 1:47 PM, Ron Ayoub ronalday...@live.com wrote:
We have a table containing 25 features per item id, along with […]. It would
be cool if there was some general way to create a server app based on Spark.
Perhaps Spark SQL is that general way, and I'll soon find out. Thanks.