RE: Spark to eliminate full-table scan latency

2014-11-19 Thread bchazalet

Re: Spark to eliminate full-table scan latency

2014-10-28 Thread Matt Narrell
based on Spark. Perhaps Spark SQL is that general way and I'll soon find out. Thanks. From: mich...@databricks.com Date: Mon, 27 Oct 2014 14:35:46 -0700 Subject: Re: Spark to eliminate full-table scan latency To: ronalday...@live.com CC: user@spark.apache.org You can access cached data

Spark to eliminate full-table scan latency

2014-10-27 Thread Ron Ayoub
We have a table containing 25 features per item id along with feature weights. A correlation matrix can be constructed for every feature pair based on co-occurrence. If a user inputs a feature, they can find the features correlated with it via a self-join that requires a single full-table scan.
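That self-join can be expressed directly on a cached RDD, which is what lets repeated queries avoid re-reading the table. Below is a minimal Scala sketch of the idea; the input path, the comma-separated (itemId, feature, weight) layout, the product-of-weights score, and the query feature "f17" are illustrative assumptions, not details from the thread.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._  // PairRDDFunctions (join, reduceByKey) in Spark 1.x

object FeatureCooccurrence {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("feature-cooccurrence"))

    // One (itemId, feature, weight) triple per line; keyed by item id for the self-join.
    val triples = sc.textFile("hdfs:///path/to/features.csv").map { line =>
      val Array(id, feature, weight) = line.split(",")
      (id, (feature, weight.toDouble))
    }.cache()  // keep the table in memory so repeated queries skip the disk scan

    // Self-join on item id: every feature pair that co-occurs on the same item.
    val coPairs = triples.join(triples)
      .filter { case (_, ((f1, _), (f2, _))) => f1 < f2 }              // drop self and mirrored pairs
      .map { case (_, ((f1, w1), (f2, w2))) => ((f1, f2), w1 * w2) }   // crude weight product as a score
      .reduceByKey(_ + _)

    // Features most strongly co-occurring with a user-supplied feature.
    val query = "f17"
    val related = coPairs
      .filter { case ((a, b), _) => a == query || b == query }
      .map { case ((a, b), score) => (if (a == query) b else a, score) }
      .top(10)(Ordering.by[(String, Double), Double](_._2))

    related.foreach(println)
  }
}

The cache() call is the point of the exercise: after the first action materializes the RDD, later queries join against in-memory partitions rather than re-scanning the source table.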

Re: Spark to eliminate full-table scan latency

2014-10-27 Thread Michael Armbrust
You can access cached data in Spark through the JDBC server: http://spark.apache.org/docs/latest/sql-programming-guide.html#running-the-thrift-jdbc-server On Mon, Oct 27, 2014 at 1:47 PM, Ron Ayoub ronalday...@live.com wrote: We have a table containing 25 features per item id along with
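Once the Thrift server from that link is running, any JDBC client can query the cached table. A rough Scala sketch over plain java.sql follows; it assumes the server is on its default port 10000, that the Hive JDBC driver (hive-jdbc) is on the client classpath, and that a table named "features" with item_id/feature columns already exists — all of those are illustrative assumptions, not details from the thread.

import java.sql.DriverManager

object CachedTableClient {
  def main(args: Array[String]): Unit = {
    // Assumes ./sbin/start-thriftserver.sh has been run with the default settings.
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000", "", "")
    try {
      val stmt = conn.createStatement()
      // Pin the table in the server's memory; later scans read RAM instead of the source.
      stmt.execute("CACHE TABLE features")
      val rs = stmt.executeQuery(
        "SELECT f2.feature, COUNT(*) AS n " +
        "FROM features f1 JOIN features f2 ON f1.item_id = f2.item_id " +
        "WHERE f1.feature = 'f17' AND f2.feature <> 'f17' " +
        "GROUP BY f2.feature ORDER BY n DESC LIMIT 10")
      while (rs.next()) println(rs.getString(1) + "\t" + rs.getLong(2))
    } finally {
      conn.close()
    }
  }
}

Because the table is cached inside the long-running Thrift server, each client query hits memory rather than triggering a fresh full-table scan from storage.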

RE: Spark to eliminate full-table scan latency

2014-10-27 Thread Ron Ayoub
be cool if there was some general way to create a server app based on Spark. Perhaps Spark SQL is that general way and I'll soon find out. Thanks. From: mich...@databricks.com Date: Mon, 27 Oct 2014 14:35:46 -0700 Subject: Re: Spark to eliminate full-table scan latency To: ronalday...@live.com CC
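For the "server app based on Spark" idea, one option is simply a long-lived driver that registers and caches the table with Spark SQL and answers queries for as long as it runs. Below is a sketch against the Spark 1.x SQLContext API (matching the thread's timeframe); the case class, the input path, and the stdin query loop are placeholders for whatever request channel a real service would use.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Hypothetical row layout for the features table.
case class FeatureRow(itemId: Long, feature: String, weight: Double)

object FeatureQueryServer {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("feature-query-server"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.createSchemaRDD  // Spark 1.x implicit: RDD[case class] -> SchemaRDD

    val rows = sc.textFile("hdfs:///path/to/features.csv").map { line =>
      val Array(id, f, w) = line.split(",")
      FeatureRow(id.toLong, f, w.toDouble)
    }
    rows.registerTempTable("features")
    sqlContext.cacheTable("features")  // data stays resident for the lifetime of the driver

    // A real service would listen on HTTP or Thrift; reading stdin keeps the sketch short.
    for (feature <- scala.io.Source.stdin.getLines()) {
      val related = sqlContext.sql(
        "SELECT f2.feature, COUNT(*) AS n FROM features f1 JOIN features f2 " +
        "ON f1.itemId = f2.itemId " +
        "WHERE f1.feature = '" + feature + "' AND f2.feature <> '" + feature + "' " +
        "GROUP BY f2.feature ORDER BY n DESC LIMIT 10")
      related.collect().foreach(println)  // naive string splicing is fine for a sketch, not for untrusted input
    }
  }
}

The Thrift JDBC server suggested earlier in the thread is essentially a prebuilt version of this pattern: a long-running Spark SQL application holding cached tables, fronted by a standard query protocol.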