Spark SQL in Spark 1.1 will include all the functionality of Shark; take a look 
at 
http://databricks.com/blog/2014/07/01/shark-spark-sql-hive-on-spark-and-the-future-of-sql-on-spark.html.
 We decided to do this because, at the end of the day, the only code left in 
Shark was the JDBC / Thrift server, which is a very small amount of code. 
There’s also a branch of Spark 1.0 that includes this server if you want to 
replace Shark on Spark 1.0: 
https://github.com/apache/spark/tree/branch-1.0-jdbc. The server runs in much 
the same way Shark’s did.
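
In case it helps, a minimal sketch of bringing the server up in Spark 1.1 (or a 
build of the branch above) might look like the following; localhost:10000 is 
just the usual HiveServer2 default, so adjust the URL for your deployment:

    # start the JDBC / Thrift server from the Spark installation directory
    ./sbin/start-thriftserver.sh
    # then connect with any Hive JDBC client, e.g. the bundled Beeline
    ./bin/beeline -u jdbc:hive2://localhost:10000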

Matei

On Jul 2, 2014, at 3:57 PM, Shrikar archak <shrika...@gmail.com> wrote:

> At Spark Summit 2014, they mentioned that there will be no further active 
> development on Shark.
> 
> Thanks,
> Shrikar
> 
> 
> On Wed, Jul 2, 2014 at 3:53 PM, Subacini B <subac...@gmail.com> wrote:
> Hi,
> 
> http://mail-archives.apache.org/mod_mbox/spark-user/201403.mbox/%3cb75376b8-7a57-4161-b604-f919886cf...@gmail.com%3E
> 
> This says that the Shark backend will be replaced with the Spark SQL engine 
> in the future.
> Does that mean Spark will continue to support both Shark and Spark SQL long 
> term, or will Shark be decommissioned after some period?
> 
> Thanks
> Subacini
> 
