Hi All,

A year ago we started this journey and laid the groundwork for the Spark +
Cassandra stack. We established the foundation and direction for Spark
Cassandra connectors, and we have been happy with the results.

With the release of Spark 1.1.0 and SparkSQL, it's time to take Calliope
<http://tuplejump.github.io/calliope/> to the next logical level, also
paving the way for much more advanced functionality to come.

Yesterday we released the Calliope 1.1.0 Community Tech Preview
<https://twitter.com/tuplejump/status/517739186124627968>, which brings
native SparkSQL support for Cassandra. Further details are available
here <http://tuplejump.github.io/calliope/tech-preview.html>.

This release showcases core spark-sql
<http://tuplejump.github.io/calliope/start-with-sql.html>, hiveql
<http://tuplejump.github.io/calliope/start-with-hive.html> and
HiveThriftServer <http://tuplejump.github.io/calliope/calliope-server.html>
support.

I describe it as "native" spark-sql integration because it doesn't rely on
Cassandra's Hive connectors (like Cash or DSE) and saves a level of
indirection through Hive.
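To make the idea concrete, here is a minimal sketch of what querying
Cassandra directly through SparkSQL might look like. The class name
`CassandraAwareSQLContext`, the package path, and the keyspace/table names
are all illustrative assumptions, not the confirmed Calliope API; please
refer to the tech-preview documentation linked above for the actual usage.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object CalliopeSqlSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("calliope-sql-demo"))

    // Hypothetical: a SQLContext that registers Cassandra keyspaces and
    // tables directly, with no Hive metastore in between.
    val sqlContext = new com.tuplejump.calliope.sql.CassandraAwareSQLContext(sc)

    // Cassandra tables become queryable with plain SparkSQL.
    val highScores = sqlContext.sql(
      "SELECT user_id, score FROM mykeyspace.scores WHERE score > 100")

    highScores.collect().foreach(println)
  }
}
```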

It also allows us to harness Spark's analyzer and optimizer in the future
to work out the best execution plan, targeting a balance between
Cassandra's querying restrictions and Spark's in-memory processing.

As far as we know, this is the first and only third-party datastore
connector for SparkSQL. This is a CTP release because it relies on Spark
internals that do not yet have a stable developer API. We will work with
the Spark community on documenting the requirements and working towards a
standard, stable API for third-party data store integration.

On another note, we no longer require you to sign up to access the early
access code repository.

We invite all of you to try it and give us your valuable feedback.

Regards,

Rohit
*Founder & CEO, **Tuplejump, Inc.*
____________________________
www.tuplejump.com
*The Data Engineering Platform*
