http://planetcassandra.org/getting-started-with-apache-spark-and-cassandra/
http://planetcassandra.org/blog/holy-momentum-batman-spark-and-cassandra-circa-2015-w-datastax-connector-and-java/
https://github.com/datastax/spark-cassandra-connector
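
For a quick feel for what the connector gives you, reading a Cassandra table from Spark is only a few lines of Scala. This is a minimal sketch, not a drop-in config: the master URL, contact point, and the keyspace/table names ("test" / "kv") are placeholders, and it assumes the connector jar is already on the classpath:

    import org.apache.spark.{SparkConf, SparkContext}
    import com.datastax.spark.connector._  // adds cassandraTable() to SparkContext

    // Point Spark at the standalone master and at a Cassandra contact point
    val conf = new SparkConf()
      .setAppName("cassandra-demo")
      .setMaster("spark://master-host:7077")               // placeholder master URL
      .set("spark.cassandra.connection.host", "127.0.0.1") // placeholder Cassandra node
    val sc = new SparkContext(conf)

    // Read the rows of keyspace "test", table "kv" as an RDD and count them
    val rows = sc.cassandraTable("test", "kv")
    println(rows.count())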



From: Cody Koeninger [mailto:c...@koeninger.org]
Sent: Wednesday, April 29, 2015 12:15 PM
To: Matthew Johnson
Cc: user@spark.apache.org
Subject: Re: Spark on Cassandra

Hadoop version doesn't matter if you're just using Cassandra.
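
For instance, any of the pre-built downloads should work in standalone mode; you pull the connector in when you launch. The coordinates below are just an example, so pick the connector version matching your Spark and Scala versions:

    ./bin/spark-shell \
      --packages com.datastax.spark:spark-cassandra-connector_2.10:1.2.0 \
      --conf spark.cassandra.connection.host=127.0.0.1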

On Wed, Apr 29, 2015 at 12:08 PM, Matthew Johnson
<matt.john...@algomi.com> wrote:
Hi all,

I am new to Spark, but excited to use it with our Cassandra cluster. I have
read in a few places that Spark can now interact directly with Cassandra, so I
decided to download it and have a play; I am happy to run it in standalone
cluster mode initially. When I go to download it
(http://spark.apache.org/downloads.html) I see a bunch of pre-built versions
for Hadoop and MapR, but no mention of Cassandra. If I am running it in
standalone cluster mode, does it matter which pre-built package I download?
Would all of them work, or do I have to build it myself from source with some
special config for Cassandra?

Thanks!
Matt
