Hi all,

I am new to Spark, but excited to use it with our Cassandra cluster. I have
read in a few places that Spark can now interact directly with Cassandra, so
I decided to download it and have a play; I am happy to run it in standalone
cluster mode initially. On the downloads page
(http://spark.apache.org/downloads.html) I see a number of packages pre-built
for various Hadoop and MapR versions, but no mention of Cassandra. If I am
running in standalone cluster mode, does it matter which pre-built package I
download? Would any of them work, or do I have to build Spark myself from
source with some special configuration for Cassandra?
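
For reference, the sort of thing I am hoping to end up with is roughly the
snippet below. It is only a sketch based on the DataStax
spark-cassandra-connector examples I have come across; the host names and
the keyspace/table names are made up:

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

// Point Spark at a standalone master and at the Cassandra cluster.
// "sparkmaster" and "cassandrahost" are placeholder host names.
val conf = new SparkConf()
  .setMaster("spark://sparkmaster:7077")
  .setAppName("CassandraTest")
  .set("spark.cassandra.connection.host", "cassandrahost")

val sc = new SparkContext(conf)

// Read a Cassandra table as an RDD and count the rows
// ("my_keyspace" / "my_table" are hypothetical names).
val rows = sc.cassandraTable("my_keyspace", "my_table")
println(rows.count())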

Thanks!

Matt
