I've got a Cassandra 2.1.1 + Spark 1.1.0 cluster running. I'm using
sbt-assembly to create an uber jar to submit to the standalone master, and
I'm using the Hadoop 1 prebuilt binaries for Spark. As soon as I try to do
sc.cassandraTable(...) I get an error that looks like a Guava versioning
issue. I'm using the Spark Cassandra connector v1.1.0-rc2, which just came
out, though the issue was there in rc1 as well. I don't see the connector
using Guava directly, so I'm guessing it's a transitive dependency of
something the connector uses. Does anybody have a workaround for this?
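
For reference, the code that blows up is roughly the following sketch (the
keyspace, table, and host here are placeholders; the real names don't matter
for the error):

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._  // adds cassandraTable to SparkContext

object Main {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("foo")
      .set("spark.cassandra.connection.host", "10.0.0.1")  // placeholder node address
    val sc = new SparkContext(conf)

    // The NoSuchMethodError below is thrown as soon as the RDD is evaluated.
    sc.cassandraTable("my_keyspace", "my_table")  // placeholder keyspace/table
      .map(_.toString)
      .foreachPartition(rows => rows.foreach(println))
  }
}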

The sbt file and the exception are given below.

Regards,
Ashic.


sbt file:

import sbt._
import Keys._
import sbtassembly.Plugin._
import AssemblyKeys._

assemblySettings

name := "foo"

version := "0.1.0"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq (
  "org.apache.spark" %% "spark-core" % "1.1.0" % "provided",
  "org.apache.spark" %% "spark-sql" % "1.1.0" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.0-rc2" 
withSources() withJavadoc(),
  "org.specs2" %% "specs2" % "2.4" % "test" withSources()
)

//allow provided for run
run in Compile <<= Defaults.runTask(fullClasspath in Compile, mainClass in (Compile, run), runner in (Compile, run))

mergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) =>
    (xs map {_.toLowerCase}) match {
      case ("manifest.mf" :: Nil) | ("index.list" :: Nil) | ("dependencies" :: 
Nil) => MergeStrategy.discard
      case _ => MergeStrategy.discard
    }
  case _ => MergeStrategy.first
}

resolvers += "Akka Repository" at "http://repo.akka.io/releases/";

test in assembly := {}

Exception:
14/11/24 14:20:11 INFO client.AppClient$ClientActor: Executor updated: app-20141124142008-0001/0 is now RUNNING
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.collect.Sets.newConcurrentHashSet()Ljava/util/Set;
        at com.datastax.driver.core.Cluster$ConnectionReaper.<init>(Cluster.java:2065)
        at com.datastax.driver.core.Cluster$Manager.<init>(Cluster.java:1163)
        at com.datastax.driver.core.Cluster$Manager.<init>(Cluster.java:1110)
        at com.datastax.driver.core.Cluster.<init>(Cluster.java:118)
        at com.datastax.driver.core.Cluster.<init>(Cluster.java:105)
        at com.datastax.driver.core.Cluster.buildFrom(Cluster.java:174)
        at com.datastax.driver.core.Cluster$Builder.build(Cluster.java:1075)
        at com.datastax.spark.connector.cql.DefaultConnectionFactory$.createCluster(CassandraConnectionFactory.scala:81)
        at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:165)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:160)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:160)
        at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:36)
        at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:61)
        at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:71)
        at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:97)
        at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:108)
        at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:134)
        at com.datastax.spark.connector.rdd.CassandraRDD.tableDef$lzycompute(CassandraRDD.scala:227)
        at com.datastax.spark.connector.rdd.CassandraRDD.tableDef(CassandraRDD.scala:226)
        at com.datastax.spark.connector.rdd.CassandraRDD.verify$lzycompute(CassandraRDD.scala:266)
        at com.datastax.spark.connector.rdd.CassandraRDD.verify(CassandraRDD.scala:263)
        at com.datastax.spark.connector.rdd.CassandraRDD.getPartitions(CassandraRDD.scala:292)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1135)
        at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:767)
        at Main$.main(Main.scala:33)
        at Main.main(Main.scala)
