java.lang.NoClassDefFoundError: org/apache/spark/SparkConf

2015-02-16 Thread siqi chen
Hello,

I have a simple Kafka Spark Streaming example which I am still developing
in standalone mode.

Here is what is puzzling me:

If I build the assembly jar and run it with bin/spark-submit, it works fine.
But if I run the code from within the IntelliJ IDE, it fails with this
error:

Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/spark/SparkConf

...
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf

Here is my build.sbt file


import _root_.sbt.Keys._
import _root_.sbtassembly.Plugin.AssemblyKeys._
import _root_.sbtassembly.Plugin.MergeStrategy
import _root_.sbtassembly.Plugin._
import AssemblyKeys._

assemblySettings

name := "test-kafka"

version := "1.0"

scalaVersion := "2.10.4"

jarName in assembly := "test-kafka-1.0.jar"

assemblyOption in assembly ~= { _.copy(includeScala = false) }

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.2.1" % "provided",
  "org.apache.spark" %% "spark-streaming" % "1.2.1" % "provided",
  ("org.apache.spark" %% "spark-streaming-kafka" % "1.2.1").
    exclude("commons-beanutils", "commons-beanutils").
    exclude("commons-collections", "commons-collections").
    exclude("com.esotericsoftware.minlog", "minlog").
    exclude("commons-logging", "commons-logging")
)

mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
  {
    case x if x.startsWith("META-INF/ECLIPSEF.RSA") => MergeStrategy.last
    case x if x.startsWith("META-INF/mailcap") => MergeStrategy.last
    case x if x.startsWith("plugin.properties") => MergeStrategy.last
    case x => old(x)
  }
}
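
Note that spark-core and spark-streaming are marked "provided" above, so those
classes are only on the classpath when bin/spark-submit (or the cluster)
supplies them; an IDE run configuration does not add them back. A minimal
sketch of the commonly used sbt-assembly workaround, assuming sbt 0.13.x with
this plugin version, is to put the provided dependencies back on the classpath
for sbt run while keeping them out of the assembly jar:

// keep Spark "provided" for the assembly jar, but include it when running from sbt
run in Compile <<= Defaults.runTask(fullClasspath in Compile, mainClass in (Compile, run), runner in (Compile, run))

Whether IntelliJ picks this up depends on how the run configuration is set up
(for example, delegating the run to sbt instead of using the IDE's own classpath).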

I also have this in my project/plugins.sbt

resolvers += Resolver.url("sbt-plugin-releases-scalasbt",
url("http://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/";))

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")

addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.7.4")

What is even more interesting is that if I pin the Spark jar to 1.1.1
instead of 1.2.1, then I can successfully run it within IntelliJ.
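
For reference, a minimal driver along the lines of the example described above
(all names, the ZooKeeper address, and the topic map are placeholders; it
assumes Spark 1.2.x and the local[*] master so it can be launched directly from
the IDE) would look roughly like this:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object TestKafka {
  def main(args: Array[String]): Unit = {
    // local[*] lets the app run directly from the IDE without spark-submit
    val conf = new SparkConf().setAppName("test-kafka").setMaster("local[*]")
    val ssc = new StreamingContext(conf, Seconds(5))

    // zkQuorum, group id, and topic map below are placeholder values
    val lines = KafkaUtils.createStream(ssc, "localhost:2181", "test-group", Map("test" -> 1))
    lines.map(_._2).print()

    ssc.start()
    ssc.awaitTermination()
  }
}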


