Hello, I'm just starting out with Spark Streaming and HBase/Hadoop. I'm writing a simple app that reads from Kafka and stores to HBase, but I'm having trouble submitting my job to Spark.
I've downloaded Apache Spark 1.4.1 pre-built for Hadoop 2.6. I build the project with mvn package and submit the jar with:

    ~/Desktop/spark/bin/spark-submit --class org.example.main.scalaConsumer scalConsumer-0.0.1-SNAPSHOT.jar

I then get the error in the subject line (Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration). Is this a problem with my Maven dependencies? Do I need to install Hadoop locally? And if so, how can I add the Hadoop classpath to the Spark job?

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Error-Exception-in-thread-main-java-lang-NoClassDefFoundError-org-apache-hadoop-hbase-HBaseConfiguran-tp24266.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
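For context on the likely cause: a NoClassDefFoundError for HBaseConfiguration usually means the HBase client classes are missing from the runtime classpath. mvn package by default builds a "thin" jar containing only your own classes, and spark-submit does not pull in Maven dependencies on its own. One common fix is to build a fat jar with the maven-shade-plugin. A hedged sketch of the relevant pom.xml fragment follows; the version numbers and the Scala suffix (_2.10) are assumptions to be matched against your cluster and build:

```xml
<!-- hypothetical pom.xml fragment; adjust versions to your environment -->
<dependencies>
  <!-- HBase client classes, so HBaseConfiguration is bundled into the jar -->
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.1.1</version>
  </dependency>
  <!-- Spark is supplied by spark-submit at runtime, so mark it provided
       to keep it out of the fat jar -->
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.10</artifactId>
    <version>1.4.1</version>
    <scope>provided</scope>
  </dependency>
</dependencies>

<build>
  <plugins>
    <!-- shade plugin merges all non-provided dependencies into one jar
         during the package phase -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With this in place, mvn package produces a jar that already contains the HBase classes, and the original spark-submit command can be used unchanged.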
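As an alternative to rebuilding the jar, the HBase client jars can be handed to spark-submit explicitly with --jars, which distributes them to the driver and executors. A sketch, assuming the jars live under an HBase installation directory (the paths and versions here are hypothetical; use the jars shipped with your HBase install):

```shell
# hypothetical jar paths; point these at your actual HBase lib directory
~/Desktop/spark/bin/spark-submit \
  --class org.example.main.scalaConsumer \
  --jars /opt/hbase/lib/hbase-client-1.1.1.jar,/opt/hbase/lib/hbase-common-1.1.1.jar \
  scalConsumer-0.0.1-SNAPSHOT.jar
```

On the "install Hadoop locally" question: the Spark 1.4.1 build pre-built for Hadoop 2.6 already bundles the Hadoop client classes, so a full local Hadoop installation should not be needed just to get past this error; what's missing is the HBase side of the classpath.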