Hi, I would like to know the correct way to add Kafka to my project on
standalone YARN, given that it now lives in a different artifact than Spark
core.

I tried adding the dependency to my project, but then I get a
ClassNotFoundException for my main class. It also makes my jar file very
big, which is not convenient.
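For reference, this is roughly how I declared the dependency in my build (a sketch only; the exact artifact name and version are my best guess for the 0.9.0-incubating release):

```scala
// build.sbt -- minimal sketch (artifact and version names assumed)
// spark-core is marked "provided" since the assembly jar on the cluster
// already contains it; the Kafka connector is bundled with the app jar.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"            % "0.9.0-incubating" % "provided",
  "org.apache.spark" %% "spark-streaming-kafka" % "0.9.0-incubating"
)
```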

This is the command I use to run it:

SPARK_JAR=/opt/spark/assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar \
/opt/spark/bin/spark-class org.apache.spark.deploy.yarn.Client \
  --jar project.jar \
  --class StarterClass \
  --args yarn-standalone \
  --num-workers 1 \
  --worker-cores 1 \
  --master-memory 1536m \
  --worker-memory 1536m

Thanks



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Kafka-in-Yarn-tp2673.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.