I want to ask this not because I can't read the endless documentation and
the several tutorials, but because there seem to be many ways of doing
things and I keep running into issues. How do you run *your* Spark app?

I had it working when I was only using YARN + Hadoop 1 (Cloudera), then I
had to get Spark and Shark working, ended up upgrading everything, and
dropped CDH support. Anyway, this is what I used, with master=yarn-client
and app_jar being Scala code compiled with Maven:

  java -cp $CLASSPATH -Dspark.jars=$APP_JAR -Dspark.master=$MASTER \
      $CLASSNAME $ARGS
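
To make that concrete, here is roughly the wrapper script I use. Every
path and name below is a placeholder from my own setup (jar, main class,
assembly location), not something Spark itself provides:

  #!/bin/sh
  # All paths/names here are placeholders -- adjust for your install.
  APP_JAR=target/myapp-1.0.jar          # hypothetical artifact name
  CLASSNAME=com.example.MyApp           # hypothetical main class
  MASTER=yarn-client
  # The Spark assembly jar name depends on how you built Spark.
  SPARK_ASSEMBLY=/opt/spark/assembly/target/scala-2.10/spark-assembly-0.9.0-incubating-hadoop2.2.0.jar

  java -cp "$APP_JAR:$SPARK_ASSEMBLY" \
      -Dspark.jars="$APP_JAR" \
      -Dspark.master="$MASTER" \
      "$CLASSNAME" arg1 arg2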

Do you use this, or something else? I could never figure out this method:

  SPARK_HOME/bin/spark-class jar APP_JAR ARGS

For example:

  bin/spark-class jar \
      /usr/lib/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar \
      pi 10 10
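
The one invocation of those scripts I have actually seen documented is
run-example against Spark's bundled examples. This is from the 0.9 quick
start as I remember it, so treat the exact class and argument as my
assumption:

  # runs the bundled SparkPi example against a local master
  bin/run-example org.apache.spark.examples.SparkPi local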

Do you use SBT or Maven to compile, or something else?
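
For reference, these are the two build invocations I have been comparing.
Both assume a project laid out per the Spark quick start, with spark-core
declared as a dependency; the flags are just what I use, not required:

  # sbt route -- assumes a build.sbt declaring spark-core
  sbt package

  # maven route -- assumes an equivalent pom.xml
  mvn -DskipTests package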


** It seems that I can't get subscribed to the mailing list; I tried both
my work email and my personal one.


