Yes, that is correct. You can use this boilerplate to avoid spark-submit.
// The configuration (string arguments quoted so it compiles)
import org.apache.spark.SparkConf

val sconf = new SparkConf()
  .setMaster("spark://spark-ak-master:7077")
  .setAppName("SigmoidApp")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
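For completeness, here is a minimal sketch of how the rest of such a self-contained driver could look, building on the sconf above. The setJars call and the jar path are illustrative assumptions: when you bypass spark-submit, the executors still need your application jar, and you have to point to it yourself.

import org.apache.spark.SparkContext

// setJars ships the application jar to the executors; spark-submit would
// normally do this for you. The jar path here is a hypothetical placeholder.
val sc = new SparkContext(sconf.setJars(Seq("/path/to/your-app-assembly.jar")))

// A trivial job to verify the context works (illustrative only).
println(sc.parallelize(1 to 100).reduce(_ + _))

sc.stop()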
Hi,
I am a bit confused about the steps I need to take to start a Spark application
on a cluster.
So far I had the impression from the documentation that I need to explicitly
submit the application using, for example, spark-submit.
However, from the SparkContext constructor signature I get the impression
that a context can also be created and configured directly from code. Is
that correct?

Jan
Hi Jan,
Most SparkContext constructors are there for legacy reasons. The point of
going through spark-submit is to set up the classpath and system properties,
and to resolve URIs properly *with respect to the deployment mode*.
For instance, jars are distributed differently between YARN cluster mode and
the other deploy modes.
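For comparison, a typical spark-submit invocation might look like the sketch below. The class name, jar path, and master are placeholders; the flags shown are the standard spark-submit options, and spark-submit takes care of shipping the jar in the way the chosen mode requires:

# Placeholder class, jar, and cluster; flags are standard spark-submit options.
spark-submit \
  --class com.example.SigmoidApp \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  /path/to/your-app-assembly.jar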