Hi,

I am a bit confused about the steps I need to take to start a Spark application 
on a cluster.

So far I had the impression from the documentation that I need to explicitly 
submit the application using, for example, spark-submit.

However, from the SparkContext constructor signature I get the impression that 
maybe I do not have to do that after all:

In http://spark.apache.org/docs/latest/api/scala/#org.apache.spark.SparkContext 
the first constructor has, among other things, a 'jars' parameter described as 
the "Collection of JARs to send to the cluster".

To me this suggests that I can simply start the application anywhere and that 
it will deploy itself to the cluster in the same way a call to spark-submit 
would.
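
In other words, I have something like the following sketch in mind (the master 
URL, paths, and application name are just placeholders, not from a real setup), 
which I would then launch with a plain 'java' command rather than with 
spark-submit:

import org.apache.spark.SparkContext

object SelfDeployingApp {
  def main(args: Array[String]): Unit = {
    // Pass the master URL and the application jar directly to the
    // SparkContext constructor, instead of giving --master / --jars
    // to spark-submit. (The spark-submit call I would otherwise use,
    // as I understand it, would be roughly:
    //   spark-submit --master spark://master-host:7077 my-application.jar)
    val sc = new SparkContext(
      "spark://master-host:7077",            // placeholder master URL
      "SelfDeployingApp",                    // application name
      "/path/to/spark",                      // sparkHome (placeholder)
      Seq("/path/to/my-application.jar"))    // jars to send to the cluster

    // Trivial job, just to check that the executors do the work.
    val evens = sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()
    println(s"Even numbers: $evens")

    sc.stop()
  }
}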

Is that correct?

If not, can someone explain why I can / need to provide the master and jars etc. 
in the call to the SparkContext constructor? They seem to essentially duplicate 
what I would specify in the call to spark-submit.

Jan


