I've found that the jar is copied to the worker from HDFS fine, but it is
not added to the Spark context for you. You have to know that the jar will
end up in the driver's working directory, so you just add the file name of
the jar to the context in your program.

In your example below, just add "test.jar" to the context.

Btw, the context will not have the master URL either, so add that while you are 
at it. 
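
For example, something like this (Scala, against the Spark 1.0 API; the
app name and master URL are placeholders for your own values):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("MyApp")                    // placeholder
      .setMaster("spark://master-host:7077")  // not set for you in cluster deploy mode
      .setJars(Seq("test.jar"))               // the jar as it lands in the driver's working dir
    val sc = new SparkContext(conf)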

This is a big issue. I posted about it a week ago and got no replies.
Hopefully it gets more attention as more people start hitting it. Basically,
spark-submit on a standalone cluster with cluster deploy mode is broken.
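
To be concrete, this is the kind of invocation that hits the problem (the
class name, host names, and HDFS path below are made-up examples):

    ./bin/spark-submit \
      --master spark://master-host:7077 \
      --deploy-mode cluster \
      --class com.example.Main \
      hdfs://namenode:8020/user/me/test.jar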

Gino B.

> On Jun 20, 2014, at 2:46 AM, randylu <randyl...@gmail.com> wrote:
> 
> In addition, the jar file can be copied to the driver node automatically.
