Hi Saif!

Unfortunately, I don't think this is possible in YARN cluster deploy mode.
Regarding the JARs you're referring to, could you place them on HDFS? That
gives you a central location, and you can point spark-submit at those HDFS
paths for the dependencies, so they no longer need to exist on whichever host
ends up running the driver:


http://spark.apache.org/docs/latest/submitting-applications.html#advanced-dependency-management
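
As a minimal sketch (the HDFS paths, class name, and JAR names below are
made up, so substitute your own), you could upload the dependencies once and
then reference them with hdfs:// URLs in --jars:

    # Upload the dependency JARs to a central HDFS location
    hdfs dfs -mkdir -p /apps/myapp/jars
    hdfs dfs -put lib/*.jar /apps/myapp/jars/

    # Reference them by hdfs:// URL; YARN pulls them onto whatever
    # node it picks for the driver, so no host needs local copies
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --class com.example.Main \
      --jars hdfs:///apps/myapp/jars/dep1.jar,hdfs:///apps/myapp/jars/dep2.jar \
      my-app.jar

As the link above describes, hdfs: URLs in --jars are downloaded on each node
as expected, so this works regardless of where the driver lands.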


Thanks,

Silvio

________________________________
From: saif.a.ell...@wellsfargo.com <saif.a.ell...@wellsfargo.com>
Sent: Monday, November 21, 2016 2:04:06 PM
To: user@spark.apache.org
Subject: Cluster deploy mode driver location

Hello there,

I have a Spark program on 1.6.1; however, when I submit it to the cluster, it
picks the driver host seemingly at random.

I know there is an option to specify the driver, but alongside it I would have
to define many other options I am not familiar with. The trouble is that the
JARs I am launching need to be available on the driver host, and I would like
to keep these JARs on just one specific host, which I would like to be the
driver.

Any help would be appreciated.

Thanks!
Saif
