You can build your uber JAR on an NFS-mounted file system accessible
to all nodes in the cluster. Any node can then run spark-submit and launch
the app by referring to that JAR file.
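
For example, assuming the share is mounted at /mnt/nfs on every node (a
hypothetical path, as are the class name, master URL, and JAR name below),
something like this should work from any host:

  spark-submit \
    --class com.example.MyApp \
    --master spark://master-host:7077 \
    file:///mnt/nfs/jars/my-app-assembly.jar

The file:// URL satisfies the "present on all nodes" requirement quoted
below, since every node resolves the same NFS path locally.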

Sounds doable.

Having thought about it, it is also feasible to place the Spark binaries on
the NFS mount, so that any host can start them. The NFS directory will be
treated as a local mount.
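
A minimal sketch of that setup, assuming the same hypothetical mount point
/mnt/nfs and a standard Spark tarball (the version shown is illustrative)
unpacked onto it:

  # run on any node in the cluster
  export SPARK_HOME=/mnt/nfs/spark-2.0.0-bin-hadoop2.7
  export PATH=$SPARK_HOME/bin:$PATH
  spark-submit --version   # quick check that the NFS-hosted binaries run locally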

HTH

Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 10 August 2016 at 22:16, Zlati Gardev <zgar...@us.ibm.com> wrote:

> Hello,
>
> Is there a way to run a spark submit job that points to the URL of a jar
> file (instead of pushing the jar from local)?
>
> The documentation  at
> http://spark.apache.org/docs/latest/submitting-applications.html
> implies that this may be possible.
>
> "*application-jar: Path to a bundled jar including your application and
> all dependencies. The URL must be globally visible inside of your cluster,
> for instance, an hdfs:// path or a file:// path that is present on all
> nodes*"
>
> Thank you,
> Zlati
>
>
>
