When using spark-submit, the application jar along with any jars included
with the --jars option will be automatically transferred to the cluster.
URLs supplied after --jars must be separated by commas. That list is
included on the driver and executor classpaths. Directory expansion does
not work with --jars.
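
For example, a submission shipping two extra jars might look like the
following (class and jar names here are hypothetical):

    spark-submit \
      --class com.example.Main \
      --jars /path/to/dep1.jar,/path/to/dep2.jar \
      app.jar

Note that the list must be comma-separated; a wildcard such as
/path/to/*.jar will not be expanded by spark-submit.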

Spark uses the following URL schemes to allow different strategies for
disseminating jars:

   - *file:* - Absolute paths and file:/ URIs are served by the driver’s
   HTTP file server, and every executor pulls the file from the driver HTTP
   server.
   - *hdfs:*, *http:*, *https:*, *ftp:* - these pull down files and JARs
   from the URI as expected
   - *local:* - a URI starting with local:/ is expected to exist as a local
   file on each worker node. This means that no network IO will be incurred,
   and works well for large files/JARs that are pushed to each worker, or
   shared via NFS, GlusterFS, etc.
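
To illustrate the schemes, a jar pre-installed under the same path on
every worker can be referenced with local:, while one stored on HDFS
uses hdfs: (paths below are hypothetical):

    spark-submit \
      --jars local:/opt/libs/shared.jar,hdfs:///libs/common.jar \
      app.jar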

From the documentation, I suspect the s3 URL format may not be supported.
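
That said, if the hadoop-aws module (and a matching aws-java-sdk jar) is
on the classpath, an s3a: URI may work, since Spark fetches non-local
jars through the Hadoop FileSystem API. An untested sketch, with
hypothetical bucket and credential values:

    spark-submit \
      --conf spark.hadoop.fs.s3a.access.key=... \
      --conf spark.hadoop.fs.s3a.secret.key=... \
      --jars s3a://my-bucket/libs/extra.jar \
      app.jar

Otherwise, downloading the jar from S3 to the submitting machine first
and passing the local path to --jars should always work.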

2017-07-29 4:52 GMT+08:00 Richard Xin <richardxin...@yahoo.com.invalid>:

> Can we add extra library (jars on S3) to spark-submit?
> if yes, how? such as --jars, extraClassPath, extraLibPath
> Thanks,
> Richard
>
