GitHub user mengxr opened a pull request:

    https://github.com/apache/spark/pull/848

    [SPARK-1870] Make spark-submit --jars work in yarn-cluster mode.

    Send secondary jars to the distributed cache of all containers and add 
the cached jars to the classpath before executors start.
    
    `spark-submit --jars` also works with the standalone cluster manager and 
in `yarn-client` mode. Thanks to @andrewor14 for testing!
    
    I removed "Doesn't work for drivers in standalone mode with "cluster" 
deploy mode." from `spark-submit`'s help message, though we haven't tested 
mesos yet.
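
    For example, a yarn-cluster submission with secondary jars might look 
like the following (the main class and jar paths here are placeholders for 
illustration, not taken from the patch):

        $ bin/spark-submit \
            --master yarn-cluster \
            --class com.example.MyApp \
            --jars /path/to/dep1.jar,/path/to/dep2.jar \
            /path/to/myapp.jar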
    
    CC: @dbtsai @sryza

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/mengxr/spark yarn-classpath

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/848.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #848
    
----
commit dc3c825934cbd62566d09d3f2b4334dcc444879a
Author: Xiangrui Meng <m...@databricks.com>
Date:   2014-05-21T17:51:43Z

    add secondary jars to classpath in yarn

commit 3e7e1c4a2fe1a9d8512c19e56df91b34bea58108
Author: Xiangrui Meng <m...@databricks.com>
Date:   2014-05-21T18:21:09Z

    use sparkConf instead of hadoop conf

commit 11e535434940d0809bd8c1380b2d4a92d87ebb6a
Author: Xiangrui Meng <m...@databricks.com>
Date:   2014-05-21T18:45:25Z

    minor changes

commit 65e04ad8296969445e4ecfaa8921d55fe1e39c74
Author: Xiangrui Meng <m...@databricks.com>
Date:   2014-05-21T18:52:02Z

    update spark-submit help message and add a comment for yarn-client

----

