Thanks, Jordi, your gist looks pretty much like what I currently have in my
project (with a few exceptions that I'm going to borrow).

I like the idea of using "sbt package", since it doesn't require third-party
plugins and, most importantly, doesn't create a mess of classes and
resources. But in that case I'll have to handle the jar list manually via the
Spark context. Is there a way to automate this? E.g. back when I was a
Clojure guy, I could run "lein deps" (lein is a build tool similar to sbt) to
download all dependencies and then just enumerate them from my app. Have you
heard of something like that for Spark/sbt?
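
One possible approach (just a sketch; the jar names and paths below are
placeholders, and it assumes sbt's retrieveManaged := true setting so that
dependency jars get copied under lib_managed/) is to enumerate those jars at
startup and hand them to the SparkContext:

    import java.io.File
    import org.apache.spark.{SparkConf, SparkContext}

    // Recursively collect the dependency jars sbt copied under lib_managed/
    // (requires retrieveManaged := true in build.sbt).
    def jarsUnder(dir: File): Seq[String] =
      Option(dir.listFiles).toSeq.flatten.flatMap { f =>
        if (f.isDirectory) jarsUnder(f)
        else if (f.getName.endsWith(".jar")) Seq(f.getAbsolutePath)
        else Seq.empty
      }

    // Placeholder name: whatever `sbt package` produced for the project.
    val appJar = "target/scala-2.10/my-app_2.10-0.1.jar"

    val conf = new SparkConf()
      .setAppName("my-app")
      .setJars(jarsUnder(new File("lib_managed")) :+ appJar) // shipped to executors

    val sc = new SparkContext(conf)

That way the list of jars stays in sync with the build instead of being
maintained by hand.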

Thanks,
Andrei


On Thu, May 29, 2014 at 3:48 PM, jaranda <jordi.ara...@bsc.es> wrote:

> Hi Andrei,
>
> I think the preferred way to deploy Spark jobs is by using the sbt package
> task instead of the sbt assembly plugin. In any case, as you mention, the
> mergeStrategy in combination with some dependency exclusions should fix
> your problems. Have a look at this gist
> <https://gist.github.com/JordiAranda/bdbad58d128c14277a05> for further
> details (I just followed some recommendations from the sbt assembly plugin
> documentation).
>
> Up to now I haven't found a proper way to combine my development and
> deployment phases, although I must say my experience with Spark is pretty
> limited (it really depends on your deployment requirements as well). Here I
> think someone else could give you some further insights.
>
> Best,
>
>
>
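
For completeness, the mergeStrategy and exclusion approach Jordi mentions
above would look roughly like this in build.sbt (a sketch only, following the
sbt-assembly README of that era; the plugin version, Spark version and merge
rules here are assumptions, not taken from his gist):

    // project/plugins.sbt
    addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")

    // build.sbt
    import AssemblyKeys._

    assemblySettings

    name := "my-app"

    scalaVersion := "2.10.4"

    // Spark is already on the cluster, so keep it (and its transitive
    // dependencies) out of the uber-jar by marking it "provided".
    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.0" % "provided"

    // Resolve duplicate-file conflicts that otherwise make `sbt assembly` fail.
    mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
      {
        case PathList("META-INF", xs @ _*) => MergeStrategy.discard
        case "reference.conf"              => MergeStrategy.concat
        case x                             => old(x)
      }
    }

With the "provided" scope the assembly stays small and the Spark classes
already on the cluster win, which avoids most duplicate-class conflicts in
the first place.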
