Hi Andrei,

I think the preferred way to deploy Spark jobs is to use the sbt package
task rather than the sbt-assembly plugin. In any case, as you mention, a
mergeStrategy combined with some dependency exclusions should fix your
problems. Have a look at this gist
<https://gist.github.com/JordiAranda/bdbad58d128c14277a05> for further
details (I just followed some of the recommendations in the sbt-assembly
plugin documentation).
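
For what it's worth, the kind of build configuration I mean looks roughly
like the sketch below. It is only an illustration, not the exact contents of
the gist: the project name, Scala/Spark versions, the particular exclusion
and the plugin version are placeholders, and the keys assume a reasonably
recent sbt-assembly where the plugin is an AutoPlugin.

    // project/plugins.sbt (plugin version is illustrative)
    addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

    // build.sbt
    name := "my-spark-job"          // illustrative project name
    scalaVersion := "2.10.4"

    libraryDependencies ++= Seq(
      // mark Spark itself as "provided" so it is not bundled into the uber
      // jar; the version and the exclusion below are only examples
      ("org.apache.spark" %% "spark-core" % "1.0.0" % "provided")
        .exclude("org.eclipse.jetty.orbit", "javax.servlet")
    )

    // decide what to do with files that appear in more than one dependency jar
    assemblyMergeStrategy in assembly := {
      case PathList("META-INF", xs @ _*) => MergeStrategy.discard // drop manifests/signatures
      case "reference.conf"              => MergeStrategy.concat  // Typesafe config files must be concatenated
      case _                             => MergeStrategy.first
    }

With something like that in place, sbt assembly produces a single jar under
target/ that you can pass to spark-submit, whereas the plain sbt package
route gives you a thin jar and you ship the dependencies separately.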

Up to now I haven't found a proper way to combine my development and
deployment phases, although I must say my experience with Spark is still
pretty limited (it also really depends on your deployment requirements).
Someone else can probably give you further insights on this.

Best,



