Try spark.yarn.user.classpath.first (see https://issues.apache.org/jira/browse/SPARK-2996; it only works on YARN). There is also a related thread at http://apache-spark-user-list.1001560.n3.nabble.com/netty-on-classpath-when-using-spark-submit-td18030.html.
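
For what it's worth, a submit invocation using that flag might look like the following (a sketch only: the jar name, main class, and master string are placeholders, and the flag is YARN-specific and experimental, so check the docs for your Spark version):

```shell
# Placeholder jar/class names; spark.yarn.user.classpath.first puts the
# user's jars ahead of the Hadoop/Spark classpath on YARN executors.
spark-submit \
  --master yarn-cluster \
  --conf spark.yarn.user.classpath.first=true \
  --class com.example.MyApp \
  my-app-assembly.jar
```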

HTH,
Markus

On 02/03/2015 11:20 PM, Corey Nolet wrote:
I'm hitting a really bad Guava version conflict between my Spark application running on YARN and (I believe) the Guava version that ships with Hadoop.

The problem is that my driver has the Guava version my application expects (15.0), while the Spark executors working on my RDDs appear to have a much older version (presumably the one on the Hadoop classpath).

Is there a property like "mapreduce.job.user.classpath.first" that I can set to make sure my own classpath is established first on the executors?
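
One way to confirm which Guava copy each side actually loads is to ask the JVM where a class came from (a sketch, not from the thread; in a Spark job you would probe a Guava class such as com.google.common.base.Charsets inside a task to see the executor-side jar):

```java
// Sketch: report which jar (or directory) the JVM loaded a class from.
// Running this for a Guava class on the driver and inside an executor
// task shows which Guava copy each side actually sees.
public class ClasspathProbe {
    public static String locationOf(Class<?> cls) {
        java.security.CodeSource src =
            cls.getProtectionDomain().getCodeSource();
        // JDK bootstrap classes have no CodeSource, so guard against null.
        return src == null ? "<bootstrap>" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // Probes this class itself; substitute a Guava class (assumption:
        // Guava is on the classpath) to locate the conflicting jar.
        System.out.println(locationOf(ClasspathProbe.class));
    }
}
```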
