Hello (this is YARN-related). I'm able to load an external jar and use its classes within the ApplicationMaster. I want to use this jar on the worker nodes as well, so I added sc.addJar(pathToJar) and ran the job.
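For context, here is roughly what I'm doing; the jar path, app name, and RDD contents below are placeholders, not my actual values:

    import org.apache.spark.SparkContext
    import org.opencv.core.Core
    import org.opencv.objdetect.HOGDescriptor

    // Running in yarn-client mode (app name is a placeholder)
    val sc = new SparkContext("yarn-client", "hog-test")

    // Ship the OpenCV jar to the executors (path is a placeholder)
    sc.addJar("/path/to/opencv.jar")

    val images = sc.parallelize(Seq("img1", "img2"))
    val sizes = images.map { name =>
      // OpenCV's Java classes are JNI wrappers, so the native library
      // must also be loadable on each worker node
      System.loadLibrary(Core.NATIVE_LIBRARY_NAME)
      val hog = new HOGDescriptor()  // this is the class the workers fail to find
      (name, hog.getDescriptorSize())
    }
    println(sizes.collect().mkString(", "))

The failure happens on the first use of HOGDescriptor inside the map closure, i.e. on the executors, not in the ApplicationMaster.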
I get the following exception:

org.apache.spark.SparkException: Job aborted: Task 0.0:1 failed 4 times (most recent failure: Exception failure: java.lang.NoClassDefFoundError: org/opencv/objdetect/HOGDescriptor)
    org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028)
    org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1026)
    scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1026)
    org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
    org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
    scala.Option.foreach(Option.scala:236)
    org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:619)
    org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207)
    akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    akka.actor.ActorCell.invoke(ActorCell.scala:456)
    akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    akka.dispatch.Mailbox.run(Mailbox.scala:219)
    akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

In the worker node containers' stderr logs (stdout is empty), I don't see any reference to jars being loaded:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/gpphddata/1/yarn/nm-local-dir/usercache/yarn/filecache/7394400996676014282/spark-assembly-0.9.0-incubating-hadoop2.0.2-alpha-gphd-2.0.1.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/gphd/hadoop-2.0.2_alpha_gphd_2_0_1_0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/03/26 13:12:18 INFO slf4j.Slf4jLogger: Slf4jLogger started
14/03/26 13:12:18 INFO Remoting: Starting remoting
14/03/26 13:12:18 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@alpinenode6.alpinenow.local:44006]
14/03/26 13:12:18 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkExecutor@alpinenode6.alpinenow.local:44006]
14/03/26 13:12:18 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://spark@alpinenode5.alpinenow.local:10314/user/CoarseGrainedScheduler
14/03/26 13:12:18 ERROR executor.CoarseGrainedExecutorBackend: Driver Disassociated [akka.tcp://sparkExecutor@alpinenode6.alpinenow.local:44006] -> [akka.tcp://spark@alpinenode5.alpinenow.local:10314] disassociated! Shutting down.

Any idea what's going on?