TL;DR: Your classes are missing on the workers. Pass the jar containing the
class main.scala.Utils to the SparkContext.

Longer:
Some information is missing, like how your SparkContext is configured, but
my best guess is that you didn't provide the jars (setJars on SparkConf, or
the jars parameter of the SparkContext constructor -- see the sketch below).

The classes are not found on the slave, which is another node -- another
machine (or at least another environment) -- so to run your tasks each
worker must be able to load your classes. Shipping them is handled by
Spark, but you have to pass the jar as an argument.
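
For example (a minimal sketch against Spark 0.9; the master URL and jar
path are placeholders -- adjust them to your cluster and build):

import org.apache.spark.{SparkConf, SparkContext}

// Option 1: list the jars on the SparkConf before creating the context.
val conf = new SparkConf()
  .setMaster("spark://master:7077")  // placeholder master URL
  .setAppName("Simple Project")
  .setJars(Seq("target/scala-2.10/simple-project_2.10-1.0.jar"))
val sc = new SparkContext(conf)

// Option 2: pass the jars through the SparkContext constructor instead.
val sc2 = new SparkContext(
  "spark://master:7077",             // placeholder master URL
  "Simple Project",
  System.getenv("SPARK_HOME"),
  Seq("target/scala-2.10/simple-project_2.10-1.0.jar"))

Spark then ships the listed jars to each worker so your tasks can load
main.scala.Utils.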

This jar can simply be your current project packaged as a jar using
maven/sbt/...
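
With the build file you posted below (name "Simple Project", version 1.0,
Scala 2.10.x), running "sbt package" should produce something like
target/scala-2.10/simple-project_2.10-1.0.jar -- that is the path used in
the sketch above.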

HTH


On Wed, Apr 2, 2014 at 10:01 PM, yh18190 <yh18...@gmail.com> wrote:

> Hi Guys,
>
> Currently I am facing this issue and am not able to find the error.
> Here is the sbt file:
> name := "Simple Project"
>
> version := "1.0"
>
> scalaVersion := "2.10.3"
>
> resolvers += "bintray/meetup" at "http://dl.bintray.com/meetup/maven";
>
> resolvers += "Akka Repository" at "http://repo.akka.io/releases/";
>
> resolvers += "Cloudera Repository" at
> "https://repository.cloudera.com/artifactory/cloudera-repos/";
>
> libraryDependencies += "org.apache.spark" %% "spark-core" %
> "0.9.0-incubating"
>
> libraryDependencies += "com.cloudphysics" % "jerkson_2.10" % "0.6.3"
>
> libraryDependencies += "org.apache.hadoop" % "hadoop-client" %
> "2.0.0-mr1-cdh4.6.0"
>
> retrieveManaged := true
>
> Output:
>
> [error] (run-main) org.apache.spark.SparkException: Job aborted: Task 2.0:2 failed 4 times (most recent failure: Exception failure: java.lang.NoClassDefFoundError: Could not initialize class main.scala.Utils$)
> org.apache.spark.SparkException: Job aborted: Task 2.0:2 failed 4 times (most recent failure: Exception failure: java.lang.NoClassDefFoundError: Could not initialize class main.scala.Utils$)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1026)
>         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>         at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1026)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
>         at scala.Option.foreach(Option.scala:236)
>         at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:619)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:456)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:219)
>         at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
>         at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
