Hi wangyi:
Do you have more detailed information?
My guess is that it is caused by a jar that hasn't been uploaded to the
workers, such as the jar containing your main class.

./bin/spark-class org.apache.spark.deploy.Client launch
   [client-options] \
   <cluster-url> <application-jar-url> <main-class> \
   [application-options]


application-jar-url: Path to a bundled jar including your application
and all dependencies. Currently, the URL must be globally visible
inside of your cluster, for instance, an `hdfs://` path or a `file://`
path that is present on all nodes.
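As a quick sanity check on the rule quoted above (only a URL that every node can resolve, such as `hdfs://` or `file://`, will work), here is a tiny hypothetical helper -- not part of Spark's API -- that flags jar URLs which are not globally visible:

```scala
// Hypothetical helper (not part of Spark): per the docs quoted above, a jar
// URL is only usable cluster-wide if it uses a scheme every node can resolve.
// A bare local path is visible only on the machine that owns it.
object JarUrlCheck {
  def isGloballyVisible(url: String): Boolean =
    url.startsWith("hdfs://") || url.startsWith("file://")
}
```

For example, `JarUrlCheck.isGloballyVisible("/home/me/app.jar")` is `false`, which is the kind of path that typically triggers the failure below.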





Hi,
    I used Spark 0.9 to run a simple computation, but it failed when I ran it
in standalone mode.

code:

    val sc = new SparkContext(args(0), "BayesAnalysis",
      System.getenv("SPARK_HOME"), SparkContext.jarOfClass(this.getClass).toSeq)

    val dataSet = sc.textFile(args(1)).map(_.split(",")).filter(_.length == 14).collect
    for (record <- dataSet) {
      println(record.mkString(" "))
    }
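For reference, the per-record logic above (split each line on commas, keep only rows with exactly 14 fields) can be reproduced in plain Scala without Spark; the sample lines below are invented for illustration:

```scala
// Plain-Scala sketch of the per-record logic from the snippet above
// (no Spark needed). The sample lines are made up for illustration.
object FilterDemo {
  def main(args: Array[String]): Unit = {
    val lines = Seq(
      "a,b,c",                 // 3 fields  -> dropped by the filter
      (1 to 14).mkString(",")  // 14 fields -> kept
    )
    val dataSet = lines.map(_.split(",")).filter(_.length == 14)
    for (record <- dataSet) {
      println(record.mkString(" "))  // prints the 14 fields joined by spaces
    }
  }
}
```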

Run in local mode, it completes successfully.
Run in standalone mode, it fails:

    java -classpath realClasspath mainClass sparkMaster hdfsFile


14/08/11 15:10:07 INFO scheduler.DAGScheduler: Failed to run collect at
BayesAnalysis.scala:416
Exception in thread "main"
org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed 4 times
(most recent failure: unknown)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1026)
at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1026)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
at scala.Option.foreach(Option.scala:236)
at
org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:619)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


2014-08-11 15:41 GMT+08:00 jeanlyn92 <jeanly...@gmail.com>:

