What's the version of Spark you are using?
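For reference, in PySpark the running version is exposed as `sc.version` on an active SparkContext (it is also printed in the shell banner at startup), so that is the quickest thing to paste into a reply. Once a version string is known, comparing it against a release line by plain string comparison is error-prone ("1.10.0" sorts before "1.2.0" lexicographically). A minimal sketch of a safer comparison; `parse_version` is an illustrative helper name, not a Spark API:

```python
def parse_version(v):
    """Turn a dotted version string like '1.2.0' into a tuple of ints
    so that versions compare numerically rather than lexicographically."""
    return tuple(int(part) for part in v.split("."))

# String comparison orders these the wrong way around...
assert "1.10.0" < "1.2.0"
# ...while tuple comparison gets it right.
assert parse_version("1.10.0") > parse_version("1.2.0")
```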

On Wed, Jan 14, 2015 at 12:00 AM, Linda Terlouw <linda.terl...@icris.nl> wrote:
> I'm new to Spark. When I use the MovieLens 100k dataset
> (http://grouplens.org/datasets/movielens/), Spark crashes when I run the
> following code. The first call to movieData.first() gives the correct
> result. Depending on which machine I use, it crashes on the second or third
> call. Does anybody know why? When I use Scala in the Spark shell, this does
> not happen; Scala gives the correct result every time.
>
>
> import sys
> sys.path.append("d:/spark/python")
> sys.path.append("d:/spark/python/lib/py4j-0.8.2.1-src.zip")
>
> from pyspark import SparkContext
> import numpy
> import os
>
> os.environ["SPARK_HOME"] = "d:/spark"
> sc = SparkContext(appName="testapp")
>
> movieData = sc.textFile("d:/moviedata/u.data")
> movieData.first()
> movieData.first()
> movieData.first()
>
>
>
> Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
> : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost): java.net.SocketException: Connection reset by peer: socket write error
>     at java.net.SocketOutputStream.socketWrite0(Native Method)
>     at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
>     at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
>     at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>     at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
>     at java.io.DataOutputStream.write(DataOutputStream.java:107)
>     at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
>     at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:592)
>     at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$2.apply(PythonRDD.scala:389)
>     at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$2.apply(PythonRDD.scala:388)
>     at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>     at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:388)
>     at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
>     at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
>     at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
>     at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
>     at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)
>
> Driver stacktrace:
>     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
>     at scala.Option.foreach(Option.scala:236)
>     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
>     at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
>     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
