(This question is also posted on Stack Overflow: http://stackoverflow.com/questions/30656083/spark-pyspark-errors-on-mysterious-missing-tmp-file )
I'm having issues with pyspark and a missing /tmp file. I've narrowed the behavior down to a short snippet:

>>> a = sc.parallelize([(16646160, 1)])  # yes, just a single element
>>> b = stuff  # this is read in from a text file above; its contents are shown below
>>> # b = sc.parallelize(b.collect())
>>> a.join(b).take(10)

This fails, but if I include the commented line (which should be the same thing), it succeeds.
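For what it's worth, the commented line just round-trips the RDD through the driver. A minimal sketch of the pattern (the helper name is mine, purely for illustration):

    def materialize(rdd):
        # collect() pulls all partitions back to the driver; parallelize()
        # then builds a brand-new RDD from the local list, with no lineage
        # back to the original RDD or its broadcast state.
        return sc.parallelize(rdd.collect())

    b = materialize(stuff)  # joins against this never hit the error

Of course this only works when the data fits in driver memory, so it's more of a diagnostic clue than a real fix.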
Here is the error from the failing run:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-101-90fe86df7879> in <module>()
      3 b=stuff.map(lambda x:(16646160,1))
      4 #b=sc.parallelize(b.collect())
----> 5 a.join(b).take(10)
      6 b.take(10)

/usr/lib/spark/python/pyspark/rdd.py in take(self, num)
   1109
   1110         p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1111         res = self.context.runJob(self, takeUpToNumLeft, p, True)
   1112
   1113         items += res

/usr/lib/spark/python/pyspark/context.py in runJob(self, rdd, partitionFunc, partitions, allowLocal)
    816         # SparkContext#runJob.
    817         mappedRDD = rdd.mapPartitions(partitionFunc)
--> 818         it = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, javaPartitions, allowLocal)
    819         return list(mappedRDD._collect_iterator_through_file(it))
    820

/usr/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
    536         answer = self.gateway_client.send_command(command)
    537         return_value = get_return_value(answer, self.gateway_client,
--> 538             self.target_id, self.name)
    539
    540         for temp_arg in temp_args:

/usr/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    298             raise Py4JJavaError(
    299                 'An error occurred while calling {0}{1}{2}.\n'.
--> 300                 format(target_id, '.', name), value)
    301         else:
    302             raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 210.0 failed 1 times, most recent failure: Lost task 1.0 in stage 210.0 (TID 884, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/lib/spark/python/pyspark/worker.py", line 92, in main
    command = pickleSer.loads(command.value)
  File "/usr/lib/spark/python/pyspark/broadcast.py", line 106, in value
    self._value = self.load(self._path)
  File "/usr/lib/spark/python/pyspark/broadcast.py", line 87, in load
    with open(path, 'rb', 1 << 20) as f:
IOError: [Errno 2] No such file or directory: '/tmp/spark-4a8c591e-9192-4198-a608-c7daa3a5d494/tmpuzsAVM'

    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:137)
    at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:174)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:96)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1468)
    at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

In case you're wondering:

>>> b.take(10)
[(16744491, 1), (16203827, 1), (16695357, 1), (16958298, 1), (16400458, 1), (16810060, 1), (11452497, 1), (14803033, 1), (15630426, 1), (14917736, 1)]
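As far as I can tell from pyspark/broadcast.py, the missing file is the on-disk copy of a broadcast value that the worker is supposed to read back, so something appears to be deleting the driver's broadcast temp file while the job still needs it. Since this is local mode ("localhost" in the lost-task line), the driver and the worker share /tmp, so a quick check along these lines from the driver shell (the path is copied verbatim from the traceback) should show whether the file is really gone and what is left in that Spark temp directory:

    >>> import os
    >>> os.path.exists('/tmp/spark-4a8c591e-9192-4198-a608-c7daa3a5d494/tmpuzsAVM')
    >>> os.listdir('/tmp/spark-4a8c591e-9192-4198-a608-c7daa3a5d494')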
So maybe (I thought) there's some weird number in there that overflows or something, and collecting and re-parallelizing "fixes" the problem. This next bit of code proves that assumption wrong:

>>> a = sc.parallelize([(16646160, 1)])
>>> b = stuff.map(lambda x: (16646160, 1))
>>> # b = sc.parallelize(b.collect())
>>> a.join(b).take(10)

It still breaks. (Again, including the commented line fixes the problem.) So I'm apparently looking at some sort of Spark/PySpark bug. This is Spark 1.2.0. Any ideas?

-John
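P.S. In case anyone wants to try reproducing this, here is the shape of what I'm running as one self-contained sketch. The input file name is made up, and I haven't verified that a fresh standalone script (as opposed to my interactive session) triggers the same failure:

    from pyspark import SparkContext

    sc = SparkContext("local", "join-repro")  # Spark 1.2.0, local mode

    # Hypothetical input: one integer per line; the real 'stuff' is built
    # from a text file in essentially the same way.
    stuff = sc.textFile("keys.txt").map(lambda line: (int(line.strip()), 1))

    a = sc.parallelize([(16646160, 1)])
    b = stuff.map(lambda x: (16646160, 1))

    # print a.join(b).take(10)                          # fails with the IOError above
    print a.join(sc.parallelize(b.collect())).take(10)  # works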