Hi Team, I'm getting the exception below when saving results into Hadoop.
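For context, the RDD being saved comes out of GraphLab's SFrame-to-RDD conversion (judging from the sftordd_pickle pipe that shows up in the log below). A simplified sketch of the setup, with stand-in data in place of the real input:

    from pyspark import SparkContext

    sc = SparkContext(appName="test13")

    # Stand-in data: in the real test13.py the RDD is produced by the
    # GraphLab converter (the piped sftordd_pickle subprocess in the log),
    # not by parallelize(); this only shows the shape of the program.
    rdd = sc.parallelize(["record1", "record2", "record3"])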
*Code :* The failing call (line 90 of test13.py):

    rdd.saveAsTextFile("hdfs://localhost:9000/home/rajesh/data/result.rdd")

Could you please help me resolve this issue?

*Log :*

15/03/13 17:19:31 INFO spark.SparkContext: Starting job: saveAsTextFile at NativeMethodAccessorImpl.java:-2
15/03/13 17:19:31 INFO scheduler.DAGScheduler: Got job 6 (saveAsTextFile at NativeMethodAccessorImpl.java:-2) with 4 output partitions (allowLocal=false)
15/03/13 17:19:31 INFO scheduler.DAGScheduler: Final stage: Stage 10(saveAsTextFile at NativeMethodAccessorImpl.java:-2)
15/03/13 17:19:31 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/03/13 17:19:31 INFO scheduler.DAGScheduler: Missing parents: List()
15/03/13 17:19:31 INFO scheduler.DAGScheduler: Submitting Stage 10 (MappedRDD[31] at saveAsTextFile at NativeMethodAccessorImpl.java:-2), which has no missing parents
15/03/13 17:19:31 INFO storage.MemoryStore: ensureFreeSpace(98240) called with curMem=203866, maxMem=280248975
15/03/13 17:19:31 INFO storage.MemoryStore: Block broadcast_9 stored as values in memory (estimated size 95.9 KB, free 267.0 MB)
15/03/13 17:19:31 INFO storage.MemoryStore: ensureFreeSpace(59150) called with curMem=302106, maxMem=280248975
15/03/13 17:19:31 INFO storage.MemoryStore: Block broadcast_9_piece0 stored as bytes in memory (estimated size 57.8 KB, free 266.9 MB)
15/03/13 17:19:31 INFO storage.BlockManagerInfo: Added broadcast_9_piece0 in memory on localhost:57655 (size: 57.8 KB, free: 267.2 MB)
15/03/13 17:19:31 INFO storage.BlockManagerMaster: Updated info of block broadcast_9_piece0
15/03/13 17:19:31 INFO spark.SparkContext: Created broadcast 9 from broadcast at DAGScheduler.scala:838
15/03/13 17:19:31 INFO scheduler.DAGScheduler: Submitting 4 missing tasks from Stage 10 (MappedRDD[31] at saveAsTextFile at NativeMethodAccessorImpl.java:-2)
15/03/13 17:19:31 INFO scheduler.TaskSchedulerImpl: Adding task set 10.0 with 4 tasks
15/03/13 17:19:31 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 10.0 (TID 8, localhost, PROCESS_LOCAL, 1375 bytes)
15/03/13 17:19:31 INFO executor.Executor: Running task 0.0 in stage 10.0 (TID 8)
15/03/13 17:19:31 INFO executor.Executor: Fetching http://10.0.2.15:54815/files/sftordd_pickle with timestamp 1426247370763
15/03/13 17:19:31 INFO util.Utils: Fetching http://10.0.2.15:54815/files/sftordd_pickle to /tmp/fetchFileTemp7846328782039551224.tmp
15/03/13 17:19:31 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
15/03/13 17:19:31 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
15/03/13 17:19:31 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
15/03/13 17:19:31 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
terminate called after throwing an instance of 'std::invalid_argument'
  what():  stoi
15/03/13 17:19:31 ERROR python.PythonRDD: Python worker exited unexpectedly (crashed)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/rajesh/spark-1.2.0/python/pyspark/worker.py", line 90, in main
    command = pickleSer._read_with_length(infile)
  File "/home/rajesh/spark-1.2.0/python/pyspark/serializers.py", line 145, in _read_with_length
    length = read_int(stream)
  File "/home/rajesh/spark-1.2.0/python/pyspark/serializers.py", line 511, in read_int
    raise EOFError
EOFError

    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:137)
    at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:174)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:96)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.Exception: Subprocess exited with status 134
    at org.apache.spark.rdd.PipedRDD$$anon$1.hasNext(PipedRDD.scala:161)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:378)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
    at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)
15/03/13 17:19:31 ERROR python.PythonRDD: This may have been caused by a prior exception:
java.lang.Exception: Subprocess exited with status 134
    at org.apache.spark.rdd.PipedRDD$$anon$1.hasNext(PipedRDD.scala:161)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:378)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
    at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)
15/03/13 17:19:31 ERROR executor.Executor: Exception in task 0.0 in stage 10.0 (TID 8)
java.lang.Exception: Subprocess exited with status 134
    at org.apache.spark.rdd.PipedRDD$$anon$1.hasNext(PipedRDD.scala:161)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:378)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
    at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)
15/03/13 17:19:31 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 10.0 (TID 9, localhost, PROCESS_LOCAL, 1379 bytes)
15/03/13 17:19:31 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 10.0 (TID 8, localhost): java.lang.Exception: Subprocess exited with status 134
    at org.apache.spark.rdd.PipedRDD$$anon$1.hasNext(PipedRDD.scala:161)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:378)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
    at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)
15/03/13 17:19:31 ERROR scheduler.TaskSetManager: Task 0 in stage 10.0 failed 1 times; aborting job
15/03/13 17:19:31 INFO executor.Executor: Running task 1.0 in stage 10.0 (TID 9)
15/03/13 17:19:31 INFO scheduler.TaskSchedulerImpl: Cancelling stage 10
15/03/13 17:19:31 INFO scheduler.TaskSchedulerImpl: Stage 10 was cancelled
15/03/13 17:19:31 INFO executor.Executor: Executor is trying to kill task 1.0 in stage 10.0 (TID 9)
15/03/13 17:19:31 INFO scheduler.DAGScheduler: Job 6 failed: saveAsTextFile at NativeMethodAccessorImpl.java:-2, took 0.637784 s
Traceback (most recent call last):
  File "/home/rajesh/Downloads/PythonExamples/src/test13.py", line 90, in <module>
    rdd.saveAsTextFile("hdfs://localhost:9000/home/rajesh/graphdata/graphX/result.rdd")
  File "/home/rajesh/spark-1.2.0/python/pyspark/rdd.py", line 1288, in saveAsTextFile
    keyed._jrdd.map(self.ctx._jvm.BytesToString()).saveAsTextFile(path)
  File "/home/rajesh/spark-1.2.0/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/home/rajesh/spark-1.2.0/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o136.saveAsTextFile.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 10.0 failed 1 times, most recent failure: Lost task 0.0 in stage 10.0 (TID 8, localhost): java.lang.Exception: Subprocess exited with status 134
    at org.apache.spark.rdd.PipedRDD$$anon$1.hasNext(PipedRDD.scala:161)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:378)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
    at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[INFO] Stopping the server connection.
15/03/13 17:19:32 WARN python.PythonRDD: Incomplete task interrupted: Attempting to kill Python Worker
Exception in thread "stdin writer for ArrayBuffer(/usr/local/lib/python2.7/dist-packages/graphlab/sftordd_pickle, /var/tmp/graphlab-rajesh/13542/c4fc2a39-5e0b-4c31-9013-07fee2b961a4)" org.apache.spark.TaskKilledException
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.rdd.PipedRDD$$anon$3.run(PipedRDD.scala:140)

Regards,
Rajesh
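P.S. One detail I noticed while re-reading the log: exit status 134 is 128 + 6, i.e. SIGABRT, and the "terminate called after throwing an instance of 'std::invalid_argument' / what(): stoi" message is a C++ abort from an uncaught exception. So the crash appears to happen inside the piped GraphLab sftordd_pickle subprocess before any data reaches saveAsTextFile. As a sanity check (assuming the same SparkContext sc; the output path is just an example) I could try saving an RDD that does not go through the GraphLab pipe:

    # If this write succeeds, the HDFS write path is fine and the failure
    # is inside the GraphLab-to-RDD pipe, not in saveAsTextFile itself.
    probe = sc.parallelize(range(100)).map(str)
    probe.saveAsTextFile("hdfs://localhost:9000/home/rajesh/data/probe.rdd")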