[ https://issues.apache.org/jira/browse/SPARK-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144873#comment-15144873 ]
Christopher Bourez commented on SPARK-12261:
--------------------------------------------

Here is what I see when I activate the logs:

16/02/12 18:09:22 ERROR TaskSetManager: Task 0 in stage 5.0 failed 1 times; aborting job
16/02/12 18:09:22 INFO TaskSchedulerImpl: Removed TaskSet 5.0, whose tasks have all completed, from pool
16/02/12 18:09:22 INFO TaskSchedulerImpl: Cancelling stage 5
16/02/12 18:09:22 INFO DAGScheduler: ResultStage 5 (runJob at PythonRDD.scala:393) failed in 0,280 s
16/02/12 18:09:22 INFO DAGScheduler: Job 5 failed: runJob at PythonRDD.scala:393, took 0,308529 s
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Documents\c.bourez\Documents\spark-1.5.2-bin-hadoop2.6\spark-1.5.2-bin-hadoop2.6\python\pyspark\rdd.py", line 1299, in take
    res = self.context.runJob(self, takeUpToNumLeft, p)
  File "C:\Documents\c.bourez\Documents\spark-1.5.2-bin-hadoop2.6\spark-1.5.2-bin-hadoop2.6\python\pyspark\context.py", line 916, in runJob
    port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
  File "C:\Documents\c.bourez\Documents\spark-1.5.2-bin-hadoop2.6\spark-1.5.2-bin-hadoop2.6\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py", line 538, in __call__
  File "C:\Documents\c.bourez\Documents\spark-1.5.2-bin-hadoop2.6\spark-1.5.2-bin-hadoop2.6\python\pyspark\sql\utils.py", line 36, in deco
    return f(*a, **kw)
  File "C:\Documents\c.bourez\Documents\spark-1.5.2-bin-hadoop2.6\spark-1.5.2-bin-hadoop2.6\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 1 times, most recent failure: Lost task 0.0 in stage 5.0 (TID 5, localhost): java.net.SocketException: Connection reset by peer: socket write error
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
        at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:622)
        at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:442)
        at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:452)
        at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:452)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:452)
        at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:280)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1699)
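For anyone trying to get the same diagnostic output: one way to raise the driver log level from the PySpark shell (available since Spark 1.4; this sketch assumes the log4j defaults shipped with the Spark distribution) is:

    # In the PySpark shell, 'sc' is the SparkContext created at startup;
    # raising the runtime log level makes the scheduler INFO messages visible.
    sc.setLogLevel("INFO")

A persistent alternative is to copy conf/log4j.properties.template to conf/log4j.properties and adjust log4j.rootCategory there.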
> pyspark crash for large dataset
> -------------------------------
>
>                 Key: SPARK-12261
>                 URL: https://issues.apache.org/jira/browse/SPARK-12261
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.5.2
>        Environment: windows
>            Reporter: zihao
>
> I tried to import a local text file (over 100 MB) via textFile in pyspark; when I ran data.take(), it failed and gave error messages including:
> 15/12/10 17:17:43 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
> Traceback (most recent call last):
>   File "E:/spark_python/test3.py", line 9, in <module>
>     lines.take(5)
>   File "D:\spark\spark-1.5.2-bin-hadoop2.6\python\pyspark\rdd.py", line 1299, in take
>     res = self.context.runJob(self, takeUpToNumLeft, p)
>   File "D:\spark\spark-1.5.2-bin-hadoop2.6\python\pyspark\context.py", line 916, in runJob
>     port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
>   File "C:\Anaconda2\lib\site-packages\py4j\java_gateway.py", line 813, in __call__
>     answer, self.gateway_client, self.target_id, self.name)
>   File "D:\spark\spark-1.5.2-bin-hadoop2.6\python\pyspark\sql\utils.py", line 36, in deco
>     return f(*a, **kw)
>   File "C:\Anaconda2\lib\site-packages\py4j\protocol.py", line 308, in get_return_value
>     format(target_id, ".", name), value)
> py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
> : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.net.SocketException: Connection reset by peer: socket write error
> Then I ran the same code for a small text file, and this time .take() worked fine.
> How can I solve this problem?
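For reference, a minimal standalone script in the spirit of the report (the input path below is a placeholder; any local text file over 100 MB should do):

    # Hypothetical repro sketch for SPARK-12261 on a local Windows machine.
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "spark-12261-repro")
    lines = sc.textFile("E:/spark_python/big_file.txt")  # placeholder path to a >100 MB text file
    print(lines.take(5))  # reported to fail with "Connection reset by peer: socket write error"
    sc.stop()

With a small file in place of the large one, the same take() call is reported to work.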