[ https://issues.apache.org/jira/browse/SPARK-17775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon updated SPARK-17775:
---------------------------------
    Labels: bulk-closed  (was: )

> pyspark: take(num) failed, but collect() worked for big dataset
> ---------------------------------------------------------------
>
>                 Key: SPARK-17775
>                 URL: https://issues.apache.org/jira/browse/SPARK-17775
>             Project: Spark
>          Issue Type: Bug
>         Environment: Spark 1.6.1
>                      Python 2.7.12 :: Anaconda 4.1.1 (64-bit)
>                      Windows 7, single machine
>            Reporter: Rick Lin
>            Priority: Major
>              Labels: bulk-closed
>
> Hi all,
> I ran a dataset of 39,501 rows drawn from the users table of a PostgreSQL database in pyspark.
> The code was:
>     cur1.execute("select id from users")
>     users = cur1.fetchall()
>     users_rdd = sc.parallelize(users)
>     users_rdd.take(1)
> It failed with this error:
>     Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
>     : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.net.SocketException: Connection reset by peer: socket write error
> However, when I changed take(1) to collect(), it worked:
>     [[25],
>      [1439],
>      ...
>     ]
> When I ran the same code on a small dataset, both take(1) and collect() worked.
> I don't know why this happens or how to fix it for a big dataset.
> Could you help me deal with this problem?
> Thanks
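The stack trace shows the JVM hitting a socket write error while streaming the locally built partition data to a Python worker, so one commonly suggested workaround is to skip the fetchall()/parallelize() round trip entirely and let Spark read the table over JDBC. The sketch below is a minimal illustration for Spark 1.6, not a confirmed fix for this ticket; the connection URL, database name, and credentials are placeholders, and it assumes the PostgreSQL JDBC driver jar is on the classpath (e.g. via --jars).

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="spark-17775-sketch")
    sqlContext = SQLContext(sc)

    # Hypothetical connection details -- substitute your own host, database,
    # user, and password.
    users_df = sqlContext.read.format("jdbc").options(
        url="jdbc:postgresql://localhost:5432/mydb",  # placeholder URL
        dbtable="users",
        user="spark",                  # placeholder user
        password="secret",             # placeholder password
        driver="org.postgresql.Driver",
    ).load()

    # take(1) on the DataFrame ships a single row back to the driver instead
    # of pushing the whole 39,501-row local list through one worker socket.
    print(users_df.select("id").take(1))

If the rows must stay in a local Python list, spreading them over more partitions when parallelizing (for example sc.parallelize(users, 100)) reduces how much data is written to each worker socket per task and is sometimes enough to avoid the connection reset on Windows.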