[ https://issues.apache.org/jira/browse/SPARK-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14065121#comment-14065121 ]
Aaron Davidson commented on SPARK-2282:
---------------------------------------

This problem does look identical. I think I gave you the wrong netstat command, as "-l" only shows listening sockets. Try "-a" instead to see all open connections and confirm this, but the rest of your symptoms align perfectly. I did a little Googling around for your specific kernel version, and it turns out [someone else|http://lists.openwall.net/netdev/2011/07/13/39] has had success with tcp_tw_recycle on 2.6.32. Could you make absolutely sure that the sysctl is taking effect? Perhaps you could add "net.ipv4.tcp_tw_recycle = 1" to /etc/sysctl.conf and then run "sysctl -p" before restarting pyspark.

> PySpark crashes if too many tasks complete quickly
> --------------------------------------------------
>
>                 Key: SPARK-2282
>                 URL: https://issues.apache.org/jira/browse/SPARK-2282
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 0.9.1, 1.0.0, 1.0.1
>            Reporter: Aaron Davidson
>            Assignee: Aaron Davidson
>             Fix For: 0.9.2, 1.0.0, 1.0.1
>
>
> Upon every task completion, PythonAccumulatorParam constructs a new socket to the Accumulator server running inside the pyspark daemon. This can cause a buildup of used ephemeral ports from sockets in the TIME_WAIT termination stage, which will cause the SparkContext to crash if too many tasks complete too quickly. We ran into this bug with 17k tasks completing in 15 seconds.
>
> This bug can be fixed outside of Spark by ensuring these properties are set (on a Linux server):
>
> echo "1" > /proc/sys/net/ipv4/tcp_tw_reuse
> echo "1" > /proc/sys/net/ipv4/tcp_tw_recycle
>
> or by adding the SO_REUSEADDR option to the Socket creation within Spark.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
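The in-Spark fix mentioned above refers to enabling SO_REUSEADDR when the accumulator-update socket is created. The following is only a minimal illustrative sketch of that idea (it is not the actual Spark patch, and the helper name connectWithReuseAddr is made up for this example): the option must be set on an unconnected java.net.Socket before connect() is called.

{code:scala}
import java.net.{InetAddress, InetSocketAddress, Socket}

// Hypothetical helper, for illustration only: open a client socket with
// SO_REUSEADDR enabled, so the local port can be rebound even while a
// previous connection on it is still lingering in TIME_WAIT.
def connectWithReuseAddr(host: String, port: Int): Socket = {
  val socket = new Socket()       // unconnected socket
  socket.setReuseAddress(true)    // must be set before connect/bind
  socket.connect(new InetSocketAddress(InetAddress.getByName(host), port))
  socket
}
{code}

Under heavy task-completion rates this would reduce, though not necessarily eliminate, failures from ephemeral-port exhaustion; the sysctl settings quoted in the description address the same TIME_WAIT buildup at the OS level.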