Hi Simon,

The drivers and executors currently choose random ports to talk to each
other, so the Spark nodes need full TCP access to one another. This was
changed in a very recent commit that makes all of these random ports
configurable:
https://github.com/apache/spark/commit/09f7e4587bbdf74207d2629e8c1314f93d865999.
That change will ship in Spark 1.1; for now you will have to open all
ports among the nodes in your cluster.
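
As a rough sketch of what that will look like once 1.1 is out, you could
pin every service to a fixed port in conf/spark-defaults.conf and then
only open those ports in your firewall. The property names below are
taken from that commit, so treat them as tentative until the 1.1 docs
are published:

    # Pin Spark's otherwise-random ports to fixed values so the firewall
    # can allow exactly these (names per the commit above; verify against
    # the Spark 1.1 docs once released).
    spark.driver.port          50001   # driver <-> executors
    spark.fileserver.port      50002   # driver's HTTP file server
    spark.broadcast.port       50003   # HTTP broadcast server
    spark.replClassServer.port 50004   # class server for spark-shell
    spark.blockManager.port    50005   # block manager on driver and executors
    spark.executor.port        50006   # executor actor system

With the ports fixed, your iptables rules only need to accept this
handful of ports (plus the usual Master/Worker ports) instead of the
whole 30k-60k ephemeral range.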

-Andrew


2014-08-06 10:23 GMT-07:00 durin <m...@simon-schaefer.net>:

> Update: I can get it to work by disabling iptables temporarily. However,
> I cannot figure out which ports I have to accept traffic on. Port 4040
> and any of the Master or Worker ports mentioned in the previous post
> don't work.
>
> Can it be one of the randomly assigned ones in the 30k to 60k range? Those
> appear to change every time, making it difficult to apply any sensible
> rules.
>
