Good morning. I have taken the socketTextStream example and, instead of running it on a local Spark instance, pushed it to my Spark cluster in AWS (1 master, 5 slave nodes). I am getting the error below, which appears to indicate that all the slaves are trying to read from localhost:9999, when all I really want is for the single master node to read from its own localhost:9999 and batch up what it receives. Can anyone help me with what I might be missing in the way I am submitting the job?
14/06/10 13:12:49 INFO scheduler.ReceiverTracker: Registered receiver for stream 0 from akka.tcp://spark@SLAVE-INTERNAL-IP:39710
14/06/10 13:12:49 ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: Restarting receiver with delay 2000ms: Error connecting to localhost:9999 - java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:579)
	at java.net.Socket.connect(Socket.java:528)
	at java.net.Socket.<init>(Socket.java:425)
	at java.net.Socket.<init>(Socket.java:208)
	at org.apache.spark.streaming.dstream.SocketReceiver.receive(SocketInputDStream.scala:71)
	at org.apache.spark.streaming.dstream.SocketReceiver$$anon$2.run(SocketInputDStream.scala:57)

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-socketTextStream-tp7326.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
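For context, here is roughly what my driver looks like (a minimal sketch based on the socketTextStream example; the object name SocketStreamJob and the batch interval are just placeholders I picked, not from the original example). Note that "localhost" is hard-coded in the socketTextStream call, and as I understand it, the receiver that opens this socket can be scheduled on any worker, which would resolve "localhost" to itself rather than to the master:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SocketStreamJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SocketTextStreamExample")
    val ssc = new StreamingContext(conf, Seconds(1))

    // "localhost" is resolved on whichever worker hosts the receiver,
    // not on the master node -- which matches the ConnectException I see.
    val lines = ssc.socketTextStream("localhost", 9999,
      StorageLevel.MEMORY_AND_DISK_SER)

    // Standard word-count over each batch, as in the example.
    lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

If the receiver really does run on an arbitrary worker, then presumably I would need to replace "localhost" with a hostname that every worker can reach (e.g. the master's cluster-internal DNS name), or pin the source elsewhere, but I'd like confirmation that this is the right reading of the error.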