I've encountered similar problems.
Maybe you can try using the hostname or FQDN (rather than the IP address) of 
your node for the master URI.
In my case, Akka picks the FQDN for the master URI, and the worker has to use 
exactly the same string to connect.
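
For example, here is a minimal sketch of what worked for me. It assumes 
"hostname -f" prints your machine's FQDN, and it reuses the default port 
7077 from your steps:

    # conf/spark-env.sh -- make the master bind to the FQDN
    export SPARK_MASTER_IP=$(hostname -f)

    # Start the master; it logs a URL like spark://<fqdn>:7077
    sbin/start-master.sh

    # Start the worker with exactly the same string the master logged
    bin/spark-class org.apache.spark.deploy.worker.Worker "spark://$(hostname -f):7077"

    # Run the example against the identical URI
    bin/run-example org.apache.spark.examples.SparkPi "spark://$(hostname -f):7077"

The key point is that the string after spark:// must match, character for 
character, what the master prints at startup.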

From: Benny Thompson [mailto:ben.d.tho...@gmail.com]
Sent: Saturday, March 01, 2014 10:18 AM
To: u...@spark.incubator.apache.org
Subject: Connection Refused When Running SparkPi Locally

I'm trying to run a simple execution of the SparkPi example.  I started the 
master and one worker, then executed the job on my local "cluster", but ended 
up getting a sequence of errors, all ending with

"Caused by: 
akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: 
Connection refused: /127.0.0.1:39398<http://127.0.0.1:39398>"

I originally tried running my master and worker without any configuration but 
ended up with the same error.  I then changed to 127.0.0.1 to test whether it 
was just a firewall issue, since the server is locked down from the outside 
world.

My conf/spark-env.sh contains the following:
export SPARK_MASTER_IP=127.0.0.1

Here are the commands I run, in order:
1) "sbin/start-master.sh" (to start the master)
2) "bin/spark-class org.apache.spark.deploy.worker.Worker 
spark://127.0.0.1:7077<http://127.0.0.1:7077> --ip 127.0.0.1 --port 1111" (in a 
different session on the same machine to start the slave)
3) "bin/run-example org.apache.spark.examples.SparkPi 
spark://127.0.0.1:7077<http://127.0.0.1:7077>" (in a different session on the 
same machine to start the job)

I find it hard to believe that I'm locked down enough that running locally 
would cause problems.

Any help is greatly appreciated!

Thanks,
Benny
