Re: Spark Job hangs up on multi-node cluster but passes on a single node

2014-12-23 Thread Akhil
That's because somewhere in your code you have specified localhost
instead of the IP address of the machine running the service. In local
mode it works fine because everything happens on that one machine, so
connecting to localhost reaches the service. In cluster mode, however,
when you specify localhost, each worker connects to its own localhost
(which doesn't have that service running). So instead of localhost,
specify the IP address (either internal or public) of the machine
running that service, reachable from every worker node.
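As a minimal sketch of the point above (the connection URL and address are
hypothetical, not from the original job), the fix is simply to replace a
hard-coded localhost with the service host's real address:

```python
# On a single node, "localhost" resolves to the machine running both the
# job and the service, so the connection succeeds. On a cluster, each
# worker resolves "localhost" to itself, where nothing is listening, and
# the job hangs waiting for a connection.

# Wrong: every worker tries to connect to itself.
bad_url = "jdbc:mysql://localhost:3306/mydb"

# Right: use the actual address of the machine running the service,
# reachable from every worker node (internal or public IP).
service_host = "192.168.1.10"  # hypothetical IP of the service host
good_url = "jdbc:mysql://%s:3306/mydb" % service_host

print(good_url)
```

The same applies to any host embedded in a config value or connection
string that the executors, not just the driver, will use.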



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Job-hangs-up-on-multi-node-cluster-but-passes-on-a-single-node-tp15886p20827.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Spark Job hangs up on multi-node cluster but passes on a single node

2014-12-23 Thread shaiw75
Hi,

I am having the same problem.
Any solution to that?

Thanks,
Shai



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Job-hangs-up-on-multi-node-cluster-but-passes-on-a-single-node-tp15886p20826.html