I debugged it, and the remote actor can be fetched in the tryRegisterAllMasters() method in AppClient:

    def tryRegisterAllMasters() {
      for (masterAkkaUrl <- masterAkkaUrls) {
        logInfo("Connecting to master " + masterAkkaUrl + "...")
        val actor = context.actorSelection(masterAkkaUrl)
        actor ! RegisterApplication(appDescription)
      }
    }

After the actor sends the RegisterApplication message, it seems the message is never routed to the remote actor, so the registration never completes and eventually fails. But I don't know what the reason is. Can anyone help?
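Since `!` (tell) is fire-and-forget in Akka, a message dropped by the remote transport fails silently on the sender side. One way to see whether RegisterApplication ever reaches the master is to watch the master's log while the app starts. This is only a sketch: the path assumes the default standalone layout under $SPARK_HOME, and the exact file name depends on your user and host.

```shell
# Watch the standalone master's log for incoming application registrations.
# Path is an assumption based on the default $SPARK_HOME/logs layout.
tail -f "$SPARK_HOME"/logs/spark-*-org.apache.spark.deploy.master.Master-*.out
```

If a "Registering app" line never shows up there, the message is being lost before the master sees it, which points at the transport or addressing rather than the master itself.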
On Friday, May 15, 2015 4:06 PM, Yi Zhang <zhangy...@yahoo.com.INVALID> wrote:

Hi all,

I ran start-master.sh to start a standalone Spark master at spark://192.168.1.164:7077. Then I used the command below, and it worked:

    ./bin/spark-shell --master spark://192.168.1.164:7077

The console printed the expected messages, and the Spark context was initialised correctly.

However, when I run the app in IntelliJ IDEA using a Spark conf like this:

    val sparkConf = new SparkConf().setAppName("FromMySql")
      .setMaster("spark://192.168.1.164:7077")
      .set("spark.akka.heartbeat.interval", "100")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(sc)

it can't talk to Spark and prints these error messages:

    ReliableDeliverySupervisor: Association with remote system
    [akka.tcp://sparkMaster@192.168.1.164:7077] has failed, address is now
    gated for [5000] ms. Reason is: [Disassociated].

If I change the conf to local[*], it works. And after I packaged my app and submitted it with the spark-submit command, the communication between the local and remote actor was also OK. It's very strange! What happened?

Regards,
Yi
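For what it's worth, when spark-submit works but the same app fails from the IDE, the usual suspects are (a) the spark-core version on the IDE classpath not matching the version the standalone master is running, and (b) the application's classes not being shipped to the executors, which spark-submit normally handles via the packaged jar. Below is a configuration sketch only, not a confirmed fix: the jar path and the driver host are placeholders I made up for illustration, and it assumes the app has already been packaged.

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: jar path and addresses must match your own setup.
    val sparkConf = new SparkConf()
      .setAppName("FromMySql")
      .setMaster("spark://192.168.1.164:7077")
      // Ship the packaged application jar to the executors,
      // as spark-submit would do automatically (path is a placeholder).
      .setJars(Seq("target/scala-2.10/frommysql_2.10-1.0.jar"))
      // Advertise a driver address that the master's host can reach back to.
      .set("spark.driver.host", "192.168.1.100")
    val sc = new SparkContext(sparkConf)

It is also worth diffing the spark-core version in the project's build file against the version printed in the master's startup banner; an exact match is required for the Akka-based protocol in Spark 1.x.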