Hi all, I am currently making some changes to Spark for my research project.
After an application has been submitted to the Spark master, I want to get the IP addresses of all the slaves used by that application, so that the master can talk to those slave machines through a mechanism I am proposing. Which class/object in the Spark master holds this information, and does the answer differ when the cluster is managed by the standalone scheduler, YARN, or Mesos?

I saw something related to this question in the master's log in standalone mode (below). However, the function executorAdded in class SparkDeploySchedulerBackend only prints a log message; it does not record the slave anywhere. I am using Spark 1.6.1.

16/09/12 11:34:41.262 INFO AppClient$ClientEndpoint: Connecting to master spark://192.168.50.105:7077...
16/09/12 11:34:41.283 DEBUG TransportClientFactory: Creating new connection to /192.168.50.105:7077
16/09/12 11:34:41.302 DEBUG ResourceLeakDetector: -Dio.netty.leakDetectionLevel: simple
16/09/12 11:34:41.307 DEBUG TransportClientFactory: Connection to /192.168.50.105:7077 successful, running bootstraps...
16/09/12 11:34:41.307 DEBUG TransportClientFactory: Successfully created connection to /192.168.50.105:7077 after 23 ms (0 ms spent in bootstraps)
16/09/12 11:34:41.334 DEBUG Recycler: -Dio.netty.recycler.maxCapacity.default: 262144
16/09/12 11:34:41.458 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160912113441-0000
16/09/12 11:34:41.459 DEBUG BlockManager: BlockManager initialize is called
16/09/12 11:34:41.463 DEBUG TransportServer: Shuffle server started on port :35874
16/09/12 11:34:41.463 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35874.
16/09/12 11:34:41.464 INFO NettyBlockTransferService: Server created on 35874
16/09/12 11:34:41.465 INFO BlockManagerMaster: Trying to register BlockManager
16/09/12 11:34:41.468 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.50.105:35874 with 3.8 GB RAM, BlockManagerId(driver, 192.168.50.105, 35874)
16/09/12 11:34:41.470 INFO BlockManagerMaster: Registered BlockManager
16/09/12 11:34:41.486 INFO AppClient$ClientEndpoint: Executor added: app-20160912113441-0000/0 on worker-20160912113428-192.168.50.106-59927 (192.168.50.106:59927) with 1 cores
16/09/12 11:34:41.486 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160912113441-0000/0 on hostPort 192.168.50.106:59927 with 1 cores, 6.0 GB RAM
16/09/12 11:34:41.487 INFO AppClient$ClientEndpoint: Executor added: app-20160912113441-0000/1 on worker-20160912113428-192.168.50.106-59927 (192.168.50.106:59927) with 1 cores
16/09/12 11:34:41.487 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160912113441-0000/1 on hostPort 192.168.50.106:59927 with 1 cores, 6.0 GB RAM
16/09/12 11:34:41.488 INFO AppClient$ClientEndpoint: Executor added: app-20160912113441-0000/2 on worker-20160912113405-192.168.50.108-35454 (192.168.50.108:35454) with 1 cores
16/09/12 11:34:41.489 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160912113441-0000/2 on hostPort 192.168.50.108:35454 with 1 cores, 6.0 GB RAM

Thanks!

Best,
Xiaoye
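P.S. For context, here is the kind of thing I have in mind. The "Executor added" lines above suggest the standalone master names workers as "worker-<timestamp>-<ip>-<port>", so the slave address seems recoverable from the worker ID that executorAdded receives. This is just a hypothetical helper I sketched (WorkerAddress and fromWorkerId are my own names, not Spark APIs); I have only checked it against the IDs in my log, not against how other cluster managers name workers:

```scala
// Hypothetical helper: recover a slave's (ip, port) from a standalone-mode
// worker ID of the form "worker-<timestamp>-<ip>-<port>", e.g.
// "worker-20160912113428-192.168.50.106-59927" as seen in the master's log.
object WorkerAddress {
  def fromWorkerId(workerId: String): Option[(String, Int)] = {
    workerId.split("-") match {
      // The IP contains dots, not dashes, so splitting on '-' yields
      // exactly four fields for a well-formed standalone worker ID.
      case Array("worker", _, ip, port) if port.nonEmpty && port.forall(_.isDigit) =>
        Some((ip, port.toInt))
      case _ =>
        None // not a standalone-style worker ID
    }
  }
}
```

My idea would be to call something like this from an executorAdded override in SparkDeploySchedulerBackend and store the addresses, instead of only logging them, but I am not sure this is the intended place, or whether it carries over to YARN/Mesos.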