Github user tnachen commented on the pull request: https://github.com/apache/spark/pull/4170#issuecomment-71773647 If you read the fine-grained mode source code, you'll notice that Spark uses the slave ID as the executor ID, which is what we discussed on the Mesos mailing list: the executor is reused as long as all tasks use the same executor ID. Therefore, Spark launches only one executor per slave, and if the executor dies, Mesos relaunches it when a task requests it again.
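To make the mechanism concrete, here is a minimal sketch using the Mesos Java protobuf API (`org.apache.mesos.Protos`) of what "slave ID as executor ID" means; `buildExecutorInfo` is an illustrative helper, not Spark's actual method, and the command details are assumed:

```scala
import org.apache.mesos.Protos._

// Sketch of the idea described above: in fine-grained mode the executor ID
// is set to the slave ID, so every task launched on the same slave names
// the same executor. Mesos then reuses the running executor (or relaunches
// it if it died) instead of starting a new executor per task.
def buildExecutorInfo(slaveId: SlaveID, sparkExecutorCommand: CommandInfo): ExecutorInfo =
  ExecutorInfo.newBuilder()
    .setExecutorId(ExecutorID.newBuilder().setValue(slaveId.getValue)) // executor ID == slave ID
    .setCommand(sparkExecutorCommand) // command that starts the Spark executor (assumed)
    .setName("Spark Executor")
    .build()
```

Because Mesos keys executor reuse on the (framework ID, executor ID) pair per slave, deriving the executor ID from the slave ID guarantees at most one Spark executor per slave under this scheme.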