Which mode are you using? For standalone, it's
org.apache.spark.deploy.worker.Worker. For YARN and Mesos, Spark just
submits its resource requests to them, and they schedule the executor
processes for Spark.
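
To illustrate the idea (this is a hypothetical sketch, not Spark's actual source): in standalone mode the Worker process is the piece that keeps track of the executors it has launched, which can be modeled roughly as a map from an executor's full ID to a handle for its process. The class and method names below (`ExecutorHandle`, `launchExecutor`) are made up for the example.

```scala
import scala.collection.mutable

// Hypothetical stand-in for the bookkeeping a standalone Worker does
// for each executor process it launches.
final case class ExecutorHandle(appId: String, execId: Int, cores: Int)

class Worker {
  // Illustrative only: a map from "appId/execId" to the executor's handle,
  // modeling the idea that the Worker holds references to all executors
  // running on it.
  private val executors = mutable.HashMap.empty[String, ExecutorHandle]

  def launchExecutor(appId: String, execId: Int, cores: Int): Unit = {
    val fullId = s"$appId/$execId"
    executors(fullId) = ExecutorHandle(appId, execId, cores)
  }

  def executorIds: Set[String] = executors.keySet.toSet
}

object Demo extends App {
  val w = new Worker
  w.launchExecutor("app-20151012", 0, 4)
  w.launchExecutor("app-20151012", 1, 4)
  println(w.executorIds) // the Worker can enumerate its executors
}
```

Under YARN or Mesos there is no such Spark-side Worker process; the cluster manager's own node agents play that role.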

Best Regards,
Shixiong Zhu

2015-10-12 20:12 GMT+08:00 Muhammad Haseeb Javed <11besemja...@seecs.edu.pk>:

> I understand that each executor processing a Spark job is represented
> in Spark code by the Executor class in Executor.scala, and that
> CoarseGrainedExecutorBackend is the abstraction which facilitates
> communication between an Executor and the Driver. But what is the
> abstraction for a Worker process in Spark code, which would hold a
> reference to all the Executors running in it?
>
