On Fri, Jan 24, 2014 at 11:29 PM, Manoj Samel <manojsamelt...@gmail.com> wrote:

> On a cluster with HDFS + Spark (in standalone deploy mode), there is a
> master node + 4 worker nodes. When a spark-shell connects to the master, it
> creates 4 executor JVMs on each of the 4 worker nodes.
>

No, it creates one executor JVM on each of the 4 worker nodes (4 in total).
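
For illustration, here is a minimal sketch of a standalone-mode application (the host names and HDFS path are made up): the driver is the JVM that creates the SparkContext, the standalone master only hands out resources, and each of the 4 workers launches one executor JVM for this application.

import org.apache.spark.{SparkConf, SparkContext}

object ExecutorSketch {
  def main(args: Array[String]): Unit = {
    // Connect to the standalone master; it allocates one executor per worker
    // (4 in total) for this application and otherwise stays out of the data path.
    val conf = new SparkConf()
      .setMaster("spark://master-host:7077")   // hypothetical master URL
      .setAppName("executor-sketch")
    val sc = new SparkContext(conf)

    // Transformations are only recorded here on the driver; no cluster work yet.
    val lines = sc.textFile("hdfs://namenode:8020/data/input.txt")  // hypothetical path
    val longLines = lines.filter(_.length > 80)

    // The action ships tasks to the executors on the worker nodes, which read the
    // HDFS blocks and run the filter; only the final count comes back to the driver.
    println(longLines.count())

    sc.stop()
  }
}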

>
> When the application reads HDFS files and does computations on RDDs,
> what work gets done on the master, worker, executor, and driver?
>
> Thanks,
>
