Hi Spico,

Yes, I think an "executor core" in Spark is basically one thread in the
executor's task pool: each core allocated to an application becomes one slot
that can run one task at a time. The usual recommendation is one executor
core per physical core on the machine for best performance, but in theory
you can create as many threads as your OS allows.
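For example, on a standalone cluster you can cap how many of those
cores/threads your application claims. A minimal sketch (the master URL,
app name, and numbers are placeholders, not values from your setup):

import org.apache.spark.{SparkConf, SparkContext}

object CoreDemo { // hypothetical driver program
  def main(args: Array[String]): Unit = {
    // Each core granted to the application becomes one task slot (thread)
    // in an executor, so this app runs at most 8 tasks in parallel.
    val conf = new SparkConf()
      .setMaster("spark://master:7077") // placeholder standalone master URL
      .setAppName("core-demo")
      .set("spark.cores.max", "8") // total cores for this app across the cluster
    val sc = new SparkContext(conf)
    // ... run jobs ...
    sc.stop()
  }
}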

For deployment:
On each worker node there seems to be a single worker JVM that coordinates
the work on that node. The task thread pool does not appear to live in that
JVM, though; instead, a separate executor JVM is created for each
application that has cores allocated on the node. Otherwise it would be
rather hard to impose memory limits at the application level, and a
misbehaving application could destabilize every other application on the
node.
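This per-application executor JVM is also what makes a per-application heap
limit possible. Again just a sketch (placeholder master URL, app name, and
memory size):

import org.apache.spark.{SparkConf, SparkContext}

object MemoryDemo { // hypothetical driver program
  def main(args: Array[String]): Unit = {
    // Because every application gets its own executor JVM on a node,
    // the heap of that JVM can be capped per application:
    val conf = new SparkConf()
      .setMaster("spark://master:7077") // placeholder master URL
      .setAppName("memory-demo")
      .set("spark.executor.memory", "2g") // heap of this app's executor JVM per worker
    val sc = new SparkContext(conf)
    // ... run jobs ...
    sc.stop()
  }
}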

You can check this behavior by looking at the processes on your machine:
ps aux | grep spark.deploy   => shows the master, worker (coordinator), and driver JVMs
ps aux | grep spark.executor => shows the actual executor JVMs

2015-02-25 14:23 GMT+01:00 Spico Florin <spicoflo...@gmail.com>:

> Hello!
>  I've read the documentation about the Spark architecture, and I have the
> following questions:
> 1. How many executors can run on a single worker process (JVM)?
> 2. Should I think of an executor like a Java thread executor where the
> pool size is equal to the number of the given cores (set up by
> SPARK_WORKER_CORES)?
> 3. If the worker can have many executors, how is this handled by Spark? Or
> can I set up the number of executors per JVM myself?
>
> I look forward to your answers.
>   Regards,
>   Florin
>
