On YARN, there is no concept of a Spark Worker.  Multiple executors will be
run per node without any effort required by the user, as long as all the
executors fit within each node's resource limits.
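For example (a minimal illustrative sketch; these are standard spark-submit
flags, but the values and jar name are arbitrary), several executors per node
can be requested at submit time:

  ./bin/spark-submit --master yarn \
    --num-executors 6 \
    --executor-cores 2 \
    --executor-memory 4g \
    your-app.jar

YARN then packs executors onto nodes wherever the requested cores and memory fit.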

-Sandy

On Wed, Jun 10, 2015 at 3:24 PM, Evo Eftimov <evo.efti...@isecc.com> wrote:

> Yes, I think it is ONE worker, ONE executor, as an executor is nothing but a
> JVM instance spawned by the worker.
>
> To run more executors, i.e. JVM instances, on the same physical cluster node,
> you need to run more than one worker on that node and then allocate only
> part of the system resources to each worker/executor.
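>
> For example (an illustrative sketch with arbitrary values, not from the
> original message), in standalone mode this is configured in conf/spark-env.sh:
>
>   # run two worker JVMs per machine, each getting a slice of the node
>   SPARK_WORKER_INSTANCES=2
>   SPARK_WORKER_CORES=4
>   SPARK_WORKER_MEMORY=8g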
>
>
> Sent from Samsung Mobile
>
>
> -------- Original message --------
> From: maxdml
> Date: 2015/06/10 19:56 (GMT+00:00)
> To: user@spark.apache.org
> Subject: Re: Determining number of executors within RDD
>
> Actually this is somewhat confusing, for two reasons:
>
> - First, the option 'spark.executor.instances', which the source code of
> SparkSubmit.scala seems to handle only in the YARN case, is also present in
> the conf/spark-env.sh file under the standalone section, which would suggest
> that it is available in that mode as well.
>
> - Second, a post from Andrew Or states that this property defines the
> number of workers in the cluster, not the number of executors on a given
> worker.
> (
> http://apache-spark-user-list.1001560.n3.nabble.com/clarification-for-some-spark-on-yarn-configuration-options-td13692.html
> )
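>
> For what it's worth, the property itself is usually passed at submit time;
> a minimal illustrative sketch (values and jar name are arbitrary):
>
>   ./bin/spark-submit --master yarn \
>     --conf spark.executor.instances=4 \
>     your-app.jar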
>
> Could anyone clarify this? :-)
>
> Thanks.
