You get one Spark executor per YARN container. The executor can have
multiple cores, yes; this is configurable. So the number of partitions that
can be processed in parallel is num-executors * executor-cores, and the
memory available for processing a partition is roughly executor-memory /
executor-cores (cores can of course borrow memory from each other within an
executor).
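
For example (the numbers are purely illustrative): with --num-executors 4
and --executor-cores 5 you can have 4 * 5 = 20 partitions in flight at once,
and with --executor-memory 10g each of those concurrent tasks has roughly
10 GB / 5 = 2 GB to work with.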

The relevant settings for spark-submit are:
 --executor-memory
 --executor-cores
 --num-executors
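
Putting it together, a submission might look something like this (the class
name, jar and numbers are just placeholders for illustration):

 spark-submit \
   --master yarn \
   --num-executors 4 \
   --executor-cores 5 \
   --executor-memory 10g \
   --class com.example.MyApp \
   myapp.jar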

On Fri, Mar 11, 2016 at 4:58 PM, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> Hi,
>
> Can these be clarified please
>
>
>    1. Can a YARN container use more than one core, and is this
>    configurable?
>    2. A YARN container is constrained to 8 GB by
>    "yarn.scheduler.maximum-allocation-mb". If a YARN container is a Spark
>    process, will that limit also include the memory Spark is going to be using?
>
> Thanks,
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
