You need to set

yarn.scheduler.minimum-allocation-mb=32

otherwise the Spark AM container will run on a dedicated box instead of
running together with an executor container on one of the boxes.
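
In yarn-site.xml form that would look like this (the comment just restates
the point above):

<property>
  <!-- a small minimum allocation lets the lightweight Spark AM container
       be scheduled next to an executor instead of claiming its own box -->
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>32</value>
</property>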

For slaves I use Amazon EC2 r3.2xlarge boxes (61GB / 8 cores), which cost
~$0.10/hour as spot instances.



On Fri, Mar 11, 2016 at 3:17 PM, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> Thanks Koert and Alexander
>
> I think the YARN configuration parameters in yarn-site.xml are important.
> For those I have
>
>
> <property>
>   <name>yarn.nodemanager.resource.memory-mb</name>
>   <description>Maximum amount of physical memory, in MB, that can be
> allocated for YARN containers</description>
>   <value>8192</value>
> </property>
> <property>
>   <name>yarn.nodemanager.vmem-pmem-ratio</name>
>   <description>Ratio of virtual memory to physical memory when setting
> memory limits for containers</description>
>   <value>2.1</value>
> </property>
> <property>
>   <name>yarn.scheduler.maximum-allocation-mb</name>
>   <description>Maximum memory for each container</description>
>   <value>8192</value>
> </property>
> <property>
>   <name>yarn.scheduler.minimum-allocation-mb</name>
>   <description>Minimum memory for each container</description>
>   <value>2048</value>
> </property>
>
> However, I noticed that you, Alexander, have the following settings:
>
> yarn.nodemanager.resource.memory-mb = 54272
> yarn.scheduler.maximum-allocation-mb = 54272
>
> With 8 Spark executor cores that gives you roughly 6.6GB of memory per
> core (54272 MB / 8 = 6784 MB). As a matter of interest, how much memory
> and how many cores do you have on each node?
>
> Thanks
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn:
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 11 March 2016 at 23:01, Alexander Pivovarov <apivova...@gmail.com>
> wrote:
>
>> Forgot to mention: to avoid unnecessary container termination, add the
>> following setting to yarn-site.xml:
>>
>> yarn.nodemanager.vmem-check-enabled = false
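>>
>> In yarn-site.xml form that would be:
>>
>> <property>
>>   <!-- disables the virtual memory check that kills containers whose
>>        vmem usage exceeds the vmem-pmem-ratio limit -->
>>   <name>yarn.nodemanager.vmem-check-enabled</name>
>>   <value>false</value>
>> </property>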
>>
>>
>
