Try setting yarn.scheduler.capacity.resource-calculator to the dominant resource calculator, then check again.
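
If you are on the capacity scheduler, that property lives in
capacity-scheduler.xml and takes a calculator class name; a minimal
sketch (stock Hadoop class name, restart the ResourceManager after
changing it):

<property>
        <name>yarn.scheduler.capacity.resource-calculator</name>
        <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>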

On Wed, Aug 3, 2016 at 4:53 PM, Saisai Shao <sai.sai.s...@gmail.com> wrote:
> Using the dominant resource calculator instead of the default resource
> calculator will get you the vcores you expect. By default, YARN does not
> honor CPU cores as a resource, so you will always see 1 vcore no matter
> how many cores you set in Spark.
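>
> (With the memory-only default calculator, containers per node work out
> to roughly floor(yarn.nodemanager.resource.memory-mb / per-container
> memory request); for example, floor(70000 / 17500) = 4 would match the
> 4 containers reported below, where the 17500 MB per-container request
> is an assumption for illustration.)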
>
> On Wed, Aug 3, 2016 at 12:11 PM, satyajit vegesna
> <satyajit.apas...@gmail.com> wrote:
>>
>> Hi All,
>>
>> I am trying to run a Spark job on YARN, and I specify an --executor-cores
>> value of 20.
>> But when I check the "nodes of the cluster" page at
>> http://hostname:8088/cluster/nodes, I see 4 containers getting created
>> on each node in the cluster.
>>
>> However, only 1 vcore is assigned to each container, even though I
>> specify --executor-cores 20 when submitting the job with spark-submit.
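>>
>> For reference, the submit command is along these lines (the jar, class
>> name, and memory setting here are placeholders):
>>
>> spark-submit --master yarn \
>>     --executor-cores 20 \
>>     --executor-memory 10g \
>>     --class com.example.MyJob myjob.jar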
>>
>> yarn-site.xml
>> <property>
>>         <name>yarn.scheduler.maximum-allocation-mb</name>
>>         <value>60000</value>
>> </property>
>> <property>
>>         <name>yarn.scheduler.minimum-allocation-vcores</name>
>>         <value>1</value>
>> </property>
>> <property>
>>         <name>yarn.scheduler.maximum-allocation-vcores</name>
>>         <value>40</value>
>> </property>
>> <property>
>>         <name>yarn.nodemanager.resource.memory-mb</name>
>>         <value>70000</value>
>> </property>
>> <property>
>>         <name>yarn.nodemanager.resource.cpu-vcores</name>
>>         <value>20</value>
>> </property>
>>
>>
>> Has anyone faced the same issue?
>>
>> Regards,
>> Satyajit.
>
>
