Re: Spark on yarn, only 1 or 2 vcores getting allocated to the containers getting created.
Try turning yarn.scheduler.capacity.resource-calculator on, then check again.

On Wed, Aug 3, 2016 at 4:53 PM, Saisai Shao wrote:
> Using the dominant resource calculator instead of the default resource
> calculator will get you the expected vcores. By default, YARN does not
> honor CPU cores as a resource, so you will always see 1 vcore no matter
> how many cores you set in Spark.
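For reference, switching to the dominant resource calculator in capacity-scheduler.xml would look something like the snippet below (this assumes the CapacityScheduler is in use; the property name and class are the standard Hadoop ones, but check your distribution's docs):

    <!-- capacity-scheduler.xml: make the scheduler account for CPU
         as well as memory when sizing containers -->
    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>

You will likely need to restart the ResourceManager (or refresh the scheduler configuration) for the change to take effect.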
Re: Spark on yarn, only 1 or 2 vcores getting allocated to the containers getting created.
Using the dominant resource calculator instead of the default resource calculator will get you the expected vcores. By default, YARN does not honor CPU cores as a resource, so you will always see 1 vcore no matter how many cores you set in Spark.

On Wed, Aug 3, 2016 at 12:11 PM, satyajit vegesna <satyajit.apas...@gmail.com> wrote:
> Hi All,
>
> I am trying to run a Spark job on YARN, and I specify --executor-cores
> as 20. When I check the "nodes of the cluster" page at
> http://hostname:8088/cluster/nodes, I see 4 containers created on each
> node in the cluster, but only 1 vcore assigned to each container.
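Once the calculator is switched, one quick way to confirm the allocation (besides the ResourceManager web UI mentioned in the original post) is the YARN CLI; the node id below is a placeholder:

    # List the NodeManagers in the cluster and their ids
    yarn node -list

    # Show a node's report, including memory and vcore usage/capacity
    yarn node -status <node-id>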
Spark on yarn, only 1 or 2 vcores getting allocated to the containers getting created.
Hi All,

I am trying to run a Spark job on YARN, and I specify --executor-cores as 20. But when I check the "nodes of the cluster" page at http://hostname:8088/cluster/nodes, I see 4 containers created on each node in the cluster.

But I can only see 1 vcore assigned to each container, even when I specify --executor-cores 20 while submitting the job with spark-submit.

yarn-site.xml:

    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>6</value>
    </property>
    <property>
      <name>yarn.scheduler.minimum-allocation-vcores</name>
      <value>1</value>
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-vcores</name>
      <value>40</value>
    </property>
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>7</value>
    </property>
    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>20</value>
    </property>

Has anyone faced the same issue?

Regards,
Satyajit.
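A minimal sketch of the spark-submit invocation being described; only --executor-cores 20 comes from the post, while the jar name, memory, and executor count are illustrative placeholders:

    # Illustrative submit command; without the DominantResourceCalculator,
    # YARN will still report 1 vcore per container regardless of this flag
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --executor-cores 20 \
      --executor-memory 4g \
      --num-executors 4 \
      my-app.jar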