Hi Jerry,

This solves my problem. Thanks! 🙏

On Sun, Sep 10, 2017 at 8:19 PM Saisai Shao <sai.sai.s...@gmail.com> wrote:

> I guess you're using the Capacity Scheduler with DefaultResourceCalculator,
> which doesn't count CPU cores in its resource calculation, so the "1" you
> saw is effectively meaningless. If you want CPU resources to be taken into
> account as well, you should switch to DominantResourceCalculator.
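>
> For reference, with the Capacity Scheduler this is controlled by the
> resource-calculator setting in capacity-scheduler.xml (a minimal sketch;
> the exact config file location depends on your Hadoop deployment):
>
>   <property>
>     <name>yarn.scheduler.capacity.resource-calculator</name>
>     <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
>   </property>
>
> You may need to restart (or refresh) the ResourceManager for the new
> calculator to take effect.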
>
> Thanks
> Jerry
>
> On Sat, Sep 9, 2017 at 6:54 AM, Xiaoye Sun <sunxiaoy...@gmail.com> wrote:
>
>> Hi,
>>
>> I am using Spark 1.6.1 and YARN 2.7.4.
>> I want to submit a Spark application to a YARN cluster. However, I found
>> that the number of vcores assigned to a container/executor is always 1,
>> even though I set spark.executor.cores=2. I also found that the number of
>> tasks an executor runs concurrently is 2. So it seems that Spark knows the
>> executor/container has two CPU cores, but the request is not correctly
>> passed to the YARN resource scheduler. I am using
>> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
>> on YARN.
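>>
>> For reference, this is roughly how I submit the application (the app
>> jar, class name, and executor count/memory below are placeholders):
>>
>>   spark-submit --master yarn --deploy-mode cluster \
>>     --num-executors 4 --executor-cores 2 --executor-memory 4g \
>>     --class com.example.MyApp myapp.jar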
>>
>> I am wondering whether it is possible to assign multiple vcores to a
>> container when a Spark job is submitted to a YARN cluster in yarn-cluster
>> mode.
>>
>> Thanks!
>> Best,
>> Xiaoye
>>
>
>
