This is expected for DefaultResourceCalculator (memory-based scheduling),
which allocates the requested amount of memory and 1 (logical) vcore per
container. Say a node has 100GB and 5 cores, and 15 containers are
requested, each with 10GB: 10 containers will be allocated, and the
available node resource will be 0GB and -5 cores. Scheduling stops once
memory is exhausted, without considering the cores.
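
To make the arithmetic concrete, here is a minimal, self-contained
sketch (not the actual scheduler code) of how memory-only allocation
drives the vcore count negative; the numbers match the example above:

    // Minimal simulation of DefaultResourceCalculator-style allocation:
    // only memory gates scheduling; vcores are decremented but never checked.
    public class MemoryOnlyScheduling {
        public static void main(String[] args) {
            long availMemGB = 100;     // node memory
            long availVcores = 5;      // node vcores
            final long reqMemGB = 10;  // per-container memory request
            final long reqVcores = 1;  // 1 (logical) vcore per container

            int allocated = 0;
            for (int i = 0; i < 15; i++) {          // 15 container requests
                if (availMemGB < reqMemGB) break;   // memory is the only check
                availMemGB -= reqMemGB;
                availVcores -= reqVcores;           // can go negative
                allocated++;
            }
            // Prints: allocated=10, availableMemGB=0, availableVcores=-5
            System.out.printf("allocated=%d, availableMemGB=%d, availableVcores=%d%n",
                allocated, availMemGB, availVcores);
        }
    }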

DominantResourceCalculator, on the other hand, considers both memory and
CPU and stops scheduling once either is exhausted. In the above example,
it stops after 5 containers are allocated (50GB and 5 cores), leaving
50GB and 0 cores.
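
If you want the scheduler to account for CPU as well, and you are using
the Capacity Scheduler, you can switch the calculator in
capacity-scheduler.xml (DefaultResourceCalculator is the default):

    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>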

On Tue, Feb 19, 2019 at 9:59 PM Huang Meilong <ims...@outlook.com> wrote:

> Hi,
> I'm getting metrics of scheduler queue with jmx:
>
> http://localhost:8088/jmx?qry=Hadoop:service=ResourceManager,name=QueueMetrics,*
>
> I found some negative data points for AvailableVCores, is this a bug in
> YARN?
>
>
> timestamp: 1550565127000, yarn.QueueMetrics.root.AvailableVCores=-31
> timestamp: 1550565156000, yarn.QueueMetrics.root.AvailableVCores=-31
> timestamp: 1550565186000, yarn.QueueMetrics.root.AvailableVCores=-32
> timestamp: 1550565220000, yarn.QueueMetrics.root.AvailableVCores=14
> timestamp: 1550565250000, yarn.QueueMetrics.root.AvailableVCores=14
>
