On 06/03/2014 10:40 PM, ChangBo Guo wrote:
Jay, thanks for raising this.
+1 for this proposal.
A related question about the CPU and RAM allocation ratios: should we
also apply them when getting hypervisor information with the command
"nova hypervisor-show ${hypervisor-name}"? The output looks like:
| memory_mb                 | 15824 |
| memory_mb_used            | 1024 |
| running_vms               | 1 |
| service_host              | node-6 |
| service_id                | 39 |
| vcpus                     | 4 |
| vcpus_used                | 1 |

vcpus is showing the number of physical CPUs, which I don't think is
correct. Any thoughts?

Yes, I believe it would be appropriate to return the adjusted total of vCPU and memory. This would be trivial if we actually stored the allocation ratios in each compute node record, where they naturally belong (as the ratios describe an attribute of the compute node, not any scheduling policy), instead of in the scheduler filters.
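To make the effect concrete, here is the arithmetic for the sample output quoted above, assuming the filters' default ratios (cpu_allocation_ratio=16.0 and ram_allocation_ratio=1.5); the exact values on any given deployment depend on its nova.conf:

```python
# Apply the default scheduler-filter allocation ratios to the
# hypervisor-show figures quoted earlier in this thread.
physical_vcpus = 4
physical_memory_mb = 15824

cpu_allocation_ratio = 16.0   # core_filter default
ram_allocation_ratio = 1.5    # ram_filter default

# The adjusted totals are what hypervisor-show would report if the
# ratios were applied to the compute node record.
adjusted_vcpus = int(physical_vcpus * cpu_allocation_ratio)
adjusted_memory_mb = int(physical_memory_mb * ram_allocation_ratio)

print(adjusted_vcpus)      # 64
print(adjusted_memory_mb)  # 23736
```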

Best,
-jay

2014-06-03 21:29 GMT+08:00 Jay Pipes <jaypi...@gmail.com

    Hi Stackers,

    tl;dr
    =====

    Move the CPU and RAM allocation ratio definitions out of the Nova
    scheduler and into the resource tracker, and move the overcommit
    calculations out of the core_filter and ram_filter scheduler pieces.

    Details
    =======

    Currently, in the Nova code base, the thing that controls whether or
    not the scheduler places an instance on a compute host that is
    already "full" (in terms of memory or vCPU usage) is a pair of
    configuration options* called cpu_allocation_ratio and
    ram_allocation_ratio.

    These configuration options are defined in, respectively,
    nova/scheduler/filters/core_filter.py and
    nova/scheduler/filters/ram_filter.py.

    Every time an instance is launched, the scheduler loops through a
    collection of host state structures that contain resource
    consumption figures for each compute node. For each compute host,
    the core_filter's and ram_filter's host_passes() method is called.
    In the host_passes() method, the host's reported total amount of
    CPU or RAM is multiplied by the relevant configuration option, and
    the reported used amount of CPU or RAM is then subtracted from that
    product. If the result is greater than or equal to the amount of
    CPU or RAM needed by the instance being launched, True is returned
    and the host continues to be considered during scheduling
    decisions.

    I propose we move the definition of the allocation ratios out of the
    scheduler entirely, as well as the calculation of the total amount
    of resources each compute node contains. The resource tracker is the
    most appropriate place to define these configuration options, as the
    resource tracker is what is responsible for keeping track of total
    and used resource amounts for all compute nodes.
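    A rough sketch of what this could look like (the class shape and
    field names here are illustrative assumptions, not the actual
    resource tracker API):

```python
# Hypothetical sketch: the resource tracker applies the allocation
# ratios once, when it reports a compute node's resources, so the
# scheduler only needs a plain "free >= requested" comparison.
class ResourceTracker:
    def __init__(self, phys_vcpus, phys_memory_mb,
                 cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5):
        self.phys_vcpus = phys_vcpus
        self.phys_memory_mb = phys_memory_mb
        self.cpu_allocation_ratio = cpu_allocation_ratio
        self.ram_allocation_ratio = ram_allocation_ratio

    def report(self):
        # Advertise the *adjusted* totals for this compute node.
        return {
            'vcpus': int(self.phys_vcpus * self.cpu_allocation_ratio),
            'memory_mb': int(self.phys_memory_mb *
                             self.ram_allocation_ratio),
        }

rt = ResourceTracker(phys_vcpus=4, phys_memory_mb=15824)
print(rt.report())  # {'vcpus': 64, 'memory_mb': 23736}
```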

    Benefits:

      * Allocation ratios determine the amount of resources that a
    compute node advertises. The resource tracker is what determines the
    amount of resources that each compute node has, and how much of a
    particular type of resource have been used on a compute node. It
    therefore makes sense to put calculations and definition of
    allocation ratios where they naturally belong.
      * The scheduler currently needlessly re-calculates total resource
    amounts on every call to the scheduler. This isn't necessary: the
    total resource amounts don't change unless a configuration option
    is changed on a compute node (or host aggregate), so the
    calculation can be done once, more efficiently, in the resource
    tracker.
      * Move more logic out of the scheduler
      * With the move to an extensible resource tracker, we can more
    easily evolve to defining all resource-related options in the same
    place (instead of in different filter files in the scheduler...)

    Thoughts?

    Best,
    -jay

    * Host aggregates may also have a separate allocation ratio that
    overrides any configuration setting a particular host may have.

    _______________________________________________
    OpenStack-dev mailing list
    OpenStack-dev@lists.openstack.org
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
ChangBo Guo(gcb)




