On 04/16/2018 04:16 PM, Eric Fried wrote:
I was presenting an example using VM-ish resource classes, because I can
write them down and everybody knows what I'm talking about without me
having to explain what they are.  But remember we want placement to be
usable outside of Nova as well.

But also, I thought we had situations where the VCPU and MEMORY_MB were
themselves provided by sharing providers, associated with a compute host
RP that may itself be devoid of inventory.  (This may even be a viable
way to model VMWare's clustery things today.)
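For instance, something like this (a hypothetical layout; the provider
names and the aggregate are made up):

          #        CN1 (no inventory of its own)
          #        /      \
          #       /agg1    \agg1
          #      /          \
          #     SP1          SP2
          #      (VCPU,       (DISK_GB)
          #       MEMORY_MB)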

I still don't see a use in returning the root providers in the allocation requests -- since there is nothing consuming resources from those providers.

And we already return the root_provider_uuid for all providers involved in allocation requests within the provider_summaries section.
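For illustration, each entry in provider_summaries already looks roughly
like this (UUIDs, inventory figures, and exactly which providers appear
are made up here, not a verbatim API response):

 provider_summaries = {
     "<ss1-uuid>": {
         "resources": {"DISK_GB": {"capacity": 1024, "used": 0}},
         "root_provider_uuid": "<cn1-uuid>",
     },
     "<ss2-uuid>": {
         "resources": {"IPV4_ADDRESS": {"capacity": 128, "used": 2}},
         "root_provider_uuid": "<cn1-uuid>",
     },
 }

So the root of every provider in a candidate is already recoverable from
the summaries, without changing the allocation requests themselves.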

So, I can kind of see where we might want to change *this* line of the nova scheduler:

https://github.com/openstack/nova/blob/stable/pike/nova/scheduler/filter_scheduler.py#L349

from this:

 compute_uuids = list(provider_summaries.keys())

to this:

 # provider_summaries is a dict keyed by RP UUID, so iterate its values
 compute_uuids = set(
     ps['root_provider_uuid'] for ps in provider_summaries.values()
 )
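(Using a set also collapses duplicates, so if several providers in the
summaries share the same root we still end up with exactly one compute
UUID per host.)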

But other than that, I don't see a reason to change the response from GET /allocation_candidates at this time.

Best,
-jay

On 04/16/2018 01:58 PM, Jay Pipes wrote:
Sorry it took so long to respond. Comments inline.

On 03/30/2018 08:34 PM, Eric Fried wrote:
Folks who care about placement (but especially Jay and Tetsuro)-

I was reviewing [1] and was at first very unsatisfied that we were not
returning the anchor providers in the results.  But as I started digging
into what it would take to fix it, I realized it's going to be
nontrivial.  I wanted to dump my thoughts before the weekend.

<BACKGROUND>
It should be legal to have a configuration like:

          #        CN1 (VCPU, MEMORY_MB)
          #        /      \
          #       /agg1    \agg2
          #      /          \
          #     SS1        SS2
          #      (DISK_GB)  (IPV4_ADDRESS)

and to make a request for DISK_GB and IPV4_ADDRESS,
and have it return a candidate including SS1 and SS2.

The CN1 resource provider acts as an "anchor" or "relay": a provider
that doesn't provide any of the requested resource, but connects to one
or more sharing providers that do so.
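Concretely, such a request and one returned candidate might look roughly
like this (the amounts and the exact response shape are illustrative):

 GET /allocation_candidates?resources=DISK_GB:100,IPV4_ADDRESS:1

 # one entry in allocation_requests, roughly:
 {
     "allocations": {
         "<ss1-uuid>": {"resources": {"DISK_GB": 100}},
         "<ss2-uuid>": {"resources": {"IPV4_ADDRESS": 1}},
     }
 }

Note that CN1 (the anchor) appears nowhere in the allocations themselves.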

To be honest, such a request just doesn't make much sense to me.

Think about what that is requesting. I want some DISK_GB resources and
an IP address. For what? What is going to be *using* those resources?

Ah... a virtual machine. In other words, something that would *also* be
requesting some CPU and memory resources.

So, the request is just fatally flawed, IMHO. It doesn't represent a use
case from the real world.

I don't believe we should be changing placement (either the REST API or
the implementation of allocation candidate retrieval) for use cases that
don't represent real-world requests.

Best,
-jay
