Chris, thank you so much for putting this email together. Really appreciate it. Comments inline. :)

On 07/28/2016 09:57 AM, Chris Dent wrote:

I've been reviewing my notes from the mid-cycle and discussions
leading up to it and realized I have a few unresolved or open topics
that I hope discussion here can help resolve:

# fairly straightforward things

* At what stage in the game (of the placement api) do we need to
  implement oslo_policy handling and enforcement? Right now the auth
  model is simply all-admin-role-all-the-time.

I think this is perfectly acceptable behaviour for Newton. In Ocata, we can add support for the new code-driven oslo.policy work from laski.
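
For reference, oslo.policy already supports registering default rules in code; something along these lines is what I'd expect the Ocata work to build on (the rule name here is purely illustrative, not an actual placement policy name):

    from oslo_config import cfg
    from oslo_policy import policy

    # Defaults live in code; deployers only override the rules they change.
    placement_rules = [
        policy.RuleDefault('placement:resource_providers:list',  # illustrative name
                           'role:admin',
                           description='List resource providers.'),
    ]

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_defaults(placement_rules)

    # At request-handling time:
    # enforcer.enforce('placement:resource_providers:list', {},
    #                  {'roles': ['admin']}, do_raise=True)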

* There was some discussion of adding a configuration setting (e.g.
  'placement_connection') that if not None (the default) would be
  used as the connection for the placement database. If None, the
  API database would be used. I can't recall if we said 'yea' or
  'nay' to this idea. The current code uses the api database and its
  config.

The decision at the mid-cycle was to add a new placement_sql_connection configuration option to nova.conf. The default value would be None, which would mean the code in nova/objects/resource_provider.py falls back to using the API database setting.

Deployers who want to avoid a (potentially disruptive) future data migration of tables from the API database to a new placement database can set placement_sql_connection to a URI separate from the API DB URI; the placement service would then begin writing records there in Newton. A reno note should accompany the patch that adds placement_sql_connection, informing deployers that they can proactively ease future upgrades by pointing placement_sql_connection at a different URI than the Nova API DB URI.
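
To make that concrete, here is a rough sketch of the option registration; only the option name comes from the mid-cycle decision, the [placement_database] group name and help text are my assumptions:

    from oslo_config import cfg

    placement_db_opts = [
        cfg.StrOpt('placement_sql_connection',
                   default=None,
                   secret=True,
                   help='Connection URI for a separate placement database. '
                        'When unset, the Nova API database is used.'),
    ]

    cfg.CONF.register_opts(placement_db_opts, group='placement_database')

    def placement_connection():
        # Fall back to the API database URI when the deployer has not
        # opted in to a dedicated placement database.
        return (cfg.CONF.placement_database.placement_sql_connection
                or cfg.CONF.api_database.connection)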

# less straightforward and further out things

There was some discussion that conflicted with reality a bit; I think
we need to resolve it before too long, though it shouldn't impact the
Newton-based changes:

We bounced around two different HTTP resources for returning one or
several resource providers in response to a launch request:

* POST /allocations

  returns a representation of the one target for this launch
  request, already claimed

This will be in Ocata.

We should work on a spec that outlines the plan for this call and have it submitted and ready for discussion in Barcelona.
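
To give that spec discussion something concrete to poke at, here is a very rough sketch of the call; the payload shape and field names are guesses, not a design:

    import requests

    launch_request = {
        'resources': {'VCPU': 2, 'MEMORY_MB': 4096, 'DISK_GB': 20},
        'consumer': 'instance-uuid-goes-here',  # hypothetical field
    }

    resp = requests.post('http://placement/allocations',
                         json=launch_request,
                         headers={'X-Auth-Token': 'admin-token'})
    if resp.status_code == 201:
        # The body would describe the single chosen resource provider,
        # with the claim already recorded -- no separate claim step.
        target = resp.json()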

* GET /resource_providers

  returns a list of candidate targets for a launch request, similar
  to what the existing select_destinations RPC call can do

This will also be in Ocata. Any calls from the nova-scheduler to the new placement API are going into Ocata.

For Newton, we decided that the concrete goal was to have inventory and allocation records written *from the nova-compute workers* directly to the placement HTTP API.
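
Roughly, a compute worker reporting its inventory would look something like this; the paths and payload fields follow the generic-resource-pools spec, but treat them as illustrative:

    import requests

    PLACEMENT = 'http://placement'  # endpoint discovery elided
    rp_uuid = 'compute-node-uuid'

    inventory = {
        'resource_provider_generation': 0,
        'inventories': {
            'VCPU': {'total': 32},
            'MEMORY_MB': {'total': 131072},
            'DISK_GB': {'total': 2000},
        },
    }

    resp = requests.put('%s/resource_providers/%s/inventories'
                        % (PLACEMENT, rp_uuid),
                        json=inventory,
                        headers={'X-Auth-Token': 'admin-token'})
    resp.raise_for_status()  # a 409 would mean a generation conflict to retry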

As a stretch goal for Newton, we're going to try and get the dynamic resource classes CRUD operations added to the placement REST API as well. This will allow Ironic to participate in the brave new resource-providers world with the 'node resource class' that Ironic is adding to their API. [1]

[1] https://review.openstack.org/#/c/345080/

The immediate problem here is that something else is already using
GET /resource_providers:

http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/generic-resource-pools.html#get-resource-providers

Whatever the URI, it's not clear that GET would be correct here:

* We'll probably want to send a body so GET is not ideal.

* We could pass a reference to a persisted "request spec" as a query
  string item, thus maintaining a GET, but that seems to go against
  the grain of "give a thing the info it needs to get stuff done" that
  is elsewhere in the system.

  I'd personally be pretty okay with launch-info-by-reference mode
  (sketched just below) as it allows the placement API to be in charge
  of requesting what version of a launch request it wants, rather than
  its clients needing to know what version the placement API might
  accept.
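
For concreteness, the by-reference mode might look like this; the URI and
query parameter name are placeholders, not a settled design:

    import requests

    resp = requests.get('http://placement/resource_providers',
                        params={'request_spec': 'request-spec-uuid'},
                        headers={'X-Auth-Token': 'admin-token'})
    # The placement service fetches whatever version of the request spec
    # it understands, instead of every client needing to know that version.
    candidates = resp.json()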

It's pretty clear that we're going to need at least an interim and
maybe permanent endpoint that returns a list of candidate target
resource providers. This is because, at least initially, the
placement engine will not be able to resolve all requirements down
to the one single result and additional filtering may be required in
the caller.

The question is: Will that need for additional filtering always be
present and if so do we:

* consider that a bad thing that we should strive to fix by
  expanding the powers and size of the placement engine
* consider that a good thing that allows the placement engine to be
  relatively simple and keeps edge-case behaviors being handled
  elsewhere

If the latter, then we'll have to consider how an allocation/claim
in a list of potential allocations can be essentially reserved,
verified, or rejected.
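
One shape that could take is to try candidates in order and treat a
conflict as a rejection; a rough sketch, where the URI and status codes
are placeholders rather than a settled design:

    import requests

    def claim_first_available(candidates, allocation_body):
        for rp in candidates:
            resp = requests.post('http://placement/allocations',
                                 json=dict(allocation_body,
                                           resource_provider=rp['uuid']),
                                 headers={'X-Auth-Token': 'admin-token'})
            if resp.status_code == 201:
                return rp      # claim reserved and verified
            if resp.status_code == 409:
                continue       # another claimant won the race; rejected
            resp.raise_for_status()
        return None            # no candidate could be claimed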

As an example of expanding the powers, there is the
ResourceProviderTags concept, described in:

    https://review.openstack.org/#/c/345138/

This will expand the data model of resource providers and the surface
area of the HTTP API. This may very well be entirely warranted, but
there might be other options if we assume that returning a list is
"normal".

All of the above are excellent points, but they are for Ocata. It would be great to start a dedicated ML thread *just for discussion of the GET /resource_providers and POST /allocations calls* so that we can keep the conversation focused.

Best,
-jay

Sorry if this is unclear. I'm rather jet-lagged. Ask questions if
you have them. Thanks.


