Hi Stackers,

In Newton, we had a major goal of having Nova send inventory and allocation records from the nova-compute daemon to the new placement API service over HTTP (i.e. not RPC). I'm happy to say we achieved this goal. We had a stretch goal from the mid-cycle of implementing custom resource class support. I'm sorry to say that we did not reach this goal, though Ironic did indeed get its part merged, and we should be able to complete the Nova side of this work before the summit.

Through the hard work of many folks [1] we were able to merge code that added a brand new REST API service (/placement) with endpoints for read/write operations against resource providers, inventories, allocations, and usage records. We were able to get patches merged that modified the resource tracker in the nova-compute daemon to write the compute node's inventory and allocation records to the placement API in a way that requires no operator action to keep the nova-computes up and running.
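
To make the shape of those records concrete, here's a toy sketch of the kind of inventory payload the resource tracker writes for a compute node. The field names (total, reserved, min_unit, max_unit, step_size, allocation_ratio) follow the placement inventory schema, but treat the exact payload and the helper function as my own illustration, not a spec:

```python
# Illustrative sketch of an inventory payload a resource tracker might
# PUT to /resource_providers/{uuid}/inventories. Field names follow the
# placement inventory schema; the function itself is hypothetical.

def build_compute_inventory(vcpus, memory_mb, disk_gb,
                            cpu_alloc_ratio=16.0, ram_alloc_ratio=1.5):
    """Build an inventory payload for a compute node resource provider."""
    return {
        "resources": {
            "VCPU": {
                "total": vcpus,
                "reserved": 0,
                "min_unit": 1,
                "max_unit": vcpus,
                "step_size": 1,
                "allocation_ratio": cpu_alloc_ratio,
            },
            "MEMORY_MB": {
                "total": memory_mb,
                "reserved": 512,  # hold back some memory for the host
                "min_unit": 1,
                "max_unit": memory_mb,
                "step_size": 1,
                "allocation_ratio": ram_alloc_ratio,
            },
            "DISK_GB": {
                "total": disk_gb,
                "reserved": 0,
                "min_unit": 1,
                "max_unit": disk_gb,
                "step_size": 1,
                "allocation_ratio": 1.0,
            },
        },
    }

inv = build_compute_inventory(vcpus=16, memory_mb=32768, disk_gb=2000)
```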

For Ocata and beyond, here are a number of rough priorities and goals that we need to work on...

1. Shared storage properly implemented

To fulfill the original use case around accurate reporting of shared resources, we need to complete a few subtasks:

a) complete the aggregates/ endpoints in the placement API so that resource providers can be associated with aggregates
b) have the scheduler reporting client track more than just the resource provider for the compute node
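
To show what this buys us, here's a toy in-memory model of the idea: compute nodes that have no local disk inventory get their DISK_GB from a shared storage provider associated via the same aggregate. The data structures are my own invention for illustration, not the placement schema:

```python
import uuid

# Toy in-memory model of resource providers and aggregates. Names and
# structures here are illustrative assumptions, not the placement schema.
providers = {}   # provider uuid -> {"name": ..., "inventory": {rc: total}}
aggregates = {}  # aggregate uuid -> set of provider uuids

def add_provider(name, inventory):
    pid = str(uuid.uuid4())
    providers[pid] = {"name": name, "inventory": inventory}
    return pid

def associate(agg, pid):
    aggregates.setdefault(agg, set()).add(pid)

def disk_provider_for(compute_pid):
    """Where does DISK_GB for this compute node actually come from:
    its own inventory, or a shared provider in the same aggregate?"""
    if "DISK_GB" in providers[compute_pid]["inventory"]:
        return compute_pid
    for members in aggregates.values():
        if compute_pid in members:
            for other in members:
                if "DISK_GB" in providers[other]["inventory"]:
                    return other
    return None

agg = str(uuid.uuid4())
cn = add_provider("compute1", {"VCPU": 16, "MEMORY_MB": 32768})
nfs = add_provider("shared-nfs", {"DISK_GB": 10000})
associate(agg, cn)
associate(agg, nfs)
```

With this, disk usage gets accounted against the shared provider instead of being (wrongly) multiplied across every compute node that mounts it.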

2. Custom resource classes

This actually isn't all that much work, but just needs some focus. We need the following done in this area:

a) a (very simple) REST API added to the placement API for GET/PUT of resource class names
b) modify the ResourceClass Enum field to be a StringField -- which is wire-compatible with Enum -- and add some code on each side of the client/server communication that caches the standard resource classes as constants that Nova and placement code can share
c) modify the Ironic virt driver to pass the new node_class attribute on nodes into the resource tracker, and have the resource tracker create a resource provider record for each Ironic node with a single inventory record for the node class
d) modify the resource tracker to track the allocation of instances to resource providers
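
As a rough sketch of the caching idea in (b), here's what a client-side resource class cache might look like, with custom classes (e.g. Ironic node classes) namespaced by a CUSTOM_ prefix. Both the cache shape and the prefix convention are assumptions for illustration:

```python
# Hypothetical client-side resource class cache. Standard classes are
# shared constants; custom classes are registered lazily and namespaced
# with a CUSTOM_ prefix -- both conventions are illustrative assumptions.

STANDARD_CLASSES = ("VCPU", "MEMORY_MB", "DISK_GB")

class ResourceClassCache:
    def __init__(self):
        # Standard classes get well-known, stable integer ids.
        self._ids = {name: i for i, name in enumerate(STANDARD_CLASSES)}

    def id_for(self, name):
        """Return a stable integer id, registering custom classes lazily."""
        if name not in self._ids:
            if not name.startswith("CUSTOM_"):
                raise ValueError("unknown standard resource class: %s" % name)
            self._ids[name] = len(self._ids)
        return self._ids[name]

cache = ResourceClassCache()
```

The point of the StringField change is exactly this: the wire format stays a plain string, while each side keeps its own id mapping.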

3. Integration of Nova scheduler with Placement API

We would like the Nova scheduler to be able to query the placement API for quantitative information in Ocata. So, code will need to be pushed that adds a call to the placement API for resource provider UUIDs that meet a given request for some amount of resources. This result will then be used to filter a request in the Nova scheduler for ComputeNode objects to satisfy the qualitative side of the request.
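
To make the quantitative step concrete, here's a toy sketch of that filtering call: given inventories and current usage, return the provider ids that can satisfy a resource request. All data shapes and names here are my own illustration, not the actual placement query:

```python
# Toy sketch of the quantitative filtering the scheduler would ask the
# placement API for. Data shapes are illustrative assumptions.

def can_satisfy(inventory, usage, request):
    """True if a provider's inventory can absorb the requested amounts."""
    for rc, amount in request.items():
        inv = inventory.get(rc)
        if inv is None:
            return False
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        if usage.get(rc, 0) + amount > capacity:
            return False
    return True

def filter_providers(providers, request):
    """providers: {pid: {"inventory": {...}, "usage": {...}}}"""
    return [pid for pid, p in providers.items()
            if can_satisfy(p["inventory"], p["usage"], request)]

providers = {
    "rp1": {"inventory": {"VCPU": {"total": 8, "reserved": 0,
                                   "allocation_ratio": 2.0}},
            "usage": {"VCPU": 14}},   # nearly full: capacity is 16
    "rp2": {"inventory": {"VCPU": {"total": 8, "reserved": 0,
                                   "allocation_ratio": 2.0}},
            "usage": {"VCPU": 2}},
}
```

The resulting UUID list is what the scheduler would then intersect with its qualitative (filter/weigher) pass over ComputeNode objects.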

4. Progress on qualitative request components (traits)

A number of things can be done in this area:

a) get the os-traits interface stable and include all catalogued standardized trait strings
b) agree on a schema in the placement DB for storing and querying traits against resource providers
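
The query side of (b) is simple to state, whatever schema we land on. A toy sketch, using os-traits-style trait strings but a plain in-memory mapping rather than any proposed DB schema:

```python
# Toy sketch of querying providers by required traits. Trait strings
# follow the os-traits style; the storage here is an illustrative
# in-memory mapping, not the proposed placement DB schema.

provider_traits = {
    "rp1": {"HW_CPU_X86_AVX2", "STORAGE_DISK_SSD"},
    "rp2": {"HW_CPU_X86_AVX2"},
}

def providers_with_traits(required):
    """Return (sorted) providers that have ALL of the required traits."""
    required = set(required)
    return sorted(p for p, traits in provider_traits.items()
                  if required <= traits)
```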

5. Nested resource providers

Things like SR-IOV PCI devices are actually resource providers that are embedded within another resource provider (the compute node itself). In order to tag things like SR-IOV PFs or VFs with a set of traits, we need to have discovery code run on the compute node that registers things like SR-IOV PF/VFs or SR-IOV FPGAs as nested resource providers.

Some steps needed here:

a) agreement on a schema in the placement DB for representing this nesting relationship
b) write the discovery code in nova-compute for adding these resource providers to the placement API when found
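
Whatever schema we agree on for (a), the relationship itself is just a parent link. A toy sketch of the nesting and of what discovery in (b) might do, with all names and structure being illustrative assumptions:

```python
# Toy sketch of nested resource providers: each provider may have a
# parent, and discovery registers SR-IOV PFs as children of the compute
# node. All names and structure here are illustrative assumptions.

class Provider:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def root(self):
        """Walk up the tree to the root provider (the compute node)."""
        node = self
        while node.parent is not None:
            node = node.parent
        return node

def discover_sriov_pfs(compute, pf_names):
    """What nova-compute discovery might do: register each PF found on
    the host as a nested provider under the compute node."""
    return [Provider(name, parent=compute) for name in pf_names]

cn = Provider("compute1")
pfs = discover_sriov_pfs(cn, ["enp2s0f0", "enp2s0f1"])
```

Traits then attach naturally to the nested provider (the PF or VF) rather than polluting the compute node's own trait set.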

Anyway, in conclusion, we've got a ton of work to do and I'm going to spend time before the summit trying to get good agreement on direction and proposed implementation for a number of the items listed above. Hopefully by mid-October we'll have a good idea of assignees for various work and what is going to be realistic to complete in Ocata.

Best,
-jay

[1] I'd like to personally thank Chris Dent, Dan Smith, Sean Dague, Ed Leafe, Sylvain Bauza, Andrew Laski, Alex Xu and Matt Riedemann for tolerating my sometimes lengthy absences and for pushing through communication breakdowns resulting from my inability to adequately express my ideas or document agreed solutions.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)