On 10/17/2016 11:14 PM, Ed Leafe wrote:
> Now that we’re starting to model some more complex resources, it seems
> that some of the original design decisions may have been mistaken. One
> approach to work around this is to create multiple levels of resource
> providers. While that works, it is unnecessarily complicated IMO. I think
> we need to revisit some basic assumptions about the design before we dig
> ourselves a big design hole that will be difficult to get out of. I’ve
> tried to summarize my thoughts in a blog post. I don’t presume that this
> is the only possible solution, but I feel it is better than the current
> approach.
>
> https://blog.leafe.com/virtual-bike-sheds/

I commented on your blog, but I'll leave my response here for posterity as well:

First, one of the reasons for the resource providers work was to *standardize* as much as possible the classes of resource that a cloud provides. Without standardized resource classes, there is no interoperability between clouds. The proposed solution of creating a resource class for each combination of the actual resource class (the SR-IOV VF) and the collection of traits that the VF might have (physical network tag, speed, product and vendor ID, etc.) means there would be no interoperable way of referring to a VF resource in one OpenStack cloud as providing the same thing as a VF resource in another OpenStack cloud. The fact that a VF might be tagged to physical network A or physical network B doesn't change the fundamentals: it's a virtual function on an SR-IOV-enabled NIC that a guest consumes. If I don't have a single resource class that represents a virtual function on an SR-IOV-enabled NIC (and instead have dozens of different resource classes that refer to variations of VFs based on network tag and other traits), then I cannot have a normalized multi-cloud OpenStack environment, because there is no standardization.
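
To make that contrast concrete, here is a rough sketch in Python. The class and trait names are illustrative only (SRIOV_NET_VF and the CUSTOM_* traits stand in for whatever we standardize), not a literal dump of the placement data model:

    # Combinatorial approach: mint a new resource class for every
    # combination of traits. None of these names is standard, so a
    # request written against one cloud means nothing in another.
    combinatorial_inventory = {
        "SRIOV_VF_PHYSNET_A_10G": 4,
        "SRIOV_VF_PHYSNET_B_10G": 4,
    }

    # Decoupled approach: one standard quantitative class plus
    # qualitative traits on each provider. "Give me an SRIOV_NET_VF"
    # means the same thing in every OpenStack cloud.
    decoupled_providers = [
        {"name": "pf0", "resource_class": "SRIOV_NET_VF",
         "total": 4, "traits": {"CUSTOM_PHYSNET_A"}},
        {"name": "pf1", "resource_class": "SRIOV_NET_VF",
         "total": 4, "traits": {"CUSTOM_PHYSNET_B"}},
    ]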

Secondly, the compute host to SR-IOV PF relationship is only one of the relationships that nested resource providers can represent. Other relationships that need to be represented include:

* Compute host to NUMA cell relations, where a NUMA cell provides VCPU, MEMORY_MB, MEMORY_PAGE_2M and MEMORY_PAGE_1G inventories that are separate from each other but accounted for in the parent provider (meaning the compute host's MEMORY_MB inventory is logically the aggregate of both NUMA cells' inventories of MEMORY_MB). In your data modeling, how would you represent two NUMA cells, each with their own inventories and allocations? Would you create resource classes called NUMA_CELL_0_MEMORY_MB and NUMA_CELL_1_MEMORY_MB, etc.? See the point above about one of the purposes of the resource providers work being the standardization of resource classification. (I sketch this scenario in toy code after this list.)

* NIC bandwidth and NIC bandwidth per physical network. If I have 4 physical NICs on a compute host and I want to track network bandwidth as a consumable resource on each of those NICs, how would I go about doing that? Again, would you suggest auto-creating a set of resource classes representing the NICs? So, NET_BW_KB_ENP3S1, NET_BW_KB_ENP4S0, and so on? If I wanted to see the total aggregate bandwidth of the compute host, the system would then need tribal knowledge built into it: it would have to know that all of the NET_BW_KB_* resource classes describe the exact same resource (network bandwidth in KB) and that the class names must be parsed in a particular way. Again, not standardizable. In the nested resource providers modeling, we would have a parent compute host resource provider and 4 child resource providers, one for each of the NICs. Each NIC would have a set of traits indicating, for example, the interface name or physical network tag, but the inventory (quantitative) amounts for network bandwidth would use a single standardized resource class, say NET_BW_KB. This nested resource providers system accurately models the real-world arrangement of the things that provide the consumable resource, which is network bandwidth. (This is the second sketch after this list.)
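
Here is the first sketch mentioned above: a toy Python model of nested providers, covering the NUMA scenario. This is a simplification to illustrate the accounting, not the actual placement schema; the names and numbers are made up:

    class Provider:
        """Toy nested resource provider: standard resource class names,
        per-provider inventory, children rolling up to the parent."""
        def __init__(self, name, inventory=None, traits=None):
            self.name = name
            self.inventory = dict(inventory or {})  # class name -> total
            self.traits = set(traits or ())
            self.children = []

        def add_child(self, child):
            self.children.append(child)
            return child

        def aggregate(self, resource_class):
            # The parent's logical total is its own inventory plus the
            # sum of its children's, recursively.
            return (self.inventory.get(resource_class, 0) +
                    sum(c.aggregate(resource_class) for c in self.children))

    host = Provider("compute-host-1")
    host.add_child(Provider("numa-cell-0", inventory={
        "VCPU": 8, "MEMORY_MB": 16384, "MEMORY_PAGE_1G": 8}))
    host.add_child(Provider("numa-cell-1", inventory={
        "VCPU": 8, "MEMORY_MB": 16384, "MEMORY_PAGE_1G": 8}))

    # One standard class, MEMORY_MB, everywhere. The host's logical
    # total is the aggregate of its cells; no NUMA_CELL_0_MEMORY_MB
    # class is ever needed.
    assert host.aggregate("MEMORY_MB") == 32768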
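
And the second sketch, reusing the same toy Provider class for the NIC bandwidth scenario (interface names, trait names and bandwidth figures again made up for illustration):

    # Four physical NICs as children of the same host, each exposing
    # the same standard NET_BW_KB class; the qualitative differences
    # (interface, physical network) live in traits, not in class names.
    for ifname, physnet in [("enp3s1", "A"), ("enp4s0", "A"),
                            ("enp5s0", "B"), ("enp6s0", "B")]:
        host.add_child(Provider("nic-" + ifname,
                                inventory={"NET_BW_KB": 10000000},
                                traits={"CUSTOM_PHYSNET_" + physnet}))

    # Total aggregate bandwidth requires no tribal knowledge about
    # specially formatted class names:
    assert host.aggregate("NET_BW_KB") == 40000000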

Finally, I think you are overstating the complexity of the SQL that is involved in the placement queries. 🙂 I’ve tried to design the DB schema with an eye to efficient and relatively simple SQL queries, and keeping quantitative and qualitative things decoupled in the schema was a big part of that efficiency. I’d like to see specific examples of how you would solve the above scenarios by combining the qualitative and quantitative aspects into a single resource type while still managing to have some interoperable standards that multiple OpenStack clouds can expose.
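
As a rough illustration of why the decoupling keeps the queries simple (again using the toy Provider model above, not the real SQL), the core of a placement request is just two independent checks, one quantitative and one qualitative:

    def find_providers(rp, resource_class, amount, required_traits):
        """Walk the provider tree, returning providers that have enough
        of the given standard resource class AND all required traits."""
        matches = []
        if (rp.inventory.get(resource_class, 0) >= amount
                and required_traits <= rp.traits):
            matches.append(rp)
        for child in rp.children:
            matches.extend(find_providers(child, resource_class,
                                          amount, required_traits))
        return matches

    # "5 Gbit worth of bandwidth on physical network A" stays
    # expressible with one standard class plus one trait:
    nics = find_providers(host, "NET_BW_KB", 5000000, {"CUSTOM_PHYSNET_A"})

Collapsing the traits into the class name would turn each of those checks into string parsing of specially formatted class names instead.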

Best,
-jay

