On 06/09/2014 01:38 PM, Joe Cropper wrote:
On Mon, Jun 9, 2014 at 5:17 AM, Day, Phil <philip....@hp.com> wrote:
Hi Joe,

Can you give some examples of what that data would be used for?

Sure!  For example, in the PowerKVM world, hosts can be dynamically
configured to run in split-core processor mode.  This setting can
change at runtime, and it'd be nice to let the driver track it somehow
-- it probably doesn't warrant its own explicit field in compute_node.
Likewise, PowerKVM has a concept of the maximum SMT level at which its
guests can run (which can also vary dynamically based on the
split-core setting), and it would be nice to tie such settings to the
compute node.
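
To make that concrete, here's a rough sketch (in Python) of the kind
of key-value data I have in mind -- the key names and values below are
made up purely for illustration:

    # Hypothetical "extra specs" a PowerKVM driver might want to attach
    # to its compute node; keys and values are illustrative only.
    node_extra_specs = {
        "split_core_mode": "on",  # toggled dynamically on the host
        "max_smt_level": 4,       # max guest SMT; varies with split-core
    }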

That information is typically stored in the compute_node.cpu_info field.

Overall, this would give folks writing compute drivers the ability to
attach the "extra spec" style data to a compute node for a variety of
purposes -- two simple examples provided above, but there are many
more.  :-)

If it's something that the driver can discover on its own, and that
the driver can/should use in determining the capabilities the
hypervisor offers, then at this point I believe compute_node.cpu_info
is the place to put that information.  It's probably worth renaming
the cpu_info field to just "capabilities", to be more generic and to
indicate that it's where the driver stores discoverable capability
information about the node...
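
For instance, a driver could fold those discoverable settings into the
JSON blob it already serializes into that field -- just a sketch here,
the exact keys are entirely up to the driver:

    import json

    # Sketch: folding discoverable capabilities into the JSON string a
    # driver reports for compute_node.cpu_info; keys are illustrative.
    capabilities = {
        "arch": "ppc64",
        "split_core_mode": "on",
        "max_smt_level": 4,
    }
    cpu_info = json.dumps(capabilities)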

Now, for *user-defined* taxonomies, I'm a big fan of simple string
tagging, as is proposed for the server instance model in this spec:

https://review.openstack.org/#/c/91444/
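
The idea there is a flat set of user-defined labels rather than
key-value pairs -- something like this (values made up):

    # Simple string tags: a flat, user-defined taxonomy, as proposed in
    # the spec above for server instances.  Example values only.
    tags = {"production", "ssd", "east-rack-12"}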

Best,
jay


On the face of it, it sounds like what you're looking for is pretty
similar to what the Extensible Resource Tracker sets out to do
(https://review.openstack.org/#/c/86050 and
https://review.openstack.org/#/c/71557).

Thanks for pointing this out.  I actually ran across these while I was
searching the code to see what might already exist in this space.
The compute node 'stats' were my first guess, but those are clearly
reserved for the resource tracker and wind up getting purged/deleted
over time, since the 'extra specs' I reference above aren't
necessarily tied to the spawning/deleting of instances.  In other
words, they're not really consumable resources, per se.  Unless I'm
overlooking a way (perhaps I am) to use the
extensible-resource-tracker blueprint for arbitrary key-value pairs
**not** related to instances, I think we need something additional?
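
To illustrate the purging problem I mean -- this is not the actual
resource tracker code, just a caricature of its audit behavior:

    # Illustrative only: the resource tracker periodically rebuilds
    # stats around instance claims, so driver-stashed keys can be lost.
    def _rebuild_stats(compute_node, instances):
        stats = {"num_instances": len(instances)}
        compute_node.stats = stats  # any unrelated keys are wiped here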

I'd happily create a new blueprint for this as well.


Phil



From: Joe Cropper [mailto:cropper....@gmail.com]
Sent: 07 June 2014 07:30
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Arbitrary "extra specs" for compute nodes?

Hi Folks,

I was wondering if there is any mechanism in the compute node
structure to hold arbitrary key-value pairs, similar to flavors'
"extra_specs" concept?

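For reference, flavors already expose this shape today -- an arbitrary
dict of strings (these keys are just examples):

    # Flavor extra_specs: free-form key-value pairs attached to a
    # flavor.  The point here is the dict shape, not the specific keys.
    flavor_extra_specs = {
        "quota:cpu_shares": "2048",
        "capabilities:hypervisor_type": "QEMU",
    }
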
It appears there are entries for things like pci_stats, stats, and the
recently added extra_resources -- but these all have more specific
uses vs. just arbitrary data one may want to maintain about the
compute node over the course of its lifetime.

Unless I'm overlooking an existing construct for this, would this be
something folks would welcome a Juno blueprint for -- i.e., adding an
extra_specs-style column holding a JSON-formatted string that could be
loaded as a dict of key-value pairs?
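
Roughly what I'm picturing, as a sketch -- the column and helper names
below are hypothetical, not existing code:

    import json

    # Hypothetical helpers for an extra_specs TEXT column on
    # compute_node, stored as JSON and exposed as a dict.
    def load_extra_specs(compute_node):
        return json.loads(compute_node.extra_specs or "{}")

    def save_extra_specs(compute_node, specs):
        compute_node.extra_specs = json.dumps(specs)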

Thoughts?

Thanks,

Joe


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
