On Mon, Sep 15, 2014 at 05:51:43PM +0000, Jay Faulkner wrote:
> Steven,
> 
> It's important to note that two of the blueprints you reference: 
> 
> https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
> https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery
> 
> are both very unlikely to land in Ironic -- these are configuration and 
> discovery pieces that best fit inside an operator-deployed CMDB, rather than 
> Ironic trying to extend its scope significantly to include these types of 
> functions. I expect the scoping of Ironic with regard to hardware 
> discovery/interrogation, as well as configuration of hardware (as I will 
> outline below), to be hot topics in Ironic design summit sessions at Paris.

Hmm, okay - not sure I really get how a CMDB is going to help you configure
your RAID arrays in an automated way?

Or are you subscribing to the legacy datacentre model where a sysadmin
configures a bunch of boxes via whatever method, puts their details into
the CMDB, then feeds those details into Ironic?

> A good way of looking at it is that Ironic is responsible for hardware *at 
> provision time*. Registering the nodes in Ironic, as well as hardware 
> settings/maintenance/etc. while a workload is provisioned, is left to the 
> operators' CMDB. 
> 
> This means what Ironic *can* do is modify the configuration of a node at 
> provision time based on information passed down the provisioning pipeline. 
> For instance, if you wanted to configure certain firmware pieces at provision 
> time, you could do something like this:
> 
> The Nova flavor that maps to the Ironic node sets capability:vm_hypervisor. 
> This would map to an Ironic driver that exposes vm_hypervisor as a 
> capability and, upon seeing that capability:vm_hypervisor has been 
> requested, could then configure the firmware/BIOS of the machine to 
> 'hypervisor friendly' settings, such as the VT bit on and Turbo mode off. 
> You could map multiple different combinations of capabilities as different 
> Ironic flavors, and have them all represent different configurations of the 
> same pool of nodes. So, you end up with two categories of abilities: 
> inherent abilities of the node (such as the amount of RAM or CPU installed), 
> and configurable abilities (i.e. things that can be turned on/off at 
> provision time on demand) -- or perhaps, in the future, even things like RAM 
> and CPU will be dynamically provisioned into nodes at provision time.
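
If I follow, the flow would look roughly like the sketch below. This is just
my own reading of it, using python-novaclient and python-ironicclient; the
'vm_hypervisor' capability, the flavor name, node UUID and credentials are
placeholders from your example, not anything that exists today:

    # Rough sketch only -- assumes an Ironic driver that actually acts on a
    # 'vm_hypervisor' capability at provision time (hypothetical today).
    from ironicclient import client as ir_client
    from novaclient import client as nova_client

    ironic = ir_client.get_client(1,
                                  os_username='admin',
                                  os_password='secret',
                                  os_tenant_name='admin',
                                  os_auth_url='http://keystone:5000/v2.0')
    nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                              'http://keystone:5000/v2.0')

    # Advertise the configurable ability on the node; the driver would act
    # on it when the node is provisioned.
    ironic.node.update('NODE_UUID',
                       [{'op': 'add',
                         'path': '/properties/capabilities',
                         'value': 'vm_hypervisor:true'}])

    # Request that ability through the flavor, so the scheduler lands the
    # instance on a matching node and the driver flips the BIOS settings.
    flavor = nova.flavors.find(name='bm.hypervisor')
    flavor.set_keys({'capabilities:vm_hypervisor': 'true'})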

So you advocate pushing all the vendor-specific stuff down into various
Ironic drivers - interesting. Is any of what you describe above possible
today?

Steve

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
