New version of the spec:
https://review.openstack.org/#/c/138115/
Problem description updated.
Power interface part removed (not in scope of deploy driver).

On 12/09/2014 12:23 AM, Devananda van der Veen wrote:

I'd like to raise this topic for a wider discussion outside of the hallway track and code reviews, where it has thus far mostly remained.


In previous discussions, my understanding has been that the Fuel team sought to use Ironic to manage "pets" rather than "cattle" - and doing so required extending the API and the project's functionality in ways that no one else on the core team agreed with. Perhaps that understanding was wrong (or perhaps not), but in any case, there is now a proposal to add a FuelAgent driver to Ironic. The proposal claims this would meet that team's needs without requiring changes to the core of Ironic.


https://review.openstack.org/#/c/138115/


The Problem Description section calls out four things, which have all been discussed previously (some are here [0]). I would like to address each one, invite discussion on whether or not these are, in fact, problems facing Ironic (not whether they are problems for someone, somewhere), and then ask why these necessitate a new driver be added to the project.


They are, for reference:


1. limited partition support

2. no software RAID support

3. no LVM support

4. no support for hardware that lacks a BMC


#1.

When deploying a partition image (e.g., QCOW format), Ironic's PXE deploy driver performs only the minimal partitioning necessary to fulfill its mission as an OpenStack service: respect the user's request for root, swap, and ephemeral partition sizes. When deploying a whole-disk image, Ironic does not perform any partitioning -- such is left up to the operator who created the disk image.
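For reference, that minimal partitioning amounts to something like the following sketch (a simplification for this discussion, not Ironic's actual code; the function name and MiB-based layout are illustrative):

```python
# Sketch of "minimal partitioning": carve out exactly the root, swap, and
# ephemeral partitions the user requested, in order, and leave the rest of
# the disk alone. Not Ironic's actual implementation.

def plan_partitions(disk_mib, root_mib, swap_mib, ephemeral_mib):
    """Return (label, start_mib, end_mib) tuples, or raise if oversubscribed."""
    requested = root_mib + swap_mib + ephemeral_mib
    if requested >= disk_mib:
        raise ValueError("requested %d MiB exceeds disk size %d MiB"
                         % (requested, disk_mib))
    layout, offset = [], 1  # leave 1 MiB for the partition table / alignment
    for label, size in (("root", root_mib),
                        ("swap", swap_mib),
                        ("ephemeral", ephemeral_mib)):
        if size:  # a size of 0 means the user did not ask for that partition
            layout.append((label, offset, offset + size))
            offset += size
    return layout
```

Anything beyond this - extra partitions, exotic layouts - is deliberately out of scope, which is the point being made here.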


Support for arbitrarily complex partition layouts is not required by, nor does it facilitate, the goal of provisioning physical servers via a common cloud API. Additionally, as with #3 below, nothing prevents a user from creating more partitions in unallocated disk space once they have access to their instance. Therefore, I don't see how Ironic's minimal support for partitioning is a problem for the project.


#2.

There is no support for defining a RAID in Ironic today, at all, whether software or hardware. Several proposals were floated last cycle; one is under review right now for DRAC support [1], and there are multiple call-outs for RAID building in the state machine mega-spec [2]. Any such support for hardware RAID will necessarily be abstract enough to support multiple hardware vendors' driver implementations and both in-band creation (via IPA) and out-of-band creation (via vendor tools).
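In practice such an abstraction would likely reduce to a declarative description of logical disks that either an in-band agent or an out-of-band vendor tool could realize. A hypothetical sketch (the field names here are illustrative, not a settled Ironic API):

```python
# Hypothetical driver-agnostic RAID request: the same structure could be
# handed to an in-band agent (assembling software RAID with mdadm) or an
# out-of-band vendor tool (programming a hardware controller).

RAID_LEVELS = {"0", "1", "5", "6", "1+0"}

def validate_raid_config(config):
    """Sanity-check a RAID request before dispatching it to a driver."""
    for disk in config["logical_disks"]:
        if disk["raid_level"] not in RAID_LEVELS:
            raise ValueError("unsupported RAID level: %s" % disk["raid_level"])
        if disk["size_gb"] <= 0:
            raise ValueError("logical disk size must be positive")
    return True

example = {
    "logical_disks": [
        {"size_gb": 100, "raid_level": "1", "is_root_volume": True},
        {"size_gb": 500, "raid_level": "5"},
    ],
}
```

The point is that nothing in such a description is specific to software RAID; it is one more backend under a common interface.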


Given the above, it may become possible to add software RAID support to IPA in the future, under the same abstraction. This would closely tie the deploy agent to the images it deploys (the deployed image's kernel would depend on the software RAID assembled by the agent), but that would necessarily be true of the proposed FuelAgent as well.


I don't see this as a compelling reason to add a new driver to the project. Instead, we should (plan to) add support for software RAID to the deploy agent which is already part of the project.


#3.

LVM volumes can easily be added by a user (after provisioning) within unallocated disk space for non-root partitions. I have not yet seen a compelling argument for doing this within the provisioning phase.
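For illustration, the post-provisioning setup argued for here is only a handful of commands the user can run (or script) inside their own instance against unallocated space. A sketch that builds the command list (device and volume names are examples, not anything Ironic prescribes):

```python
# Sketch of in-instance LVM setup after provisioning: the shell commands a
# user would run against a free partition. Names and defaults are examples.

def lvm_setup_commands(device, vg="data", lv="vol0", extents="100%FREE"):
    """Return the shell commands to build an LVM volume on a free partition."""
    return [
        "pvcreate %s" % device,                      # mark the partition as a PV
        "vgcreate %s %s" % (vg, device),             # create a volume group on it
        "lvcreate -l %s -n %s %s" % (extents, lv, vg),  # allocate a logical volume
        "mkfs.ext4 /dev/%s/%s" % (vg, lv),           # put a filesystem on it
    ]
```

Nothing in this sequence needs to happen during the provisioning phase, which is why I don't see it as Ironic's problem.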


#4.

There are already in-tree drivers [3] [4] [5] which do not require a BMC. One of these uses SSH to connect and run pre-determined commands. Like the spec proposal, which states at line 122, "Control via SSH access feature intended only for experiments in non-production environment," the current SSHPowerDriver is only meant for testing environments. We could probably extend this driver to cover what the FuelAgent spec proposes: remote power control, via a pre-shared key, for cheap always-on hardware in testing environments.
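That pattern amounts to mapping Ironic power actions onto pre-determined commands run over SSH with a pre-shared key. A rough sketch of how the always-on case could fit the same mould (the command strings and helper name are illustrative, not the in-tree SSHPowerDriver):

```python
# Sketch of SSH-based power control for always-on hardware without a BMC:
# map power actions to fixed commands run over SSH with a pre-shared key.
# Testing environments only; command strings are illustrative.

POWER_COMMANDS = {
    "reboot": "sudo shutdown -r now",
    "power off": "sudo shutdown -h now",
    # "power on" cannot be done in-band: the host must already be running,
    # which is exactly why this approach is limited to test environments.
}

def build_ssh_command(address, key_file, action):
    """Return the argv to run the pre-determined command for an action."""
    if action not in POWER_COMMANDS:
        raise ValueError("unsupported power action: %s" % action)
    return ["ssh", "-i", key_file, "root@%s" % address, POWER_COMMANDS[action]]
```

The gap in the mapping (no "power on") is the fundamental limitation of any BMC-less approach, not something a new driver can design away.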


(And if anyone wonders about a use case for Ironic without external power control ... I can only think of one situation where I would rationally ever want to have a control-plane agent running inside a user-instance: I am both the operator and the only user of the cloud.)


----------------


In summary, as far as I can tell, all of the problem statements upon which the FuelAgent proposal is based are either solvable through incremental changes in existing drivers or out of scope for the project entirely. As another software-based deploy agent, FuelAgent would duplicate the majority of the functionality which ironic-python-agent has today.


Ironic's driver ecosystem benefits from a diversity of hardware-enablement drivers. Today, we have two divergent software deployment drivers which approach image deployment differently: "agent" drivers use a local agent to prepare a system and download the image; "pxe" drivers use a remote agent and copy the image over iSCSI. I don't understand how a second driver which duplicates the functionality we already have, and shares the same goals as the drivers we already have, is beneficial to the project.


Doing the same thing twice just increases the burden on the team; we're all working on the same problems, so let's do it together.


-Devananda



[0] https://blueprints.launchpad.net/ironic/+spec/ironic-python-agent-partition


[1] https://review.openstack.org/#/c/107981/


[2] https://review.openstack.org/#/c/133828/11/specs/kilo/new-ironic-state-machine.rst


[3] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/snmp.py

[4] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/iboot.py

[5] http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ssh.py







_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
