I'm not sure if it is necessary to write up or provide support on how to
use more than one deployment tool, but I think any work that
inadvertently makes it harder for an operator to use their own existing
deployment infrastructure could run some people off.
Regarding "deploy a VM to deploy bifrost to deploy bare metal", I
suspect that situation will not be unique to bifrost. At the moment I'm
using MAAS, which had a hard dependency on Upstart through Ubuntu Trusty
and was only ported to systemd in Wily. I do not think you can simply
switch to another init daemon or run it under
supervisord without significant work. I was not even able to get the
maas package to install during a docker build because it couldn't
communicate with the init system it wanted. In addition, for any
deployment tool that enrolls/deploys via PXE the tool may also require
accommodations when being containerized simply because this whole topic
is fairly low in the stack of abstractions. For example I'm not sure
whether any of these tools running in a container would respond to a new
bare metal host's initial DHCP broadcast without --net=host or similar
consideration.
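To make the broadcast concern concrete, here is a hedged sketch of what that "similar consideration" might look like; the image name example/deploy-tool is a placeholder, and the exact capabilities a given tool needs will vary:

```shell
# Host networking puts the container on the host's network stack, so it can
# see a new node's initial DHCPDISCOVER broadcast; the default bridge
# network would not deliver that broadcast to the container.
docker run -d --name deploy-tool \
    --net=host --cap-add NET_ADMIN \
    example/deploy-tool
```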
As long as the most common deployment option in Kolla is Ansible, making
deployment tools pluggable is fairly easy to solve. MAAS and bifrost
both have inventory scripts that can provide dynamic inventory to
kolla-ansible while still pulling Kolla's child groups from the
multinode inventory file. Another common pattern could be for a given
deployment tool to template out a new (static) multinode inventory and
then we just append Kolla's groups to the file before calling
kolla-ansible. The problem, to me, becomes getting every other option
(k8s, puppet, etc.) to work similarly. Perhaps you just state that each
implementation must be pluggable to various deployment tools and let
people who know their respective tools handle the how(?)
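The "template a static inventory, then append Kolla's groups" pattern could be sketched roughly as below; the file names, hostname, and address are illustrative, and the group names are only loosely modeled on Kolla's multinode inventory:

```shell
# Placeholder inventory as a deployment tool (e.g. MAAS) might template it.
printf '[baremetal]\nnode1 ansible_host=10.0.0.11\n' > tool-inventory

# Append Kolla's child groups, as taken from the stock multinode inventory.
printf '\n[control:children]\nbaremetal\n\n[network:children]\nbaremetal\n' \
    >> tool-inventory

# kolla-ansible would then consume the combined file:
# kolla-ansible deploy -i tool-inventory
```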
Currently I am running MAAS inside a Vagrant box to retain some of the
immutability and easy "create/destroy" workflow that having it
containerized would offer. It works very well and, assuming nothing else
was running on the underlying deployment host, I'd have no issue running
it in prod that way even with the Vagrant layer.
Thank you,
Mark
On 5/9/2016 4:52 PM, Britt Houser (bhouser) wrote:
Are we (as the Kolla community) open to other bare metal provisioners?
The austin discussion was titled generic bare metal, but very quickly
turned into bifrost-only discourse. The initial survey showed
cobbler/maas/OoO as alternatives people use today. So if the bifrost
strategy is, "deploy a VM to deploy bifrost to deploy bare metal" and
will be cleaned up later, then maybe it's time to take a deeper look at
the other deployment tools and see if they are a better fit?
Thx,
britt
From: "Steven Dake (stdake)" <std...@cisco.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Monday, May 9, 2016 at 5:41 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.
From: Devananda van der Veen <devananda....@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Monday, May 9, 2016 at 1:12 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.
On Fri, May 6, 2016 at 10:56 AM, Steven Dake (stdake)
<std...@cisco.com> wrote:
Sean,
Thanks for taking this on :) I didn't know you had such an AR :)
From: "Mooney, Sean K" <sean.k.moo...@intel.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Friday, May 6, 2016 at 10:14 AM
To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [kolla] [bifrost] bifrost container.
Hi everyone.
Following up on my AR from the kolla host repository session
https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo
I started working on creating a kolla bifrost container.
After some initial success I have hit a roadblock with the
current install playbook provided by bifrost.
In particular the install playbook both installs the
ironic dependencies and configures and runs the services.
What I'd do here is ignore the install playbook and duplicate
what it installs. We don't want to install at run time, we
want to install at build time. You weren't clear if that is
what you're doing.
That's going to be quite a bit of work. The bifrost-install
playbook does a lot more than just install the ironic services and
a few system packages; it also installs rabbit, mysql, nginx,
dnsmasq *and* configures all of these in a very specific way.
Re-inventing all of this is basically re-inventing Bifrost.
Sean's latest proposal was splitting this one operation into three
smaller decomposed steps.
The reason we would ignore the install playbook is because it
runs the services. We need to run the services in a different
way.
Do you really need to run them in a different way? If it's just a
matter of "use a different init system", I wonder how easily that
could be accommodated within the Bifrost project itself... If
there's another reason, please elaborate.
To run in a container, we cannot use systemd. This leaves us with
supervisord, which certainly can and should be done in the context of
upstream bifrost.
This will (as we discussed at ODS) be a fat container on the
underlord cloud – which I guess is ok. I'd recommend not
using systemd, as that will break systemd systems badly.
Instead use a different init system, such as supervisord.
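A minimal sketch of what a supervisord layout for such a fat container could look like; the program names and command paths below are illustrative, not bifrost's actual service set or locations:

```ini
; supervisord runs in the foreground as the container's PID 1
[supervisord]
nodaemon=true

[program:mysqld]
command=/usr/sbin/mysqld
autorestart=true

[program:rabbitmq]
command=/usr/sbin/rabbitmq-server
autorestart=true

[program:ironic-api]
command=/usr/bin/ironic-api
autorestart=true

[program:ironic-conductor]
command=/usr/bin/ironic-conductor
autorestart=true
```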
The installation of ironic and its dependencies would not
be a problem, but the ansible service module is not
capable of starting the
infrastructure services (mysql, rabbit, …) without a running
init system, which is not present during the docker build.
When I created a bifrost container in the past I spawned
an Ubuntu upstart container, then docker exec'd into the
container and ran the
bifrost install script. This works because the init system
is running and the service module can test and start the
relevant services.
This leaves me with 3 paths forward.
1. I can continue to try to make the bifrost install
script work with the kolla build system by using sed to
modify the install playbook or by trying to start systemd
during the docker build.
2. I can use the kolla build system to build only part of
the image.
a. The bifrost-base image would be built with the kolla
build system without running the bifrost playbook. This
would allow the existing features of the build system,
such as adding headers/footers, to be used.
b. After the base image is built by kolla I can spawn an
instance of bifrost-base with systemd running.
c. I can then connect to this running container and run the
bifrost install script unmodified.
d. Once it is finished I can stop the container and export
it to an image “bifrost-postinstall”.
e. This can either be used directly (fat container) or as
the base image for other containers that run each of the
ironic services (thin containers).
3. I can skip the kolla build system entirely and create a
script/playbook that will build the bifrost container
similarly to 2.
4. Make a supervisord set of init scripts and make the
dockerfile do what it was intended to do – install the files.
This is kind of a mashup of your 1-3 ideas. Good thinking :)
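For reference, option 2's spawn/exec/export cycle could be sketched roughly as below; the image names, tag, and install-script path inside the container are hypothetical, and step (a) stands in for whatever kolla-build actually produces:

```shell
# (a) base image built by the kolla build system, without the bifrost playbook
docker build -t kolla/bifrost-base .

# (b) run an instance with systemd as PID 1 (privileged, for systemd's sake)
docker run -d --privileged --name bifrost-build kolla/bifrost-base /sbin/init

# (c) run the unmodified bifrost install script inside the running container
docker exec bifrost-build bash /bifrost/scripts/env-setup.sh

# (d) stop the container and flatten it into a new image; note docker export
#     drops image metadata such as CMD, which would need to be re-declared
docker stop bifrost-build
docker export bifrost-build | docker import - kolla/bifrost-postinstall
```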
While option 1 would fully use the kolla build system, it
is my least favorite as it is both hacky and complicated
to make work.
Docker really was not designed to run systemd as part of
docker build.
For options 2 and 3 I can provide a single playbook/script
that will fully automate the build, but the real question I
have
is: should I use the kolla build system to make the base
image or not?
If anyone else has suggestions on how I can progress,
please let me know, but currently I am leaning towards
option 2.
If you have questions about my suggestion to use supervisord,
hit me up on IRC. Ideally we would also contribute these init
scripts back into bifrost code base assuming they want them,
which I think they would. Nobody will run systemd in a
container, and we all have an interest in seeing BiFrost as
the standard bare metal deployment model inside or outside of
containers.
Regards
-steve
The only other option I see would be to not use a
container and either install bifrost on the host or in a VM.
GROAN – one advantage containers provide us is not mucking up
the host OS with a bajillion dependencies. I'd like to keep
that part of Kolla intact :)
Right - don't install it on the host, but what's the problem with
running it in a VM?
FWIW, I already run Bifrost quite successfully in a VM in each of
my environments.
There isn't a super specific problem with running it in a VM other
than Kolla is about containers, not VMs. OpenStack can obviously be
run in a VM – our major reason for wanting containers is upgradability,
which VMs don't offer atomically.
That said, we could run in a VM initially and over time port to run in
a container. What we are after long term is a container-based
approach to bifrost in upstream bifrost, not replicating or
duplicating a bunch of work.
I believe Sean's approach of splitting out the 3 separate steps makes
logical sense (to me) in the sense that the one major installation
step is broken into the separate build & deploy steps that Kolla uses.
Hope that helps
Regards
-steve
--Deva
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev