Sean,

Thanks for taking this on :)  I didn't know you had such an AR :)

From: "Mooney, Sean K" <sean.k.moo...@intel.com<mailto:sean.k.moo...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Friday, May 6, 2016 at 10:14 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [kolla] [bifrost] bifrost container.

Hi everyone.

Following up on my AR from the kolla host repository session
https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo
I started working on creating a kolla bifrost container.

After some initial success I have hit a roadblock with the current install 
playbook provided by bifrost.
In particular, the install playbook both installs the ironic dependencies and 
configures and runs the services.


What I'd do here is ignore the install playbook and duplicate what it installs. 
 We don't want to install at run time, we want to install at build time.  You 
weren't clear whether that is what you're doing.

The reason we would ignore the install playbook is that it runs the 
services.  We need to run the services in a different way.  This will (as we 
discussed at ODS) be a fat container on the undercloud, which I guess is 
ok.  I'd recommend not using systemd, as that will break systemd systems badly. 
 Instead use a different init system, such as supervisord.
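
For example, the image could ship a small supervisord config written at build 
time.  Everything below (program names, commands, paths) is just a placeholder 
sketch, not anything taken from bifrost or kolla:

# Hypothetical build-time step (paths assume the Debian/Ubuntu supervisor
# package); the real command lines would come from whatever bifrost installs.
cat > /etc/supervisor/conf.d/bifrost.conf <<'EOF'
[supervisord]
nodaemon=true

[program:mysqld]
command=/usr/sbin/mysqld
autorestart=true

[program:rabbitmq-server]
command=/usr/sbin/rabbitmq-server
autorestart=true

[program:ironic-api]
command=/usr/bin/ironic-api --config-file /etc/ironic/ironic.conf
autorestart=true

[program:ironic-conductor]
command=/usr/bin/ironic-conductor --config-file /etc/ironic/ironic.conf
autorestart=true
EOF

# The container's entry point then just becomes "supervisord -n", so no init
# system is needed during either docker build or docker run.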

The installation of ironic and its dependencies would not be a problem, but the 
ansible service module is not capable of starting the infrastructure services 
(mysql, rabbitmq, …) without a running init system, which is not present during 
the docker build.

When I created a bifrost container in the past I spawned an Ubuntu upstart 
container, then did a docker exec into the container and ran the bifrost 
install script. This worked because the init system was running, so the 
service module could check and start the relevant services.


This leaves me with three paths forward:


1. I can continue to try to make the bifrost install script work with the 
   kolla build system, either by using sed to modify the install playbook or 
   by trying to start systemd during the docker build.

2. I can use the kolla build system to build only part of the image:

   a. The bifrost-base image would be built with the kolla build system 
      without running the bifrost install playbook. This would allow the 
      existing features of the build system, such as adding headers/footers, 
      to be used.

   b. After the base image is built by kolla, I can spawn an instance of 
      bifrost-base with systemd running.

   c. I can then connect to this running container and run the bifrost 
      install script unmodified.

   d. Once it is finished, I can stop the container and export it to an 
      image “bifrost-postinstall”.

   e. This can either be used directly (fat container) or as the base image 
      for other containers that run each of the ironic services (thin 
      containers). A rough sketch of this flow is included below.

3. I can skip the kolla build system entirely and create a script/playbook 
   that will build the bifrost container in a way similar to option 2.

4. Make a supervisord set of init scripts and make the Dockerfile do what it 
   was intended to do: install the files.  This is kind of a mashup of your 
   ideas 1-3.  Good thinking :)
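
To make option 2 a bit more concrete, here is the rough flow I have in mind. 
The image names are placeholders and the bifrost paths/commands are 
approximate rather than exact:

# 2a: build the bifrost-base image with the kolla build system
kolla-build bifrost-base

# 2b: start it privileged with an init system running so that the ansible
#     service module can manage services
docker run -d --privileged --name bifrost-build bifrost-base /sbin/init

# 2c: run the unmodified bifrost install playbook inside the running container
docker exec -it bifrost-build bash -c \
    'cd /opt/bifrost && \
     bash scripts/env-setup.sh && \
     ansible-playbook -i inventory/localhost playbooks/install.yaml'

# 2d: stop the container and snapshot it as a new image
docker stop bifrost-build
docker commit bifrost-build bifrost-postinstall

# 2e: bifrost-postinstall can be used directly (fat container) or as the base
#     image for per-service ironic containers (thin containers)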


While option 1 would fully use the kolla build system, it is my least favorite, 
as it is both hacky and complicated to make work.
Docker really was not designed to run systemd as part of a docker build.

For options 2 and 3 I can provide a single playbook/script that will fully 
automate the build, but the real question I have is whether I should use the 
kolla build system to make the base image or not.

If anyone else has suggestions on how I can progress, please let me know, but 
currently I am leaning towards option 2.


If you have questions about my suggestion to use supervisord, hit me up on IRC. 
 Ideally we would also contribute these init scripts back into the bifrost code 
base, assuming they want them, which I think they would.  Nobody will run 
systemd in a container, and we all have an interest in seeing bifrost become 
the standard bare metal deployment model inside or outside of containers.

Regards
-steve

The only other option I see would be to not use a container and either install 
bifrost on the host or in a VM.

GROAN – one advantage containers provide us is not mucking up the host OS with 
a bajillion dependencies.  I'd like to keep that part of Kolla intact :)

These would essentially be a no-op for kolla, as we would simply have to 
document how to install bifrost, which is covered quite well as part of the 
bifrost project.

Regards
Sean.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
