On 10/14/2014 01:28 PM, Lars Kellogg-Stedman wrote:
On Tue, Oct 14, 2014 at 12:33:42PM -0400, Jay Pipes wrote:
Can I use your Dockerfiles to build Ubuntu/Debian images instead of only
Fedora images?

Not easily, no.

Seems to me that the image-based Docker system makes the
resulting container quite brittle -- since a) you can't use configuration
management systems like Ansible to choose which operating system or package
management tools you wish to use...

While that's true, it seems like a non-goal.  You're not starting with
a virtual machine and a blank disk here, you're starting from an
existing filesystem.

I'm not sure I understand your use case enough to give you a more
useful reply.

Sorry, I'm trying hard to describe some of this clearly; I still have a limited vocabulary in this new space :)

I guess what I am saying is that there is a wealth of existing configuration management code that already installs and configures application packages. These configuration management modules/tools are written precisely to abstract away the multi-operating-system, multi-package-manager problem.

Instead of having two Dockerfiles, one that works only on Fedora and does something like:

FROM fedora20
RUN yum -y install python-pbr

and one that only works on Debian:

FROM debian:wheezy
RUN apt-get update && apt-get install -y python-pbr

Configuration management tools like Ansible already work across operating systems, and allow you to express what gets installed regardless of the operating system of the disk image:

tasks:
  - name: install PBR Debian
    apt: name=python-pbr state=present
    when: ansible_os_family == "Debian"
  - name: install PBR RH
    yum: name=python-pbr state=present
    when: ansible_os_family == "RedHat"

Heck, in Chef, you wouldn't even need the when: switch logic, since Chef knows which package management system to use depending on the operating system.
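
For instance, as I understand Chef's stock package resource, a single platform-agnostic recipe line would do the job (a sketch on my part, untested):

# Chef picks the platform's package provider (yum, apt, ...) automatically:
package 'python-pbr'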

With Docker, you are limited to whatever operating system the image uses.

This means that for things like your openstack-containers Ansible+Docker environment (which is wicked cool, BTW), you have containers running Fedora 20 for everything except MySQL, which, because you are using the "official" [1] MySQL image on Docker Hub, is a Debian Wheezy image.

This means you now have to know the system administration commands and setup for two operating systems ... or go find a Fedora 20 image for MySQL somewhere.

It just seems to me that Docker is re-inventing a whole bunch of stuff that configuration management tools like Ansible, Puppet, Chef, and Saltstack have gotten good at over the years.

[1] Is there an official MySQL docker image? I found 553 Docker Hub repositories for MySQL images...

So... what am I missing here? What makes Docker images preferable to
straight-up LXC containers with Ansible controlling upgrades and changes
to the configuration of the software in those containers?

I think that, in general, Docker images are more shareable, and
the layered model makes building components on top of a base image
both easy and reasonably efficient in terms of time and storage.

By layered model, are you referring to the bottom layer being the Docker image and then upper layers being stuff managed by a configuration management system?

I think that Ansible makes a great tool for managing configuration
inside Docker containers, and you could easily use it as part of the
image build process.  Right now, people using Docker are basically
writing shell scripts to perform system configuration, which is like a
20-year step back in time.

Right, I've noticed that :)

Using a more structured mechanism for
doing this is a great idea, and one that lots of people are pursuing.
I have looked into using Puppet as part of both the build and runtime
configuration process, but I haven't spent much time on it yet.
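
For the Ansible side, the build could be as simple as something like this (a rough sketch, untested; the base image and playbook name are just placeholders):

FROM fedora20
RUN yum -y install ansible
COPY site.yml /tmp/site.yml
# Apply the playbook to the image's own filesystem at build time:
RUN ansible-playbook -c local -i localhost, /tmp/site.yml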

Oh, I don't think Puppet is any better than Ansible for these things.

A key goal for Docker images is generally that images are "immutable",
or at least "stateless".  You don't "yum upgrade" or "apt-get upgrade"
in a container; you generate a new image with new packages/code/etc.
This makes it trivial to revert to a previous version of a deployment,
and clearly separates the "build the image" process from the "run the
application" process.
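
Concretely, an upgrade-and-revert cycle might look something like this (hypothetical image names; a sketch, not a recipe):

# Build a new image instead of upgrading packages in place:
docker build -t myapp:v2 .
# Replace the running container with one based on the new image:
docker rm -f myapp
docker run -d --name myapp myapp:v2
# Reverting is just the same replacement with the previous tag:
docker rm -f myapp
docker run -d --name myapp myapp:v1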

OK, so bear with me more on this, please... :)

Let's say I build a Docker image for, say, my nova-conductor container, using a devstack-style "install from git repos" method. Then I build another nova-conductor image from a newer revision in source control.

How would I go about essentially transferring the ownership of the RPC exchanges that the original nova-conductor container managed over to the new nova-conductor container? Would it be as simple as shutting down the old container and starting up the new nova-conductor container using something like --link rabbitmq:rabbitmq on the docker run command line?
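
In other words, I'm imagining something like this (names are hypothetical; I'm just sketching my own question):

docker stop nova-conductor-old
docker run -d --name nova-conductor-new \
    --link rabbitmq:rabbitmq \
    nova-conductor:newer-git-revision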

Genuinely curious and inspired by this new container-the-world vision,
-jay

I like this model.



_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

