To follow up and explain the patches for code review:
The "header" patch https://review.openstack.org/620310 requires
https://review.rdoproject.org/r/#/c/17534/, and also
https://review.openstack.org/620061, which in turn requires
https://review.openstack.org/619744 (the Kolla change, the 1st to go) and
https://review.openstack.org/619736.
Please also read the commit messages; I tried to explain all the "whys"
very carefully. Just to sum it up here as well:
The current self-contained (config and runtime bits) architecture of
container images badly affects:
* the size of the base layer, and of all container images, by an
additional 300 MB (an extra 30% of size);
* edge cases, where container images must be distributed, at least
once to seed local registries, over high-latency, limited-bandwidth,
highly unreliable WAN connections;
* the number of packages to update in CI for all containers for all
services (CI jobs do not rebuild containers, so each container gets
updated for those 300 MB of extra size);
* security and the attack surface, by introducing systemd et al as
additional subjects for CVE fixes to maintain in all containers;
* service uptime, through additional restarts of services tied to
security maintenance of components irrelevant to OpenStack, sitting
as dead weight in container images forever.
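One way to verify where those megabytes actually sit is to inspect the
per-layer sizes of a built image; a sketch (the image tag below is only
an example placeholder, not one of the actual published tags):

```shell
# Sketch: list per-layer sizes of an image together with the command
# that created each layer. Requires a local docker daemon and a pulled
# image; substitute a real tag for the placeholder.
docker history --format '{{.Size}}\t{{.CreatedBy}}' \
    tripleo/centos-binary-base:latest
```

The layer created by the big `yum -y install ...` transaction quoted
further down in this thread should show up as the dominant entry.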
On 11/27/18 4:08 PM, Bogdan Dobrelya wrote:
Changing the topic to follow the subject.
[tl;dr] it's time to rearchitect container images to stop including
config-time only (puppet et al) bits, which are not needed at runtime
and pose security issues, like CVEs, to maintain daily.
Background: 1) For the Distributed Compute Node edge case, there are
potentially tens of thousands of single-compute-node remote edge sites
connected over WAN to a single control plane, with high latency, like
100 ms or so, and limited bandwidth.
2) For a generic security case, every package kept in an image is one
more subject for CVE fixes to track and maintain daily.
3) TripleO CI updates all packages in all containers for each job, so
extra image size translates directly into CI time and bandwidth.
Challenge: stop shipping config-time only tooling inside runtime
images while keeping it available for the deployment steps.
Here is a related bug [0] and implementation [1] for that. PTAL folks!
[0] https://bugs.launchpad.net/tripleo/+bug/1804822
[1] https://review.openstack.org/#/q/topic:base-container-reduction
Let's also think of removing puppet-tripleo from the base container.
It really brings the world in (and yum updates in CI!) for each job
and each container!
So if we did that, we should then either install puppet-tripleo and co
on the host and bind-mount it for the docker-puppet deployment task
steps (bad idea IMO), OR use the magical --volumes-from
<a-side-car-container> option to mount volumes from some
"puppet-config" sidecar container into each of the containers being
launched by the docker-puppet tooling.
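A rough sketch of that sidecar idea; all image, container, and path
names here are hypothetical examples, nothing like this exists in
tripleo today:

```shell
# Start a config-only sidecar whose image carries puppet-tripleo and
# friends under an exported volume path (names are hypothetical).
docker run -d --name puppet-config-sidecar \
    -v /usr/share/openstack-puppet/modules \
    example/puppet-config:latest sleep infinity

# Any service container launched by the docker-puppet tooling could
# then borrow those bits instead of baking them into every image;
# here we just list the borrowed modules to show the volume is visible.
docker run --rm --volumes-from puppet-config-sidecar \
    example/nova-api:latest ls /usr/share/openstack-puppet/modules
```

Note that --volumes-from works even against a stopped container, so the
sidecar would not strictly need to keep running.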
On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås <hjensas at redhat.com>
wrote:
We add this to all images:
https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35
/bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 python
socat sudo which openstack-tripleo-common-container-base rsync cronie
crudini openstack-selinux ansible python-shade puppet-tripleo
python2-kubernetes && yum clean all && rm -rf /var/cache/yum
(layer size: 276 MB)
Is the additional 276 MB reasonable here?
openstack-selinux <- this package runs relabeling; does that kind of
touching of the filesystem inflate the size due to docker layers?
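It can, yes. As a rough analogy, with plain tar archives standing in
for image layers: in layered storage, a metadata-only change (a chmod,
or an SELinux relabel touching xattrs) copies the whole file up into
the new layer, so its bytes get shipped again:

```shell
#!/bin/sh
# Analogy: tar archives as stand-in "layers". A metadata-only change
# re-captures the file's full contents in the next layer.
set -e
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/big.bin" bs=1024 count=1024 2>/dev/null  # 1 MiB file
tar -cf "$dir/layer1.tar" -C "$dir" big.bin   # layer 1: the file itself
chmod 0600 "$dir/big.bin"                     # metadata change only
tar -cf "$dir/layer2.tar" -C "$dir" big.bin   # layer 2: full copy again
wc -c "$dir/layer1.tar" "$dir/layer2.tar"     # both are ~1 MiB
```

So a relabel late in the Dockerfile could duplicate every touched file
across layers.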
Also: python2-kubernetes is a fairly large package (18007990 bytes).
Do we use that in every image? I don't see any tripleo related repos
importing from it when searching on Hound. The original commit
message [1] adding it states it is for future convenience.
On my undercloud we have 101 images; if we are downloading an extra
18 MB per image, that's almost 1.8 GB for a package we don't use.
(I hope it's not like this? With docker layers, we only download that
276 MB transaction once? Or?)
[1] https://review.openstack.org/527927
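The back-of-the-envelope math for the worst case, assuming no layer
sharing at all between those images:

```shell
#!/bin/sh
# Worst case: each of the 101 images pulls its own ~18 MB copy of
# python2-kubernetes, i.e. no layers are shared between images.
images=101
pkg_mb=18
echo "$((images * pkg_mb)) MB"   # prints "1818 MB", roughly 1.8 GB
```

In practice docker layers are content-addressed, so images built from
the same base layer download that 276 MB transaction only once; the
worst case applies when the layer differs between images.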
--
Best regards,
Bogdan Dobrelya,
Irc #bogdando
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev