On 12/3/18 10:34 AM, Bogdan Dobrelya wrote:
Hi Kevin.
Puppet not only creates config files but also executes service-dependent steps, like db sync, so neither '[base] -> [puppet]' nor '[base] -> [service]' would be enough on its own. That requires some service-specific code to be included into the *config* images as well.
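To illustrate the point, here is a minimal hypothetical sketch (not TripleO code; the keystone example is only an assumption for illustration): a step like db sync wraps a command that ships with the service package itself, so a bare '[base] -> [puppet]' config image would have nothing to execute.

import shutil
import subprocess

def run_db_sync(command=("keystone-manage", "db_sync")):
    """Run a service-dependent bootstrap step such as a db sync."""
    # 'keystone-manage' only exists if the keystone service bits are
    # installed in this image -- which is why the *config* images need
    # some service-specific code, not just puppet/ruby.
    if shutil.which(command[0]) is None:
        raise RuntimeError("%s not found: the service's own code is missing "
                           "from this image" % command[0])
    subprocess.run(command, check=True)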

PS. There is a related spec [0] created by Dan; please take a look and propose your feedback.

[0] https://review.openstack.org/620062

I'm terribly sorry, but here is the corrected link [0] to that spec.

[0] https://review.openstack.org/620909


On 11/30/18 6:48 PM, Fox, Kevin M wrote:
Still confused by:
[base] -> [service] -> [+ puppet]
not:
[base] -> [puppet]
and
[base] -> [service]
?

Thanks,
Kevin
________________________________________
From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Friday, November 30, 2018 5:31 AM
To: Dan Prince; openstack-dev@lists.openstack.org; openstack-disc...@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

On 11/30/18 1:52 PM, Dan Prince wrote:
On Fri, 2018-11-30 at 10:31 +0100, Bogdan Dobrelya wrote:
On 11/29/18 6:42 PM, Jiří Stránský wrote:
On 28. 11. 18 18:29, Bogdan Dobrelya wrote:
On 11/28/18 6:02 PM, Jiří Stránský wrote:
<snip>

Reiterating again on previous points:

- I'd be fine removing systemd. But let's do it properly and not via 'rpm -ev --nodeps'.
- Puppet and Ruby *are* required for configuration. We can certainly put them in a separate container outside of the runtime service containers, but doing so would actually cost you much more space/bandwidth for each service container. As both of these have to get downloaded to each node anyway in order to generate config files with our current mechanisms, I'm not sure this buys you anything.

+1. I was actually under the impression that we concluded yesterday on IRC that this is the only thing that makes sense to seriously consider. But even then it's not a win-win -- we'd gain some security by leaner production images, but pay for it with space+bandwidth by duplicating image content (IOW we can help achieve one of the goals we had in mind by worsening the situation w/r/t the other goal we had in mind).

Personally i'm not sold yet, but it's something that i'd consider if we got measurements of how much more space/bandwidth usage this would consume, and if we got some further details/examples about how serious the security concerns are if we leave config mgmt tools in runtime images.

IIRC the other options (that were brought forward so far) were already dismissed in yesterday's IRC discussion and on the reviews: bin/lib bind mounting being too hacky and fragile, and nsenter not really solving the problem (because it allows us to switch to having different bins/libs available, but it does not allow merging the availability of bins/libs from two containers into a single context).

We are going in circles here I think....

+1. I think too much of the discussion focuses on "why it's bad to have config tools in runtime images", but IMO we all sorta agree that it would be better not to have them there, if it came at no cost.

I think to move forward, it would be interesting to know: if we do this (i'll borrow Dan's drawing):

|base container| --> |service container| --> |service container w/ Puppet installed|

How much more space and bandwidth would this consume per node (e.g. separately per controller, per compute)? This could help with decision making.

As I've already evaluated in the related bug, that is:

- puppet-* modules and manifests: ~16 MB
- puppet with dependencies: ~61 MB
- dependencies of the seemingly largest dependency, systemd: ~190 MB

That would be an extra layer size for each of the container images to be downloaded/fetched into registries.
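In round numbers, a trivial back-of-the-envelope sketch in Python (just restating the estimates above):

# estimated sizes from the related bug, in MB
puppet_modules = 16   # puppet-* modules and manifests
puppet_deps = 61      # puppet with its dependencies
systemd_chain = 190   # dependencies of systemd, the largest item

extra_layer = puppet_modules + puppet_deps + systemd_chain
print(extra_layer)                  # ~267 MB added to every image carrying the layer
print(extra_layer - systemd_chain)  # ~77 MB if systemd gets decoupled first (see below)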

Thanks, i tried to do the math of the reduction vs. inflation in sizes as follows. I think the crucial point here is the layering. If we do this image layering:

|base| --> |+ service| --> |+ Puppet|

we'd drop ~267 MB from the base image, but we'd be installing that to the topmost level, per-component, right?

Given we detached systemd from puppet, cronie et al, that would be 267-190 MB, so the math below would be looking much better.

Would it be worth writing a spec that summarizes what action items are being taken to optimize our base image with regard to systemd?

Perhaps it would be. But honestly, I see nothing big enough to require a full-blown spec. It's just changing RPM deps and layers for container images. I'm tracking the systemd changes here [0], [1], [2], btw (if accepted, it should be working as of Fedora 28 (or 29), I hope).

[0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672



It seems like the general consensus is that cleaning up some of the RPM dependencies so that we don't install systemd is the biggest win.

What confuses me is why there are still patches posted to move Puppet out of the base layer when we agree moving it out of the base layer would actually cause our resulting container image set to be larger in size.

Dan



In my basic deployment, the undercloud seems to have 17 "components" (49 containers), the overcloud controller 15 components (48 containers), and the overcloud compute 4 components (7 containers). Accounting for overlaps, the total number of "components" used seems to be 19. (By "components" here i mean whatever uses a different ConfigImage than other services. I just eyeballed it but i think i'm not too far off the correct number.)

So we'd subtract 267 MB from the base image and add that to the 19 leaf images used in this deployment. That means a difference of +4.8 GB to the current image sizes. My /var/lib/registry dir on the undercloud with all the images currently has 5.1 GB. We'd almost double that to 9.9 GB.

Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the CDNs (both external and e.g. internal within OpenStack Infra CI clouds).

And for internal traffic between the local registry and the overcloud nodes, it gives +3.7 GB per controller and +800 MB per compute. That may not be so critical but still feels like a considerable downside.
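A small sketch reproducing the arithmetic above (my assumption, not necessarily the exact method used: one of the counted components per node is the shared base image and does not gain the extra layer, which is what reproduces the quoted figures):

EXTRA_LAYER_MB = 267  # per leaf image, from the estimates earlier in the thread

def extra_gb(leaf_images):
    """Total extra pull size, in GB, for the given number of leaf images."""
    return leaf_images * EXTRA_LAYER_MB / 1000.0

print(round(extra_gb(18), 1))      # deployment-wide: ~4.8 GB, i.e. 5.1 GB registry -> ~9.9 GB
print(round(extra_gb(14), 1))      # per controller: ~3.7 GB
print(round(extra_gb(3) * 1000))   # per compute: ~800 MB

# if systemd were decoupled first, the per-leaf delta drops to 267-190 = 77 MB
print(round(18 * 77 / 1000.0, 1))  # ~1.4 GB deployment-wide instead of ~4.8 GB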

Another gut feeling is that this way of image layering would take longer to build and to run the modify-image Ansible role which we use in CI, so that could endanger how our CI jobs fit into the time limit. We could also probably measure this but i'm not sure if it's worth spending the time.

All in all i'd argue we should still be looking at different options.

Given that we should decouple systemd from all/some of the dependencies (an example topic for RDO [0]), that could save ~190 MB. But it seems we cannot break the love of puppet and systemd, as it heavily relies on the latter, and changing packaging like that would highly likely affect baremetal deployments with puppet and systemd co-operating.

Ack :/

Long story short, we cannot shoot both rabbits with a single shot, not with puppet :) Maybe we could with ansible replacing puppet fully... So splitting config and runtime images is the only choice yet to address the raised security concerns. And let's forget about edge cases for now. Tossing around a pair of extra bytes over 40,000 WAN-distributed computes ain't gonna be our biggest problem for sure.

[0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction

Dan


Thanks

Jirka


--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
