Sorry, I'm a bit late to the discussion. Over the time I've maintained OpenStack clouds, I've seen integration issues pop up many times between distro packages and OpenStack packages. The released OpenStack packages are usually fine; it's changes in the distros that break Kolla builds over time.
In the past, Kolla has dealt with this by having users show up, say "the *stuff* doesn't build", and then someone on IRC scrambles to figure out why it broke. This is not a good place to be in. We even suffer through it ourselves with the gates: an upstream releases something, everything breaks, and then we spend all our time debugging the same problem rather than having some of us debug it while the rest make forward progress.

So, we're starting to break up the gate jobs into periodic gates that test known-good Kolla stuff against newer upstream stuff to see if it still works. If it does, the result is cached for the regular gate tests run on new patch sets. If it fails, someone has time to look at it without the rest of the gates being broken. As part of this process we also produce known-working, up-to-date OpenStack stable release containers, since we need them for upgrade gate testing.

So, basically, to have proper gates in Kolla we already have to do all the work of building up-to-date stable/trunk containers that are tested with the gate suite of tests. The real question is: can we go the last 2 feet and push them to Docker Hub automatically, rather than doing it manually like we do today, where they tend to rot? There is a fair amount of benefit to some users and very little additional cost to OpenStack, so I see very little reason not to.

We should still recommend users build the containers themselves. But for many use cases, such as testing the waters, an operator might just want the easiest way to see the thing work (pull updated containers from the hub, as sketched below), prove out that it's worth doing with Kolla, and then either build their own containers for production or, better yet, pay a distro for support.

We want to make it as easy as possible to try out OpenStack. This is one of the biggest and most frequently reported problems with OpenStack. Saying "step 1: you must always build all the containers yourself" is not part of solving that problem.
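To make the contrast concrete, here is a rough sketch of the two operator paths. The Docker Hub namespace, image names, and tags are purely illustrative (nothing like this is published automatically today), and the kolla-build invocation is just one way to drive a local build; flags may differ between releases:

```python
# Rough sketch only: the namespace, image names, tags, and kolla-build flags
# below are illustrative assumptions, not something OpenStack publishes today.
import subprocess

NAMESPACE = "kolla"                  # hypothetical Docker Hub namespace
TAG = "4.0.0"                        # hypothetical stable tag
IMAGES = ["centos-binary-nova-api", "centos-binary-glance-api"]


def try_the_waters():
    """Path 1: pull pre-built images pushed from the gate pipeline."""
    for image in IMAGES:
        subprocess.check_call(
            ["docker", "pull", "{}/{}:{}".format(NAMESPACE, image, TAG)])


def build_your_own():
    """Path 2: build equivalent images locally with kolla-build."""
    subprocess.check_call(
        ["kolla-build", "--base", "centos", "--type", "binary",
         "nova-api", "glance-api"])


if __name__ == "__main__":
    try_the_waters()    # quick evaluation
    # build_your_own()  # switch to this once you commit to Kolla in production
```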
Thanks,
Kevin

________________________________________
From: Flavio Percoco [fla...@redhat.com]
Sent: Tuesday, May 16, 2017 6:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>Flavio Percoco wrote:
>> From a release perspective, as Doug mentioned, we've avoided releasing projects in any kind of built form. This was also one of the concerns I raised when working on the proposal to support other programming languages. The problem of releasing built images goes beyond the infrastructure requirements. It's the message and the guarantees implied with the built product itself that are the concern here. And I tend to agree with Doug that this might be a problem for us as a community. Unfortunately, putting your name, Michal, as contact point is not enough. Kolla is not the only project producing container images and we need to be consistent in the way we release these images.
>>
>> Nothing prevents people from building their own images and uploading them to Docker Hub. Having this as part of OpenStack's pipeline is a problem.
>
>I totally subscribe to the concerns around publishing binaries (under any form), and the expectations in terms of security maintenance that it would set on the publisher. At the same time, we need to have images available, for convenience and testing. So what is the best way to achieve that without setting strong security maintenance expectations for the OpenStack community? We have several options:
>
>1/ Have third parties publish images
>It is the current situation. The issue is that the Kolla team (and likely others) would rather automate the process and use OpenStack infrastructure for it.
>
>2/ Have third parties publish images, but through OpenStack infra
>This would allow us to automate the process, but it would be a bit weird to use common infra resources to publish in a private repo.
>
>3/ Publish transient (per-commit or daily) images
>A "daily build" (especially if you replace it every day) would set relatively limited expectations in terms of maintenance. It would end up picking up security updates in upstream layers, even if not immediately.
>
>4/ Publish images and own them
>Staff the release / VMT / stable teams in a way that lets us properly own those images and publish them officially.
>
>Personally I think (4) is not realistic. I think we could make (3) work, and I prefer it to (2). If all else fails, we should keep (1).

Agreed, #4 is a bit unrealistic.

Not sure I understand the difference between #2 and #3. Is it just the cadence? I'd prefer for these builds to have a daily cadence because it sets the expectations w.r.t. maintenance right: "These images are daily builds and not certified releases. For stable builds you're better off building them yourself."

Flavio

--
@flaper87
Flavio Percoco

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev