I have been thinking about your reply for some time. I think I now have some
constructive bits to add.

> On 30 Jun 2016, at 18:50, Doug Wiegley <doug...@parksidesoftware.com> wrote:
> 
> 
>> On Jun 30, 2016, at 7:01 AM, Ihar Hrachyshka <ihrac...@redhat.com> wrote:
>> 
>> 
>>> On 30 Jun 2016, at 06:03, Kosnik, Lubosz <lubosz.kos...@intel.com> wrote:
>>> 
>>> Like Doug said, the amphora is supposed to be a black box. It is supposed
>>> to get some data - like info in /etc/defaults - and do everything inside
>>> on its own. Everyone will be able to prepare their own implementation of
>>> this image without mixing things between each other.
>> 
>> That would be correct if the image were not maintained by the project
>> itself. Then indeed every vendor would prepare their own image, and perhaps
>> collaborate on common code for it. Since this code is currently in octavia,
>> we kind of need to plug into it for other vendors. Otherwise you pick one
>> and give it preference.
> 
> No, I disagree with that premise, because it pre-supposes that we have any 
> interest in supporting *this exact reference implementation* for any period 
> of time.
> 
> Octavia has a few goals:
> 
> - Present an openstack loadbalancing API to operators and users.
> - Put VIPs on openstack clouds, that do loadbalancy things, and are always 
> there and working.
> - Upgrade seamlessly.
> 
> That’s it. A few more constraints:
> 
> - It’s an openstack project, so it must be python, with our supported 
> version, running on our supported OSs, using our shared libraries, being 
> open, level playing field, etc…
> 
> Nowhere in there is the amp concept, or that we must always require nova, or 
> that said amps must run a REST agent, or anything about the load-balancing 
> backend. The amp itself, and all the code written for it, is just a means to 
> an end. If the day comes tomorrow that the amp agent and amp concept is 
> silly, as long as we have a seamless upgrade and those VIPs keep operating, 
> we are under no obligation as a project to keep using that amp code or 
> maintaining it. Our obligation is to the operators and users.
> 

You assume that operators and users don't care about the reference
implementation and its internals. That couldn't be further from the truth.
Architecture matters to operators, since it often determines how, and whether,
it gets used at all.

Another thing that matters is whether the team behind the architecture provides
compatibility guarantees for an extended period of time. You can't just switch
designs every second cycle and expect operators and distributions to catch up.
When you plan for a transition, backwards compatibility should be at the core
of the discussion.

> The amp “agent” code has already gone through two iterations (direct ssh, now 
> a silly rest agent on the amp). We’ve already discussed that the current 
> ubuntu based amp is too heavy-weight and needs to change. Tomorrow it could 
> be based on a microlinux. And the day after that, cirros plus a static nginx. 
> And the day after that, a docker swarm with an proxy running on a simulated 
> minecraft redstone machine (well, we’d have to find an open-source clone of 
> minecraft, first.)

That does not sound like a reasonable approach. Operators and distributions
cannot be expected to adapt to your cool new ways every cycle. Please pick an
implementation that is good enough and stick to it for an extended time.

Yes, I know the lbaas project has generally been more experimental (starting
with v1, switching to v2, getting it out of experimental status only to
deprecate it right away and switch endpoints to octavia, which uses a
completely different reference architecture without providing any migration
path, …)

But maybe that’s not a thing to be proud of, and it’s time to stop.

> 
> The point being, as a project contributor, I have zero interest in signing up 
> for long-term maintenance of something that 1) is not user visible, and 2) is 
> likely to change; all for the sake of any particular vendor's sensibilities. 
> The current octavia will run just fine on ubuntu or redhat, and the current 
> amp image will launch just fine on a nova run by either, too.

There has always been an expectation in the neutron community that we provide
reasonable plug points to vendors, both distributions and networking vendors,
and that we accommodate a wide variety of technologies.

> 
> That said, every part of octavia is pluggable with drivers, and while I will 
> personally resist adding multiple reference drivers in-tree, it doesn’t mean 
> everyone will, nor does it preclude using shims and external repos.

While Octavia itself is indeed pluggable, those plug points are too high level,
leaving alternative distributions to reimplement the whole stack. In this
particular case, other distributions can indeed craft their own images with a
customized amp agent. The problem is that, by not giving us any real plug
points to leverage, you are effectively suggesting we fork the whole agent. If
that happens, I don't think it will help either party.
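
To make the point a bit more concrete, here is a rough, purely illustrative
sketch (in Python, since that's what we write) of the kind of lower-level plug
point I have in mind. None of these class names, methods, file paths or service
unit names exist in Octavia today, as far as I know; it only shows the
granularity that would let a distribution ship its own platform bits inside the
image without replacing the whole agent:

# Purely hypothetical sketch: none of these classes, methods, paths or unit
# names are real Octavia code. The idea is that the in-amp agent would
# delegate distro-specific steps (config layout, service management,
# diagnostics) to a driver that a distribution ships inside its own image,
# instead of having to fork the whole agent.
import abc
import subprocess


class AmpPlatformDriver(abc.ABC):
    """Distro-specific operations the reference amp agent would call into."""

    @abc.abstractmethod
    def install_listener_config(self, listener_id, config_text):
        """Write the load balancer config where this distro expects it."""

    @abc.abstractmethod
    def restart_backend(self, listener_id):
        """Restart or reload the backend service for a listener."""

    @abc.abstractmethod
    def collect_failure_state(self, listener_id):
        """Return distro-specific diagnostics for failure analysis."""


class ExampleUbuntuHaproxyDriver(AmpPlatformDriver):
    """Roughly what the current reference behaviour could look like if it
    lived behind such an interface."""

    def install_listener_config(self, listener_id, config_text):
        # made-up path, for illustration only
        path = "/var/lib/octavia/%s/haproxy.cfg" % listener_id
        with open(path, "w") as f:
            f.write(config_text)

    def restart_backend(self, listener_id):
        subprocess.check_call(
            ["systemctl", "restart", "haproxy-%s" % listener_id])

    def collect_failure_state(self, listener_id):
        return subprocess.check_output(
            ["journalctl", "-u", "haproxy-%s" % listener_id])

With something like that in the tree, a distribution's job would shrink to
implementing one driver plus an image build recipe, which is a far smaller and
far more maintainable delta than a forked agent.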

> 
> That’s just my opinion, but I’d hate to see us tying our own hands by adding 
> support and maintenance burden at this early stage, beyond delivering VIPs to 
> users. I’d be more inclined to see the amp image itself cease to exist inside 
> an openstack project, before I want to spend the time supporting lots of 
> them, for non-technical reasons.
> 

Fine. But while you stick to the amp, why push back on attempts to make it more
distribution friendly?

And while we are at it… no, it's not about 'sensibilities' or 'non-technical
reasons', it's about the ability to actually maintain and support technology
for a truly extended time, one that spans far beyond the usual OpenStack cycle.
I feel that you come from a different background than mine and have different
views on what real support and maintenance mean.

In the context of distributions that cater to enterprises, with installation
lifespans stretching over many years, being able to support an image means
being able to update the kernel and other components when new vulnerabilities
arise, apply security hardening techniques, use custom failure-state collection
techniques, etc. Most of these things are hard or even impossible without
access to the build system for the image. Suggesting that other distributions
should just take Ubuntu images and use them as black boxes is not constructive:
customers pay for open source, among other things, because they get access to
the code and the build system.

Ihar