My comments in red (sorry again).

From: Eugene Nikanorov <enikano...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Friday, May 2, 2014 5:08 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Fulfilling Operator Requirements: 
Driver / Management API

Hi Adam,

My comments inline:


On Fri, May 2, 2014 at 1:33 AM, Adam Harwell <adam.harw...@rackspace.com> wrote:
I am sending this now to gauge interest and get feedback on what I see as an 
impending necessity — updating the existing "haproxy" driver, replacing it, or 
both.
I agree with Stephen's first point here.
For the HAProxy driver to support advanced use cases like routed mode, its 
agent would need to change significantly and take on some capabilities of the 
L3 agent.
In fact, I'd suggest making an additional driver, not for haproxy in VMs, but 
for... dedicated haproxy nodes.
A dedicated haproxy node is a host (similar to a compute node) with an L2 agent 
and an lbaas agent (not necessarily the existing one) on it.

In fact, it's essentially the same model as is used right now, but I think it 
has its advantages over haproxy-in-VM, at least:
- the plugin driver doesn't need to manage VM life cycle (no orchestration)
- immediate "natural" multitenant support with isolated networks
- instead of adding haproxy in a VM, you add a process (which is both faster 
and more efficient); further scaling is achieved by adding physical haproxy 
nodes, and existing agent health reporting will make them available for 
loadbalancer scheduling automatically (a rough sketch of that flow follows 
below).
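
As a concrete illustration of that scheduling model, here is a minimal Python 
sketch of a plugin-side scheduler that tracks lbaas agent heartbeats and picks 
a healthy dedicated haproxy node for a new loadbalancer. The class and method 
names are assumptions made for the sketch, not existing Neutron code.

import random
import time


class HaproxyNodeScheduler(object):
    """Pick a healthy dedicated haproxy node for a new loadbalancer.

    Hypothetical sketch: in a real deployment the agent list and
    heartbeats would come from the Neutron agent DB, not a dict.
    """

    def __init__(self, heartbeat_timeout=30):
        self.heartbeat_timeout = heartbeat_timeout
        # host -> timestamp of the last heartbeat from its lbaas agent
        self.last_heartbeat = {}

    def report_state(self, host):
        """Called whenever an lbaas agent on a haproxy node reports in."""
        self.last_heartbeat[host] = time.time()

    def alive_hosts(self):
        now = time.time()
        return [h for h, ts in self.last_heartbeat.items()
                if now - ts < self.heartbeat_timeout]

    def schedule(self, loadbalancer_id):
        """Choose a node; adding more physical nodes adds capacity."""
        candidates = self.alive_hosts()
        if not candidates:
            raise RuntimeError("no healthy haproxy nodes available")
        host = random.choice(candidates)
        # The plugin driver would now cast an RPC to the agent on `host`,
        # asking it to spawn/configure an haproxy process for this LB.
        return host


scheduler = HaproxyNodeScheduler()
scheduler.report_state("haproxy-node-1")
scheduler.report_state("haproxy-node-2")
print(scheduler.schedule("lb-1234"))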

I think that driver sounds like a good idea; we agree in essence that there 
will need to be drivers providing a variety of different approaches. I guess 
the question becomes: is there a smart way to accomplish this?

HAProxy: This references two things currently, and I feel this is a source of 
some misunderstanding. When I refer to  HAProxy (capitalized), I will be 
referring to the official software package (found here: http://haproxy.1wt.eu/ 
), and when I refer to "haproxy" (lowercase, and in quotes) I will be referring 
to the neutron-lbaas driver (found here: 
https://github.com/openstack/neutron/tree/master/neutron/services/loadbalancer/drivers/haproxy
 ). The fact that the neutron-lbaas driver is named directly after the software 
package seems very unfortunate, and while it is not directly in the scope of 
what I'd like to discuss here, I would love to see it changed to more 
accurately reflect what it is --  one specific driver implementation that 
coincidentally uses HAProxy as a backend. More on this later.
We have also been referring to the existing driver as "haproxy-on-host".
Ok, I will use that term from now on (I just hadn't seen it anywhere, and you 
can understand how it is confusing to just see "haproxy" as the driver name).


Operator Requirements: The requirements that can be found on the wiki page 
here:  
https://wiki.openstack.org/wiki/Neutron/LBaaS/requirements#Operator_Requirements
 and focusing on (but not limited to) the following list:
* Scalability
* DDoS Mitigation
* Diagnostics
* Logging and Alerting
* Recoverability
* High Availability (this is in the User Requirements section, but will be 
largely up to the operator to handle, so I would include it when discussing 
Operator Requirements)
Those requirements are of very different kinds and they are going to be 
addressed by quite different components of lbaas, not solely by the driver.

Management API: A restricted API containing resources that Cloud Operators 
could access, including most of the list of Operator Requirements (above).
Work is being done on this front: we're designing a way for plugin drivers to 
expose their own API, which is specifically needed for the operator API that 
might not be common between providers.
Ok, this sounds like what some other people mentioned, and it does sound like 
essentially what we'd need to do for this to work in any real capacity. The 
question I have then is: do we still need to talk about this at all, or do we 
just agree to make sure this mechanism works and then go our own ways 
implementing our Management APIs?
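
To make that a bit more tangible, here is a simplified, hypothetical sketch of 
the kind of mechanism being discussed: a vendor driver advertising 
provider-specific operator actions that the plugin can route admin-only API 
calls to. The class names, alias, and actions are invented for the example; 
this is not the actual Neutron extension framework.

class ProviderManagementExtension(object):
    """Hypothetical base class a vendor driver could implement to expose
    provider-specific operator calls alongside the common tenant API."""

    alias = None  # e.g. "acme-lbaas-mgmt"

    def get_operator_actions(self):
        """Return a mapping of action name -> callable."""
        raise NotImplementedError


class AcmeManagementExtension(ProviderManagementExtension):
    alias = "acme-lbaas-mgmt"

    def get_operator_actions(self):
        return {
            "get_log_location": self._get_log_location,
            "resync": self._resync,
        }

    def _get_log_location(self, loadbalancer_id):
        # Illustrative only: a real driver would ask its backend.
        return "swift://lbaas-logs/%s" % loadbalancer_id

    def _resync(self, loadbalancer_id):
        # Rebuild the backend configuration for the given LB.
        return {"loadbalancer_id": loadbalancer_id, "status": "resyncing"}


def dispatch_operator_call(extension, action, **kwargs):
    """How a plugin could route an admin-only API request to the driver."""
    handler = extension.get_operator_actions()[action]
    return handler(**kwargs)


print(dispatch_operator_call(AcmeManagementExtension(), "get_log_location",
                             loadbalancer_id="lb-1234"))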


Load Balancer (LB): I use this term very generically — essentially a logical 
entity that represents one "use case". As used in the sentence: "I have a Load 
Balancer in front of my website." or "The Load Balancer I set up to offload SSL 
Decryption is lowering my CPU load nicely."

----------------------------------
---- Overview
----------------------------------
What we've all been discussing for the past month or two (the API, Object 
Model, etc) is being directly driven by the User and Operator Requirements that 
have somewhat recently been enumerated (many thanks to everyone who has 
contributed to that discussion!). With that in mind, it is hopefully apparent 
that the current API proposals don't directly address many (or really, any) of 
the Operator requirements! Where in either of our API proposals are logging, 
high availability, scalability, DDoS mitigation, etc? I believe the answer is 
that none of these things can possibly be handled by the API, but are really 
implementation details at the driver level. Radware, NetScaler, Stingray, F5 
and HAProxy of any flavour would all have very different ways of handling these 
things (these are just some of the possible backends I can think of). At the 
end of the day, what we really have are the requirements for a driver, which 
may or may not use HAProxy, that we hope will satisfy all of our concerns. That 
said, we may also want to have some form of "Management API" to expose these 
features in a common way.
I'm not sure about the 'common way' here. I'd prefer to let vendors implement 
what is suitable for them and converge on similarities later.
Same as above — I guess we don't really work together, we just make our own 
Management APIs? Is this a good design decision? Is there any alternative? 
Please note that I'm not saying you're necessarily wrong…

In this case, we really need to discuss two things:

  1.  Whether to update the existing "haproxy" driver to accommodate these 
Operator Requirements, or whether to start from scratch with a new driver 
(possibly both).

See my comment on this above. I'd prefer to have drivers in both variants; 
however, I'm not sure if such code/solution duplication is acceptable. Most 
probably it is (as they will support different use cases). The problem is that 
the existing solution (particularly the haproxy namespace driver) can't support 
some important use cases, but it hardly makes sense to rework it for those 
cases. On the other hand, the new driver might not support the way the existing 
driver works, but that might be fine.
Essentially: yes, that all sounds like what I was thinking as well.

  2.  How to expose these Operator features at the (Management?) API level.

See above. There was a bp filed for this ( 
https://blueprints.launchpad.net/neutron/+spec/lbaas-extensions ) and we also 
had a session at the Icehouse summit ( 
https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension ) on how 
this could be implemented.
Thanks for the links, I'll attempt to acquaint myself with these.

----------------------------------
---- 1) Driver
----------------------------------
I believe the current "haproxy" driver serves a very specific purpose, and 
while it will need some incremental updates, it would be in the best interest 
of the community to also create and maintain a new driver (which it sounds like 
several groups have already begun work on — ack!) that could support a 
different approach. For instance, the current "haproxy" driver is implemented 
by initializing HAProxy processes on a set of shared hosts, whereas there has 
been some momentum behind creating individual Virtual Machines (via Nova) for 
each Load Balancer created, similar to Libra's approach. Alternatively, we 
could use LXC or a similar technology to more effectively isolate LBs and 
assuage concerns about tenant cross-talk (real or imaginary, this has been an 
issue for some customers).
I think the VM approach is also possible as a third option (in addition to the 
existing driver and dedicated hosts).
Please note that similar work is also on the way: 
https://review.openstack.org/#/c/88213/

Either way, we'd probably need a brand new driver, to avoid breaking backwards 
compatibility with the existing driver (which does work perfectly fine in many 
cases). In fact, it's possible that when we begin discussing this as a broader 
community, we might decide to create more than one additional driver (depending 
on which approaches people want to use and what features are most important to 
them). The only concern I have about that outcome is the necessary amount of 
code-reuse, and whether it would be possible to share certain aspects of these 
drivers without too much copy/pasting.
I generally agree with that. I'm only a little concerned about possible 
duplicate solutions.
I think German had some ideas about this, with regard to splitting up the 
provider from the driver, possibly allowing for much less copy/pasting or 
duplication of effort. I'm hoping we can discuss more along those lines.


An example of one possible new driver could be the following (just off the top 
of my head; a rough interface sketch follows the list):
* Use a pair of new Nova VMs for each LB (Scalability), configured to use a 
Shared IP (High Availability).
* Log to Swift / Ceilometer (Logging / Alerting / Metering).
* Provide calls that could be exposed via a Management API to show low level 
diagnostic details (Diagnostics).
* Provide calls that could be exposed via a Management API to allow 
syncing/reloading existing LBs or moving them across clusters (Recoverability, 
DDoS Mitigation).
This new driver would be named to reflect what features it provides, or at 
least given a unique name that can be referenced without confusion (something 
like "OpenHA" or "NovaHA" would work if that's not taken).

----------------------------------
---- 2) Management API
----------------------------------
Going forward, it should then be required (can we enforce this?) that any 
mainline driver include support for calls to handle these named Operator 
Requirements, for example: obtaining logs (or log locations?), diagnostic 
information, and admin-type actions such as rebuilding or migrating LB 
instances (an illustrative sketch follows).
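
Purely for illustration, a Management API exposing those calls might be 
consumed roughly like this; the endpoint paths, payloads, and token handling 
below are made up for the sketch, since no such API exists yet.

import requests

BASE = "http://neutron.example.com:9696/v2.0/lbaas-mgmt"  # hypothetical
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN"}

# Fetch the log location for a loadbalancer.
resp = requests.get(BASE + "/loadbalancers/lb-1234/logs", headers=HEADERS)
print(resp.json())

# Trigger a resync (rebuild of backend config) for a misbehaving LB.
resp = requests.post(BASE + "/loadbalancers/lb-1234/actions",
                     json={"resync": {}}, headers=HEADERS)
print(resp.status_code)

# Migrate an LB to another cluster (recoverability / DDoS mitigation).
resp = requests.post(BASE + "/loadbalancers/lb-1234/actions",
                     json={"migrate": {"cluster": "cluster-2"}},
                     headers=HEADERS)
print(resp.status_code)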

I think we should not impose these requirements, at least not at the beginning. 
It seems that the operator's API might be even more complex than the tenant's, 
and that makes consensus on it much harder.

So far we haven't really talked about any of these features in depth, though I 
believe the general need for a Management API was alluded to on several 
occasions. Should we shelve this discussion until after we have the User API 
specification locked down? Should we begin defining a contract for this 
Management API at the summit, since it would be the main gateway to the 
Operator Requirements that we have all been stressing recently?

----------------------------------
---- Summary
----------------------------------
I would apologize for not having much concrete specification here, but I think 
it is better to validate my basic assumptions first, before jumping deeper down 
this rabbit hole. The type of comments I'm hoping to prompt are along the lines 
of:
* "We should just focus on the existing haproxy driver."
* "We should definitely collaborate to make a new driver as a community."
New driver(s) that use haproxy are totally fine, I think.
* "I don't think a Management API is necessary."
It really is!
Good, I think so too. :)
* "This is definitely what I was thinking we'd need to do."

Any specific implementation details I've mentioned are intended to be taken as 
one possible example, not as a well-thought-out proposal. I am, as one might 
say, "speaking my mind". My hope is that some of this will simmer in the 
general subconscious. I'd like to hear what the general consensus is on these 
topics, because these are some of the assumptions I've been operating under 
during the rest of our discussions, and if they're invalid, I may need to 
rebase my view on the API discussion as a whole.

Thanks, y'all, I'm looking forward to getting some additional viewpoints!
--Adam Harwell (rm_work)



Thanks,
Eugene.

