Re: [openstack-dev] [Octavia] Proposal to support multiple listeners on one HAProxy instance

2014-08-21 Thread Stephen Balukoff
Hi Michael!

Just to give others some background on this: The current proposal (mine) is
to have each Listener object (as defined in the Neutron LBaaS v2 code base)
correspond to one haproxy process on the Octavia VM in the currently proposed
Octavia design document. Michael's proposal is to have each Loadbalancer
object correspond to one haproxy process (which would have multiple frontend
sections to service each Listener on the Loadbalancer).
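
To make the difference concrete, here is a rough sketch (listener names,
ports, addresses, and the certificate path are purely illustrative) of how
a loadbalancer with an HTTP listener and a TLS listener would render under
Michael's model -- one process, one configuration, multiple frontend
sections:

    # one haproxy process for the whole Loadbalancer (illustrative values)
    frontend listener_http
        bind 10.0.0.10:80
        default_backend pool_web

    frontend listener_https
        bind 10.0.0.10:443 ssl crt /etc/haproxy/certs/example.pem
        default_backend pool_web

    backend pool_web
        server member1 192.168.0.11:80 check
        server member2 192.168.0.12:80 check

Under my proposal the same Loadbalancer would instead render as two
independent haproxy processes, each with its own configuration file
containing exactly one of those frontend sections (plus its own copy of any
backend it uses).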

Anyway, we thought it would be useful to discuss this on the mailing list
so that we could give others a chance to register their opinions and the
reasoning behind them.

That being said, my responses to your points are in-line below, followed by
my reasoning for wanting 1 haproxy process = 1 listener in the
implementation:


On Wed, Aug 20, 2014 at 12:34 PM, Michael Johnson johnso...@gmail.com
wrote:

 I am proposing that Octavia should support deployment models that
 enable multiple listeners to be configured inside the HAProxy
 instance.

 The model I am proposing is:

 1. One or more VIP per Octavia VM (propose one VIP in 0.5 release)
 2. One or more HAProxy instance per Octavia VM
 3. One or more listeners on each HAProxy instance


This is where our proposals differ. I propose 1 listener per haproxy
instance.


 4. Zero or more pools per listener (shared pools should be supported
 as a configuration render optimization, but propose support post 0.5
 release)
 5. One or more members per pool


I would also propose zero or more members per pool. A pool with zero members
is already being used by some of our customers to blacklist certain client
IP addresses. These customers want to respond to the blacklisted IPs with a
503 error page (which haproxy can generate) instead of simply not responding
to packets (as would happen if the blacklist were done at the firewall).
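
For illustration, here's a rough sketch of what that blacklist case can look
like in an haproxy configuration (the ACL name, addresses, and pool names
are made up): requests from blacklisted source IPs get routed to a pool with
zero members, so haproxy answers them with its 503 page, while everything
else flows to the real pool.

    frontend listener_http
        bind 10.0.0.10:80
        # hypothetical blacklist of client source addresses
        acl blacklisted src 203.0.113.0/24 198.51.100.17
        # route blacklisted clients to an empty pool
        use_backend pool_blackhole if blacklisted
        default_backend pool_web

    backend pool_blackhole
        # intentionally empty: with no servers available, haproxy returns 503

    backend pool_web
        server member1 192.168.0.11:80 check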


 This provides flexibility to the operator to support multiple
 deployment models,  including active-active and hot standby Octavia
 VMs.  Without the flexibility to have multiple listeners per HAProxy
 instance we are limiting the operators' deployment models.


I don't think your conclusion follows logically from your justification
here.  Specifically, active-active and hot standby Octavia VMs are equally
supported by a one-process-per-listener model. Further, for reasons I'll
get into below, I think the one-process-per-listener model actually
provides more flexibility to the operators and users in how services are
deployed. Therefore, the conclusion I come to is the exact opposite of
yours: By insisting that all listeners on a given loadbalancer share a
single haproxy process, we actually limit flexibility in deployment models
(as well as introduce some potential operational problems we otherwise
wouldn't encounter).


 I am advocating for multiple listeners per HAProxy instance because I
 think it provides the following advantages:

 1. It reduces memory overhead due to running multiple HAProxy
 instances on one Octavia VM.  Since the Octavia constitution states
 that Octavia is for large operators where this memory overhead could
 have a financial impact we should allow alternate deployment options.

 2. It reduces host CPU overhead due to reduced context switching that
 would occur between HAProxy instances.  HAProxy is event driven and
 will mostly be idle waiting for traffic, where multiple instances of
 HAProxy will require context switching between the processes which
 increases the VM’s CPU load.  Since the Octavia constitution states
 that we are designing for large operators, anything we can do to
 reduce the host CPU load reduces the operator’s costs.


So these two points are the only potentially compelling reasons I see to
follow the approach you suggest. However, I would like to see the savings
justified via benchmarks. If benchmarks don't show a significant difference
in performance between running multiple haproxy instances to service
different listeners and running a single haproxy instance servicing the same
listeners, then I don't think these points are sufficient justification. I
understand your team (HP) is going to be working on these, hopefully in
time for next week's Octavia meeting.

Please also understand that memory and CPU usage are just two factors in
determining the overall cost of the solution. Slowing progress on delivering
features, increasing faults and other problems by having a more complicated
configuration, and making problems more difficult to isolate and
troubleshoot are also factors that affect the cost of a solution (though
they aren't as easy to quantify). Therefore it does not necessarily
logically follow that anything we can do to reduce CPU load decreases the
operator's costs.

Keep in mind, also, that for large operators the scaling strategy is to
ensure services can be scaled horizontally (meaning the CPU / memory
footprint of a single process isn't very important for a large load that
will be spread across many machines anyway), and any costs for delivering
the 

Re: [openstack-dev] [Octavia] Proposal to support multiple listeners on one HAProxy instance

2014-08-21 Thread Dustin Lundquist
I'm on the fence here; I see a number of advantages to each:

Single HAProxy process per listener:

   - Failure isolation
   - TLS performance -- for non-TLS services HAProxy is I/O bound, and there
   is no reason to run it across multiple CPU cores, but with HAProxy
   terminating TLS there is an increased potential of a DoS from a large
   number of incoming TLS handshakes.
   - Reduced impact of reconfiguration -- reloading the configuration has
   very little impact on its own, since HAProxy hands the listener sockets
   off to the new process and the old process continues to handle existing
   connections, but a more complex configuration makes a reload more likely
   to affect services on other listeners.

Multiple listeners on a single HAProxy instance:

   - Enables sharing pools between listeners -- this would reduce health
   monitor traffic, and pipelining requests from multiple listeners becomes
   possible (see the sketch after this list)
   - Reduced resource usage -- we should perform the benchmarks and
   quantify this
   - Simplified stats/log aggregation
   - Simplified Octavia instances -- I think each Octavia instance only
   running a single HAProxy process is a win; it's easier to monitor, and
   upstart/systemd/init only needs to start a single process.
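
A quick sketch of that shared-pool case (listener names, ports, and
addresses are illustrative): with both frontends rendered into one process
they can point at the same backend, so its members are health checked once
rather than once per listener.

    frontend listener_a
        bind 10.0.0.10:80
        default_backend pool_shared

    frontend listener_b
        bind 10.0.0.10:8080
        default_backend pool_shared

    backend pool_shared
        # one set of health checks covers both listeners
        server member1 192.168.0.11:80 check
        server member2 192.168.0.12:80 check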


Dustin Lundquist


Re: [openstack-dev] [Octavia] Proposal to support multiple listeners on one HAProxy instance

2014-08-21 Thread Stephen Balukoff
Hi Dustin,

Responses in-line:



On Thu, Aug 21, 2014 at 1:56 PM, Dustin Lundquist dus...@null-ptr.net
wrote:

 I'm on the fence here, I see a number of advantages to each:

 Single HAProxy process per listener:

- Failure isolation
- TLS Performance -- for non TLS services HAProxy is IO bound, and
there is no reason to run it across multiple CPU cores, but with HAProxy
terminating TLS there is an increased potential of a DoS with a large
number of incoming TLS handshakes.
- Reduced impact of reconfiguration -- while there is very little
impact when reloading the configuration since HAProxy hands off the
listener sockets to the new instance and the old instance continues to
handle those connections, with a more complex configuration it is more
likely to affect services on other listeners.

 Multiple listeners on a single HAProxy instance:

   - Enables sharing pools between listeners -- this would reduce
health monitor traffic, and pipe-lining requests from multiple listeners is
possible

 I spoke to this point above. Frankly, I'm starting to think this argument
might be premature optimization: I'm guessing the number of cases
where pools are shared between listeners on a single loadbalancer is going
to be relatively rare -- so few as to not merit consideration in the
overall design. :/



   - Reduced resource usage -- we should perform the benchmarks and
quantify this

 Yep, I'm looking forward to seeing the benchmarks here.



- Simplified stats/log aggregation

 I disagree here. This is especially the case if we use something like
syslog-ng for gathering logs (which is effectively non-blocking, and which is
probably desirable whether one haproxy process or multiple haproxy processes
are used). I'm not sure haproxy's code path for appending directly to log
files is non-blocking.

Stats parsing from haproxy is simpler if more processes are used. As far as
aggregation: Well, we've yet to define what people might want aggregated.
But note here that shared pools across listeners means shared stats for
those pools:  A user might want to see that pool's stats for listener A
versus listener B, which isn't possible if the pool is shared across
listeners. :/  (In any case, we're still talking hypotheticals here...)
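
For reference, a rough sketch of the per-process plumbing being discussed
(socket and log paths are illustrative): each haproxy process can expose its
own admin socket for stats ("show stat" over that socket returns CSV) and
ship its logs to a local syslog daemon such as syslog-ng, so log aggregation
happens outside the haproxy process in either model.

    global
        # per-process admin socket; stats here cover only this process
        stats socket /var/run/haproxy/listener_http.sock mode 600 level admin
        # send logs to the local syslog daemon (e.g. syslog-ng) over UDP
        log 127.0.0.1:514 local0

    defaults
        log global
        mode http
        option httplog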


- Simplified Octavia instances -- I think each Octavia instance only
running a single HAProxy process is a win; it's easier to monitor, and
upstart/systemd/init only needs to start a single process.

 So, in the model proposed by Michael, a single haproxy instance consists
of all the listeners on that loadbalancer as a single process. So if more
than one loadbalancer is deployed to a single Octavia VM, you're going to
need to start / stop / otherwise control multiple haproxy processes anyway.
So the system upstart / systemd / init scripts aren't going to cut it for
this set-up. My thought was to write a new control script (similar to the
one we use in our environment already) which controls all the haproxy
processes, and which can be called on boot to look for and start any
processes for which configuration exists locally (assuming persistent
storage for the VM or something -- if some operators want to do this).  It's
just as likely that we would have a freshly-booted Octavia VM check in with
its controller on boot, download any configurations it should be running,
and start the associated haproxy process(es). Again, the model proposed by
Michael and the model proposed by me do not differ much in how this control
must work if we're allowing multiple loadbalancers per Octavia VM.

We can potentially debate whether we allow multiple loadbalancers per
Octavia VM, but I think restricting this to a maximum of one is not
desirable from a hardware utilization perspective. Many production load
balanced services sit nearly idle all day, so there's no reason an Operator
shouldn't be allowed to combine multiple loadbalancers on a single Octavia
VM (perhaps at a lower price tier to the user). This is also similar to how
actual load balancing hardware appliance vendors tend to operate. The
restriction of 1 loadbalancer per Octavia VM does limit the operator's
options, eh.

Stephen

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


[openstack-dev] [Octavia] Proposal to support multiple listeners on one HAProxy instance

2014-08-20 Thread Michael Johnson
I am proposing that Octavia should support deployment models that
enable multiple listeners to be configured inside the HAProxy
instance.

The model I am proposing is:

1. One or more VIP per Octavia VM (propose one VIP in 0.5 release)
2. One or more HAProxy instance per Octavia VM
3. One or more listeners on each HAProxy instance
4. Zero or more pools per listener (shared pools should be supported
as a configuration render optimization, but propose support post 0.5
release)
5. One or more members per pool

This provides flexibility to the operator to support multiple
deployment models,  including active-active and hot standby Octavia
VMs.  Without the flexibility to have multiple listeners per HAProxy
instance we are limiting the operators' deployment models.

I am advocating for multiple listeners per HAProxy instance because I
think it provides the following advantages:

1. It reduces memory overhead due to running multiple HAProxy
instances on one Octavia VM.  Since the Octavia constitution states
that Octavia is for large operators where this memory overhead could
have a financial impact we should allow alternate deployment options.
2. It reduces host CPU overhead due to reduced context switching that
would occur between HAProxy instances.  HAProxy is event driven and
will mostly be idle waiting for traffic, where multiple instances of
HAProxy will require context switching between the processes which
increases the VM’s CPU load.  Since the Octavia constitution states
that we are designing for large operators, anything we can do to
reduce the host CPU load reduces the operator’s costs.
3. Hosting multiple HAProxy instances on one Octavia VM will increase
the load balancer build time because multiple configuration files,
start/stop scripts, health monitors, and HAProxy Unix sockets will
have to be created.  This could significantly impact operator
topologies that use hot standby Octavia VMs for failover.
4. It reduces network traffic and health monitoring overhead because
only one HAProxy instance per Octavia VM will need to be monitored.
This, again, saves the operator money and increases scalability for
large operators.
5. Multiple listeners per instance allow the sharing of backend pools,
which reduces the amount of health monitoring traffic required to the
backend servers.  It also creates the potential to share SSL certificates
and keys.
6. It allows customers to think of load balancers (floating IPs) as an
application service, sharing the fate of multiple listeners and
providing a consolidated log file.  This also provides a natural
grouping of services (around the floating IP) for a defined
performance floor.  With multiple instances per Octavia VM one
instance could negatively impact all of the other instances which may
or may not be related to the other floating IP(s).
7. Multiple listeners per instance reduces the number of TCP ports
used on the Octavia VM, increasing the per-VM scalability.


I don’t want us, by design, to limit operator flexibility in
deployments and topologies, especially when it potentially impacts the
costs for large operators.  Having multiple listeners per HAProxy
instance is a very common topology in the HAProxy community and I
don’t think we should block that use case with Octavia deployments.

Michael
