Re: Backend Server Dynamic Configuration

2013-01-09 Thread Kevin Heatwole
I understand your point.  The fact is that I am running on a very small budget. 
 I need the site to scale, but I also need to use as few servers as 
possible (Amazon EC2 instances aren't that cheap unless I can minimize the size 
and number of instances used).  Although my budget is small, I also have a 
requirement for High Availability.  Recent outages at Amazon have me thinking I 
need to spread servers into multiple zones and even different regions.  This 
all adds up to a higher Amazon bill.

Anyway, I have so much work to do in planning and setting up this site that I 
just resist adding new tasks to my list.  I would rather have HAProxy directly 
support more dynamic configuration than explore other tools that can 
automate dynamic reconfiguration of HAProxy.  

Amazon already provides CloudWatch and Auto-Scale to manage spinning up new 
servers or taking them down.  But, unless I use Amazon's ELB instead of 
HAProxy, there isn't really built-in support for dynamic HAProxy configuration. 
 I'd rather use HAProxy than ELB since I plan on moving to colocation when my 
Amazon bill goes over $500 a month.


On Jan 9, 2013, at 5:57 PM, Zachary Stern  wrote:

> If you need this kind of functionality, you are probably running some kind of 
> large infrastructure right? Or at least a lot of webservers or backend 
> servers. You would do well to look into some automation there. There are 
> plenty of existing tools.
> 
> 
> On Wed, Jan 9, 2013 at 5:47 PM, Kevin Heatwole  wrote:
> You might be right that the best way to do dynamic configuration is to have a 
> tool from a third-party (or created in house) that does monitoring of the 
> backend servers and edits the config file itself and reloads haproxy.
> 
> I just don't want the hassle of finding such tools or writing my own.   Maybe 
> haproxy could contain such a tool that is separate from haproxy but 
> maintained, tested, and delivered with haproxy.  I have to admit that I 
> haven't researched any of the tools you use.  I should do that, but I always 
> worry about creating a new dependency on a new tool since these tools are 
> usually available for free and could lose support and maintenance at any time.
> 
> HAProxy already has some support for dynamic configuration in the health 
> checks that mark down/up a server depending on the result of the health 
> check.  I figure it is relatively simple to build configuration checks on top 
> of the health checks since most of the hard work has already been done for 
> health checks.
> 
> On Jan 9, 2013, at 5:34 PM, Zachary Stern  wrote:
> 
>> Right, and my point is that you can make it dynamic without changing the way 
>> haproxy itself works. What you're asking for seems like making haproxy itself 
>> overcomplicated, especially for people with simple deployments. But hey, 
>> maybe I'm 100% wrong. In fact, let's operate on that assumption.
>> 
>> 
>> On Wed, Jan 9, 2013 at 5:26 PM, Kevin Heatwole  wrote:
>> I guess I wasn't clear again.  I'm not talking about "editing" the 
>> configuration file and reloading HAProxy.
>> 
>> My suggestion is simply to implement a dynamic interface to the backend 
>> servers so they can change the current behavior of the HAProxy instance 
>> (especially in a load balanced HAProxy backend).
>> 
>> I'll leave it to the developers to figure out what can be dynamically 
>> changed and if adding a server to a backend is too complex, then that won't 
>> be part of the interface.
>> 
>> On Jan 9, 2013, at 5:18 PM, Zachary Stern  wrote:
>> 
>>> I understood completely, KT. It's perfectly possible to add new lines to the 
>>> haproxy config dynamically and automatically using things like puppet.
>>> 
>>> E.g. my iptables configurations are dynamically generated as I spin up new 
>>> servers, using puppet and the rackspace API. You could do something 
>>> similar, regardless of cloud or not.
>>> 
>>> When I spin up a new server, it's connected to puppet, tagged as a certain 
>>> kind of server, and dynamically added as a backend to haproxy if 
>>> appropriate.
>>> 
>>> 
>>> On Wed, Jan 9, 2013 at 5:16 PM, KT Walrus  wrote:
>>> I think you might have misunderstood.  By "adding new server", I mean to 
>>> add it as a server in HAProxy configuration.  That is, the effect is to add 
>>> the "server" line for the new server into the config file.  This has 
>>> nothing to do with launching the server in the cloud.  It is the reverse of 
>>> marking a server DOWN, except that the server being marked UP was not 
>>> originally included in the list of servers for the HAProxy backend.

Re: Backend Server Dynamic Configuration

2013-01-09 Thread Kevin Heatwole
You might be right that the best way to do dynamic configuration is to have a 
tool from a third-party (or created in house) that does monitoring of the 
backend servers and edits the config file itself and reloads haproxy.
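
For illustration only, here is a minimal sketch of such a tool in Python 
(hypothetical paths and server list; a real tool would discover servers from a 
cloud API and render the full configuration, not just one backend):

    import subprocess

    # Hypothetical list of backend IPs, e.g. discovered via a cloud API.
    SERVERS = ["10.0.0.11", "10.0.0.12"]

    def write_config(servers, path="/etc/haproxy/haproxy.cfg"):
        # Render only the backend section for brevity.
        lines = ["backend app\n", "    balance roundrobin\n"]
        for i, ip in enumerate(servers, 1):
            lines.append("    server web%d %s:80 check\n" % (i, ip))
        with open(path, "w") as f:
            f.writelines(lines)

    write_config(SERVERS)

    # Graceful reload: -sf asks the old process(es) to finish their
    # connections and exit once the new process has bound the sockets.
    with open("/var/run/haproxy.pid") as f:
        old_pids = f.read().split()
    subprocess.check_call(
        ["haproxy", "-f", "/etc/haproxy/haproxy.cfg", "-sf"] + old_pids)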

I just don't want the hassle of finding such tools or writing my own.   Maybe 
haproxy could contain such a tool that is separate from haproxy but maintained, 
tested, and delivered with haproxy.  I have to admit that I haven't researched 
any of the tools you use.  I should do that, but I always worry about creating 
a new dependency on a new tool since these tools are usually available for free 
and could lose support and maintenance at any time.

HAProxy already has some support for dynamic configuration in the health checks 
that mark down/up a server depending on the result of the health check.  I 
figure it is relatively simple to build configuration checks on top of the 
health checks since most of the hard work has already been done for health 
checks.
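
One such hook already exists, for what it's worth: with "http-check 
disable-on-404", a server that answers 404 on its health-check URL is taken 
out of load balancing (existing sessions are preserved) until it answers 200 
again.  A minimal sketch, with hypothetical names and addresses:

    backend app
        option httpchk GET /health
        http-check disable-on-404
        server web1 10.0.0.11:80 check
        server web2 10.0.0.12:80 check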

On Jan 9, 2013, at 5:34 PM, Zachary Stern  wrote:

> Right, and my point is that you can make it dynamic without changing the way 
> haproxy itself works. What you're asking for seems like making haproxy itself 
> overcomplicated, especially for people with simple deployments. But hey, 
> maybe I'm 100% wrong. In fact, let's operate on that assumption.
> 
> 
> On Wed, Jan 9, 2013 at 5:26 PM, Kevin Heatwole  wrote:
> I guess I wasn't clear again.  I'm not talking about "editing" the 
> configuration file and reloading HAProxy.
> 
> My suggestion is simply to implement a dynamic interface to the backend 
> servers so they can change the current behavior of the HAProxy instance 
> (especially in a load balanced HAProxy backend).
> 
> I'll leave it to the developers to figure out what can be dynamically changed 
> and if adding a server to a backend is too complex, then that won't be part 
> of the interface.
> 
> On Jan 9, 2013, at 5:18 PM, Zachary Stern  wrote:
> 
>> I understood completely, KT. It's perfectly possible to add new lines to the 
>> haproxy config dynamically and automatically using things like puppet.
>> 
>> E.g. my iptables configurations are dynamically generated as I spin up new 
>> servers, using puppet and the rackspace API. You could do something similar, 
>> regardless of cloud or not.
>> 
>> When I spin up a new server, it's connected to puppet, tagged as a certain 
>> kind of server, and dynamically added as a backend to haproxy if appropriate.
>> 
>> 
>> On Wed, Jan 9, 2013 at 5:16 PM, KT Walrus  wrote:
>> I think you might have misunderstood.  By "adding new server", I mean to add 
>> it as a server in HAProxy configuration.  That is, the effect is to add the 
>> "server" line for the new server into the config file.  This has nothing to 
>> do with launching the server in the cloud.  It is the reverse of marking a 
>> server DOWN, except that the server being marked UP was not originally 
>> included in the list of servers for the HAProxy backend.
>> 
>> On Jan 9, 2013, at 4:21 PM, Zachary Stern  wrote:
>> 
>>> 
>>> 
>>> On Wed, Jan 9, 2013 at 4:13 PM, Kevin Heatwole  wrote:
>>> 5.  Adding new server to backend by having configuration check return new 
>>> server configuration.
>>> 
>>> I don't know about the other features, but this one I think violates the 
>>> UNIX philosophy of "do one thing and do it well". There are already plenty 
>>> of tools you can use to achieve this with HAProxy, like puppet or chef, and 
>>> things like the ruby fog gem for cloud provisioning, etc.
>>> 
>>> 
>>> -- 
>>> 
>>> zachary alex stern | systems architect
>>> 
>>> o: 212.363.1654 x106 | f: 212.202.6488 | z...@enternewmedia.com
>>> 
>>> 60-62 e. 11th street, 4th floor | new york, ny | 10003
>>> 
>>> www.enternewmedia.com
>>> 
>> 
>> 
>> 
>> 
>> -- 
>> 
>> zachary alex stern | systems architect
>> 
>> o: 212.363.1654 x106 | f: 212.202.6488 | z...@enternewmedia.com
>> 
>> 60-62 e. 11th street, 4th floor | new york, ny | 10003
>> 
>> www.enternewmedia.com
>> 
> 
> 
> 
> 
> -- 
> 
> zachary alex stern | systems architect
> 
> o: 212.363.1654 x106 | f: 212.202.6488 | z...@enternewmedia.com
> 
> 60-62 e. 11th street, 4th floor | new york, ny | 10003
> 
> www.enternewmedia.com
> 



Re: Backend Server Dynamic Configuration

2013-01-09 Thread Kevin Heatwole
I guess I wasn't clear again.  I'm not talking about "editing" the 
configuration file and reloading HAProxy.

My suggestion is simply to implement a dynamic interface to the backend servers 
so they can change the current behavior of the HAProxy instance (especially in 
a load balanced HAProxy backend).

I'll leave it to the developers to figure out what can be dynamically changed 
and if adding a server to a backend is too complex, then that won't be part of 
the interface.
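
For what it's worth, HAProxy's stats socket already exposes a small dynamic 
interface of this kind: with "stats socket ... level admin" configured, it 
accepts commands such as "set weight", "disable server", and "enable server".  
A minimal sketch of a backend server adjusting itself, with a hypothetical 
socket path and backend/server names:

    import socket

    def haproxy_cmd(cmd, sock_path="/var/run/haproxy.sock"):
        # Requires "stats socket /var/run/haproxy.sock level admin"
        # in the HAProxy configuration.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        s.sendall((cmd + "\n").encode())
        reply = s.recv(4096).decode()
        s.close()
        return reply

    # A backend throttling itself, then taking itself out of rotation:
    haproxy_cmd("set weight app/web1 10")
    haproxy_cmd("disable server app/web1")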

On Jan 9, 2013, at 5:18 PM, Zachary Stern  wrote:

> I understood completely, KT. It's perfectly possible to add new lines to the 
> haproxy config dynamically and automatically using things like puppet.
> 
> E.g. my iptables configurations are dynamically generated as I spin up new 
> servers, using puppet and the rackspace API. You could do something similar, 
> regardless of cloud or not.
> 
> When I spin up a new server, it's connected to puppet, tagged as a certain 
> kind of server, and dynamically added as a backend to haproxy if appropriate.
> 
> 
> On Wed, Jan 9, 2013 at 5:16 PM, KT Walrus  wrote:
> I think you might have misunderstood.  By "adding new server", I mean to add 
> it as a server in HAProxy configuration.  That is, the effect is to add the 
> "server" line for the new server into the config file.  This has nothing to 
> do with launching the server in the cloud.  It is the reverse of marking a 
> server DOWN, except that the server being marked UP was not originally 
> included in the list of servers for the HAProxy backend.
> 
> On Jan 9, 2013, at 4:21 PM, Zachary Stern  wrote:
> 
>> 
>> 
>> On Wed, Jan 9, 2013 at 4:13 PM, Kevin Heatwole  wrote:
>> 5.  Adding new server to backend by having configuration check return new 
>> server configuration.
>> 
>> I don't know about the other features, but this one I think violates the 
>> UNIX philosophy of "do one thing and do it well". There are already plenty 
>> of tools you can use to achieve this with HAProxy, like puppet or chef, and 
>> things like the ruby fog gem for cloud provisioning, etc.
>> 
>> 
>> -- 
>> 
>> zachary alex stern | systems architect
>> 
>> o: 212.363.1654 x106 | f: 212.202.6488 | z...@enternewmedia.com
>> 
>> 60-62 e. 11th street, 4th floor | new york, ny | 10003
>> 
>> www.enternewmedia.com
>> 
> 
> 
> 
> 
> -- 
> 
> zachary alex stern | systems architect
> 
> o: 212.363.1654 x106 | f: 212.202.6488 | z...@enternewmedia.com
> 
> 60-62 e. 11th street, 4th floor | new york, ny | 10003
> 
> www.enternewmedia.com
> 



Backend Server Dynamic Configuration

2013-01-09 Thread Kevin Heatwole
The following potential future feature would help me use haproxy more for an 
upcoming project.  I apologize if this is already addressed through existing 
features or if it's not considered generally useful.

Implement a new type of health check; call it a "configuration check".  A 
configuration check would operate just like an HTTP health check, except that 
the purpose of the request is to allow a backend server to reconfigure how HAProxy 
perceives and utilizes the server by returning special text responses.  Dynamic 
configuration changes supported could include:

1.  Setting a new interval time for subsequent configuration checks to the server.
2.  Setting a new maxconn or weight for the server (allowing the backend to 
"throttle" or "increase" load for itself).
3.  Setting the server state (DOWN, MAINTENANCE, UP, STARTING, STOPPING, DAMAGED), 
changing how HAProxy treats existing or new connections for the server.
4.  Changing the server from active to backup (or vice versa).
5.  Adding a new server to the backend by having a configuration check return the 
new server's configuration.
6.  Changing any other useful settings that affect backend servers.

It would be nice if HAProxy provided a complete set of configuration data 
(including performance data) for all servers in the backend as an option for 
the configuration request, so decisions can be made in the backend (for 
example, a server downing itself because it has the lowest load and the other 
servers have plenty of spare capacity).
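
Purely to illustrate the proposal (no such feature exists today), a 
configuration-check response might look like:

    HTTP/1.1 200 OK
    Content-Type: text/plain

    interval: 10s
    weight: 50
    maxconn: 200
    state: UP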

I believe these changes would really help cloud deployments.  And this 
approach doesn't rely on a separate management console, which might crash or 
be unable to connect to one or more servers due to networking issues.  The 
backend servers can be responsible for managing themselves.




Re: Server Busy to redispatch to different backend

2013-01-04 Thread Kevin Heatwole
Oh.  One more issue I forgot to include.  Since the 4 backend servers are 
"guarded" to not accept more than MAXCONN connections each, I have to handle 
the situation where all the servers are full.  In this case, I just want to 
return a quick "all servers busy" page.  To make this happen, the frontend load 
balancers will only balance across the first 3 backends.  Each backend HAProxy 
will "first"-balance to its localhost server, with overflow going to the next 
backend server.  For the 4th backend, HAProxy will return an "all servers busy" 
status code if its localhost server can't serve the request, and the frontend 
NGINX will return the appropriate "servers busy" page.

In this setup, I get an early warning that I might need to add more backend 
servers if the 4th server ever starts serving requests.  I can also set MAXCONN 
for this 4th server to a smaller value and use the 4th server to run other 
tasks.

Also, adding a new server to the backend is simple.  Add the new server as the 
first and reload NGINX load balancers to start balancing to the new server.  
Same for removing a server due to reduced demand.  Always remove from the front.
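
A minimal sketch of one backend's "guard" HAProxy under this scheme, with 
hypothetical addresses (the 4th server's guard would omit the overflow line 
and use a short "timeout queue" so that a full localhost quickly yields the 
error NGINX maps to the busy page):

    # guard HAProxy on backend server N
    backend guarded
        balance first
        server local 127.0.0.1:8080 maxconn 100 check
        server overflow 10.0.0.12:80 check    # guard HAProxy on server N+1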

Kevin

On Jan 4, 2013, at 9:05 AM, KT Walrus  wrote:

> I've had a change in thinking.
> 
> I've decided to not use HAProxy on the frontend load balancers and will go 
> with NGINX for SSL and simple ip_hash load balancing to the 4 backend servers.
> 
> I've also decided to handle sessionDB selection totally in PHP using a cookie 
> and to always use the localhost sessionDB with possible copy of session 
> record from the EC2 active backups of the sessionDBs (if sessionDB is 
> different).
> 
> I will use HAProxy to "guard" each backend server from going over MAXCONN 
> connections, running on the backend server.  These HAProxy instances will use "first" 
> load balancing to prefer the localhost server over the others.  Also, HAProxy 
> will proxy for the sessionDBs (not load balancing) so that it can use the 
> active backend sessionDBs in preference to the EC2 active backups.  
> 
> This means the HAProxy in each backend will be responsible for letting the 
> frontend NGINX know to mark the backend as down if any part of the backend 
> stack fails (nginx, php, local mainDB, or local sessionDB).  Of course, NGINX 
> will also mark the backend down if HAProxy simply doesn't respond at all.
> 
> Any further comments on this?  If not, thank you for those that responded 
> previously to help me get to this point in planning out my system 
> architecture.  The key for me in using HAProxy is the "first" load balancing 
> algorithm.  I haven't read about anyone else using "first" in this manner 
> (the documentation seems to indicate this is just for elastic cloud operation 
> to minimize cloud instances).
> 
> Kevin
> 
> On Jan 3, 2013, at 3:55 PM, Kevin Heatwole  wrote:
> 
>> I intend to have a two tiered LB architecture:  frontend LBs and "guard" 
>> backend LBs.
>> 
>> The "guard" backend LBs serve to guarantee that the backend server never has 
>> more than MAXCONN concurrent requests.  Excess requests are forwarded to some 
>> other backend using "first" load balancing.
>> 
>> Since I now plan to use "cookie persistence" in the frontend LBs, I'm 
>> wondering if I can use the same cookie in the backend "guard" LBs to change 
>> the cookie to the new backend if the request is forwarded?
>> 
>> That is, should all LBs, whether frontend or guard, use the same cookie and 
>> the same pool of backends (only the "guard" backends would proxy to 
>> nginx/varnish while the "frontends" would proxy to the "guard" HAProxys)?
>> 
>> Now that I can handle backend server changes occasionally, I want the change 
>> to stick for all subsequent requests by the user.  Even if the server change 
>> was made by a second level guard HAProxy...
>> 
>> 
>> 
> 




Server Busy to redispatch to different backend

2013-01-03 Thread Kevin Heatwole
I intend to have a two tiered LB architecture:  frontend LBs and "guard" 
backend LBs.

The "guard" backend LBs serve to guarantee that the backend server never has 
more than MAXCONN concurrent requests.  Excess requests are forwarded to some 
other backend using "first" load balancing.

Since I now plan to use "cookie persistence" in the frontend LBs, I'm wondering 
if I can use the same cookie in the backend "guard" LBs to change the cookie to 
the new backend if the request is forwarded?

That is, should all LBs, whether frontend or guard, use the same cookie and the 
same pool of backends (only the "guard" backends would proxy to nginx/varnish 
while the "frontends" would proxy to the "guard" HAProxys)?
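
A sketch of the idea, assuming both tiers use the same cookie name and the 
same per-server cookie values (hypothetical names; "indirect" is omitted so 
the application servers can also see the cookie):

    # identical cookie setup on the frontend and guard tiers
    backend app
        cookie SRV insert nocache
        server web1 10.0.0.11:80 cookie web1 check
        server web2 10.0.0.12:80 cookie web2 check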

Now that I can handle backend server changes occasionally, I want the change to 
stick for all subsequent requests by the user.  Even if the server change was 
made by a second level guard HAProxy...





Cookie Persistence and Backend Recognition of Server Change

2013-01-03 Thread Kevin Heatwole
I'm thinking of using cookie persistence to stick a user to the same backend 
(if available) for all requests coming from the user.

But, I need to handle the case where HAProxy switches the user to a backend 
different from the one saved in the cookie (because the original backend has 
gone offline or reached MAXCONN).

My question is:  Can the backends tell when the frontend has switched the user 
to a backend server other than the one saved in the cookie?

I assume so, but I'm wondering how to do this.  Have the backend save the 
frontend cookie value in another cookie, if the frontend cookie has changed?  
Or, is it simpler than this and the frontend can set a request header 
(X-Server-Changed?) that the backend simply checks?

I need to copy previous session data to the new backend sessionDB (from the 
slave sessionDB backup) to continue processing the user requests uninterrupted 
on the new backend.
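
One hedged sketch of the detection side, assuming the persistence cookie (here 
called "SRV") is inserted without "indirect" so the backends can read it: each 
backend compares the cookie's value against its own identity and, on a 
mismatch, restores the session from the backup sessionDB first.

    MY_SERVER_ID = "web2"   # hypothetical identity matching this server's cookie value

    def server_changed(cookies):
        # cookies: dict of request cookies as seen by this backend.
        # A missing or different SRV value means HAProxy picked a new
        # server (first visit, failover, or MAXCONN overflow).
        return cookies.get("SRV") != MY_SERVER_ID

    # if server_changed(request_cookies):
    #     copy the user's session row from the slave sessionDB backup
    #     into the local sessionDB before handling the request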

Kevin




Re: clarification on peers and inclusion into 1.4 soon?

2012-04-23 Thread Kevin Heatwole
On Apr 23, 2012, at 7:31 PM, David Birdsong wrote:
...
> - nginx is already in front of haproxy, but nginx is not the first
> listener, so it sees the IP addresses as HTTP headers too. the last
> time I checked nginx only blocks IP addresses from layer 4
> connections. any other blocking would require nginx to compare the IP
> addresses as strings or regexes which I want to avoid doing on every
> single request. if the list grows long, every request suffers. ip
> comparison on long lists of IP's is one area where haproxy is the
> clear winner

I'm no nginx expert, but I use the geo keyword to quickly search for banned IPs:

geo $is_banned_ip {
    default 0;
    include /www/nginx/banned_ips.conf;
}

where banned_ips.conf lists the IPs, each followed by a 1, as in:

X.X.X.X/32 1;
X.X.X.Y/32 1;

Then, inside the location, I simply test $is_banned_ip and reject those 
requests, as in:

    location / {
        if ($is_banned_ip) { return 401; }
        index index.php index.html;
    }

The geo keyword is primarily used for getting the geolocation of an IP, so I 
think you can have a pretty large list of IPs and still be very efficient.

Kevin



> On Mon, Apr 23, 2012 at 2:48 PM, Kevin Heatwole  wrote:
>> You might want to block the IPs before they get into haproxy.  Maybe put an 
>> nginx reverse proxy in front of haproxy?  I use nginx to dynamically 
>> block/allow HTTP requests by IP.   Another possibility, if you just need to 
> block a list of IPs, would be to use a firewall/iptables in front of haproxy 
>> to do the blocking.
> 
> - nginx is already in front of haproxy, but nginx is not the first
> listener, so it sees the IP addresses as HTTP headers too. the last
> time I checked nginx only blocks IP addresses from layer 4
> connections. any other blocking would require nginx to compare the IP
> addresses as strings or regexes which I want to avoid doing on every
> single request. if the list grows long, every request suffers. ip
> comparison on long lists of IP's is one area where haproxy is the
> clear winner
> 
> - iptables won't work either, iptables works on TCP/IP not HTTP
> 
> i'd like to keep IP blocking in haproxy.
>> 
>> On Apr 23, 2012, at 2:45 PM, David Birdsong wrote:
>> 
>>> Hi, I've got a situation where I need to update haproxy every  1-2
>>> mins to apprise it of a new list of ip addresses to tarpit.
>>> 
>>> I've rigged up a fairly hacky pipeline to detect scrapers on our site
>>> based on entries found X-Forwarded-For. To get around the fact the
>>> stick-table entries are only keyed off of protocols lower than http
>>> currently, I need to reload haproxy for every new IP address that I
>>> detect. It's ugly, but I've decided to reload haproxy on our site
>>> every 2 minutes. This means that all load balancing info is lost very
>>> frequently, maxconns per backend are reset, and during deploy time
>>> when we rely on slowstart to warm our backend's, we have to completely
>>> disable reloads of haproxy altogether which opens us up to heavy
>>> scraping for ~1-3 hours per day during our code deploy which makes it
>>> tough to differentiate between slowness induced by recent code changes
>>> and scrapers sucking up resources.
>>> 
>>> I'd love to not reload haproxy and let it learn about IP's to block
>>> internally, but my understanding is that IP addresses found at the
>>> HTTP level will not work their way into stick tables for some time.
>>> 
>>> Will peers help to maintain state inside of haproxy between graceful
>>> reloads? Will connection counts to backends be maintained? Are any
>>> stats back populated to the new process?
>>> 
>>> Also, how is the stability of peer mode? It's going to take some
>>> arguing and hand-wringing to convince others in our organization to
>>> put a 1.5 version out in front of the site despite the fact that
>>> haproxy is generally the most stable piece of software in our stack
>>> dev version or not. Are there any efforts to port peer mode into 1.4
>>> soon?
>>> 
>>> Thanks again for one of the most useful, fast, and stable tools that
>>> the community has come to rely so heavily on.
>>> 
>> 




Re: clarification on peers and inclusion into 1.4 soon?

2012-04-23 Thread Kevin Heatwole
You might want to block the IPs before they get into haproxy.  Maybe put an 
nginx reverse proxy in front of haproxy?  I use nginx to dynamically 
block/allow HTTP requests by IP.   Another possibility, if you just need to 
block a list of IPs, would be to use a firewall/iptables in front of haproxy to 
do the blocking.

On Apr 23, 2012, at 2:45 PM, David Birdsong wrote:

> Hi, I've got a situation where I need to update haproxy every  1-2
> mins to apprise it of a new list of ip addresses to tarpit.
> 
> I've rigged up a fairly hacky pipeline to detect scrapers on our site
> based on entries found X-Forwarded-For. To get around the fact the
> stick-table entries are only keyed off of protocols lower than http
> currently, I need to reload haproxy for every new IP address that I
> detect. It's ugly, but I've decided to reload haproxy on our site
> every 2 minutes. This means that all load balancing info is lost very
> frequently, maxconns per backend are reset, and during deploy time
> when we rely on slowstart to warm our backend's, we have to completely
> disable reloads of haproxy altogether which opens us up to heavy
> scraping for ~1-3 hours per day during our code deploy which makes it
> tough to differentiate between slowness induced by recent code changes
> and scrapers sucking up resources.
> 
> I'd love to not reload haproxy and let it learn about IP's to block
> internally, but my understanding is that IP addresses found at the
> HTTP level will not work their way into stick tables for some time.
> 
> Will peers help to maintain state inside of haproxy between graceful
> reloads? Will connection counts to backends be maintained? Are any
> stats back populated to the new process?
> 
> Also, how is the stability of peer mode? It's going to take some
> arguing and hand-wringing to convince others in our organization to
> put a 1.5 version out in front of the site despite the fact that
> haproxy is generally the most stable piece of software in our stack
> dev version or not. Are there any efforts to port peer mode into 1.4
> soon?
> 
> Thanks again for one of the most useful, fast, and stable tools that
> the community has come to rely so heavily on.
> 




Re: new balance algorithm

2012-03-31 Thread Kevin Heatwole
> On Mar 31, 2012, at 11:06 PM, David Birdsong wrote:
>> On Sat, Mar 31, 2012 at 7:55 PM, Kevin Heatwole  wrote:
>>> I would plan to automate this by having all servers included in the haproxy 
>>> config but only the first server would initially be UP and all others DOWN. 
>>>  When a server handles a request, it makes sure that its next server is 
>>> activated.  When a server doesn't handle any requests for some time, it 
>>> deactivates its next server (if any).
>> 
>> You could implement this by monitoring your available slots on a
>> backend: once the slots decrease below N% of the total, spin up new
>> instances. Apply the same logic in reverse to turn off nodes.
> 
I just noticed in the documentation that http-check can send its state 
("http-check send-state").  This seems to contain enough info (scur and qcur) 
for a script to decide to activate or deactivate a server.  I like that this 
decision can be made using the stats from the load balancer.
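
A minimal sketch, with hypothetical names.  With "http-check send-state" 
enabled, each health-check request carries a state header that the server (or 
a script behind the health-check URL) can parse:

    backend app
        option httpchk GET /health
        http-check send-state
        server web1 10.0.0.11:80 check

    # each check request then includes a header along the lines of:
    # X-Haproxy-Server-State: UP 2/3; name=app/web1; node=lb1; weight=1/2; scur=13/22; qcur=0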

So, I think this is sufficient for my needs and I don't need a new balance 
algorithm.



Re: new balance algorithm

2012-03-31 Thread Kevin Heatwole
On Mar 31, 2012, at 11:06 PM, David Birdsong wrote:
> On Sat, Mar 31, 2012 at 7:55 PM, Kevin Heatwole  wrote:
>> I would plan to automate this by having all servers included in the haproxy 
>> config but only the first server would initially be UP and all others DOWN.  
>> When a server handles a request, it makes sure that its next server is 
>> activated.  When a server doesn't handle any requests for some time, it 
>> deactivates its next server (if any).
> 
> You could implement this by monitoring your available slots on a
> backend: once the slots decrease below N% of the total, spin up new
> instances. Apply the same logic in reverse to turn off nodes.

You make a good point.  I was also thinking of using dynamic WEIGHTs in haproxy 
by setting the last available server to have a WEIGHT of 1%; then, when that 
server starts getting a steady stream of requests, up its WEIGHT to 100% and 
allocate a new last server.

I think I could make your suggestion work, but I'd rather just have it all 
configured in the load balancer.  I'd also rather see the load balancer have 
some mechanism to tell the last active server to spin up a spare or to 
deactivate itself, so that the load balancer is completely in control.


new balance algorithm

2012-03-31 Thread Kevin Heatwole
I am just investigating use of haproxy for the first time.

I'd like the balancing algorithm to send HTTP requests to the first server in 
the list until the number of requests hits a configurable limit.  When the 
request limit for a server is hit, I then want new requests to go to the next 
server until that server hits its configurable limit.  So, instead of RR, I 
want to load down a server before overflowing to the next server.
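
For reference, the "first" balancing algorithm available in HAProxy 1.5-dev 
behaves this way: it fills servers in configuration order, each up to its 
maxconn, before sending anything to the next.  A minimal sketch with 
hypothetical addresses:

    backend pool
        balance first
        server s1 10.0.0.11:80 maxconn 100 check
        server s2 10.0.0.12:80 maxconn 100 check
        server s3 10.0.0.13:80 maxconn 100 check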

What I think I want is for the last server in the farm to normally receive no 
requests.  If it does, I will activate another server to ensure I have 
enough capacity to handle the load spike.  But, when the last two servers go 
completely idle again, I can deactivate the last idle server.

My servers are "in the cloud" and I pay for each one that is activated, so I 
think this type of load balancing would help me activate only the servers I 
need (saving me money).

I would plan to automate this by having all servers included in the haproxy 
config but only the first server would initially be UP and all others DOWN.  
When a server handles a request, it makes sure that its next server is 
activated.  When a server doesn't handle any requests for some time, it 
deactivates its next server (if any).

Does this make sense?  I'm new to scaling out and haproxy, so if this scheme 
already exists, please point me to where it is discussed in the documentation.

Kevin