On Wed, Jan 09, 2013 at 11:01:40PM +0000, Steven Acreman wrote:
> We use chef and ohai, which talk to AWS to calculate node counts for
> servers based on tags and metadata. We then have a cookbook that generates
> the haproxy.cfg every time chef runs (on a cron). If the file changes, we
> reload the config, which seems to keep the sessions alive.

Yes it does :-)
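The regenerate-then-reload-on-change step described above can be sketched
roughly as follows (paths are illustrative, and the generation step is
stubbed; in real use the reload command would be executed, not echoed):

```shell
#!/bin/sh
# Compare the freshly generated config to the live one and only
# reload when they differ.
NEW=/tmp/haproxy.cfg.new
CUR=/tmp/haproxy.cfg

# The cookbook / generator would write $NEW here; stubbed for illustration:
printf 'backend static\n    server s1 10.0.0.1:80 check\n' > "$NEW"

if ! cmp -s "$NEW" "$CUR" 2>/dev/null; then
    mv "$NEW" "$CUR"
    # A graceful reload keeps sessions alive: the new process asks the
    # old one (via -sf) to finish its current connections and exit.
    echo "would run: haproxy -f $CUR -p /var/run/haproxy.pid -sf \$(cat /var/run/haproxy.pid)"
else
    rm -f "$NEW"
    echo "config unchanged, no reload"
fi
```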

> There are far simpler ways to achieve the above. For instance, a really
> simple bash script that echoes static content into haproxy.cfg, with the
> amazon command line tools feeding a for loop that prints out the server
> lines with their options.
> 
> Having said all of this, I think it would be cool to allow a file-based
> backend. What I mean by that is that I would prefer to generate backend
> files, and upon those files changing, haproxy would automatically reload
> them. So for instance on my static backend I might update the
> static.servers file which is referenced in haproxy.cfg. I believe you can
> do similar things with mod_jk and apache.
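The "static header plus for loop" generator described in the quote might
look like this (the instance list is stubbed; in practice it would come
from the EC2 command line tools, e.g. filtered on a tag):

```shell
#!/bin/sh
# Generate an haproxy backend: static content first, then one server
# line per instance.
CFG=/tmp/haproxy.cfg.generated

# Static portion of the configuration:
cat > "$CFG" <<'EOF'
backend static
    balance roundrobin
EOF

# Stubbed here; in real use something like the EC2 describe-instances
# tool would produce this list of addresses.
INSTANCES="10.0.0.1 10.0.0.2 10.0.0.3"

i=0
for ip in $INSTANCES; do
    i=$((i + 1))
    echo "    server web$i $ip:80 check" >> "$CFG"
done
```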

I've been thinking for some time about having a "servers" section
where we could declare a farm and have all backends use these
farms. One benefit would be shared checks, and another would be
shared maxconns. But that's not always trivial; it needs some more
thinking (e.g. to arbitrate which one to serve next when maxconn is
enforced).

However, one benefit of using a dedicated section is that you could
then load haproxy with three files:

  haproxy -f global.cfg -f servers.cfg -f services.cfg

And voila !
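Since haproxy already concatenates multiple -f files in order, the split
can be sketched today with ordinary sections (the dedicated "servers"
section above remains hypothetical; file names and addresses here are
only illustrative):

```
# global.cfg -- process-wide settings
global
    daemon
    maxconn 4096

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# servers.cfg -- the generated part, rewritten by the tooling
backend static
    server web1 10.0.0.1:80 check
    server web2 10.0.0.2:80 check

# services.cfg -- stable service definitions
frontend www
    bind :80
    default_backend static
```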

I'm sure we'd hit some corner cases if we do this, which is why it still
needs some thinking. Anyway, it's still classified as non-urgent in my
head :-)

Willy
