On Thu, Jan 30, 2014 at 08:03:37PM +0100, PiBa-NL wrote:
> can you double-check that the stick-table fills properly with the socket
> commands, and that you are running with "nbproc 1"?

It appears that 1.4 does not support 'show table' via the stats
socket. Yes, nbproc is 1.
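
For reference, my understanding is that 'show table' only arrived with the
1.5 stats socket commands; on a 1.5 build I would expect the check to look
roughly like this, assuming socat is available:

    # dump the stick-table of the http-servers backend (1.5+ socket command)
    echo "show table http-servers" | socat stdio /tmp/haproxy

That should print a single entry keyed on the VIP, with the server_id of
whichever server the traffic is currently stuck to.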

> can you post the (anonymized) config you're currently using?

No need to anonymize, I'm running this all in a few kvm instances.

---

global
    daemon
    stats socket /tmp/haproxy

defaults
    mode http
    option http-server-close
    timeout connect 5s
    timeout client 10s
    timeout server 10s

frontend http-vip
    bind 192.168.122.101:80
    default_backend http-servers

backend http-servers
    stick-table type ip size 1
    stick on dst
    server node-02 192.168.122.102:80 check
    server node-03 192.168.122.103:80 check backup
    server node-04 192.168.122.104:80 check backup

---
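
If it helps, the 1.4 socket does accept 'show stat', so the server states can
at least be checked with something like this (if I have the CSV columns right,
1, 2 and 18 are pxname, svname and status):

    # list proxy name, server name and status for every proxy/server
    echo "show stat" | socat stdio /tmp/haproxy | cut -d, -f1,2,18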

Thanks for the assistance.

Ryan


> Ryan O'Hara wrote on 30-1-2014 19:50:
> >On Thu, Jan 30, 2014 at 07:39:29PM +0100, PiBa-NL wrote:
> >>This should (I expect) work with any number of backup servers, as
> >>long as you only need 1 active.
> >Yes, it appears this is exactly what I want. A quick test shows that
> >failback is still occurring. Not sure why. Once my primary fails,
> >the first backup gets the traffic as expected. Once the primary comes
> >back online, it services all requests again.
> >
> >I'm using 1.4 and my configuration is nearly identical to the example
> >shown in the blog, sans the peers.
> >
> >Ryan
> >
> >
> >
> >>Ryan O'Hara wrote on 30-1-2014 19:34:
> >>>On Thu, Jan 30, 2014 at 07:14:30PM +0100, PiBa-NL wrote:
> >>>>I'm not 100% sure, but if I remember something I read correctly, it
> >>>>was something like using a "stick on dst" stick-table.
> >>>>
> >>>>That way the stick-table will make sure all traffic goes to a single
> >>>>server, and only when it fails will another server be put into the
> >>>>stick-table, which will only ever hold 1 entry.
> >>>Yes. That sounds accurate.
> >>>
> >>>>You might want to test what happens when the haproxy configuration is
> >>>>reloaded. But if you configure 'peers', the new haproxy process
> >>>>should still have the same 'active' server.
> >>>>
> >>>>P.S. That is, if I'm not mixing stuff up...
> >>>This blog has something very close to what I'd like to deploy:
> >>>
> >>>http://blog.exceliance.fr/2014/01/17/emulating-activepassing-application-clustering-with-haproxy/
> >>>
> >>>The only difference is that I'd like to have more than just one
> >>>backup. I'll try to find some time to experiment in the next few days.
> >>>
> >>>Thanks.
> >>>Ryan
> >>>
> >>>
> >>>>Ryan O'Hara wrote on 30-1-2014 17:42:
> >>>>>I'd like to define a proxy (tcp mode) that has multiple backend
> >>>>>servers yet only uses one at a time. In other words, traffic comes
> >>>>>into the frontend and is redirected to one backend server. Should that
> >>>>>server fail, another is chosen.
> >>>>>
> >>>>>I realize this might be an odd thing to do with haproxy, and if you're
> >>>>>thinking that simple VIP failover (i.e. keepalived) is better suited
> >>>>>for this, you are correct. Long story.
> >>>>>
> >>>>>I've gotten fairly close to achieving this behavior by having all my
> >>>>>backend servers declared 'backup' and not using 'allbackups'. The only
> >>>>>caveat is that these "backup" servers have a preference based on the
> >>>>>order they are defined. Say my servers are defined in the backend like
> >>>>>this:
> >>>>>
> >>>>> server foo-01 ... backup
> >>>>> server foo-02 ... backup
> >>>>> server foo-03 ... backup
> >>>>>
> >>>>>If foo-01 is up, all traffic will go to it. When foo-01 is down, all
> >>>>>traffic will go to foo-02. When foo-01 comes back online, traffic goes
> >>>>>back to foo-01. Ideally the backend server would only change when the
> >>>>>active one fails. Besides, this solution is rather ugly.
> >>>>>
> >>>>>Is there a better way?
> >>>>>
> >>>>>Ryan
> >>>>>
> >>
> 
> 
