Hi Guys,

I was reading an old blog post and found myself running into an issue with
the way stick tables and "nopurge" interact. Basically, what the post
describes does not work as advertised with my version of HAProxy (1.8.23).

What I see is that an entry is correctly added to the stick table, but when
the primary server then fails a health check and the backup server takes
over, that entry is not updated. As a result, the moment the primary server
comes back online, traffic instantly moves away from the backup server.
Remove "nopurge" and my expectations are then met: traffic stays on the
backup server until I take action such as clearing the stick table or
putting the backup server itself into maintenance mode.
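
For reference, my configuration follows the same pattern as the blog post.
A simplified sketch is below; the backend name, addresses and server names
are placeholders rather than my real setup, and the exact directives are in
the post itself:

  backend bk_app
      mode tcp
      balance roundrobin
      # Single-slot table; "nopurge" tells HAProxy not to evict the
      # existing entry to make room for a new one once the table is full.
      stick-table type ip size 1 nopurge
      # Stick on the destination IP so every client shares the one entry.
      stick on dst
      server s1 192.168.1.10:389 check
      server s2 192.168.1.11:389 check backup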

The blog post:
https://www.haproxy.com/blog/emulating-activepassing-application-clustering-with-haproxy/

I can't tell whether this is a bug or a change in the expected behavior of
this feature, but I thought I'd highlight it here in the hope of getting
some advice.

The issue was also covered in the comments on the blog post (why oh why did
I not read those first!):

jean on March 28, 2017 at 6:19 pm
> Hello,
> I feel like I missed something here… When I implement this configuration
> on a simple 2-nodes haproxy solution:
> – the table gets populated after the first request
> # table: bk_ldap_mirror, type: ip, size:1, used:1
> 0x55f490608b74: key=192.168.1.2 use=0 exp=0 server_id=1
> – If I shutdown the s1 backend, failover happens, everything goes to s2,
> but no change in the table.
> – when I put s1 backend back on, all further requests get back to s1
> What I expected:
> – once s1 is down, the server_id value in the stick table would switch to 2
> – when s1 is back online, stick to s2 unless it fails or is pushed to
> maintenance mode, in which case server_id in stick table would change again.
> I’m on haproxy 1.7.3. What am I missing?


wtarreau on May 4, 2017 at 7:18 am
> What you describe is what should happen with this configuration. Either
> you’ve got a mistake or you’re facing a bug, I can’t say for now. Please
> first upgrade to 1.7.5 to fix known bugs and retry. If it doesn’t work, you
> should bring this to the mailing list as it might be a bug.


Eugene Brown on December 18, 2017 at 11:06 pm
> I have found that using nopurge allows for a failback. Removing nopurge
> proves sticky.
> As soon as my original server comes back up, if nopurge is set, the
> connection fails back.
> I did not leave my failed connection dead for an extended time.
> What I don't understand is: if the table size is 1 and is not purged, then
> what is in the table when it fails over to the second connection? I display
> the table and it never changes with nopurge set. But when not set, the table
> updates and the connection persists on the new connection.
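
For what it's worth, that matches what I see. I have been inspecting and
clearing the table over the runtime socket with commands along these lines
(the socket path and the bk_app/s2 names are just the placeholders from my
sketch above, and depend on your own "stats socket" setting and config):

  # Dump the current contents of the stick table
  echo "show table bk_app" | socat stdio /var/run/haproxy.sock

  # Clear the table so the next request re-learns its server
  echo "clear table bk_app" | socat stdio /var/run/haproxy.sock

  # Force traffic off the backup by putting it into maintenance mode
  echo "disable server bk_app/s2" | socat stdio /var/run/haproxy.sock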


So is it a bug? Or is it a change in behavior, in which case we might need
to go back and update the documentation?

Thanks in advance!

Aaron West
