Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-21 Thread Willy Tarreau
On Fri, Dec 22, 2017 at 12:34:45AM +0100, Cyril Bonté wrote:
> And after performing the same tests with the patch applied, I can confirm I
> don't reproduce the issue anymore ;-)

Cool, thanks for your feedback Cyril!
Willy



Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-21 Thread Cyril Bonté

Hi all,

Le 21/12/2017 à 15:25, Willy Tarreau a écrit :

On Thu, Dec 21, 2017 at 02:53:07PM +0100, Emeric Brun wrote:

Hi All,

This bug should be fixed using this patch (patch on dev, and should be
backported to 1.8).


now applied to both branches, thanks!
Willy


And after performing the same tests with the patch applied, I can 
confirm I don't reproduce the issue anymore ;-)


--
Cyril Bonté



Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-21 Thread Willy Tarreau
On Thu, Dec 21, 2017 at 02:53:07PM +0100, Emeric Brun wrote:
> Hi All,
> 
> This bug should be fixed using this patch (patch on dev, and should be
> backported to 1.8).

now applied to both branches, thanks!
Willy



Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-21 Thread Emeric Brun
Hi All,

This bug should be fixed using this patch (patch on dev, and should be
backported to 1.8).

R,
Emeric

On 12/21/2017 10:42 AM, Greg Nolle wrote:
> Thanks guys! I should be able to test the new version this weekend if you are 
> able to issue it before then.
> 
> Best regards,
> Greg
> 
> On Thu, Dec 21, 2017 at 12:15 AM, Willy Tarreau  wrote:
> 
> On Thu, Dec 21, 2017 at 12:04:11AM +0100, Cyril Bonté wrote:
> > Hi Greg,
> >
> > Le 20/12/2017 à 22:42, Greg Nolle a écrit :
> > > Hi Andrew,
> > >
> > > Thanks for the info but I'm afraid I'm not seeing anything here that
> > > would affect the issue I'm seeing, and by the way the docs don't
> > > indicate that the cookie names have to match the server names.
> >
> > First, don't worry about the configuration, there is nothing wrong in 
> it ;-)
> >
> > > That being said, I tried using your settings and am still seeing the
> > > issue (see below for new full config). And like I say, this is only an
> > > issue with v1.8.1, it works as expected in v1.7.9.
> >
> > I won't be able to look further tonight, but at least I could identify when
> > the regression occurred: it's caused by the work done to prepare
> > multi-threading, more specifically by this commit :
> > http://git.haproxy.org/?p=haproxy.git;a=commitdiff;h=64cc49cf7 
> 
> >
> > I've added Emeric to the thread; maybe he'll be able to provide a fix faster
> > than me (I won't be very available for the next few days).
> 
> Thus I'll ping Emeric tomorrow as well so that we can issue 1.8.2 soon in
> case someone wants to play with it on Friday afternoon just before xmas :-)
> 
> Willy
> 
> 

From db483435c294541cbab27babacb9daefc043fd32 Mon Sep 17 00:00:00 2001
From: Emeric Brun 
Date: Thu, 21 Dec 2017 14:42:26 +0100
Subject: [PATCH] BUG/MEDIUM: checks: a server passed in maint state was not
 forced down.

When setting a server to maint mode, the required next_state was not set
before calling the 'lb_down' function, so the new state was never
committed.

This patch should be backported in 1.8
---
 src/server.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/server.c b/src/server.c
index 23e4cc9..a37e919 100644
--- a/src/server.c
+++ b/src/server.c
@@ -4630,10 +4630,11 @@ void srv_update_status(struct server *s)
 		else {	/* server was still running */
 			check->health = 0; /* failure */
 			s->last_change = now.tv_sec;
+
+			s->next_state = SRV_ST_STOPPED;
 			if (s->proxy->lbprm.set_server_status_down)
 				s->proxy->lbprm.set_server_status_down(s);
 
-			s->next_state = SRV_ST_STOPPED;
 			if (s->onmarkeddown & HANA_ONMARKEDDOWN_SHUTDOWNSESSIONS)
 				srv_shutdown_streams(s, SF_ERR_DOWN);
 
-- 
2.7.4
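To see why moving that one line matters, here is a toy model (hypothetical types and logic, not haproxy's actual code) of a commit-style state machine in which the lb "down" callback only acts on a transition already recorded in next_state:

```python
# Toy model (hypothetical, not haproxy code) of the ordering bug the patch
# fixes: the lb "down" callback only commits a transition that is already
# recorded in next_state, so calling it first silently loses the change.
RUNNING, STOPPED = "running", "stopped"

class Server:
    def __init__(self):
        self.cur_state = RUNNING   # committed state
        self.next_state = RUNNING  # pending state

def set_server_status_down(s):
    # stand-in for lbprm.set_server_status_down()
    if s.next_state == STOPPED:
        s.cur_state = STOPPED      # commit the transition
    # otherwise: nothing happens, the maint request is lost

buggy = Server()                   # pre-patch order of operations
set_server_status_down(buggy)
buggy.next_state = STOPPED
print("buggy order:", buggy.cur_state)   # still "running"

fixed = Server()                   # patched order of operations
fixed.next_state = STOPPED
set_server_status_down(fixed)
print("fixed order:", fixed.cur_state)   # "stopped"
```

Under this model the pre-patch ordering leaves the server committed as running, which matches the reported symptom: the maint request never takes effect for persistence decisions.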



Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-21 Thread Greg Nolle
Thanks guys! I should be able to test the new version this weekend if you
are able to issue it before then.

Best regards,
Greg

On Thu, Dec 21, 2017 at 12:15 AM, Willy Tarreau  wrote:

> On Thu, Dec 21, 2017 at 12:04:11AM +0100, Cyril Bonté wrote:
> > Hi Greg,
> >
> > Le 20/12/2017 à 22:42, Greg Nolle a écrit :
> > > Hi Andrew,
> > >
> > > Thanks for the info but I'm afraid I'm not seeing anything here that
> > > would affect the issue I'm seeing, and by the way the docs don't
> > > indicate that the cookie names have to match the server names.
> >
> > First, don't worry about the configuration, there is nothing wrong in it
> ;-)
> >
> > > That being said, I tried using your settings and am still seeing the
> > > issue (see below for new full config). And like I say, this is only an
> > > issue with v1.8.1, it works as expected in v1.7.9.
> >
> > I won't be able to look further tonight, but at least I could identify when
> > the regression occurred: it's caused by the work done to prepare
> > multi-threading, more specifically by this commit :
> > http://git.haproxy.org/?p=haproxy.git;a=commitdiff;h=64cc49cf7
> >
> > I've added Emeric to the thread; maybe he'll be able to provide a fix faster
> > than me (I won't be very available for the next few days).
>
> Thus I'll ping Emeric tomorrow as well so that we can issue 1.8.2 soon in
> case someone wants to play with it on Friday afternoon just before xmas :-)
>
> Willy
>


Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-20 Thread Willy Tarreau
On Thu, Dec 21, 2017 at 12:04:11AM +0100, Cyril Bonté wrote:
> Hi Greg,
> 
> Le 20/12/2017 à 22:42, Greg Nolle a écrit :
> > Hi Andrew,
> > 
> > Thanks for the info but I'm afraid I'm not seeing anything here that
> > would affect the issue I'm seeing, and by the way the docs don't
> > indicate that the cookie names have to match the server names.
> 
> First, don't worry about the configuration, there is nothing wrong in it ;-)
> 
> > That being said, I tried using your settings and am still seeing the
> > issue (see below for new full config). And like I say, this is only an
> > issue with v1.8.1, it works as expected in v1.7.9.
> 
> I won't be able to look further tonight, but at least I could identify when
> the regression occurred: it's caused by the work done to prepare
> multi-threading, more specifically by this commit :
> http://git.haproxy.org/?p=haproxy.git;a=commitdiff;h=64cc49cf7
> 
> I've added Emeric to the thread; maybe he'll be able to provide a fix faster
> than me (I won't be very available for the next few days).

Thus I'll ping Emeric tomorrow as well so that we can issue 1.8.2 soon in
> case someone wants to play with it on Friday afternoon just before xmas :-)

Willy



Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-20 Thread Cyril Bonté

Hi Greg,

Le 20/12/2017 à 22:42, Greg Nolle a écrit :

Hi Andrew,

Thanks for the info but I’m afraid I’m not seeing anything here that 
would affect the issue I’m seeing, and by the way the docs don’t 
indicate that the cookie names have to match the server names.


First, don't worry about the configuration, there is nothing wrong in it ;-)

That being said, I tried using your settings and am still seeing the 
issue (see below for new full config). And like I say, this is only an 
issue with v1.8.1, it works as expected in v1.7.9.


I won't be able to look further tonight, but at least I could identify
when the regression occurred: it's caused by the work done to prepare
multi-threading, more specifically by this commit:
http://git.haproxy.org/?p=haproxy.git;a=commitdiff;h=64cc49cf7


I've added Emeric to the thread; maybe he'll be able to provide a fix faster
than me (I won't be very available for the next few days).


--
Cyril Bonté



Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-20 Thread Greg Nolle
Hi Andrew,

Thanks for the info but I’m afraid I’m not seeing anything here that would
affect the issue I’m seeing, and by the way the docs don’t indicate that
the cookie names have to match the server names.

That being said, I tried using your settings and am still seeing the issue
(see below for new full config). And like I say, this is only an issue with
v1.8.1, it works as expected in v1.7.9.

defaults
  mode http
  option redispatch
  retries 3
  timeout queue 20s
  timeout client 50s
  timeout connect 5s
  timeout server 50s

listen stats
  bind :1936
  stats enable
  stats uri /
  stats hide-version
  stats admin if TRUE

frontend main
  bind :9080
  default_backend main

backend main
  balance leastconn
  cookie SERVERID maxidle 30m maxlife 12h insert nocache indirect
  server server-1-google www.google.com:80 weight 100 cookie server-1-google check port 80 inter 4000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions
  server server-2-yahoo www.yahoo.com:80 weight 100 cookie server-2-yahoo check port 80 inter 4000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions



On Wed, Dec 20, 2017 at 8:57 PM, Andrew Smalley 
wrote:

> Also our cookie line looks as below
>
> cookie SERVERID maxidle 30m maxlife 12h insert nocache indirect
> Andruw Smalley
>
> Loadbalancer.org Ltd.
>
> www.loadbalancer.org
> +1 888 867 9504 / +44 (0)330 380 1064
> asmal...@loadbalancer.org
>
> Leave a Review | Deployment Guides | Blog
>
>
> On 20 December 2017 at 20:55, Andrew Smalley 
> wrote:
> > Greg
> >
> > it's just been pointed out that your cookies are wrong; they would usually
> > match your server name.
> > I would change this
> >
> >   server server-1-google www.google.com:80 check cookie google
> >   server server-2-yahoo www.yahoo.com:80 check cookie yahoo
> >
> >
> > to this
> >
> >   server server-1-google www.google.com:80 check cookie server-1-google
> >   server server-2-yahoo www.yahoo.com:80 check cookie server-2-yahoo
> >
> >
> > We use something like this as a default server line
> >
> > server RIP_Name 172.16.1.1 weight 100 cookie RIP_Name check port 80 inter 4000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions
> > Andruw Smalley
> >
> > Loadbalancer.org Ltd.
> >
> > www.loadbalancer.org
> > +1 888 867 9504 / +44 (0)330 380 1064
> > asmal...@loadbalancer.org
> >
> > Leave a Review | Deployment Guides | Blog
> >
> >
> > On 20 December 2017 at 20:52, Andrew Smalley 
> wrote:
> >> Hi Greg
> >>
> >> Apologies, I was confused with the terminology we use here.
> >>
> >> Indeed MAINT should be the same as our HALT feature,
> >>
> >> Maybe you can share your config and we can see what's wrong?
> >>
> >>
> >> Andruw Smalley
> >>
> >> Loadbalancer.org Ltd.
> >>
> >> www.loadbalancer.org
> >> +1 888 867 9504 / +44 (0)330 380 1064
> >> asmal...@loadbalancer.org
> >>
> >> Leave a Review | Deployment Guides | Blog
> >>
> >>
> >> On 20 December 2017 at 20:45, Greg Nolle 
> wrote:
> >>> Hi Andrew,
> >>>
> >>> I can’t find any reference to a “HALTED” status in the manual. I’m
> >>> *not* referring to “DRAIN” though (which I would expect to behave as
> >>> you describe), I’m referring to "MAINT", i.e. disabling the backend
> >>> server. Here’s the snippet from the management manual to clarify what
> >>> I’m referring to:
> >>>
> >>>> “Setting the state to “maint” disables any traffic to the server as
> well as any health checks"
> >>>
> >>> Best regards,
> >>> Greg
> >>>
> >>> On Wed, Dec 20, 2017 at 8:29 PM, Andrew Smalley
> >>>  wrote:
> >>>> Hi Greg
> >>>>
> >>>> You say traffic still goes to the real server when in MAINT mode,
> >>>> Assuming you mean DRAIN Mode and not HALTED then this is expected.
> >>>>
> >>>> Existing connections still go to a server while DRAINING but no new
> >>>> connections will get there.
> >>>>
> >>>> If the real server is HALTED then no traffic gets to it.
> >>>>
> >>>>
> >>>> Andruw Smalley
> >>>>
> >>>> Loadbalancer.org Ltd.
> >>>>
> >>>> www.loadbalancer.org
> >>>> +1 888 867 9504 / +44 (0)330 380 1064
> >>>> asmal...@l

Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-20 Thread Andrew Smalley
Also our cookie line looks as below

 cookie SERVERID maxidle 30m maxlife 12h insert nocache indirect
Andruw Smalley

Loadbalancer.org Ltd.

www.loadbalancer.org
+1 888 867 9504 / +44 (0)330 380 1064
asmal...@loadbalancer.org

Leave a Review | Deployment Guides | Blog


On 20 December 2017 at 20:55, Andrew Smalley  wrote:
> Greg
>
> it's just been pointed out that your cookies are wrong; they would usually
> match your server name.
> I would change this
>
>   server server-1-google www.google.com:80 check cookie google
>   server server-2-yahoo www.yahoo.com:80 check cookie yahoo
>
>
> to this
>
>   server server-1-google www.google.com:80 check cookie server-1-google
>   server server-2-yahoo www.yahoo.com:80 check cookie server-2-yahoo
>
>
> We use something like this as a default server line
>
> server RIP_Name 172.16.1.1 weight 100 cookie RIP_Name check port 80 inter 4000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions
> Andruw Smalley
>
> Loadbalancer.org Ltd.
>
> www.loadbalancer.org
> +1 888 867 9504 / +44 (0)330 380 1064
> asmal...@loadbalancer.org
>
> Leave a Review | Deployment Guides | Blog
>
>
> On 20 December 2017 at 20:52, Andrew Smalley  
> wrote:
>> Hi Greg
>>
>> Apologies, I was confused with the terminology we use here.
>>
>> Indeed MAINT should be the same as our HALT feature,
>>
>> Maybe you can share your config and we can see what's wrong?
>>
>>
>> Andruw Smalley
>>
>> Loadbalancer.org Ltd.
>>
>> www.loadbalancer.org
>> +1 888 867 9504 / +44 (0)330 380 1064
>> asmal...@loadbalancer.org
>>
>> Leave a Review | Deployment Guides | Blog
>>
>>
>> On 20 December 2017 at 20:45, Greg Nolle  wrote:
>>> Hi Andrew,
>>>
>>> I can’t find any reference to a “HALTED” status in the manual. I’m
>>> *not* referring to “DRAIN” though (which I would expect to behave as
>>> you describe), I’m referring to "MAINT", i.e. disabling the backend
>>> server. Here’s the snippet from the management manual to clarify what
>>> I’m referring to:
>>>
>>>> “Setting the state to “maint” disables any traffic to the server as well 
>>>> as any health checks"
>>>
>>> Best regards,
>>> Greg
>>>
>>> On Wed, Dec 20, 2017 at 8:29 PM, Andrew Smalley
>>>  wrote:
>>>> Hi Greg
>>>>
>>>> You say traffic still goes to the real server when in MAINT mode,
>>>> Assuming you mean DRAIN Mode and not HALTED then this is expected.
>>>>
>>>> Existing connections still go to a server while DRAINING but no new
>>>> connections will get there.
>>>>
>>>> If the real server is HALTED then no traffic gets to it.
>>>>
>>>>
>>>> Andruw Smalley
>>>>
>>>> Loadbalancer.org Ltd.
>>>>
>>>> www.loadbalancer.org
>>>> +1 888 867 9504 / +44 (0)330 380 1064
>>>> asmal...@loadbalancer.org
>>>>
>>>> Leave a Review | Deployment Guides | Blog
>>>>
>>>>
>>>> On 20 December 2017 at 20:26, Greg Nolle  wrote:
>>>>> When cookie persistence is used, it seems that the status of the
>>>>> servers in the backend is ignored in v1.8.1. I try marking as MAINT a
>>>>> backend server for which my browser has been given a cookie but
>>>>> subsequent requests still go to that server (as verified in the
>>>>> stats). The same issue happens when I use a stick table.
>>>>>
>>>>> I’ve included a simple example config where this happens at the
>>>>> bottom. The exact same config in v1.7.9 gives the expected behaviour
>>>>> that new requests are migrated to a different active backend server.
>>>>>
>>>>> Any ideas?
>>>>>
>>>>> Many thanks,
>>>>> Greg
>>>>>
>>>>> defaults
>>>>>   mode http
>>>>>   option redispatch
>>>>>   retries 3
>>>>>   timeout queue 20s
>>>>>   timeout client 50s
>>>>>   timeout connect 5s
>>>>>   timeout server 50s
>>>>>
>>>>> listen stats
>>>>>   bind :1936
>>>>>   stats enable
>>>>>   stats uri /
>>>>>   stats hide-version
>>>>>   stats admin if TRUE
>>>>>
>>>>> frontend main
>>>>>   bind :9080
>>>>>   default_backend main
>>>>>
>>>>> backend main
>>>>>   balance leastconn
>>>>>   cookie SERVERID insert indirect nocache
>>>>>   server server-1-google www.google.com:80 check cookie google
>>>>>   server server-2-yahoo www.yahoo.com:80 check cookie yahoo
>>>>>
>>>>



Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-20 Thread Andrew Smalley
Greg

it's just been pointed out that your cookies are wrong; they would usually
match your server name.
I would change this

  server server-1-google www.google.com:80 check cookie google
  server server-2-yahoo www.yahoo.com:80 check cookie yahoo


to this

  server server-1-google www.google.com:80 check cookie server-1-google
  server server-2-yahoo www.yahoo.com:80 check cookie server-2-yahoo


We use something like this as a default server line

  server RIP_Name 172.16.1.1 weight 100 cookie RIP_Name check port 80 inter 4000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions
Andruw Smalley

Loadbalancer.org Ltd.

www.loadbalancer.org
+1 888 867 9504 / +44 (0)330 380 1064
asmal...@loadbalancer.org

Leave a Review | Deployment Guides | Blog


On 20 December 2017 at 20:52, Andrew Smalley  wrote:
> Hi Greg
>
> Apologies, I was confused with the terminology we use here.
>
> Indeed MAINT should be the same as our HALT feature,
>
> Maybe you can share your config and we can see what's wrong?
>
>
> Andruw Smalley
>
> Loadbalancer.org Ltd.
>
> www.loadbalancer.org
> +1 888 867 9504 / +44 (0)330 380 1064
> asmal...@loadbalancer.org
>
> Leave a Review | Deployment Guides | Blog
>
>
> On 20 December 2017 at 20:45, Greg Nolle  wrote:
>> Hi Andrew,
>>
>> I can’t find any reference to a “HALTED” status in the manual. I’m
>> *not* referring to “DRAIN” though (which I would expect to behave as
>> you describe), I’m referring to "MAINT", i.e. disabling the backend
>> server. Here’s the snippet from the management manual to clarify what
>> I’m referring to:
>>
>>> “Setting the state to “maint” disables any traffic to the server as well as 
>>> any health checks"
>>
>> Best regards,
>> Greg
>>
>> On Wed, Dec 20, 2017 at 8:29 PM, Andrew Smalley
>>  wrote:
>>> Hi Greg
>>>
>>> You say traffic still goes to the real server when in MAINT mode,
>>> Assuming you mean DRAIN Mode and not HALTED then this is expected.
>>>
>>> Existing connections still go to a server while DRAINING but no new
>>> connections will get there.
>>>
>>> If the real server is HALTED then no traffic gets to it.
>>>
>>>
>>> Andruw Smalley
>>>
>>> Loadbalancer.org Ltd.
>>>
>>> www.loadbalancer.org
>>> +1 888 867 9504 / +44 (0)330 380 1064
>>> asmal...@loadbalancer.org
>>>
>>> Leave a Review | Deployment Guides | Blog
>>>
>>>
>>> On 20 December 2017 at 20:26, Greg Nolle  wrote:
>>>> When cookie persistence is used, it seems that the status of the
>>>> servers in the backend is ignored in v1.8.1. I try marking as MAINT a
>>>> backend server for which my browser has been given a cookie but
>>>> subsequent requests still go to that server (as verified in the
>>>> stats). The same issue happens when I use a stick table.
>>>>
>>>> I’ve included a simple example config where this happens at the
>>>> bottom. The exact same config in v1.7.9 gives the expected behaviour
>>>> that new requests are migrated to a different active backend server.
>>>>
>>>> Any ideas?
>>>>
>>>> Many thanks,
>>>> Greg
>>>>
>>>> defaults
>>>>   mode http
>>>>   option redispatch
>>>>   retries 3
>>>>   timeout queue 20s
>>>>   timeout client 50s
>>>>   timeout connect 5s
>>>>   timeout server 50s
>>>>
>>>> listen stats
>>>>   bind :1936
>>>>   stats enable
>>>>   stats uri /
>>>>   stats hide-version
>>>>   stats admin if TRUE
>>>>
>>>> frontend main
>>>>   bind :9080
>>>>   default_backend main
>>>>
>>>> backend main
>>>>   balance leastconn
>>>>   cookie SERVERID insert indirect nocache
>>>>   server server-1-google www.google.com:80 check cookie google
>>>>   server server-2-yahoo www.yahoo.com:80 check cookie yahoo
>>>>
>>>



Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-20 Thread Andrew Smalley
Hi Greg

Apologies, I was confused with the terminology we use here.

Indeed MAINT should be the same as our HALT feature,

Maybe you can share your config and we can see what's wrong?


Andruw Smalley

Loadbalancer.org Ltd.

www.loadbalancer.org
+1 888 867 9504 / +44 (0)330 380 1064
asmal...@loadbalancer.org

Leave a Review | Deployment Guides | Blog


On 20 December 2017 at 20:45, Greg Nolle  wrote:
> Hi Andrew,
>
> I can’t find any reference to a “HALTED” status in the manual. I’m
> *not* referring to “DRAIN” though (which I would expect to behave as
> you describe), I’m referring to "MAINT", i.e. disabling the backend
> server. Here’s the snippet from the management manual to clarify what
> I’m referring to:
>
>> “Setting the state to “maint” disables any traffic to the server as well as 
>> any health checks"
>
> Best regards,
> Greg
>
> On Wed, Dec 20, 2017 at 8:29 PM, Andrew Smalley
>  wrote:
>> Hi Greg
>>
>> You say traffic still goes to the real server when in MAINT mode,
>> Assuming you mean DRAIN Mode and not HALTED then this is expected.
>>
>> Existing connections still go to a server while DRAINING but no new
>> connections will get there.
>>
>> If the real server is HALTED then no traffic gets to it.
>>
>>
>> Andruw Smalley
>>
>> Loadbalancer.org Ltd.
>>
>> www.loadbalancer.org
>> +1 888 867 9504 / +44 (0)330 380 1064
>> asmal...@loadbalancer.org
>>
>> Leave a Review | Deployment Guides | Blog
>>
>>
>> On 20 December 2017 at 20:26, Greg Nolle  wrote:
>>> When cookie persistence is used, it seems that the status of the
>>> servers in the backend is ignored in v1.8.1. I try marking as MAINT a
>>> backend server for which my browser has been given a cookie but
>>> subsequent requests still go to that server (as verified in the
>>> stats). The same issue happens when I use a stick table.
>>>
>>> I’ve included a simple example config where this happens at the
>>> bottom. The exact same config in v1.7.9 gives the expected behaviour
>>> that new requests are migrated to a different active backend server.
>>>
>>> Any ideas?
>>>
>>> Many thanks,
>>> Greg
>>>
>>> defaults
>>>   mode http
>>>   option redispatch
>>>   retries 3
>>>   timeout queue 20s
>>>   timeout client 50s
>>>   timeout connect 5s
>>>   timeout server 50s
>>>
>>> listen stats
>>>   bind :1936
>>>   stats enable
>>>   stats uri /
>>>   stats hide-version
>>>   stats admin if TRUE
>>>
>>> frontend main
>>>   bind :9080
>>>   default_backend main
>>>
>>> backend main
>>>   balance leastconn
>>>   cookie SERVERID insert indirect nocache
>>>   server server-1-google www.google.com:80 check cookie google
>>>   server server-2-yahoo www.yahoo.com:80 check cookie yahoo
>>>
>>



Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-20 Thread Greg Nolle
Hi Andrew,

I can’t find any reference to a “HALTED” status in the manual. I’m
*not* referring to “DRAIN” though (which I would expect to behave as
you describe), I’m referring to "MAINT", i.e. disabling the backend
server. Here’s the snippet from the management manual to clarify what
I’m referring to:

> “Setting the state to “maint” disables any traffic to the server as well as 
> any health checks"

Best regards,
Greg

On Wed, Dec 20, 2017 at 8:29 PM, Andrew Smalley
 wrote:
> Hi Greg
>
> You say traffic still goes to the real server when in MAINT mode,
> Assuming you mean DRAIN Mode and not HALTED then this is expected.
>
> Existing connections still go to a server while DRAINING but no new
> connections will get there.
>
> If the real server is HALTED then no traffic gets to it.
>
>
> Andruw Smalley
>
> Loadbalancer.org Ltd.
>
> www.loadbalancer.org
> +1 888 867 9504 / +44 (0)330 380 1064
> asmal...@loadbalancer.org
>
> Leave a Review | Deployment Guides | Blog
>
>
> On 20 December 2017 at 20:26, Greg Nolle  wrote:
>> When cookie persistence is used, it seems that the status of the
>> servers in the backend is ignored in v1.8.1. I try marking as MAINT a
>> backend server for which my browser has been given a cookie but
>> subsequent requests still go to that server (as verified in the
>> stats). The same issue happens when I use a stick table.
>>
>> I’ve included a simple example config where this happens at the
>> bottom. The exact same config in v1.7.9 gives the expected behaviour
>> that new requests are migrated to a different active backend server.
>>
>> Any ideas?
>>
>> Many thanks,
>> Greg
>>
>> defaults
>>   mode http
>>   option redispatch
>>   retries 3
>>   timeout queue 20s
>>   timeout client 50s
>>   timeout connect 5s
>>   timeout server 50s
>>
>> listen stats
>>   bind :1936
>>   stats enable
>>   stats uri /
>>   stats hide-version
>>   stats admin if TRUE
>>
>> frontend main
>>   bind :9080
>>   default_backend main
>>
>> backend main
>>   balance leastconn
>>   cookie SERVERID insert indirect nocache
>>   server server-1-google www.google.com:80 check cookie google
>>   server server-2-yahoo www.yahoo.com:80 check cookie yahoo
>>
>



Re: Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-20 Thread Andrew Smalley
Hi Greg

You say traffic still goes to the real server when in MAINT mode.
Assuming you mean DRAIN mode and not HALTED, this is expected.

Existing connections still go to a server while DRAINING but no new
connections will get there.

If the real server is HALTED then no traffic gets to it.


Andruw Smalley

Loadbalancer.org Ltd.

www.loadbalancer.org
+1 888 867 9504 / +44 (0)330 380 1064
asmal...@loadbalancer.org

Leave a Review | Deployment Guides | Blog


On 20 December 2017 at 20:26, Greg Nolle  wrote:
> When cookie persistence is used, it seems that the status of the
> servers in the backend is ignored in v1.8.1. I try marking as MAINT a
> backend server for which my browser has been given a cookie but
> subsequent requests still go to that server (as verified in the
> stats). The same issue happens when I use a stick table.
>
> I’ve included a simple example config where this happens at the
> bottom. The exact same config in v1.7.9 gives the expected behaviour
> that new requests are migrated to a different active backend server.
>
> Any ideas?
>
> Many thanks,
> Greg
>
> defaults
>   mode http
>   option redispatch
>   retries 3
>   timeout queue 20s
>   timeout client 50s
>   timeout connect 5s
>   timeout server 50s
>
> listen stats
>   bind :1936
>   stats enable
>   stats uri /
>   stats hide-version
>   stats admin if TRUE
>
> frontend main
>   bind :9080
>   default_backend main
>
> backend main
>   balance leastconn
>   cookie SERVERID insert indirect nocache
>   server server-1-google www.google.com:80 check cookie google
>   server server-2-yahoo www.yahoo.com:80 check cookie yahoo
>



Traffic delivered to disabled server when cookie persistence is enabled after upgrading to 1.8.1

2017-12-20 Thread Greg Nolle
When cookie persistence is used, it seems that the status of the
servers in the backend is ignored in v1.8.1. I try marking as MAINT a
backend server for which my browser has been given a cookie but
subsequent requests still go to that server (as verified in the
stats). The same issue happens when I use a stick table.

I’ve included a simple example config where this happens at the
bottom. The exact same config in v1.7.9 gives the expected behaviour
that new requests are migrated to a different active backend server.

Any ideas?

Many thanks,
Greg

defaults
  mode http
  option redispatch
  retries 3
  timeout queue 20s
  timeout client 50s
  timeout connect 5s
  timeout server 50s

listen stats
  bind :1936
  stats enable
  stats uri /
  stats hide-version
  stats admin if TRUE

frontend main
  bind :9080
  default_backend main

backend main
  balance leastconn
  cookie SERVERID insert indirect nocache
  server server-1-google www.google.com:80 check cookie google
  server server-2-yahoo www.yahoo.com:80 check cookie yahoo
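The expected behaviour described above (what 1.7.9 does) can be pictured with a toy sketch in plain Python. This is not haproxy code; it only models the decision being discussed: a persistence cookie should pin a request to a server only while that server is usable, and with `option redispatch` set, an unusable server should be skipped in favour of another running one.

```python
# Toy model of cookie-persistence server selection (not haproxy code).
RUNNING, MAINT = "running", "maint"

def pick_server(servers, cookie=None):
    """servers: dict name -> state; returns the server the request goes to."""
    if cookie in servers and servers[cookie] == RUNNING:
        return cookie                      # honour persistence
    # persisted server unusable -> redispatch to any running server
    candidates = [name for name, state in servers.items() if state == RUNNING]
    return candidates[0] if candidates else None

servers = {"server-1-google": RUNNING, "server-2-yahoo": RUNNING}
print(pick_server(servers, cookie="server-1-google"))  # server-1-google

servers["server-1-google"] = MAINT
print(pick_server(servers, cookie="server-1-google"))  # server-2-yahoo
```

The bug reported in this thread corresponds to the second call still returning the MAINT server.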



Re: Cookie persistence - what am I doing wrong?

2015-01-14 Thread Cyril Bonté

Hi Shawn,

Le 15/01/2015 01:59, Shawn Heisey a écrit :

I'm trying to ensure that multiple connections from the same browser end
up on the same back end server, and having lots of trouble.  All my work
with haproxy up to now has been with connections that don't need
persistence - everything relevant happens in one http request.

This is probably PEBCAK or ID10T ... but I am not seeing my mistake.

Here's the frontend and backend I've got:

frontend nc-80
description Front end that accepts requests for production.
bind X.X.X.72:80
acl blockit path_beg -i /v2.0
http-request deny if blockit
default_backend nc-80-backend

backend nc-80-backend
 description Back end for main site
 cookie JSESSIONID prefix
 server frontier 10.100.2.25:80 weight 100 track apache80/frontier
 server fremont 10.100.2.26:80 weight 100 track apache80/fremont
 server fiesta 10.100.2.29:80 weight 150 track apache80/fiesta


You need to specify a cookie value for each server, with the "cookie"
keyword:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#cookie%20%28Server%20and%20default-server%20options%29


For example:
server frontier 10.100.2.25:80 weight 100 cookie frontier track apache80/frontier
server fremont 10.100.2.26:80 weight 100 cookie fremont track apache80/fremont
server fiesta 10.100.2.29:80 weight 150 cookie fiesta track apache80/fiesta


Use any value you want, but keep them unique so that requests stick to a
specific server.


For an application session JSESSIONID 12345678901234567890123456789012,
the client side will see cookies like:

JSESSIONID=frontier~12345678901234567890123456789012
JSESSIONID=fremont~12345678901234567890123456789012
JSESSIONID=fiesta~12345678901234567890123456789012
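The value rewriting that produces those cookies can be sketched with a small illustration (a toy model only, not haproxy's implementation; the `~` separator is the one visible in the examples above):

```python
# Illustrative sketch of the "cookie JSESSIONID prefix" value rewriting.
def to_client(server_cookie: str, app_value: str) -> str:
    """What the browser stores: the server's cookie value prefixed with '~'."""
    return f"{server_cookie}~{app_value}"

def to_backend(client_value: str) -> tuple:
    """What is recovered on the way in: (server cookie value, original app value)."""
    server, sep, app_value = client_value.partition("~")
    return (server, app_value) if sep else (None, client_value)

sid = "12345678901234567890123456789012"
print(to_client("frontier", sid))        # frontier~12345678901234567890123456789012
print(to_backend(f"frontier~{sid}"))
```

Because the application's own JSESSIONID survives intact after the prefix is stripped, the backend servers never see the persistence information.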



The log lines all have --NN in them with these settings, and requests
from a single browser page load are hitting all three webservers.

I also tried 'cookie SRV insert indirect nocache' with no better
results.  With this, the log lines all have --NI.


It will also work once cookie values are set.



Here's the -vv output:

HA-Proxy version 1.5.8 2014/10/31
Copyright 2000-2014 Willy Tarreau 

Build options :
   TARGET  = linux26
   CPU = generic
   CC  = gcc
   CFLAGS  = -O2 -g -fno-strict-aliasing
   OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
Running on OpenSSL version : OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : no (version might be too old, 0.9.8f min
needed)
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 6.6 06-Feb-2006
PCRE library supports JIT : no (USE_PCRE_JIT not set)

Available polling systems :
   epoll : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Thanks,
Shawn




--
Cyril Bonté



Cookie persistence - what am I doing wrong?

2015-01-14 Thread Shawn Heisey
I'm trying to ensure that multiple connections from the same browser end
up on the same back end server, and having lots of trouble.  All my work
with haproxy up to now has been with connections that don't need
persistence - everything relevant happens in one http request.

This is probably PEBCAK or ID10T ... but I am not seeing my mistake.

Here's the frontend and backend I've got:

frontend nc-80
   description Front end that accepts requests for production.
   bind X.X.X.72:80
   acl blockit path_beg -i /v2.0
   http-request deny if blockit
   default_backend nc-80-backend

backend nc-80-backend
description Back end for main site
cookie JSESSIONID prefix
server frontier 10.100.2.25:80 weight 100 track apache80/frontier
server fremont 10.100.2.26:80 weight 100 track apache80/fremont
server fiesta 10.100.2.29:80 weight 150 track apache80/fiesta
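
In prefix mode, haproxy rewrites a cookie that the application itself sets, and it needs a per-server value to prefix it with. A plausible fix for the backend above, sketched and untested (it assumes the application really emits a JSESSIONID cookie; if it never does, there is nothing to prefix and persistence cannot engage):

```haproxy
backend nc-80-backend
    description Back end for main site
    cookie JSESSIONID prefix
    # each server line needs a "cookie <value>" keyword, otherwise
    # haproxy has no prefix to insert and persistence never applies
    server frontier 10.100.2.25:80 cookie frontier weight 100 track apache80/frontier
    server fremont  10.100.2.26:80 cookie fremont  weight 100 track apache80/fremont
    server fiesta   10.100.2.29:80 cookie fiesta   weight 150 track apache80/fiesta
```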

The log lines all have --NN in them with these settings, and requests
from a single browser page load are hitting all three webservers.

I also tried 'cookie SRV insert indirect nocache' with no better
results.  With this, the log lines all have --NI.

Here's the -vv output:

HA-Proxy version 1.5.8 2014/10/31
Copyright 2000-2014 Willy Tarreau 

Build options :
  TARGET  = linux26
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
Running on OpenSSL version : OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : no (version might be too old, 0.9.8f min
needed)
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 6.6 06-Feb-2006
PCRE library supports JIT : no (USE_PCRE_JIT not set)

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Thanks,
Shawn



cookie persistence when a server is down

2014-09-30 Thread Colin Ingarfield

Hello,

I'm testing HAProxy's cookie based persistence feature(s) and I have a 
question.  Currently I have 2 test servers set up behind HAProxy. They 
use a JSESSIONID cookie like many java application servers.


In haproxy.cfg I have these persistence settings:

server server1 127.0.0.1:9443 ssl verify none check cookie server1
server server2 172.28.128.3:9443 ssl verify none check cookie server2

cookie JSESSIONID prefix

This works as expected.  HAProxy adds the prefix to the cookie and this 
enables "sticky" sessions.


When I put, for example, server1 into maintenance, HAProxy routes 
server1 clients to server2.  I can see this in the HAProxy logs with 
termination flags "--DN".  When I put server1 back in service, it routes 
server1 clients back to server1, because the cookie has not changed 
(flag "--VN").


But what if I had a server3?  When I put server1 in maint, will server1 
clients be randomly routed to server2 & 3 on each request?  Or are they 
somehow temporarily persisted to server 2 or 3 until server1 becomes 
available again?
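
A sketch of how prefix mode behaves in this scenario, based on the cookie format shown elsewhere in this archive (server names hypothetical; check the configuration manual for the authoritative behaviour):

```haproxy
# application sets:            JSESSIONID=ABC123
# client receives (prefixed):  JSESSIONID=server1~ABC123
#
# with "cookie JSESSIONID prefix":
#   - prefix designates an UP server  -> the request is routed there
#   - prefix designates a DOWN server -> the request is redispatched by
#     the balance algorithm, so with server2 and server3 both up, each
#     request may land on either of them until a server re-emits the
#     cookie and a new prefix is written
```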


Thank you,
Colin Ingarfield



Re: Cookie Persistence and Backend Recognition of Server Change

2013-01-03 Thread KT Walrus
Nevermind.  I solved my problem by having the backend save the sessionDB server 
id in its SESSION_ID cookie.  If the SESSION_ID cookie isn't the same server id 
as the localhost sessionDB, it knows a change has been made and it will first 
copy the session data out of the read-only slave sessionDB to the localhost 
sessionDB (updating the SESSION_ID cookie) before proceeding to handle the 
request.

On Jan 3, 2013, at 12:47 PM, Kevin Heatwole  wrote:

> I'm thinking of using cookie persistence to stick a user to the same backend 
> (if available) for all requests coming from the user.
> 
> But, I need to handle the case where HAProxy switches the user to a different 
> backend (because the original backend has gone offline or MAXCONN reached) 
> than the one saved in the cookie.  
> 
> My question is:  Can the backends tell when the frontend has switched to a 
> different backend server than the one saved in the cookie?
> 
> I assume so, but I'm wondering how to do this.  Have the backend save the 
> frontend cookie value in another cookie, if the frontend cookie has changed?  
> Or, is it simpler than this and the frontend can set a request attribute 
> (X-Server-Changed?) that the backend simply checks?
> 
> I need to copy previous session data to the new backend sessionDB (from the 
> slave sessionDB backup) to continue processing the user requests 
> uninterrupted on the new backend.
> 
> Kevin
> 
> 




Cookie Persistence and Backend Recognition of Server Change

2013-01-03 Thread Kevin Heatwole
I'm thinking of using cookie persistence to stick a user to the same backend 
(if available) for all requests coming from the user.

But, I need to handle the case where HAProxy switches the user to a different 
backend (because the original backend has gone offline or MAXCONN reached) than 
the one saved in the cookie.  

My question is:  Can the backends tell when the frontend has switched to a 
different backend server than the one saved in the cookie?

I assume so, but I'm wondering how to do this.  Have the backend save the 
frontend cookie value in another cookie, if the frontend cookie has changed?  
Or, is it simpler than this and the frontend can set a request attribute 
(X-Server-Changed?) that the backend simply checks?

I need to copy previous session data to the new backend sessionDB (from the 
slave sessionDB backup) to continue processing the user requests uninterrupted 
on the new backend.

Kevin




Re: multi location and cookie persistence on SERVERID

2012-02-01 Thread Baptiste
Brilliant!
Thanks for keeping us up to date.

Cheers



Re: multi location and cookie persistence on SERVERID

2012-01-31 Thread eni-urge...@scan-eco.com


Hello,

I managed to configure this setup.

Thanks to Baptiste and Willy for their patience.

And thanks to the dev team for this fabulous product.

Le 27/01/2012 08:11, Baptiste a écrit :

Ok, I understand now why you're doing it like that :)

Let me update Willy's example:


frontend site1
bind :80

monitor-uri /check
monitor fail if { nbsrv(local) le 0 }

acl local_ok nbsrv(local) gt 0
acl site2_ok nbsrv(site2) gt 0
acl site3_ok nbsrv(site3) gt 0

acl is_site1 hdr_sub(cookie) SERVERID=a
acl is_site2 hdr_sub(cookie) SERVERID=b
acl is_site3 hdr_sub(cookie) SERVERID=c

use_backend site2 if is_site2 site2_ok
use_backend site3 if is_site3 site3_ok
use_backend site2 if !local_ok site2_ok
use_backend site3 if !local_ok site3_ok

default_backend local

 backend local
# handles site1's traffic as well as non-site specific traffic
# all cookies are prefixed with "a"
cookie SERVERID
server srv1 1.0.0.1:80 cookie a1 check
server srv2 1.0.0.1:80 cookie a2 check
server srv3 1.0.0.1:80 cookie a3 check

 backend site2
# reroute traffic to site 2's load balancer
option httpchk GET /check
server site2 2.2.2.2:80 check

 backend site3
# reroute traffic to site 3's load balancer
option httpchk GET /check
server site3 3.3.3.3:80 check




Note that I have not tested this example, so there may be some mistakes.
The idea here is to count the number of available servers in a backend,
then take routing decisions based on this information.
Each site must monitor its local backend and provide its status to the
others (monitor* lines), and pick up status from the remote backends.

Tell me if you managed to configure your setup.

cheers






Re: multi location and cookie persistence on SERVERID

2012-01-29 Thread Willy Tarreau
On Fri, Jan 27, 2012 at 09:34:04AM +0100, eni-urge...@scan-eco.com wrote:
> Hello and thank you for your answer.
> 
> I thought it was something to do with "monitor fail if" but I didn't 
> understand that it's possible to count the number of servers in a backend.

In fact, the nbsrv ACL was made *exactly* for that purpose :-)

> I will test this ASAP and write back to the mailing list.

Baptiste's example should work. When you write such ACLs, you have to
always keep the worst case in mind then imagine all combinations (eg:
both are dead, local is dead, remote is dead, inter-site is dead). It
is possible that you end up with a bit of complexity to handle all the
situations, but with some comments in the config file to explain what
situation you're covering with each rule, it should not be an issue at
all.

Regards,
Willy




Re: multi location and cookie persistence on SERVERID

2012-01-27 Thread eni-urge...@scan-eco.com

Hello and thank you for your answer.

I thought it was something to do with "monitor fail if" but I didn't 
understand that it's possible to count the number of servers in a backend.


I will test this ASAP and write back to the mailing list.

Thanks to you for your help and thanks to the dev team of haproxy. I 
really love this product.



Le 27/01/2012 08:11, Baptiste a écrit :

Ok, I understand now why you're doing it like that :)

Let me update Willy's example:


frontend site1
bind :80

monitor-uri /check
monitor fail if { nbsrv(local) le 0 }

acl local_ok nbsrv(local) gt 0
acl site2_ok nbsrv(site2) gt 0
acl site3_ok nbsrv(site3) gt 0

acl is_site1 hdr_sub(cookie) SERVERID=a
acl is_site2 hdr_sub(cookie) SERVERID=b
acl is_site3 hdr_sub(cookie) SERVERID=c

use_backend site2 if is_site2 site2_ok
use_backend site3 if is_site3 site3_ok
use_backend site2 if !local_ok site2_ok
use_backend site3 if !local_ok site3_ok

default_backend local

 backend local
# handles site1's traffic as well as non-site specific traffic
# all cookies are prefixed with "a"
cookie SERVERID
server srv1 1.0.0.1:80 cookie a1 check
server srv2 1.0.0.1:80 cookie a2 check
server srv3 1.0.0.1:80 cookie a3 check

 backend site2
# reroute traffic to site 2's load balancer
option httpchk GET /check
server site2 2.2.2.2:80 check

 backend site3
# reroute traffic to site 3's load balancer
option httpchk GET /check
server site3 3.3.3.3:80 check




Note that I have not tested this example, so there may be some mistakes.
The idea here is to count the number of available servers in a backend,
then take routing decisions based on this information.
Each site must monitor its local backend and provide its status to the
others (monitor* lines), and pick up status from the remote backends.

Tell me if you managed to configure your setup.

cheers






Re: multi location and cookie persistence on SERVERID

2012-01-26 Thread Baptiste
Ok, I understand now why you're doing it like that :)

Let me update Willy's example:


frontend site1
bind :80

monitor-uri /check
monitor fail if { nbsrv(local) le 0 }

acl local_ok nbsrv(local) gt 0
acl site2_ok nbsrv(site2) gt 0
acl site3_ok nbsrv(site3) gt 0

acl is_site1 hdr_sub(cookie) SERVERID=a
acl is_site2 hdr_sub(cookie) SERVERID=b
acl is_site3 hdr_sub(cookie) SERVERID=c

use_backend site2 if is_site2 site2_ok
use_backend site3 if is_site3 site3_ok
use_backend site2 if !local_ok site2_ok
use_backend site3 if !local_ok site3_ok

default_backend local

backend local
# handles site1's traffic as well as non-site specific traffic
# all cookies are prefixed with "a"
cookie SERVERID
server srv1 1.0.0.1:80 cookie a1 check
server srv2 1.0.0.1:80 cookie a2 check
server srv3 1.0.0.1:80 cookie a3 check

backend site2
# reroute traffic to site 2's load balancer
option httpchk GET /check
server site2 2.2.2.2:80 check

backend site3
# reroute traffic to site 3's load balancer
option httpchk GET /check
server site3 3.3.3.3:80 check




Note that I have not tested this example, so there may be some mistakes.
The idea here is to count the number of available servers in a backend,
then take routing decisions based on this information.
Each site must monitor its local backend and provide its status to the
others (monitor* lines), and pick up status from the remote backends.
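
The monitor* point above implies that each remote load balancer must expose the URI the other sites probe with "option httpchk GET /check". A sketch of the matching piece on site2's own haproxy (untested, mirrors the site1 frontend):

```haproxy
# on site2's load balancer: answer the /check probes sent by site1
frontend site2
    bind :80
    monitor-uri /check
    monitor fail if { nbsrv(local) le 0 }
    default_backend local
```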

Tell me if you managed to configure your setup.

cheers



Re: multi location and cookie persistence on SERVERID

2012-01-26 Thread eni-urge...@scan-eco.com

I followed some willy's advise in a old mail
http://permalink.gmane.org/gmane.comp.web.haproxy/3238


Le 26/01/2012 17:08, eni-urge...@scan-eco.com a écrit :

Hello, thank you for your advice.

I don't know why I configured 2 different backends. I think I saw this 
config on a website and thought it was the best for me.






Le 26/01/2012 07:33, Baptiste a écrit :

Bonjour,

Well, as far as I can see, this is due to your configuration!
Why route users in the frontend using the persistence cookie?
You should take routing decisions based on the number of servers
remaining in a backend, or use options like "allbackups", putting
all your servers in a single backend with the backup keyword for the
servers which are not located locally.

If you need more help for both configuration, let me know.

cheers









Re: multi location and cookie persistence on SERVERID

2012-01-26 Thread eni-urge...@scan-eco.com

Hello, thank you for your advice.

I don't know why I configured 2 different backends. I think I saw this 
config on a website and thought it was the best for me.






Le 26/01/2012 07:33, Baptiste a écrit :

Bonjour,

Well, as far as I can see, this is due to your configuration!
Why route users in the frontend using the persistence cookie?
You should take routing decisions based on the number of servers
remaining in a backend, or use options like "allbackups", putting
all your servers in a single backend with the backup keyword for the
servers which are not located locally.

If you need more help for both configuration, let me know.

cheers






Re: multi location and cookie persistence on SERVERID

2012-01-25 Thread Baptiste
Bonjour,

Well, as far as I can see, this is due to your configuration!
Why route users in the frontend using the persistence cookie?
You should take routing decisions based on the number of servers
remaining in a backend, or use options like "allbackups", putting
all your servers in a single backend with the backup keyword for the
servers which are not located locally.

If you need more help for both configuration, let me know.

cheers



Re: Cookie persistence

2011-10-19 Thread Willy Tarreau
On Thu, Oct 20, 2011 at 12:55:25PM +0900, Ist Conne wrote:
> So, it is a difficult problem.
> Do we not have a workaround?

I am not a lawyer either but I would recommend at some point that you
leave patent issues aside. If you really care about them, then you'll
quickly need to find another job : *everything* in IT is patented. I
really mean *everything*. Right now you can't write a 20-line piece
of code which does not infringe on a patent you don't know about.

Whatever the LB you'll use, you should not use Linux to host it since
it's said to infringe a large number of patents from many companies too.

The problem with patents is that any obvious improvement made to a
product has to be patented if you don't want your competitor to file
it before you and prevent you from using it. I personally don't want
to play with that (and I don't have the money nor the time either).

F5 used to sue a few competitors several years ago and came to a
settlement. I suspect it's just because they need to protect their
patents if they don't want to lose them. I'm not aware of them being
aggressive in this area. After all, they're using a lot of open source
and running Linux in their products too :-)

Also, I'd say that their patent covers network devices, possibly those
processing contents at the packet level. Haproxy does that as a proxy
server and strictly applies exactly what cookies were made for : have
a server send a location information to a client so that when the client
brings that information back, the server knows where to look it up. Many
application servers were already using cookies that way long before the
patent filing.

Last, if you're still scared, you might consider using haproxy's cookie
prefix mode which has a number of advantages in some circumstances, and
is not covered by the patent.
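
For reference, the prefix mode mentioned here is the "cookie <name> prefix" form; a minimal sketch (untested, server names hypothetical), matching the cookie format shown earlier in this archive:

```haproxy
backend app
    cookie JSESSIONID prefix
    server s1 10.0.0.1:80 cookie s1 check
    server s2 10.0.0.2:80 cookie s2 check
# the application's own cookie JSESSIONID=ABC123 reaches the client as
# JSESSIONID=s1~ABC123; haproxy strips the "s1~" prefix on the way back in
```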

Once again, I'm a software developer/designer, call me as you want, but
I'm not a lawyer. It's not my job and I have no skills there. So your
concerns should be addressed to a lawyer, or better, in doubt don't use
any software that is not certified 100% patent-safe (which probably means
don't use any software at all).

Regards,
Willy




Re: Cookie persistence

2011-10-19 Thread Ist Conne
Thanks for reply

So, it is a difficult problem.
Do we not have a workaround?

2011/10/17 Holger Just :
> On 2011-10-17 14:48, Ist Conne wrote:
>> HAProxy supports cookie-based persistence.
>> But cookie-based load balancing is patented by F5 Networks.
>> http://www.google.com/patents/about?id=3MYLEBAJ
>
> Without being a lawyer, I'd play the prior art card as HAProxy supported
> cookie based persistence since 1.0, which dates prior to the patent filing.
>
> That said, I think the patent might actually be a nuisance which might
> produce some serious costs and headaches if F5 is determined to enforce
> it but from my point of view it will not stand.
>
> Patents... MEH!
>
> --Holger
>
>



Re: Cookie persistence

2011-10-17 Thread Holger Just
On 2011-10-17 14:48, Ist Conne wrote:
> HAProxy supports cookie-based persistence.
> But cookie-based load balancing is patented by F5 Networks.
> http://www.google.com/patents/about?id=3MYLEBAJ

Without being a lawyer, I'd play the prior art card as HAProxy supported
cookie based persistence since 1.0, which dates prior to the patent filing.

That said, I think the patent might actually be a nuisance which might
produce some serious costs and headaches if F5 is determined to enforce
it but from my point of view it will not stand.

Patents... MEH!

--Holger



Cookie persistence

2011-10-17 Thread Ist Conne
Hello,

HAProxy supports cookie-based persistence.
But cookie-based load balancing is patented by F5 Networks.
http://www.google.com/patents/about?id=3MYLEBAJ

UltraMonkey-L7 once implemented this, but it no longer works.
Does HAProxy not have this patent problem?

Is it no problem if the backend web server issues the cookie?

Thank you.
--
ist



RDP Cookie Persistence

2010-02-18 Thread Mark Brooks
We have been using RDP cookie persistence and noticed that sometimes the
distribution of the connections is not exactly even. The problem, we suspect,
is that using round robin to distribute the load you can end up with strange
loadings when people have disconnected in groups. Is it possible to use
least connections as the balancing algorithm?
The problem is, if you have 2 terminal servers, server a and server b, and two
people connect to a, then two people connect to b (so connecting a,b,a,b), the
next person to come along connects to a again. Then all the people leave a,
so there are no people connected to a and 2 people connected to b; the next
connection will go to b, so we have 3 on b and 0 on a. Obviously this can be
expanded, and you end up with strange loadings.
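
A sketch of what is being asked for: RDP cookie persistence combined with leastconn instead of round robin. Untested, based on the documented "persist rdp-cookie" and "balance leastconn" keywords; addresses and server names are hypothetical:

```haproxy
backend rdp_farm
    mode tcp
    balance leastconn             # pick the server with the fewest connections
    persist rdp-cookie            # stick returning sessions on the RDP cookie
    tcp-request inspect-delay 5s  # wait for the cookie in the first packets
    tcp-request content accept if RDP_COOKIE
    server ts_a 10.0.0.1:3389 check
    server ts_b 10.0.0.2:3389 check
```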

Hopefully I have explained that correctly

Any thoughts would be greatly appreciated

Mark