Hi Jim,

On Wed, Jun 02, 2010 at 01:54:22PM -0500, Jim Riggs wrote:
> I came across this while trying to get my new stack and configuration up and 
> running.  A portion of my config looks like this:
> 
> backend lb.example.com
>   server web1 web1.example.com:80 track web1.example.com/web1
>   server web2 web2.example.com:80 track web2.example.com/web2
> 
> backend web1.example.com
>   server web1 web1.example.com:80 check
> 
> backend web2.example.com
>   server web2 web2.example.com:80 check disabled
> 
> 
> Note that web2.example.com/web2 is disabled for testing.  This is recognized 
> from a stats perspective.  That is, the web2 backend is marked down and 
> web2/web2 is marked as MAINT.  Additionally, lb/web2 is marked as 
> "MAINT(via)".  Everything seems correct, except that when I send traffic to 
> backend lb, web2 is still receiving balanced traffic.
> 
> I will state up front that I am not yet familiar with the haproxy code, but 
> in a cursory look, it appears this is happening when the config file is 
> parsed.  When web2/web2 is being marked as disabled, we set newsrv->state on 
> that server, but this will never affect other servers that are tracking it.  
> It seems like after all of the config is parsed and all servers are set up, 
> we need to walk through all of the "untracking" servers and properly 
> configure all of the servers that are tracking them.  Maybe this means a call 
> to set_server_(up|down|disabled) for each which will update all of the 
> tracking servers?
> 
> Again, I'm not all that familiar with the code, but I think this is what is 
> happening.  Regardless, I know it's broken.  :-)
> 
> Any thoughts?

It is not a problem of code, but more of definition. The behaviour you
observe is correct (whether it is desired in your case is another matter).
The "track" directive is used to replace health checks, and as such, will
only affect the "operational status" of the server.

The "disabled" directive affects the "administrative" status of the server,
which means that regardless of its operational status, we want it to be
forced down. This status is not propagated over tracking, and after a quick
thought, it would appear a bit dangerous to do so by default.

But I'm not opposed to reconsidering this if:
  1) it does not break existing setups (which means nobody comes in saying
     "please don't do that")

  2) users find valid cases for this feature which cannot be covered by
     the current behaviour.

In your case, we could imagine that you just have to add the "disabled"
keyword on the tracking server too. For people who use "track" for inter-
protocol dependency, we could imagine that they want to immediately stop
HTTPS to refuse new logins, but leave HTTP alive for some time for currently
connected users (that's just an example, probably not the best one).
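In your case, the first option would mean something like this (a sketch
based on your original config, not tested here):

  backend lb.example.com
    # mark the tracking server disabled as well, so the LB stops using it
    server web1 web1.example.com:80 track web1.example.com/web1
    server web2 web2.example.com:80 track web2.example.com/web2 disabled

  backend web2.example.com
    server web2 web2.example.com:80 check disabled

That way the administrative status is set explicitly on both servers
instead of relying on it being propagated through "track".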

Regards,
Willy