Re: Ubuntu 16.04 PPA logs not working

2017-01-27 Thread Vincent Bernat
 ❦ 27 January 2017 20:54 -0600, David Morton:

> I have a pretty default Ubuntu 16.04 image on AWS set up with the
> haproxy 1.7 ppa package.   I'm not seeing a /var/log/haproxy log file.
>
>
> haproxy config is:
>
>   log /dev/log local0
>   log /dev/log local1 notice
>   chroot /var/lib/haproxy
>
>
> and rsyslog  is:
>
> # Create an additional socket in haproxy's chroot in order to allow
> # logging via /dev/log to chroot'ed HAProxy processes
> $AddUnixListenSocket /var/lib/haproxy/dev/log
>
> # Send HAProxy messages to a dedicated logfile
> if $programname startswith 'haproxy' then /var/log/haproxy.log
> &~
>
>
> Am I missing something obvious?

The package doesn't reload rsyslog for you (more recent versions of the
rsyslog package will do that). Does /var/lib/haproxy/dev/log exist?
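For what it's worth, a quick check along those lines (a sketch only, assuming the default paths from the config above):

```shell
# Sketch, assuming the default Ubuntu/PPA paths: check whether rsyslog
# has created the syslog socket inside HAProxy's chroot, and print the
# reload command if it hasn't.
if [ -S /var/lib/haproxy/dev/log ]; then
    echo "chroot syslog socket present"
else
    echo "chroot syslog socket missing; run: sudo systemctl restart rsyslog"
fi
```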
-- 
After all, all he did was string together a lot of old, well-known quotations.
-- H. L. Mencken, on Shakespeare



Re: Lua sample fetch logging ends up in response when doing http-request redirect

2017-01-27 Thread Thierry Fournier
Hi,

Thanks for the bug report. I already encountered it with a function other
than redirect. Can you try the attached patch?

Thierry


On Fri, 27 Jan 2017 22:50:00 +
Jesse Schulman  wrote:

> I've found what seems to be a bug when I log from within a Lua sample fetch
> that I am using to determine a redirect URL.  It seems that whatever is
> logged from the lua script is written to the log file as expected, but it
> also is replacing the response, making the response invalid and breaking
> the redirection.
> 
> Thanks,
> Jesse
> 
> Here's what I'm seeing:
> 
> *no logging: curl -v http://lab.mysite.com *
> > GET / HTTP/1.1
> > Host: lab.mysite.com
> > User-Agent: curl/7.51.0
> > Accept: */*
> >
> < HTTP/1.1 302 Found
> < Cache-Control: no-cache
> < Content-length: 0
> < Location: https://www.google.com/
> < Connection: close
> <
> 
> *issue seen here with logging the string "LOG MSG" from lua script: curl -v
> http://lab.mysite.com/log *
> > GET /log HTTP/1.1
> > Host: lab.mysite.com
> > User-Agent: curl/7.51.0
> > Accept: */*
> >
> LOG MSG 302 Found
> Cache-Control: no-cache
> Content-length: 0
> Location: https://www.google.com/log
> Connection: close
> 
> 
> Here are steps to reproduce and my current setup:
> 
> */etc/redhat-release:*
> CentOS Linux release 7.2.1511 (Core)
> 
> *uname -rv*
> 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016
> 
> *haproxy -vv:*
> HA-Proxy version 1.7.2 2017/01/13
> Copyright 2000-2017 Willy Tarreau 
> 
> Build options :
>   TARGET  = linux2628
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
>   OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
> USE_LUA=1 USE_PCRE=1
> 
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
> 
> Encrypted password support via crypt(3): yes
> Built with zlib version : 1.2.7
> Running on zlib version : 1.2.7
> Compression algorithms supported : identity("identity"),
> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
> Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
> Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports prefer-server-ciphers : yes
> Built with PCRE version : 8.32 2012-11-30
> Running on PCRE version : 8.32 2012-11-30
> PCRE library supports JIT : no (USE_PCRE_JIT not set)
> Built with Lua version : Lua 5.3.3
> Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
> IP_FREEBIND
> 
> Available polling systems :
>   epoll : pref=300,  test result OK
>    poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
> 
> Available filters :
> [COMP] compression
> [TRACE] trace
> [SPOE] spoe
> 
> *haproxy.cfg:*
> global
>    log 127.0.0.1 local2 debug
>    lua-load /etc/haproxy/lua/redirect.lua
>    chroot /var/lib/haproxy
>    pidfile /var/run/haproxy.pid
>    maxconn 256
>    tune.ssl.default-dh-param 1024
>    stats socket /var/run/haproxy.sock mode 600 level admin
>    stats timeout 2m #Wait up to 2 minutes for input
>    user haproxy
>    group haproxy
>    daemon
> 
> defaults
>    log global
>    mode tcp
>    option tcplog
>    option dontlognull
>    timeout connect 10s
>    timeout client 60s
>    timeout server 60s
>    timeout tunnel 600s
> 
> frontend http
>    bind "${BIND_IP}:80"
>    mode http
>    option httplog
>    option forwardfor
>    capture request header Host len 32
>    log-format %hr\ %r\ %ST\ %b/%s\ %ci:%cp\ %B\ %Tr
> 
>    http-request redirect prefix "%[lua.get_redirect()]"
> 
> *lua/redirect.lua:*
> core.register_fetches("get_redirect", function(txn)
>   local path = txn.sf:path()
>   if (path == "/log") then
>  core.Info("LOG MSG")
>   end
>   return "https://www.google.com"
> 
> end)
From 64f2cf8ecc4c19e58fc1d8f8fb04c7b90cc97427 Mon Sep 17 00:00:00 2001
From: Thierry FOURNIER 
Date: Sat, 28 Jan 2017 07:39:53 +0100
Subject: [PATCH] BUG/MEDIUM: http: redirect overwrite a buffer

see 4b788f7d349ddde3f70f063b7394529eac6ab678

If we use the action "http-request redirect" with a Lua sample-fetch or
converter, and the Lua function calls one of the Lua log functions, the
header name is corrupted: it contains an extract of the last logged data.

This is due to an overwrite of the trash buffer, because its scope is not
respected in the "add-header" function. The scope of the trash buffer must
be limited to the function using it. The build_logline() function can
execute many other functions which can use the trash buffer.

This patch fixes the usage of the trash buffer. It limits the scope of
this global buffer to the local function: the header value is built first.

Ubuntu 16.04 PPA logs not working

2017-01-27 Thread David Morton

I have a pretty default Ubuntu 16.04 image on AWS set up with the
haproxy 1.7 ppa package.   I'm not seeing a /var/log/haproxy log file.


haproxy config is:

  log /dev/log  local0
  log /dev/log  local1 notice
  chroot /var/lib/haproxy


and rsyslog  is:

# Create an additional socket in haproxy's chroot in order to allow
# logging via /dev/log to chroot'ed HAProxy processes
$AddUnixListenSocket /var/lib/haproxy/dev/log

# Send HAProxy messages to a dedicated logfile
if $programname startswith 'haproxy' then /var/log/haproxy.log
&~


Am I missing something obvious?

-- 
David Morton
morto...@dgrmm.net



Lua sample fetch logging ends up in response when doing http-request redirect

2017-01-27 Thread Jesse Schulman
I've found what seems to be a bug when I log from within a Lua sample fetch
that I am using to determine a redirect URL.  It seems that whatever is
logged from the lua script is written to the log file as expected, but it
also is replacing the response, making the response invalid and breaking
the redirection.

Thanks,
Jesse

Here's what I'm seeing:

*no logging: curl -v http://lab.mysite.com *
> GET / HTTP/1.1
> Host: lab.mysite.com
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 302 Found
< Cache-Control: no-cache
< Content-length: 0
< Location: https://www.google.com/
< Connection: close
<

*issue seen here with logging the string "LOG MSG" from lua script: curl -v
http://lab.mysite.com/log *
> GET /log HTTP/1.1
> Host: lab.mysite.com
> User-Agent: curl/7.51.0
> Accept: */*
>
LOG MSG 302 Found
Cache-Control: no-cache
Content-length: 0
Location: https://www.google.com/log
Connection: close


Here are steps to reproduce and my current setup:

*/etc/redhat-release:*
CentOS Linux release 7.2.1511 (Core)

*uname -rv*
3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016

*haproxy -vv:*
HA-Proxy version 1.7.2 2017/01/13
Copyright 2000-2017 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
  OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
USE_LUA=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[COMP] compression
[TRACE] trace
[SPOE] spoe

*haproxy.cfg:*
global
   log 127.0.0.1 local2 debug
   lua-load /etc/haproxy/lua/redirect.lua
   chroot /var/lib/haproxy
   pidfile /var/run/haproxy.pid
   maxconn 256
   tune.ssl.default-dh-param 1024
   stats socket /var/run/haproxy.sock mode 600 level admin
   stats timeout 2m #Wait up to 2 minutes for input
   user haproxy
   group haproxy
   daemon

defaults
   log global
   mode tcp
   option tcplog
   option dontlognull
   timeout connect 10s
   timeout client 60s
   timeout server 60s
   timeout tunnel 600s

frontend http
   bind "${BIND_IP}:80"
   mode http
   option httplog
   option forwardfor
   capture request header Host len 32
   log-format %hr\ %r\ %ST\ %b/%s\ %ci:%cp\ %B\ %Tr

   http-request redirect prefix "%[lua.get_redirect()]"

*lua/redirect.lua:*
core.register_fetches("get_redirect", function(txn)
  local path = txn.sf:path()
  if (path == "/log") then
 core.Info("LOG MSG")
  end
  return "https://www.google.com"

end)


Re: unique-id-header and req.hdr

2017-01-27 Thread Patrick Hemmer


On 2017/1/27 15:31, Ciprian Dorin Craciun wrote:
> On Fri, Jan 27, 2017 at 10:24 PM, Patrick Hemmer
>  wrote:
>> Something that might satisfy both requests, why not just append to the
>> existing request-id?
>>
>> unique-id-format %[req.hdr(X-Request-ID)],%{+X}o\
>> %ci:%cp_%fi:%fp_%Ts_%rt:%pid
>>
>> This does result in a leading comma if X-Request-ID is unset. If that's
>> unpleasant, you could write a tiny Lua sample converter that appends a
>> comma only when the value is not empty.
>
> However, just setting the `unique-id-format` is not enough, as we
> should also send that ID to the backend, thus there is a need of
> `http-request set-header X-Request-Id %[unique-id] if !...`.  (By not
> using the `http-request`, we do get the ID from the header in the log,
> but not to the backend.)

That's what the `unique-id-header` config parameter is for.

>
> But now -- I can't say with certainty, but I remember trying various
> variants -- I think the evaluation order of `unique-id-format` is
> after all the `http-request` rules, thus the header will always be
> empty (if not explicitly set in the request), although in the log we
> would have a correct ID.
>
>
> (This is why I settled with a less optimal solution of having two
> headers, but with identical values, and working correctly in all
> instances.)
>
> Ciprian.
>



Re: unique-id-header and req.hdr

2017-01-27 Thread Ciprian Dorin Craciun
On Fri, Jan 27, 2017 at 10:24 PM, Patrick Hemmer
 wrote:
> Something that might satisfy both requests, why not just append to the
> existing request-id?
>
> unique-id-format %[req.hdr(X-Request-ID)],%{+X}o\
> %ci:%cp_%fi:%fp_%Ts_%rt:%pid
>
> This does result in a leading comma if X-Request-ID is unset. If that's
> unpleasant, you could write a tiny Lua sample converter that appends a
> comma only when the value is not empty.


However, just setting the `unique-id-format` is not enough, as we
should also send that ID to the backend, thus there is a need of
`http-request set-header X-Request-Id %[unique-id] if !...`.  (By not
using the `http-request`, we do get the ID from the header in the log,
but not to the backend.)

But now -- I can't say with certainty, but I remember trying various
variants -- I think the evaluation order of `unique-id-format` is
after all the `http-request` rules, thus the header will always be
empty (if not explicitly set in the request), although in the log we
would have a correct ID.


(This is why I settled with a less optimal solution of having two
headers, but with identical values, and working correctly in all
instances.)

Ciprian.



Re: unique-id-header and req.hdr

2017-01-27 Thread Patrick Hemmer


On 2017/1/27 14:38, Cyril Bonté wrote:
> On 27/01/2017 at 20:11, Ciprian Dorin Craciun wrote:
>> On Fri, Jan 27, 2017 at 9:01 PM, Cyril Bonté 
>> wrote:
>>> Instead of using "unique-id-header" and temporary headers, you can
>>> use the
>>> "unique-id" fetch sample [1] :
>>>
>>> frontend public
>>> bind *:80
>>> unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
>>> default_backend ui
>>>
>>> backend ui
>>> http-request set-header X-Request-Id %[unique-id] unless {
>>> req.hdr(X-Request-Id) -m found }
>>
>>
>> Indeed this might be one version of ensuring that a `X-Request-Id`
>> exists, however it doesn't serve a second purpose
>
> And that's why I didn't reply to your answer but to the original
> question ;-)
>

Something that might satisfy both requests, why not just append to the
existing request-id?

unique-id-format %[req.hdr(X-Request-ID)],%{+X}o\
%ci:%cp_%fi:%fp_%Ts_%rt:%pid

This does result in a leading comma if X-Request-ID is unset. If that's
unpleasant, you could write a tiny Lua sample converter that appends a
comma only when the value is not empty.
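Such a converter might be sketched as follows (untested; the `comma_if_set` name and its wiring into `unique-id-format` are illustrative assumptions — only `core.register_converters` itself is the documented HAProxy Lua API):

```lua
-- Hypothetical converter sketch: return the input followed by a comma
-- when the input is non-empty, and an empty string otherwise.
core.register_converters("comma_if_set", function(value)
    if value ~= nil and value ~= "" then
        return value .. ","
    end
    return ""
end)
```

Loaded with `lua-load`, it could then be referenced as
`unique-id-format %[req.hdr(X-Request-ID),lua.comma_if_set]%{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid`,
so the comma only appears when the incoming header was actually set.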

-Patrick


Re: unique-id-header and req.hdr

2017-01-27 Thread Cyril Bonté

On 27/01/2017 at 20:11, Ciprian Dorin Craciun wrote:

On Fri, Jan 27, 2017 at 9:01 PM, Cyril Bonté  wrote:

Instead of using "unique-id-header" and temporary headers, you can use the
"unique-id" fetch sample [1] :

frontend public
bind *:80
unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
default_backend ui

backend ui
http-request set-header X-Request-Id %[unique-id] unless {
req.hdr(X-Request-Id) -m found }



Indeed this might be one version of ensuring that a `X-Request-Id`
exists, however it doesn't serve a second purpose


And that's why I didn't reply to your answer but to the original 
question ;-)


--
Cyril Bonté



Re: unique-id-header and req.hdr

2017-01-27 Thread Ciprian Dorin Craciun
On Fri, Jan 27, 2017 at 9:01 PM, Cyril Bonté  wrote:
> Instead of using "unique-id-header" and temporary headers, you can use the
> "unique-id" fetch sample [1] :
>
> frontend public
> bind *:80
> unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
> default_backend ui
>
> backend ui
> http-request set-header X-Request-Id %[unique-id] unless {
> req.hdr(X-Request-Id) -m found }


Indeed this might be one way of ensuring that an `X-Request-Id`
exists; however, it doesn't serve a second purpose (which I found most
important), namely of matching the HAProxy logs with the logs of a
downstream server (or an upstream client).

For example, suppose we have a client (an API library), which sets
`X-Request-Id` header to a random value, logs it, and makes the
request.  This request reaches HAProxy, which because it uses a custom
`unique-id-format` which disregards the `X-Request-Id` header, will
log a completely different request id, but pass the "original" header
to the downstream server.  The downstream server receives the request,
logs the request ID and gives back the response.

Now if we want to match all the three logs we can't.  The client and
server logs are in sync, but the HAProxy logs uses its own custom
request ID.


By using the variant I proposed (setting the `unique-id-format` to the
actual value of the `X-Request-Id` header), we can match all the three
logs.

Ciprian.


P.S.:  Obviously we can explicitly log the `X-Request-Id` header via a
capture, but it isn't as explicit (or easy to identify) as setting the
unique-id to the header value.



Re: unique-id-header and req.hdr

2017-01-27 Thread Cyril Bonté

Hi,

On 26/01/2017 at 23:10, sendmaildevnull wrote:

I'm trying to generate a unique-id-header only if one is not already
provided in the request. If I provide the header in my request to
haproxy I end up with duplicate headers, one with auto generated header
and another with the user provided header. I attempted to use the
technique mentioned here
(http://discourse.haproxy.org/t/unique-id-adding-only-if-header-not-present/67/2)
and listed below but it is not working for me. Basically, I'm unable to
check/get value for unique-id-header (e.g. req.hdr(TMP-X-Request-Id)).
Any ideas?

frontend public
bind *:80
unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
unique-id-header TMP-X-Request-Id
default_backend ui

backend ui
http-request set-header X-Request-Id %[req.hdr(TMP-X-Request-Id)]
unless { req.hdr(X-Request-Id) -m found }
http-request del-header TMP-X-Request-Id


Instead of using "unique-id-header" and temporary headers, you can use 
the "unique-id" fetch sample [1] :


frontend public
bind *:80
unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
default_backend ui

backend ui
http-request set-header X-Request-Id %[unique-id] unless { 
req.hdr(X-Request-Id) -m found }



[1] 
http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.6-unique-id



--
Cyril Bonté



Re: HTTP redirects while still allowing keep-alive

2017-01-27 Thread Ciprian Dorin Craciun
On Wed, Jan 11, 2017 at 8:59 PM, Willy Tarreau  wrote:
>> [I can't speak with much confidence as this is the first time I see
>> the HAProxy code, but...]
>>
>>
>> From what I see the main culprit for the connection close is the code:
>>
>>  [starting with line 4225 in `proto_http.c`] 
>> if (*location == '/' &&
>> (req->flags & HTTP_MSGF_XFER_LEN) &&
>> ((!(req->flags & HTTP_MSGF_TE_CHNK) && !req->body_len) ||
>> (req->msg_state == HTTP_MSG_DONE)) &&
>> ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL ||
>>  (txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL)) {
>> /* keep-alive possible */
>> 
>>
>>
>> Which might be rewritten just as:
>>
>>  [starting with line 4225 in `proto_http.c`] 
>> if (
>>(req->flags & HTTP_MSGF_XFER_LEN) &&
>> ((!(req->flags & HTTP_MSGF_TE_CHNK) && !req->body_len) ||
>> (req->msg_state == HTTP_MSG_DONE)) &&
>> ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL ||
>>  (txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL)) {
>> /* keep-alive possible */
>> 
>>
>>
>> I.e., just remove `*location == '/' &&`, and I assume not much will be
>> impacted, thus I guess no regressions should stem from this
>> correction.
>
> Absolutely. Feel free to provide a patch doing this (please check
> CONTRIBUTING for the format, the commit message and the subject line),
> tag it BUG/MINOR and I'll merge it.


No patch yet :) but I just wanted to confirm that this small change
seems to work just fine in production for the last two weeks.
(Granted I didn't make a thorough analysis of the traffic, but so far
no one complained, and the traffic isn't quite small.)

Perhaps later this week or next week I'll be back with a patch.

Ciprian.



Re: unique-id-header and req.hdr

2017-01-27 Thread Ciprian Dorin Craciun
On Fri, Jan 27, 2017 at 12:10 AM, sendmaildevnull
 wrote:
> I'm trying to generate a unique-id-header only if one is not already provided
> in the request. If I provide the header in my request to haproxy I end up
> with duplicate headers, one with auto generated header and another with the
> user provided header. I attempted to use the technique mentioned here
> (http://discourse.haproxy.org/t/unique-id-adding-only-if-header-not-present/67/2)
> and listed below but it is not working for me. Basically, I'm unable to
> check/get value for unique-id-header (e.g. req.hdr(TMP-X-Request-Id)). Any
> ideas?


I have struggled with this one also, and I reached a simple (but
suboptimal) solution:
(A)  Set an `X-HA-Request-Id` header if it doesn't already exist;
(B)  Configure `unique-id-*` as follows:
  unique-id-format %[req.hdr(X-HA-Request-Id)]
  unique-id-header X-HA-Request-Id-2

(I.e. in the end there are two exact headers sent to the backend, with
equal values, either the original value of the `X-HA-Request-Id`
header, or a random one.)


BTW, I always set the request header (if it doesn't already hold a
value) as a random token which would yield 128 bits of entropy, and
doesn't provide any personal identifiable information.

  http-request set-header X-HA-Request-Id
%[rand(4294967295)].%[rand(4294967295)].%[rand(4294967295)].%[rand(4294967295)]
if ...
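For illustration only, the shape of the resulting token can be sketched in plain Lua 5.3 (an assumption-laden standalone sketch, not HAProxy's `rand()`; `math.random` is NOT a cryptographic source, so this is not a substitute for the above):

```lua
-- Sketch: four 32-bit random values joined by dots, mirroring the
-- rand(4294967295) construction above. This yields ~128 bits only when
-- the generator is truly random; math.random is not cryptographically
-- secure and is used here purely to show the token format.
math.randomseed(os.time())
local parts = {}
for i = 1, 4 do
  parts[i] = tostring(math.random(0, 4294967295))
end
print(table.concat(parts, "."))
```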


Hope it helps,
Ciprian.


P.S.:  It sure would be helpful to have some cryptographic
transformation functions like the SHA2 family, HMAC-based functions,
and a cryptographic random function.  :)



Re: Propagating agent-check weight change to tracking servers

2017-01-27 Thread Michał
Hello,

So here's the patch, which includes all the functionality I had in mind.
It propagates the response to every tracking server without changing or
intercepting it. In my opinion we should propagate both relative and
absolute weights, because if you use weight=0 servers to offload checks,
then you can send a relative weight change and the 0 will stay where it is.

Regards,
Michał


2017-01-20 10:54 GMT+01:00 Willy Tarreau :

> Hi Michal,
>
> On Thu, Jan 19, 2017 at 11:45:57PM +0100, Michał wrote:
> > Hello,
> >
> > We use track's in haproxy to minimize check traffic in some situations
> and
> > after my last patch we are probably going to switch to agent-checks for
> > live management of weights and statuses. One problem I see now - track
> > don't propagate weight setting to trackers, so if we set agent-check on
> > track we can manage status only.
> >
> > My first PoC solution works good, so I thought about introducing
> something
> > like agent-track or track-agent directive set on backends (or maybe it
> > should be default, non-configurable behaviour) to propagate agent-check
> > responses from main check to all tracking backends. Both default
> behaviour
> > and directive approach are small changes in code, but a little bigger in
> > functionality.
> >
> > In my opinion if we set agent-check on backend which is tracked by
> others -
> > it should propagate agent-check weight response to those tracking backend
> > and set weight on them too. Configurable or not - it will be good
> feature.
>
> I think we at least propagate the DRAIN state which is equivalent to
> weight == 0. If so I too think we should propagate *relative* weights.
> Agent-checks can return a relative weight (eg: 50%, 100%, 150%) or an
> absolute weight (eg: 10, 20). If you have two farms configured like this :
>
>backend farm1
>  server new1 1.1.1.1:8000 weight 10 agent-check
>  server new2 1.1.1.2:8000 weight 10 agent-check
>
>backend farm2
>  server new1 1.1.1.1:8000 weight 20 track farm1/new1
>  server new2 1.1.1.2:8000 weight 20 track farm1/new2
>  server old1 1.1.1.3:8000 weight 10 check
>  server old2 1.1.1.4:8000 weight 10 check
>
> Then you want the weight changes on farm1 to be applied proportionally
> to farm2 (ie: a ratio of the configured absolute weight, which is iweight
> IIRC).
>
> Otherwise that sounds quite reasonable to me given that the agent-check's
> purpose is to provide a more accurate vision of the server's health, and
> that tracking is made to share the same vision across multiple farms.
>
> Regards,
> Willy
>


0001-MINOR-checks-propagate-agent-check-weight-to-tracker.patch
Description: Binary data


Re: Possible bug with haproxy 1.6.9/1.7.0: multiproc + resolvers cause DNS timeouts

2017-01-27 Thread Baptiste
Hi All,

Sorry I missed it.
I'll see what I can do to fix it ASAP.

Thanks for reporting.

Baptiste



On Thu, Jan 26, 2017 at 6:40 PM, Lukas Tribus  wrote:

> Hello,
>
>
>
> On 29.11.2016 at 09:53, Willy Tarreau wrote:
>
>> Hi Joshua,
>>
>> [ccing Baptiste]
>>
>> On Tue, Nov 29, 2016 at 02:17:17AM -0500, Joshua M. Boniface wrote:
>>
>>> Hello list!
>>>
>>> I believe I've found a bug in haproxy related to multiproc and a set of
>>> DNS
>>> resolvers. What happens is, when combining these two features (multiproc
>>> and
>>> dynamic resolvers), I get the following problem: the DNS resolvers, one
>>> per
>>> process it seems, will fail intermittently and independently for no
>>> obvious
>>> reason, and this triggers a DOWN event in the backend; a short time
>>> later,
>>> the resolution succeeds and the backend goes back UP for a short time,
>>> before
>>> repeating indefinitely. This bug also seems to have a curious effect of
>>> causing the active record type to switch from A to AAAA and then back to
>>> A
>>> repeatedly in a dual-stack setup, though the test below shows that this
>>> bug
>>> occurs in an IPv4-only environment as well, and this failure is not
>>> documented in my tests.
>>>
>> (...)
>>
>> I've just taken a quick look at how the socket is created and I understand
>> the issue you're facing. The socket is created before the fork, so all
>> processes share the same socket, and responses can be sent to processes
>> which did not emit the request. We'll have to change this (I don't know
>> how for now) so that each process has its own socket. We do the same for
>> the epoll FD after fork.
>>
>> I think that we should keep this initialization where it is as it is
>> able to spot config issues, and just close these sockets and reopen
>> them after fork, keeping fingers crossed for a new error not happening
>> right after fork(). BTW, by analogy with what is done with peers or
>> even regular servers, we should probably only create the socket when
>> needed and simply report socket creation errors in the logs.
>>
>> Another option would be to create many sockets and later close them but
>> that's ugly and more complicated.
>>
>> In the mean time the best thing to do will be to disable multi-proc with
>> DNS.
>>
>>
>>
>
> FYI, this was just reported on discourse as well:
> http://discourse.haproxy.org/t/dns-resolution-sigh-v1-7-1/960/2
>
>
> cheers,
> lukas
>
>