new article

2018-07-03 Thread Anna Kucirkova
Hello there,

Your page http://feedjunkie.com/feed/242/10 has some good references to
depression and suicide so I wanted to get in touch with you. I've recently
written an article about HOW TO HELP THOSE CONSIDERING SUICIDE and was
wondering if you thought my article could be a good addition to your page.

You can read my article right here: 

https://careersinpsychology.org/how-help-those-considering-suicide/

I would like to hear your opinion on this article. Also, if you find it
useful, please consider linking to it from the page I mentioned earlier. If
you prefer, you may republish the article. Let me know what you think.

Thank you very much,

Anna.



Re: haproxy bug: healthcheck not passing after port change when statefile is enabled

2018-07-03 Thread Sven Wiltink
Hey Baptiste,


Thank you for looking into it.


The bug is triggered by running haproxy with the following config:


global
maxconn 32000
tune.maxrewrite 2048
user haproxy
group haproxy
daemon
chroot /var/lib/haproxy
nbproc 1
maxcompcpuusage 85
spread-checks 0
stats socket /var/run/haproxy.sock mode 600 level admin process 1 user haproxy group haproxy
server-state-file test
server-state-base /var/run/haproxy/state
master-worker no-exit-on-failure

defaults
load-server-state-from-file global
log global
timeout http-request 5s
timeout connect  2s
timeout client   300s
timeout server   300s
mode http
option dontlog-normal
option http-server-close
option redispatch
option log-health-checks

listen stats
bind :1936
bind-process 1
mode http
stats enable
stats uri /
stats admin if TRUE

listen banaan-443-ipv4
bind :443
mode tcp
server banaan-vps 127.0.0.1:443 check inter 2000


- Then start haproxy (it will do healthchecks to port 443)
- Change "server banaan-vps 127.0.0.1:443 check inter 2000" to
"server banaan-vps 127.0.0.1:80 check inter 2000"
- Save the state using /bin/sh -c "echo show servers state | /usr/bin/socat
/var/run/haproxy.sock - > /var/run/haproxy/state/test" (this is normally done
by the systemd unit on reload, see the initial mail)
- Reload haproxy (it still does healthchecks to port 443 while port 80 was
expected)

If you delete the statefile and reload haproxy, it will start healthchecks
for port 80 as expected. (The same sequence is sketched as shell commands
below.)
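
For convenience, here is the sequence again as a rough shell sketch; the sed
command and the systemctl reload are only illustrative stand-ins for however
you actually edit the config and reload the service:

# haproxy is running with "server banaan-vps 127.0.0.1:443 check inter 2000",
# so healthchecks go to port 443
# switch the server line to port 80 (illustrative in-place edit)
sed -i 's|127.0.0.1:443 check inter 2000|127.0.0.1:80 check inter 2000|' /etc/haproxy/haproxy.cfg
# dump the current server state into the state file, as the systemd unit does on reload
echo "show servers state" | /usr/bin/socat /var/run/haproxy.sock - > /var/run/haproxy/state/test
# reload haproxy: checks still target port 443 because the port from the state file wins
systemctl reload haproxy
# removing the state file and reloading makes the checks move to port 80 as expected
rm /var/run/haproxy/state/test
systemctl reload haproxy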

-Sven









Re: Issue with parsing DNS from AWS

2018-07-03 Thread Baptiste
Ah yes, I also added the following "init-addr none" statement on the
server-template line.
This prevents HAProxy from using the libc resolvers, which might lead to
unpredictable behavior in that environment.

Baptiste
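
(Editorial sketch, not taken from the thread: a minimal example of how such a
server-template line ties into the resolvers section discussed below. The
backend name, SRV record name and slot count are made-up examples; only the
"resolvers", "init-addr none" and record-name parts reflect the discussion.)

resolvers awsdns
  nameserver dns0 NAMESERVER:53
  accepted_payload_size 8192
  hold obsolete 30s

backend my_service
  # the 10 server slots are filled at runtime from the SRV answer;
  # init-addr none keeps HAProxy from falling back to the libc resolver
  # for the initial address, as described above
  server-template srv 10 _http._tcp.my-service.example resolvers awsdns init-addr none check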



Re: Issue with parsing DNS from AWS

2018-07-03 Thread Baptiste
Well, I can partially reproduce the issue you're facing and I can see some
weird behavior of AWS's DNS servers.

First, by default, HAProxy only supports DNS over UDP and can accept up to
512 bytes of payload in the DNS response.
DNS over TCP is not yet available, and the accepted payload size can be
increased using the EDNS0 extension.

There is a "magic" number of SRV records with AWS and the default HAProxy
accepted payload size: at around 4 SRV records, the response payload may be
bigger than 512 bytes.
In that case, the AWS DNS server does not return any data; it simply returns
an empty response with the TRUNCATED flag.
A client is then supposed to replay the request over TCP...

Another magic value with AWS DNS servers is that they won't return more than
8 SRV records, even if you have 10 servers in your service (even over TCP).
AWS DNS servers will simply return a round-robin list of the records; some
will disappear, some will reappear at some point in time.


In conclusion, to make HAProxy work in such an environment, you want to
configure it this way:
resolvers awsdns
  nameserver dns0 NAMESERVER:53    # <=== please remove the double quotes
  accepted_payload_size 8192       # <=== workaround for the too-short accepted payload
  hold obsolete 30s                # <=== workaround for the limited number of records returned by AWS

You may want to read the documentation of HAProxy's resolvers section. There
are a few other timeout / hold periods you could tune.
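
(Purely as an illustration, not from the thread, of the kind of extra knobs
that paragraph refers to, the resolvers section above could also carry lines
like these. The values are arbitrary examples; check the resolvers section of
the configuration manual for the authoritative list and the defaults.)

  resolve_retries 3    # queries sent for a name before the resolution is considered failed
  timeout resolve 1s   # interval between two successive resolutions of the same name
  timeout retry   1s   # time between two retries when no valid response has been received
  hold valid     10s   # how long a valid answer is kept before resolving again
  hold nx        30s   # how long the last answer is kept after an NXDOMAIN response
  hold timeout   30s   # how long the last answer is kept after a resolution timeout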

With the configuration above, I could easily scale from 2 to 10, back to 2,
passing through 4, 8, etc... successfully and without any server flapping.
I did not try to go higher than 10. Bear in mind the "hold obsolete" period
is the period during which HAProxy considers a server as available even if
the DNS server did not return it in the SRV record list.

Baptiste









Re: Issue with parsing DNS from AWS

2018-07-03 Thread Baptiste
Answering myself... I found my way through the menus to allow port 9000 (so
I can read the stats page) and to find the public IP associated with my
"app".
That said, I still can't get a shell on the running container, but I think
I found an AWS documentation page for this purpose.

I'll keep you updated.



Re: Issue with parsing DNS from AWS

2018-07-03 Thread Baptiste
Hi Jim,

I think I have something running...
At least, terraform did not complain and I can see "stuff" in my AWS
dashboard.
Now, I have no idea how I can get connected to my running HAProxy
container, nor how I can troubleshoot what's happening :)

Any help would be (again) appreciated.

Baptiste





Re: Issue with parsing DNS from AWS

2018-07-03 Thread Baptiste
Hi Jim,

Sorry for the long pause :)
I was dealing with some travel, conferences and catching up on my backlog.
So, the good news, is that this issue is now my priority :)

I'll first try to reproduce it and come back to you if I have any issues
during that step.
(By the way, thanks for the GitHub repo; it will help me get up to speed on
that step.)

Baptiste




On Mon, Jun 25, 2018 at 10:54 PM, Jim Deville  wrote:

> Hi Baptiste,
>
>
> I just wanted to follow up to see if you were able to repro and perhaps
> had a patch we could try?
>
>
> Jim
> --
> *From:* Jim Deville
> *Sent:* Thursday, June 21, 2018 1:05:49 PM
> *To:* Baptiste
> *Cc:* haproxy@formilux.org; Jonathan Works
> *Subject:* Re: Issue with parsing DNS from AWS
>
>
> Thanks for the reply, we were able to extract a minimal repro to
> demonstrate the problem: https://github.com/jgworks/haproxy-servicediscovery
>
>
> The docker folder contains a version of the config we're using and a
> startup script to determine the local private DNS zone (AWS puts it at the
> subnet's +2).
>
>
> Jim
> --
> *From:* Baptiste 
> *Sent:* Thursday, June 21, 2018 11:02:26 AM
> *To:* Jim Deville
> *Cc:* haproxy@formilux.org; Jonathan Works
> *Subject:* Re: Issue with parsing DNS from AWS
>
> and by the way, I had a quick look at the pcap file and could not find
> anything weird.
> The function you're pointing at seems to say there is not enough space to
> store a server's DNS name, but the allocated space is larger than your
> current records.
>
> Baptiste
>


Re: haproxy bug: healthcheck not passing after port change when statefile is enabled

2018-07-03 Thread Baptiste
Hi Sven,

Thanks a lot for your feedback!
I'll check how we could handle this use case with the state file.

Just to ensure I'm going to troubleshoot the right issue, could you please
summarize how you trigger this issue in a few simple steps?
IE:
- conf v1, server port is X
- generate server state (where port is X)
- update conf to v2, where port is Y
reload HAProxy => X is applied, while you expect to get Y instead

Baptiste



On Mon, Jun 25, 2018 at 12:55 PM, Sven Wiltink  wrote:

> Hello,
>
>
> So we've dug a little deeper and the issue seems to be caused by the port
> value in the statefile. When the target port of a server has changed
> between reloads, the port specified in the state file takes precedence. When
> running tcpdump you can see the healthchecks are being performed for the
> old port. After stopping haproxy and removing the statefile, the healthcheck
> is performed for the right port. When manually editing the statefile to a
> random port, the healthchecks will be performed for that port instead of the
> one specified by the config.
>
>
> The code responsible for this is line http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=src/server.c;h=523289e3bda7ca6aa15575f1928f5298760cf582;hb=HEAD#l2931
>
> from commit http://git.haproxy.org/?p=haproxy-1.8.git;a=commitdiff;h=3169471964fdc49963e63f68c1fd88686821a0c4.
>
>
> A solution would be invalidating the state when the ports don't match.
>
>
> -Sven
>
>
>
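
(Editorial aside, not part of the thread or of the attached patch: a rough C
sketch of the kind of guard Sven suggests above, imagined inside
srv_update_state() and reusing the "port" and "srv->svc_port" identifiers
that appear in the patch later in this digest. The message text and the exact
placement are assumptions.)

	/* hypothetical guard: if the port stored in the state file no longer
	 * matches the port in the current configuration, report it and skip
	 * applying the saved state for this server
	 */
	if (port != 0 && port != srv->svc_port) {
		chunk_appendf(msg, ", state file port '%u' does not match configured port '%u', ignoring saved state",
		              port, srv->svc_port);
		goto out;
	}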
> --
> *From:* Sven Wiltink
> *Sent:* Tuesday, June 12, 2018 17:01:18
> *To:* haproxy@formilux.org
> *Subject:* haproxy bug: healthcheck not passing after port change when
> statefile is enabled
>
> Hello,
>
> There seems to be a bug in the loading of state files after a
> configuration change. When changing the destination port of a server the
> healthchecks never start passing if the state before the reload was down.
> This bug has been introduced after 1.7.9 as we cannot reproduce it on
> machines running that version of haproxy. You can use the following steps
> to reproduce the issue:
>
> Start with a fresh Debian 9 install
> Install socat
> Install haproxy 1.8.9 from backports
>
> create a systemd file /etc/systemd/system/haproxy.service.d/60-haproxy-server_state.conf with the following contents:
> [Service]
> ExecStartPre=/bin/mkdir -p /var/run/haproxy/state
> ExecReload=
> ExecReload=/usr/sbin/haproxy -f ${CONFIG} -c -q $EXTRAOPTS
> ExecReload=/bin/sh -c "echo show servers state | /usr/bin/socat /var/run/haproxy.sock - > /var/run/haproxy/state/test"
> ExecReload=/bin/kill -USR2 $MAINPID
>
> create the following files:
> /etc/haproxy/haproxy.cfg.disabled:
> global
> maxconn 32000
> tune.maxrewrite 2048
> user haproxy
> group haproxy
> daemon
> chroot /var/lib/haproxy
> nbproc 1
> maxcompcpuusage 85
> spread-checks 0
> stats socket /var/run/haproxy.sock mode 600 level admin process 1 user haproxy group haproxy
> server-state-file test
> server-state-base /var/run/haproxy/state
> master-worker no-exit-on-failure
>
> defaults
> load-server-state-from-file global
> log global
> timeout http-request 5s
> timeout connect  2s
> timeout client   300s
> timeout server   300s
> mode http
> option dontlog-normal
> option http-server-close
> option redispatch
> option log-health-checks
>
> listen stats
> bind :1936
> bind-process 1
> mode http
> stats enable
> stats uri /
> stats admin if TRUE
>
> /etc/haproxy/haproxy.cfg.different-port:
> global
> maxconn 32000
> tune.maxrewrite 2048
> user haproxy
> group haproxy
> daemon
> chroot /var/lib/haproxy
> nbproc 1
> maxcompcpuusage 85
> spread-checks 0
> stats socket /var/run/haproxy.sock mode 600 level admin process 1 user haproxy group haproxy
> server-state-file test
> server-state-base /var/run/haproxy/state
> master-worker no-exit-on-failure
>
> defaults
> load-server-state-from-file global
> log global
> timeout http-request 5s
> timeout connect  2s
> timeout client   300s
> timeout server   300s
> mode http
> option dontlog-normal
> option http-server-close
> option redispatch
> option log-health-checks
>
> listen stats
> bind :1936
> bind-process 1
> mode http
> stats enable
> stats uri /
> stats admin if TRUE
>
> listen banaan-443-ipv4
> bind :443
> mode tcp
> server banaan-vps 127.0.0.1:80 check inter 2000
> listen banaan-80-ipv4
> bind :80
> mode tcp
> server banaan-vps 127.0.0.1:80 check inter 2000
>
> /etc/haproxy/haproxy.cfg.same-port:
> global
> maxconn 32000
> tune.maxrewrite 2048
> user haproxy
> group haproxy
> daemon
> chroot /var/lib/haproxy
> nbproc 1
> maxcompcpuusage 85
> spread-checks 0
> stats socket /var/run/haproxy.sock mode 600 level admin process 1 user
> 

Re: Observations about reloads and DNS SRV records

2018-07-03 Thread Baptiste
Hi,

Actually, the problem was deeper than I first thought.
In their current state, the state file and SRV records are simply not
compatible.
I had to add a new field to the state file format to support this.

Could you please confirm that the attached patch fixes your issues?

Baptiste



On Mon, Jun 25, 2018 at 11:48 AM, Baptiste  wrote:

> Hi,
>
> Forget the backend id, it's the wrong answer to that problem.
> I was investigating another potential issue, but this does not fix the
> original problem reported here.
>
> Here is the answer I delivered today on discourse, where other people have
> also reported the same issue:
>
>Just to let you know that I think I found the cause of the issue but I
> don’t have a fix yet.
>I’ll come back to you this week with more info and hopefully a fix.
>The issue seems to be in srv_init_addr(), because srv->hostname is not
> set (null).
>
> Baptiste
>
>
>
From 6899b19b9686b6dadc65b89adfb32c8792004663 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann 
Date: Mon, 2 Jul 2018 17:00:54 +0200
Subject: [PATCH 1/2] BUG/MEDIUM: dns: fix incompatibility between SRV
 resolution and server state file

The server state file has no indication that a server is currently managed
by a DNS SRV resolution.
And thus the two features (DNS SRV resolution and server state), when used
together, do not provide the expected behavior: a smooth experience...

This patch introduces the "SRV record name" in the server state file and
loads and applies it if found and wherever required.
---
 include/types/server.h |  7 ---
 src/proxy.c| 10 --
 src/server.c   | 45 +
 3 files changed, 57 insertions(+), 5 deletions(-)

diff --git a/include/types/server.h b/include/types/server.h
index 0cd20c0..29b88f8 100644
--- a/include/types/server.h
+++ b/include/types/server.h
@@ -126,10 +126,11 @@ enum srv_initaddr {
 "bk_f_forced_id " \
 "srv_f_forced_id "\
 "srv_fqdn "   \
-"srv_port"
+"srv_port"\
+"srvrecord"
 
-#define SRV_STATE_FILE_MAX_FIELDS 19
-#define SRV_STATE_FILE_NB_FIELDS_VERSION_1 18
+#define SRV_STATE_FILE_MAX_FIELDS 20
+#define SRV_STATE_FILE_NB_FIELDS_VERSION_1 19
 #define SRV_STATE_LINE_MAXLEN 512
 
 /* server flags -- 32 bits */
diff --git a/src/proxy.c b/src/proxy.c
index c262966..c1c41ba 100644
--- a/src/proxy.c
+++ b/src/proxy.c
@@ -1429,6 +1429,7 @@ static int dump_servers_state(struct stream_interface *si, struct chunk *buf)
 	char srv_addr[INET6_ADDRSTRLEN + 1];
 	time_t srv_time_since_last_change;
 	int bk_f_forced_id, srv_f_forced_id;
+	char *srvrecord;
 
 	/* we don't want to report any state if the backend is not enabled on this process */
 	if (px->bind_proc && !(px->bind_proc & pid_bit))
@@ -1458,18 +1459,23 @@ static int dump_servers_state(struct stream_interface *si, struct chunk *buf)
 		bk_f_forced_id = px->options & PR_O_FORCED_ID ? 1 : 0;
 		srv_f_forced_id = srv->flags & SRV_F_FORCED_ID ? 1 : 0;
 
+		srvrecord = NULL;
+		if (srv->srvrq && srv->srvrq->name)
+			srvrecord = srv->srvrq->name;
+
 		chunk_appendf(buf,
 "%d %s "
 "%d %s %s "
 "%d %d %d %d %ld "
 "%d %d %d %d %d "
-"%d %d %s %u"
+"%d %d %s %u %s"
 "\n",
 px->uuid, px->id,
 srv->puid, srv->id, srv_addr,
 srv->cur_state, srv->cur_admin, srv->uweight, srv->iweight, (long int)srv_time_since_last_change,
 srv->check.status, srv->check.result, srv->check.health, srv->check.state, srv->agent.state,
-bk_f_forced_id, srv_f_forced_id, srv->hostname ? srv->hostname : "-", srv->svc_port);
+bk_f_forced_id, srv_f_forced_id, srv->hostname ? srv->hostname : "-", srv->svc_port,
+srvrecord ? srvrecord : "-");
 		if (ci_putchk(si_ic(si), ) == -1) {
 			si_applet_cant_put(si);
 			return 0;
diff --git a/src/server.c b/src/server.c
index 277d140..cb13793 100644
--- a/src/server.c
+++ b/src/server.c
@@ -2678,6 +2678,7 @@ static void srv_update_state(struct server *srv, int version, char **params)
 	const char *fqdn;
 	const char *port_str;
 	unsigned int port;
+	char *srvrecord;
 
 	fqdn = NULL;
 	port = 0;
@@ -2701,6 +2702,7 @@ static void srv_update_state(struct server *srv, int version, char **params)
 			 * srv_f_forced_id:  params[12]
 			 * srv_fqdn: params[13]
 			 * srv_port: params[14]
+			 * srvrecord:params[15]
 			 */
 
 			/* validating srv_op_state */
@@ -2833,6 +2835,13 @@ static void srv_update_state(struct server *srv, int version, char **params)
 }
 			}
 
+			/* SRV record
+			 * NOTE: in HAProxy, SRV records must start with an underscore '_'
+			 */
+			srvrecord = params[15];
+			if (srvrecord && *srvrecord != '_')
+srvrecord = NULL;
+
 			/* don't apply anything if one error has been detected */
 			if (msg->len)
 goto out;
@@ -2965,6 +2974,41 @@ static void srv_update_state(struct server *srv, int version, char **params)
 	}
 }
 			}
+			/* If