process of release to debian, backports ?

2023-05-09 Thread Jim Freeman
Just Curious ... (probably a Vincent et al question?)

What considerations/timings gate a haproxy release getting into the
debian (and backports) archives ?

Severity of problems fixed in release (e.g. CVE, ...) ?
Available bandwidth/fatigue of uploaders ?
Debian guidelines ?

As always - gratitude and kudos to all for a stellar and useful system.
...jfree


Re: Puzzlement : empty field vs. ,field() -m

2023-04-18 Thread Jim Freeman
On Tue, Apr 18, 2023 at 12:56 AM Willy Tarreau  wrote:
>
> Hi Jim,
>
> [side note: please guys, avoid top-posting, it makes it very difficult
>  to quote context in responses]

Repenting as fast as I can ...
 * https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
( does a list etiquette guide exist ? )

> On Mon, Apr 17, 2023 at 09:17:03PM -0600, Jim Freeman wrote:
> > Aleksandar - thanks for the feedback ! (haproxy -vv : attached)
> >
> > I'd spent a good long while scouring the config docs (and Google) seeking
> > enlightenment, but ...
> > No joy using either of '! -m found -m int 0' or '! -m found'.
> >
> > Here's hoping someone/anyone else has experience making an empty field()
> > work ...
> (...)
...
> Just a warning below though:
>
> > > >acl COOK_META_MISSING  req.cook(cook2hdr),field(3,\#) ! -m found -m str ''
>
> I have no idea how that parses to be honest, because a single match
> method may be used and normally whatever follows the first pattern are
> other patterns. Since the "found" match doesn't take any pattern, it's
> likely that "-m str ''" is still parsed and possibly replaces -m found,
> but I wouldn't count on that.
>
> So if you want to consider as missing a cookie that is either not present
> or that is empty, I would probably do it this way:
>
> acl COOK_META_MISSING  req.cook(cook2hdr),field(3,\#) ! -m found
> acl COOK_META_MISSING  req.cook(cook2hdr),field(3,\#) -m len 0
>
> This ACL will be true if the cookie's value was not found, or if it
> was found but with an empty length.
>
> hoping this helps,
> Willy

Expressing the acl's OR across multiple lines did indeed help - many thanks !
My sense from googling - that multiple '-m ...' clauses should work
(and would be more succinct) - is hereby proven wrong, and your
explication of how a second '-m ...' interacts with the preceding one
is a very helpful insight and caution flag.
The crux, I guess, is understanding that "whatever follows the first
pattern are other patterns" - that didn't register in my study of the
config docs.
I'd wish to not have to double up the number of lines per field, but
understand that the in-memory expression is very efficient.
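For later readers, here is a minimal sketch of the two-line OR'd ACL in
context - the frontend name, bind port, and header usage are illustrative
only, not from the thread:

```
frontend fe_main
    bind :8080
    # True if field 3 of the cookie is absent, OR present but empty;
    # multiple "acl" lines with the same name are OR'ed together.
    acl COOK_META_MISSING req.cook(cook2hdr),field(3,\#) ! -m found
    acl COOK_META_MISSING req.cook(cook2hdr),field(3,\#) -m len 0
    # only propagate the field when it actually has a value
    http-request set-header meta %[req.cook(cook2hdr),field(3,\#)] unless COOK_META_MISSING
    default_backend be_app
```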

As always - HAProxy rocks hardcore and world-class !!

Thanks again,
...jfree



Re: Puzzlement : empty field vs. ,field() -m

2023-04-17 Thread Jim Freeman
Aleksandar - thanks for the feedback ! (haproxy -vv : attached)

I'd spent a good long while scouring the config docs (and Google) seeking
enlightenment, but ...
No joy using either of '! -m found -m int 0' or '! -m found'.

Here's hoping someone/anyone else has experience making an empty field()
work ...

On Mon, Apr 17, 2023 at 5:29 PM Aleksandar Lazic  wrote:

> Hi.
>
> On 18.04.23 00:55, Jim Freeman wrote:
> > In splitting out fields from req.cook, populated fields work well, but
> > detecting an unset field has me befuddled:
> >
> >acl COOK_META_MISSING  req.cook(cook2hdr),field(3,\#) ! -m found -m str ''
> >
> > does not detect that a cookie/field is empty ?
> >
> > Running the attached 'hdrs' script against the attached haproxy.cfg sees :
> > ===
> > ...
> > cookie: cook2hdr=#
> > bar: bar
> > baz: baz
> > meta: ,bar,baz
> > foo:
> > ===
> > when foo: should not be created, and meta: should only have 2 fields.
> >
> > Am I just getting the idiom/incantation wrong ?
> >
> > [ stock/current haproxy 2.6 from Debian/Ubuntu LTS backports ]
>
> A `haproxy -vv` is better than guessing which version this is :-)
>
> Looks like the doc does not mention the empty field case.
>
> https://docs.haproxy.org/2.6/configuration.html#7.3.1-field
>
> From the code it looks like the data is set to 0
> https://github.com/haproxy/haproxy/blob/master/src/sample.c#L2432
>
> I would just try '! -m found', but that's untested; I'm pretty
> sure that some people on this list have much more experience with
> testing empty return values.
>
> Regards
> Alex
>
$ /usr/sbin/haproxy -vv
HAProxy version 2.6.9-1~bpo11+1 2023/02/15 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2027.
Known bugs: http://www.haproxy.org/bugs/bugs-2.6.9.html
Running on: Linux 5.10.0-21-cloud-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -O2 -fstack-protector-strong -Wformat 
-Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference 
-fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers 
-Wno-cast-function-type -Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_LUA=1 USE_SLZ=1 
USE_SYSTEMD=1 USE_PROMEX=1
  DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H 
-DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE +LIBCRYPT 
+LINUX_SPLICE +LINUX_TPROXY +LUA -MEMORY_PROFILING +NETFILTER +NS 
-OBSOLETE_LINKER +OPENSSL -OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL 
-PROCCTL +PROMEX -QUIC +RT +SLZ -STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO 
+THREAD +THREAD_DUMP +TPROXY -WURFL -ZLIB

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=2).
Built with OpenSSL version : OpenSSL 1.1.1n  15 Mar 2022
Running on OpenSSL version : OpenSSL 1.1.1n  15 Mar 2022
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with the Prometheus exporter as a service
Built with network namespace support.
Support for malloc_trim() is enabled.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE2 version : 10.36 2020-12-04
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 10.2.1 20210110

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : prometheus-ex

Puzzlement : empty field vs. ,field() -m

2023-04-17 Thread Jim Freeman
In splitting out fields from req.cook, populated fields work well, but
detecting an unset field has me befuddled:

  acl COOK_META_MISSING  req.cook(cook2hdr),field(3,\#) ! -m found -m str ''

does not detect that a cookie/field is empty ?

Running the attached 'hdrs' script against the attached haproxy.cfg sees :
===
...
cookie: cook2hdr=#
bar: bar
baz: baz
meta: ,bar,baz
foo:
===
when foo: should not be created, and meta: should only have 2 fields.

Am I just getting the idiom/incantation wrong ?

[ stock/current haproxy 2.6 from Debian/Ubuntu LTS backports ]


hdrs
Description: Binary data


haproxy.cfg
Description: Binary data


Re: changed IP messages overrunning /var/log ?

2022-11-10 Thread Jim Freeman
FWIW : our updates are automated, so all we care about on haproxy
reloads/restarts is
the exit code - stdout/stderr are dead to us.

But the service.c ha_warning() + send_log() doubling up was chewing up
/var/log/ space.
Hoping there was a way to shut up all the Heroku "changed its IP"
[WARNING]s noise showing
up in /var/log/syslog+messages (Debian), we found systemd's recent
StandardError= idiom and
created an /etc/systemd/system/haproxy.service.d/override.conf ("systemctl
edit haproxy"), with :

  [Service]
  StandardError=null

"systemctl daemon-reload" , "systemctl reload-or-restart haproxy" - no joy
(systemd 247.3-7+deb11u1).
Installed systemd/bullseye-backports (251.3-1~bpo11+1) - and the noise is
now gone. Phew ...
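The override procedure described above, as a sketch - unit name and paths
are per stock Debian, and per this report the override only took effect
after upgrading to systemd 251:

```
# creates /etc/systemd/system/haproxy.service.d/override.conf
systemctl edit haproxy
# add to the override file:
#   [Service]
#   StandardError=null
systemctl daemon-reload
systemctl reload-or-restart haproxy
```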

On Thu, Nov 10, 2022 at 11:39 AM Jim Freeman  wrote:

> Heroku, for instance, uses constantly rotating/changing pools of IPs with
> short (10s ?) TTLs.
>
> So any backends pointing to Heroku-hosted services will pretty much
> always/constantly get
> "changed its IP" [WARNING]s spewing to configured logs, plus stderr (on
> default Debian, syslog+messages
> => one for the price of three).
>
> The more Heroku et al. is used, the more redundant noise lands in
> extraneous logs/facilities.
>
>
> https://help.heroku.com/VKRNVVF5/what-is-the-correct-dns-cname-target-for-my-custom-domains
>
> dig +short app.herokuapp.com
> top answer differs every time ...
>
> On Tue, Apr 20, 2021 at 11:00 PM Willy Tarreau  wrote:
>
>> On Fri, Apr 16, 2021 at 08:18:30AM -0600, Jim Freeman wrote:
>> > Root cause - haproxy intentionally double logs :
>> > https://github.com/haproxy/haproxy/blob/master/src/server.c
>> > srv_update_addr(...) { ... /* generates a log line and a warning on
>> > stderr */ ... }
>>
>> A number of such important updates (like servers going down for example)
>> are emitted both on stderr and logs. However I find it strange that the
>> resolvers complain every single second that the server changed address,
>> it sounds like something broke there and that it fails to conserve its
>> previous address (or maybe the DNS server oscillates all the time ?).
>>
>> Willy
>>
>


Re: changed IP messages overrunning /var/log ?

2022-11-10 Thread Jim Freeman
Heroku, for instance, uses constantly rotating/changing pools of IPs with
short (10s ?) TTLs.

So any backends pointing to Heroku-hosted services will pretty much
always/constantly get
"changed its IP" [WARNING]s spewing to configured logs, plus stderr (on
default Debian, syslog+messages
=> one for the price of three).

The more Heroku et al. is used, the more redundant noise lands in
extraneous logs/facilities.

https://help.heroku.com/VKRNVVF5/what-is-the-correct-dns-cname-target-for-my-custom-domains

dig +short app.herokuapp.com
top answer differs every time ...

On Tue, Apr 20, 2021 at 11:00 PM Willy Tarreau  wrote:

> On Fri, Apr 16, 2021 at 08:18:30AM -0600, Jim Freeman wrote:
> > Root cause - haproxy intentionally double logs :
> > https://github.com/haproxy/haproxy/blob/master/src/server.c
> > srv_update_addr(...) { ... /* generates a log line and a warning on
> > stderr */ ... }
>
> A number of such important updates (like servers going down for example)
> are emitted both on stderr and logs. However I find it strange that the
> resolvers complain every single second that the server changed address,
> it sounds like something broke there and that it fails to conserve its
> previous address (or maybe the DNS server oscillates all the time ?).
>
> Willy
>


Re: [ANNOUNCE] haproxy-2.2.18

2021-11-08 Thread Jim Freeman
Great to hear - thanks !

On Sat, Nov 6, 2021 at 12:58 AM Vincent Bernat  wrote:

>  ❦  5 November 2021 17:05 -06, Jim Freeman:
>
> > Might this (or something 2.4-ish) be heading towards bullseye-backports ?
> > https://packages.debian.org/search?keywords=haproxy
> > https://packages.debian.org/bullseye-backports/
>
> 2.4 will be in bullseye-backports.
> --
> Don't patch bad code - rewrite it.
> - The Elements of Programming Style (Kernighan & Plauger)
>


Re: [ANNOUNCE] haproxy-2.2.18

2021-11-05 Thread Jim Freeman
Might this (or something 2.4-ish) be heading towards bullseye-backports ?
https://packages.debian.org/search?keywords=haproxy
https://packages.debian.org/bullseye-backports/

Thanks,
...jfree

On Fri, Nov 5, 2021 at 8:51 AM Christopher Faulet 
wrote:

> Hi,
>
> HAProxy 2.2.18 was released on 2021/11/04. It added 66 new commits
> after version 2.2.17.
>
...


Re: PH disconnects, but "show errors" has 0 entries ?

2021-10-19 Thread Jim Freeman
OK - this is weird (so don't shoot the messenger?).
With more tcpdump-ing and examination, the back-end service logs that it
sent a response, but
 1) tcpdump running on the haproxy instance never sees the response !
 a) 2 proxies - an AWS ELB and on-instance nginx - lie between the
HAProxy instance and the service
 2) sans any response (and within 0.2 to 13 seconds of the request send),
HAProxy initiates the PH/500 to the client!

It would make sense to me if any timeouts or disconnects were involved -
HAProxy would report an [sS][DH] or somesuch.

And reverting the sending of the "content-security-policy: frame-ancestors
..." and "x-frame-options: ..." response(!) headers makes the problem
disappear again.  You'll rightly point out that HTTP/1.1 is stateless, and
that the prior history of the request/response stream (and response headers
sent to the client) shouldn't affect the (non-)response to a given request.

Any clues as to how/why the PH/500 might be generated without a response to
trigger it would be most eagerly received.  While it is entirely likely
this will wind up being a "nut loose on the keyboard" issue, I just thought
I'd share my observations and befuddlement ...

https://www.mail-archive.com/haproxy@formilux.org/msg41308.html

"This computer stuff is hard ..."

On Tue, Oct 19, 2021 at 3:24 AM Christopher Faulet 
wrote:

> Le 10/13/21 à 8:30 PM, Jim Freeman a écrit :
> > In adding a couple of new security response headers via haproxy.cfg (one
> is 112
> > bytes, the other 32), a few requests are now getting 500 status (PH
> session
> > state) responses, but "show errors" has 0 entries?  Most responses
> succeed (all
> > have the additional headers), so it's not a problem with the new headers
> themselves.
> >
> > If haproxy generates a PH/500, shouldn't "show errors" show details of
> the
> > offending response ?
> >
> > Thanks,
> > ...jfree
> > ==
> > # echo "show info" | socat stdio /run/haproxy/stats.sock | grep ^Version:
> > Version: 2.2.8-1~bpo10+1
> >
> > #  echo "show errors -1" | socat - /run/haproxy/stats.sock
> > Total events captured on [13/Oct/2021:18:24:15.819] : 0
> >
> > # cat /etc/debian_version
> > 10.11
>
> Hi,
>
> Only parsing errors are reported by "show errors" command. Here PH/500
> error is
> most probably due to a header rewrite error. I have not deeply checked
> however.
> You can verify my assumption by checking the "wrew" counter in "show
> stats"
> command output on the stats socket.
>
> Header rewrite errors are triggered when there is not enough space in the
> buffer
> to perform the rewrites. By default, 1024 Bytes are reserved in the
> buffer, to
> be sure to have enough space to perform some rewrites. If you add many
> headers
> in the response, it may be the problem. You can increase the reserve by
> setting
> "tune.maxrewrite" global parameter.
>
> When such error is encountered, HAProxy returns a 500-Internal-Error
> response.
> You can change that to make it fail silently. To do so, take a look at
> the
> "strict-mode" http-response action.
>
> --
> Christopher Faulet
>
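A sketch combining Christopher's two suggestions - the section names,
backend, and values are illustrative only:

```
global
    # enlarge the rewrite reserve (default 1024 bytes) if responses
    # carry many/large added headers
    tune.maxrewrite 2048

backend be_app
    # make header-rewrite failures silent instead of returning a 500
    http-response strict-mode off
    http-response set-header x-frame-options SAMEORIGIN
```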


Re: PH disconnects, but "show errors" has 0 entries ?

2021-10-19 Thread Jim Freeman
Many thanks for your insight and response - I'll check that out.

On Tue, Oct 19, 2021 at 3:24 AM Christopher Faulet 
wrote:

> Le 10/13/21 à 8:30 PM, Jim Freeman a écrit :
> > In adding a couple of new security response headers via haproxy.cfg (one
> is 112
> > bytes, the other 32), a few requests are now getting 500 status (PH
> session
> > state) responses, but "show errors" has 0 entries?  Most responses
> succeed (all
> > have the additional headers), so it's not a problem with the new headers
> themselves.
> >
> > If haproxy generates a PH/500, shouldn't "show errors" show details of
> the
> > offending response ?
> >
> > Thanks,
> > ...jfree
> > ==
> > # echo "show info" | socat stdio /run/haproxy/stats.sock | grep ^Version:
> > Version: 2.2.8-1~bpo10+1
> >
> > #  echo "show errors -1" | socat - /run/haproxy/stats.sock
> > Total events captured on [13/Oct/2021:18:24:15.819] : 0
> >
> > # cat /etc/debian_version
> > 10.11
>
> Hi,
>
> Only parsing errors are reported by "show errors" command. Here PH/500
> error is
> most probably due to a header rewrite error. I have not deeply checked
> however.
> You can verify my assumption by checking the "wrew" counter in "show
> stats"
> command output on the stats socket.
>
> Header rewrite errors are triggered when there is not enough space in the
> buffer
> to perform the rewrites. By default, 1024 Bytes are reserved in the
> buffer, to
> be sure to have enough space to perform some rewrites. If you add many
> headers
> in the response, it may be the problem. You can increase the reserve by
> setting
> "tune.maxrewrite" global parameter.
>
> When such error is encountered, HAProxy returns a 500-Internal-Error
> response.
> You can change that to make it fail silently. To do so, take a look at
> the
> "strict-mode" http-response action.
>
> --
> Christopher Faulet
>


Re: PH disconnects, but "show errors" has 0 entries ?

2021-10-18 Thread Jim Freeman
Nope - never mind.  Plenty of successful traffic with the sec-ch-ua*
headers.

I'll keep poking re: PH/500 w/o "show errors", and confess here when/how I
find it is the result of being ignernt.

On Mon, Oct 18, 2021 at 11:41 AM Jim Freeman  wrote:

> Ran tcpdump on the proxy in search of useful detail.
> Saw these unfamiliar (to me) headers on the PH/500 'd request :
>
>  sec-ch-ua: " Not A;Brand";v="99", "Chromium";v="90"
>  sec-ch-ua-mobile: ?0
>
> Googled, found : https://www.chromium.org/updates/ua-ch, was a tad
> FUD'd by
> ===
> Possible Site Compatibility Issue
> UA-CH is an additive feature, which adds two new request headers that are
> sent by default: `sec-ch-ua` and `sec-ch-ua-mobile`. Those request headers
> are based off of Structured Field Values, an emerging standard related to
> HTTP header values. They contain characters that, though permitted in the
> HTTP specification, weren’t previously common in request headers, such as
> double-quotes (“), equal signs (=), forward-slashes (/), and question marks
> (?). Some Web-Application-Firewall (WAF) software, as well as backend
> security measures, may mis-categorize those new characters as “suspicious”,
> and as such, block those requests.
> ===
>
> HAProxy tends to be up on all such things, but any chance the PH/500 could
> be related ?
>
> Thanks,
> ...jfree
>

 https://www.mail-archive.com/haproxy@formilux.org/msg41272.html
Added headers :

content-security-policy: frame-ancestors 'self' https://*.primarydomain.org
https://*.related.domain.org;

x-frame-options: SAMEORIGIN


Re: PH disconnects, but "show errors" has 0 entries ?

2021-10-18 Thread Jim Freeman
Ran tcpdump on the proxy in search of useful detail.
Saw these unfamiliar (to me) headers on the PH/500 'd request :

 sec-ch-ua: " Not A;Brand";v="99", "Chromium";v="90"
 sec-ch-ua-mobile: ?0

Googled, found : https://www.chromium.org/updates/ua-ch, was a tad FUD'd by
===
Possible Site Compatibility Issue
UA-CH is an additive feature, which adds two new request headers that are
sent by default: `sec-ch-ua` and `sec-ch-ua-mobile`. Those request headers
are based off of Structured Field Values, an emerging standard related to
HTTP header values. They contain characters that, though permitted in the
HTTP specification, weren’t previously common in request headers, such as
double-quotes (“), equal signs (=), forward-slashes (/), and question marks
(?). Some Web-Application-Firewall (WAF) software, as well as backend
security measures, may mis-categorize those new characters as “suspicious”,
and as such, block those requests.
===

HAProxy tends to be up on all such things, but any chance the PH/500 could
be related ?

Thanks,
...jfree


PH disconnects, but "show errors" has 0 entries ?

2021-10-13 Thread Jim Freeman
In adding a couple of new security response headers via haproxy.cfg (one is
112 bytes, the other 32), a few requests are now getting 500 status (PH
session state) responses, but "show errors" has 0 entries?  Most responses
succeed (all have the additional headers), so it's not a problem with the
new headers themselves.

If haproxy generates a PH/500, shouldn't "show errors" show details of the
offending response ?

Thanks,
...jfree
==
# echo "show info" | socat stdio /run/haproxy/stats.sock | grep ^Version:
Version: 2.2.8-1~bpo10+1

#  echo "show errors -1" | socat - /run/haproxy/stats.sock
Total events captured on [13/Oct/2021:18:24:15.819] : 0

# cat /etc/debian_version
10.11


Re: Proxy Protocol - any browser proxy extensions that support ?

2021-06-05 Thread Jim Freeman
Thanks much for the link!
I'd seen that it had been haxx'd into curl, but your link to the patch
really pointed up how elegantly and exquisitely simple it is.
Would that it were as simply and readily available in extensions to
lesser browsers. ;-)  As always, hats off to Willy, Daniel, et al !

On Fri, Jun 4, 2021 at 4:43 PM Aleksandar Lazic  wrote:
>
> On 04.06.21 21:32, Jim Freeman wrote:
> > https://developer.chrome.com/docs/extensions/reference/proxy/
> > supports SOCKS4/SOCKS5
> >
> > Does anyone know of any in-browser VPN/proxy extensions that support
> > Willy's Proxy Protocol ?
> > https://www.haproxy.com/blog/haproxy/proxy-protocol/ enumerates some
> > of the state of support, but doesn't touch on browser VPN/proxy
> > extensions, and my due-diligence googling is coming up short ...
>
> Well not a real browser but a Swedish army knife :-)
>
> https://github.com/curl/curl/commit/6baeb6df35d24740c55239f24b5fc4ce86f375a5
>
> `haproxy-protocol`
>
> > Thanks,
> > ...jfree
> >
>
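For the record, the flag added by that curl commit can be exercised against
a PROXY-protocol-enabled listener (address and port illustrative; requires
curl 7.60+):

```
# haproxy.cfg: accept the PROXY protocol on the listener
#   frontend fe
#       bind :8080 accept-proxy
# then, from the client:
curl --haproxy-protocol http://127.0.0.1:8080/
```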



Proxy Protocol - any browser proxy extensions that support ?

2021-06-04 Thread Jim Freeman
https://developer.chrome.com/docs/extensions/reference/proxy/
supports SOCKS4/SOCKS5

Does anyone know of any in-browser VPN/proxy extensions that support
Willy's Proxy Protocol ?
https://www.haproxy.com/blog/haproxy/proxy-protocol/ enumerates some
of the state of support, but doesn't touch on browser VPN/proxy
extensions, and my due-diligence googling is coming up short ...

Thanks,
...jfree



Re: Illegal instruction - 2.2 on AMD/Sempron ?

2021-04-22 Thread Jim Freeman
stock 1.8.19 (which runs fine) doesn't also use cmpxchg16b ?
https://en.wikipedia.org/wiki/X86-64#Older_implementations
May be time for a new motherboard (or a Pi4?) ...

Dump of assembler code for function ha_random64:
   0x5566bac0 <+0>:   push   %rbx
   0x5566bac1 <+1>:   sub    $0x10,%rsp
   0x5566bac5 <+5>:   mov    0x17d494(%rip),%rsi    # 0x557e8f60
   0x5566bacc <+12>:  mov    0x17d495(%rip),%rdx    # 0x557e8f68
   0x5566bad3 <+19>:  mov    %fs:0x28,%rax
   0x5566badc <+28>:  mov    %rax,0x8(%rsp)
   0x5566bae1 <+33>:  xor    %eax,%eax
   0x5566bae3 <+35>:  mov    %rsi,%rcx
   0x5566bae6 <+38>:  mov    %rsi,%rbx
   0x5566bae9 <+41>:  xor    %rdx,%rcx
   0x5566baec <+44>:  rol    $0x18,%rbx
   0x5566baf0 <+48>:  mov    %rcx,%rax
   0x5566baf3 <+51>:  xor    %rcx,%rbx
   0x5566baf6 <+54>:  ror    $0x1b,%rcx
   0x5566bafa <+58>:  shl    $0x10,%rax
   0x5566bafe <+62>:  xor    %rax,%rbx
   0x5566bb01 <+65>:  mov    %rsi,%rax
=> 0x5566bb04 <+68>:  lock cmpxchg16b 0x17d453(%rip)    # 0x557e8f60
   0x5566bb0d <+77>:  sete   %cl
   0x5566bb10 <+80>:  test   %cl,%cl
   0x5566bb12 <+82>:  je     0x5566bb40
   0x5566bb14 <+84>:  lea    (%rsi,%rsi,4),%rax
   0x5566bb18 <+88>:  rol    $0x7,%rax
   0x5566bb1c <+92>:  mov    0x8(%rsp),%rdi
   0x5566bb21 <+97>:  xor    %fs:0x28,%rdi
   0x5566bb2a <+106>: lea    (%rax,%rax,8),%rax
   0x5566bb2e <+110>: jne    0x5566bb45
   0x5566bb30 <+112>: add    $0x10,%rsp

On Thu, Apr 22, 2021 at 8:31 AM Willy Tarreau  wrote:
>
> Hi Jim,
>
> On Wed, Apr 21, 2021 at 04:46:17AM -0600, Jim Freeman wrote:
> > Stock 1.8.19-1+deb10u3 on Debian10 runs fine, but when I install
> > 2.2.8-1~bpo10+1 from buster-backports, I get "Illegal instruction" ?
> > Is my CPU just too historic ?
>
> Possible but it makes me think that it could also be a matter of lib
> or toolchain that was built for a slightly different arch with certain
> extensions (e.g. sse etc).
>
> Since it seems to happen easily, you should try it again under gdb,
> then disassemble the code around the location:
>
>  # gdb --args ./haproxy -f ~/haproxy.cfg
>  > run
>
> Once it crashes, issue:
>
>  > bt
>
> it will report the backtrace and current function where it died,
> then:
>
>  > disassemble $rip
>
> and press Enter until you see a line with "=>" indicating the current
> location. Please post a copy of the surrounding lines here, we may
> possibly figure that we're using an instruction we ought not to use.
>
> > flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
> > pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt lm
> > 3dnowext 3dnow rep_good nopl cpuid pni lahf_lm 3dnowprefetch vmmcall
> > bugs : fxsave_leak sysret_ss_attrs null_seg swapgs_fence spectre_v1 
> > spectre_v2
>
> I'm not seeing cmpxchg16b here ("cx16"), which could be a serious
> concern, as we've never been aware of any x86_64 CPU without it and
> have been enabling it by default on x86_64 (and it cannot be enabled
> nor disabled at run time as it allows to replace certain structures
> with other ones).
>
> Willy



Illegal instruction - 2.2 on AMD/Sempron ?

2021-04-21 Thread Jim Freeman
Stock 1.8.19-1+deb10u3 on Debian10 runs fine, but when I install
2.2.8-1~bpo10+1 from buster-backports, I get "Illegal instruction" ?
Is my CPU just too historic ?

# strace -f ./haproxy -f ~/haproxy.cfg
...
openat(AT_FDCWD, "/etc/haproxy/errors/504.http", O_RDONLY) = 4
fstat(4, {st_mode=S_IFREG|0644, st_size=195, ...}) = 0
read(4, "HTTP/1.0 504 Gateway Time-out\r\nC"..., 195) = 195
close(4)= 0
read(3, "", 4096)   = 0
close(3)= 0
gettimeofday({tv_sec=1619001723, tv_usec=235475}, NULL) = 0
brk(0x55bc43e58000) = 0x55bc43e58000
brk(0x55bc43e7c000) = 0x55bc43e7c000
--- SIGILL {si_signo=SIGILL, si_code=ILL_ILLOPN, si_addr=0x55bc420f2b04} ---
+++ killed by SIGILL +++
Illegal instruction

# cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 15
model : 47
model name : AMD Sempron(tm) Processor 3200+
stepping : 2
cpu MHz : 1790.686
cache size : 256 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt lm
3dnowext 3dnow rep_good nopl cpuid pni lahf_lm 3dnowprefetch vmmcall
bugs : fxsave_leak sysret_ss_attrs null_seg swapgs_fence spectre_v1 spectre_v2
bogomips : 3581.37
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc



Re: changed IP messages overrunning /var/log ?

2021-04-16 Thread Jim Freeman
Root cause - haproxy intentionally double logs :
https://github.com/haproxy/haproxy/blob/master/src/server.c
srv_update_addr(...) { ... /* generates a log line and a warning on
stderr */ ... }

On Thu, Apr 15, 2021 at 11:06 PM Jim Freeman  wrote:
...
> The duplication of logging the new(?) 'changed its IP' messages to daemon.log
> (when only local* facilities are configured) is still getting root-cause 
> analysis.
...



Re: changed IP messages overrunning /var/log ?

2021-04-15 Thread Jim Freeman
More info on the over-quick+newly-noisy resolves that were triggering this ...

We've been running 1.8.19
(https://packages.debian.org/stretch-backports/haproxy)
with 'hold valid 60s' configured, which was acting-ish like 'timeout
resolve 60s'
(which was *not* configured).

So when we moved to current 2.0 , with the fix for /issues/345 ,
resolutions which
had been happening every 60s now happened every 1s (the default?), with each
IP change now making noise it had not made before => ergo, logs/disk filled.

Adding 'timeout resolve 60s' reduces the noise by a factor of 60.
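For reference, a hedged sketch of the relevant resolvers section - the
nameserver address is illustrative; only 'timeout resolve' and 'hold valid'
come from this thread:

```
resolvers fs_resolvers
    nameserver dns1 10.0.0.2:53   # address illustrative
    timeout resolve 60s           # re-resolve every 60s instead of the 1s default
    hold valid 60s
```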
The duplication of logging the new(?) 'changed its IP' messages to daemon.log
(when only local* facilities are configured) is still getting
root-cause analysis.
===
https://github.com/haproxy/haproxy/issues/345
https://github.com/haproxy/haproxy/commit/f50e1ac4442be41ed8b9b7372310d1d068b85b33
http://www.haproxy.org/download/1.8/src/CHANGELOG
 * 2019/11/25 : 1.8.23
  * BUG: dns: timeout resolve not applied for valid resolutions

On Thu, Apr 15, 2021 at 1:43 AM Jim Freeman  wrote:
>
> Migrating from stock stretch-backports+1.8 to Debian_10/Buster+2.0 (to
> purge 'reqrep' en route to 2.2), /var/log/ is suddenly filling disk
> with messages about changed IPs :
> ===
> 2021-04-14T01:08:40.123303+00:00 ip-10-36-217-169 haproxy[569]:
> my_web_service changed its IP from 52.208.198.117 to 34.251.174.55 by
> DNS cache.
> ===
> daemon.log and syslog (plus the usual haproxy.log) all get hammered.
>
> Many of the backends (200+) are of the flavor :
> server-template my_web_service 8 my_web_service.herokuapp.com:443 ...
> resolvers fs_resolvers resolve-prefer ipv4
>
> The herokuapp.com addresses change constantly, but this has not been a
> problem heretofore.
>
> This is puzzling, since haproxy.cfg directs all logs to local*
> After some investigation, it turns out that the daemon.log and syslog
> entries arrive via facility.level=daemon.info.  I've made rsyslog cfg
> changes that now stop the haproxy msgs from overrunning daemon.log and
> syslog (and allow only a representative fraction to hit haproxy.log).
>
> Two questions :
>  1) What is different about 2.0 that "changed its IP" entries are so
> voluminous ?
>  2) Why is daemon.info involved in the logging, when the haproxy.cfg
> settings only designate local* facilities ?
>
> Thanks for any insights (and for stupendous software) !
> 
> Running HAProxy 2.0 from :
> https://haproxy.debian.net/#?distribution=Debian&release=buster&version=2.0
>
> on stock Buster AWS AMI :
> https://wiki.debian.org/Cloud/AmazonEC2Image
> https://wiki.debian.org/Cloud/AmazonEC2Image



Re: changed IP messages overrunning /var/log ?

2021-04-15 Thread Jim Freeman
Yes - as a systemd service.  But the puzzlement remains that the same complaints
get logged via *both* daemon.info and local*.info, when local* is all
we configure
in haproxy.cfg.  /var/log gets doubly over-taxed because the daemon.info entries
wind up in both syslog and daemon.log (per stock /etc/rsyslog.conf).

We now avoid that via an entry under rsyslog.d/ :
daemon.info {
  if $programname startswith 'haproxy' then {
stop
  }
}
but I'm still curious why the entries get sent to daemon.info
(systemd's stdout/stderr ?), when local* is explicitly configured to
receive them ...

On Thu, Apr 15, 2021 at 2:02 AM Jarno Huuskonen  wrote:
>
> Hello,
>
> On Thu, 2021-04-15 at 01:43 -0600, Jim Freeman wrote:
> > This is puzzling, since haproxy.cfg directs all logs to local*
> > After some investigation, it turns out that the daemon.log and syslog
> > entries arrive via facility.level=daemon.info.  I've made rsyslog cfg
> > changes that now stop the haproxy msgs from overrunning daemon.log and
> > syslog (and allow only a representative fraction to hit haproxy.log).
> >
> > Two questions :
> >  1) What is different about 2.0 that "changed its IP" entries are so
> > voluminous ?
> >  2) Why is daemon.info involved in the logging, when the haproxy.cfg
> > settings only designate local* facilities ?
>
> Are you running haproxy as a systemd service ? Those logs could be
> coming from systemd (haproxy stdout/stderr).
>
> -Jarno
>
> --
> Jarno Huuskonen



changed IP messages overrunning /var/log ?

2021-04-15 Thread Jim Freeman
Migrating from stock stretch-backports+1.8 to Debian_10/Buster+2.0 (to
purge 'reqrep' en route to 2.2), /var/log/ is suddenly filling disk
with messages about changed IPs :
===
2021-04-14T01:08:40.123303+00:00 ip-10-36-217-169 haproxy[569]:
my_web_service changed its IP from 52.208.198.117 to 34.251.174.55 by
DNS cache.
===
daemon.log and syslog (plus the usual haproxy.log) all get hammered.

Many of the backends (200+) are of the flavor :
server-template my_web_service 8 my_web_service.herokuapp.com:443 ... resolvers fs_resolvers resolve-prefer ipv4

The herokuapp.com addresses change constantly, but this has not been a
problem heretofore.

This is puzzling, since haproxy.cfg directs all logs to local*
After some investigation, it turns out that the daemon.log and syslog
entries arrive via facility.level=daemon.info.  I've made rsyslog cfg
changes that now stop the haproxy msgs from overrunning daemon.log and
syslog (and allow only a representative fraction to hit haproxy.log).

Two questions :
 1) What is different about 2.0 that "changed its IP" entries are so
voluminous ?
 2) Why is daemon.info involved in the logging, when the haproxy.cfg
settings only designate local* facilities ?

Thanks for any insights (and for stupendous software) !

Running HAProxy 2.0 from :
https://haproxy.debian.net/#?distribution=Debian&release=buster&version=2.0

on stock Buster AWS AMI :
https://wiki.debian.org/Cloud/AmazonEC2Image



Re: ELB scaling => sudden backend tragedy

2019-10-30 Thread Jim Freeman
I had come to think that haproxy was not parsing a Truncate-flagged DNS
response that had usable entries in it.

After further investigation, tcpdump made clear that the truncated DNS
response enumerated *no* 'A' records, expecting the client to switch to TCP
for the query.
So we'll be looking at a safe non-default for accepted_payload_size to
address this issue in future.
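
For the archives, the shape of the change we're testing (the nameserver
address is a placeholder; 8192 is the maximum haproxy accepts):

```
resolvers fs_resolvers
    nameserver ns1 10.0.0.2:53
    # allow EDNS0 responses larger than the 512-byte default, so that
    # large A-record sets don't come back with the Truncate flag set
    accepted_payload_size 8192
```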

Thanks to all,
...jfree

On Thu, Oct 24, 2019 at 2:29 PM Jim Freeman  wrote:

> https://github.com/haproxy/haproxy/issues/341
>
> On Thu, Oct 24, 2019 at 11:44 AM Lukas Tribus  wrote:
>
>> Hello,
>>
>> On Thu, Oct 24, 2019 at 5:53 PM Jim Freeman  wrote:
>> >
>> > Yesterday we had an ELB scale to 26 IP addresses, at which time ALL of
>> the servers in that backend were suddenly marked down, e.g. :
>> >
>> >Server www26 is going DOWN for maintenance (unspecified DNS error)
>> >
>> > Ergo, ALL requests to that backend got 503s ==> complete outage
>> >
>> > Mayhap src/dns.c::dns_validate_dns_response() bravely running away when
>> DNS_RESP_TRUNCATED (skipping parsing of the partial list of servers,
>> abandoning TTL updates to perfectly good endpoints) is not the best course
>> of action ?
>> >
>> > Of course we'll hope (MTUs allowing) that we'll be able to paper this
>> over for awhile using an accepted_payload_size >default(512).
>>
>> I agree this is basically a ticking time-bomb for everyone not
>> thinking about the DNS payload size every single day.
>>
>> However we also need to make sure people will become aware of it when
>> they are hitting truncation size. This would have to be at least a
>> warning on critical syslog level.
>>
>>
>> Reliable DNS resolution for everyone without surprises will only
>> happen with TCP based DNS:
>> https://github.com/haproxy/haproxy/issues/185
>>
>> For the issue in question on the other hand: can you file a bug on github?
>>
>>
>>
>> Thanks,
>>
>> Lukas
>>
>


Re: ELB scaling => sudden backend tragedy

2019-10-24 Thread Jim Freeman
https://github.com/haproxy/haproxy/issues/341

On Thu, Oct 24, 2019 at 11:44 AM Lukas Tribus  wrote:

> Hello,
>
> On Thu, Oct 24, 2019 at 5:53 PM Jim Freeman  wrote:
> >
> > Yesterday we had an ELB scale to 26 IP addresses, at which time ALL of
> the servers in that backend were suddenly marked down, e.g. :
> >
> >Server www26 is going DOWN for maintenance (unspecified DNS error)
> >
> > Ergo, ALL requests to that backend got 503s ==> complete outage
> >
> > Mayhap src/dns.c::dns_validate_dns_response() bravely running away when
> DNS_RESP_TRUNCATED (skipping parsing of the partial list of servers,
> abandoning TTL updates to perfectly good endpoints) is not the best course
> of action ?
> >
> > Of course we'll hope (MTUs allowing) that we'll be able to paper this
> over for awhile using an accepted_payload_size >default(512).
>
> I agree this is basically a ticking time-bomb for everyone not
> thinking about the DNS payload size every single day.
>
> However we also need to make sure people will become aware of it when
> they are hitting truncation size. This would have to be at least a
> warning on critical syslog level.
>
>
> Reliable DNS resolution for everyone without surprises will only
> happen with TCP based DNS:
> https://github.com/haproxy/haproxy/issues/185
>
> For the issue in question on the other hand: can you file a bug on github?
>
>
>
> Thanks,
>
> Lukas
>


ELB scaling => sudden backend tragedy

2019-10-24 Thread Jim Freeman
Yesterday we had an ELB scale to 26 IP addresses, at which time ALL of the
servers in that backend were suddenly marked down, e.g. :

   Server www26 is going DOWN for maintenance (unspecified DNS error)

Ergo, ALL requests to that backend got 503s ==> complete outage

Mayhap src/dns.c::dns_validate_dns_response() bravely running away when
DNS_RESP_TRUNCATED (skipping parsing of the partial list of servers,
abandoning TTL updates to perfectly good endpoints) is not the best course
of action ?

Of course we'll hope (MTUs allowing) that we'll be able to paper this over
for awhile using an accepted_payload_size >default(512).

But as-is, this looks to be an avoidable pathology?

Thoughts?

Yours, endlessly impressed with haproxy,
...jfree

https://packages.debian.org/stretch-backports/haproxy
1.8.19-1~bpo9+1


Re: 'sni' parameter - reasonable default/implicit setting ?

2019-07-27 Thread Jim Freeman
That looks right on - thanks for the pointer !

I couldn't tell from the brief gander I took - works the same for
'server-template' as for 'server' ?

On Sat, Jul 27, 2019 at 2:53 AM Aleksandar Lazic  wrote:

> Hi.
>
> Am 27.07.2019 um 00:24 schrieb Jim Freeman:
> > For outgoing TLS connections, might haproxy be taught to use a reasonable
> > default/implicit value 'sni' [1] expression/behavior that would 'first
> do no
> > harm'[2], and usually be correct, in the absence of an explicit
> expression ?
> > (Understood that haproxy depends on an SSL lib)
> >
> > E.g.; req.hdr(host) if it is set, else server(-template)  (if
> it is
> > cfg'd as name, not IP), else ssl_fc_sni for bridged HTTPS, else ... ?
> >
> > If SNI [3] is used vs. an endpoint that doesn't require/utilize it, is
> it always
> > innocuous ?
> >
> > Are increasing demands by service providers that clients (e.g.; haproxy
> vs. an
> > SSL endpoint) send SNI inevitable?  Or is some alternative pending?
>
> I think this is similar Ideas as the vhost patch intend to solve.
>
> https://www.mail-archive.com/haproxy@formilux.org/msg34532.html
>
> I think the patch should be adopted for `mode tcp` also, jm2c.
>
> > Just wondering,
> > ...jfree
>
> Best Regards
> Aleks
>
> > [1] http://cbonte.github.io/haproxy-dconv/1.9/configuration.html#sni
> > [2] https://en.wikipedia.org/wiki/Primum_non_nocere
> >  https://en.wikipedia.org/wiki/Robustness_principle
> > [3] https://en.wikipedia.org/wiki/Server_Name_Indication
> >
> >
>
>


'sni' parameter - reasonable default/implicit setting ?

2019-07-26 Thread Jim Freeman
For outgoing TLS connections, might haproxy be taught to use a reasonable
default/implicit value 'sni' [1] expression/behavior that would 'first do
no harm'[2], and usually be correct, in the absence of an explicit
expression ?  (Understood that haproxy depends on an SSL lib)

E.g.; req.hdr(host) if it is set, else server(-template)  (if it
is  cfg'd as name, not IP), else ssl_fc_sni for bridged HTTPS, else ... ?

If SNI [3] is used vs. an endpoint that doesn't require/utilize it, is it
always innocuous ?

Are increasing demands by service providers that clients (e.g.; haproxy vs.
an SSL endpoint) send SNI inevitable?  Or is some alternative pending?
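
To make the ask concrete: today each server line needs an explicit expression,
as in the sketch below (server name and CA file are placeholders), and the
question is whether something like it could become the implicit default:

```
backend be_tls
    # forward the client's Host header as SNI ...
    server app1 app.example.com:443 ssl verify required ca-file /etc/ssl/ca.pem sni req.hdr(host)
    # ... or, for bridged HTTPS, re-use the SNI the client presented to us:
    # server app1 app.example.com:443 ssl verify required ca-file /etc/ssl/ca.pem sni ssl_fc_sni
```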

Just wondering,
...jfree

[1] http://cbonte.github.io/haproxy-dconv/1.9/configuration.html#sni
[2] https://en.wikipedia.org/wiki/Primum_non_nocere
 https://en.wikipedia.org/wiki/Robustness_principle
[3] https://en.wikipedia.org/wiki/Server_Name_Indication


Re: redirect vs. logging Location hdr

2018-06-27 Thread Jim Freeman
Much thanks for the insights, clarifications, and solution possibilities.

Will chew on this for a bit ...
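
For the list archives, my reading of the variable-based suggestion condenses
to roughly this (paths and conditions are placeholders; untested):

```
# remember the redirect target in a transaction variable,
# redirect from it, then expose it in the log line
http-request set-var(txn.location) string(/new/path) if { path_beg /old } ! { var(txn.location) -m found }
http-request redirect location %[var(txn.location)] if { var(txn.location) -m found }
log-format "%ci [%tr] %ft %b/%s %ST %B location=%[var(txn.location)]"
```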

On Wed, Jun 27, 2018 at 10:40 PM, Willy Tarreau  wrote:

> On Wed, Jun 27, 2018 at 04:08:27PM -0600, Jim Freeman wrote:
> > With a configuration having many 'redirect's (and wanting to know which
> > 'Location' a given request was redirected to), I configured
> >
> >   capture response header Location len 64
> >   log-format %hs
> >
> > , but see no log entries for redirects I know haproxy is generating.
>
> captures work for input data, not generated data.
>
> > With further testing, I know that Location responses from downstream are
> > being logged - just not those generated on-host via 'redirect'.
>
> exactly.
>
> > I've scoured the docs for insight/reasoning re: this behavior, but can't
> > find anything.
> > Assuming this is as-designed, I'd appreciate any further illumination ...
> > Is there a way to log  Location from haproxy redirects ?
>
> There could be a solution I guess, involving a variable, though it would
> require you to either double the number of redirect rules, or add a
> condition to each rule. The principle would be the following :
>
> http-request set-var(txn.location) string(/path/to/location) if ! { var(txn.location) -m found } { your condition here }
> ... many other such rules ...
> http-request redirect location %[var(txn.location)] if { var(txn.location) -m found }
>
> Then you can add a field for var(txn.location) in your log-format.
>
> The only thing is that it only supports sample expressions so you don't
> have
> the flexibility you currently have with log-format in the redirect header.
> You should be able to achieve the same by using the concat() converter in
> the expression but I agree that it's less readable.
>
> Note that there is a "set-var()" converter so if your redirect expressions
> are simple, you could as well set the variable on the fly. It will just not
> take the result of the log-format. Example :
>
> http-request redirect location %[string("/path/to/location"),set-var(txn.location)] if { your condition here }
>
> It should be able to always replace my other example above and be more
> efficient. But the example below won't do what you expect :
>
> http-request redirect location /path/to/location/?src=%[src,set-var(txn.location)] if { your condition here }
>
> as it would only capture the output of the "src" fetch function instead of
> the whole line. Maybe it could be useful to support an extra "redirect"
> option to duplicate the output into a variable. I don't think it would
> be very complicated to have this :
>
> http-request redirect location /path/to/foo code 302 set-var(txn.location)
>
> But I could be wrong as I remember that variable allocation at config
> parsing
> time is tricky.
>
> Regards,
> Willy
>


redirect vs. logging Location hdr

2018-06-27 Thread Jim Freeman
With a configuration having many 'redirect's (and wanting to know which
'Location' a given request was redirected to), I configured

  capture response header Location len 64
  log-format %hs

, but see no log entries for redirects I know haproxy is generating.

With further testing, I know that Location responses from downstream are
being logged - just not those generated on-host via 'redirect'.

I've scoured the docs for insight/reasoning re: this behavior, but can't
find anything.
Assuming this is as-designed, I'd appreciate any further illumination ...
Is there a way to log  Location from haproxy redirects ?

Thanks!
...jfree


Re: [PATCH][MINOR] config: Implement 'parse-resolv-conf' directive for resolvers

2018-05-24 Thread Jim Freeman
Would that I could gift you time away from lesser things (fix the
plumbing?  make breakfast?) from across the ocean ...

I do have some small sense of how ...
overwhelming/consuming/pressing/stressful/... driving a project the size
and stature (and awesome capability) of haproxy would be.

Huge thanks and kudos to the whole crew !!


On Thu, May 24, 2018 at 9:02 AM, Ben Draut  wrote:

> Willy, I think you've reviewed this one already. :) I fixed a few
> things after your review, then you said you just wanted to wait
> for Baptiste to ACK back on 4/27.
>
> I pinged Baptiste independently, just to make sure he had
> seen your note. He replied, but he's been busy too. (Sorry
> to add to the pile!) My understanding was that we're just
> waiting for him.
>
> Thanks,
>
> Ben
>
> On Thu, May 24, 2018 at 8:58 AM, Willy Tarreau  wrote:
>
>> Hi Jim,
>>
>> On Thu, May 24, 2018 at 08:50:29AM -0600, Jim Freeman wrote:
>> > I'm not seeing any signs of this feature sliding into 1.9 source - any
>> > danger of it not going in to the current dev branch?
>> > Are there further concerns/problems/... standing in the way ?  (it
>> > addresses one of my few haproxy gripes)
>>
>> Sorry but it's my fault. I'm totally overwhelmed at the moment with
>> tons of e-mails that take time to process and that I can't cope with
>> anymore. I already have in my todo list to review Ben's patch and
>> Patrick's patches and I cannot find any single hour to do this. I'm
>> spending some time finishing slides, which are totally incompatible
>> with code review, I'll get back to this ASAP.
>>
>> At least it's not lost at all, and indeed it's not yet in 1.9 but
>> I don't see any reason why this wouldn't go there.
>>
>> Thanks,
>> Willy
>>
>
>


Re: [PATCH][MINOR] config: Implement 'parse-resolv-conf' directive for resolvers

2018-05-24 Thread Jim Freeman
I'm not seeing any signs of this feature sliding into 1.9 source - any
danger of it not going in to the current dev branch?
Are there further concerns/problems/... standing in the way ?  (it
addresses one of my few haproxy gripes)

...jfree
[ grateful/impressed haproxy user - thanks to all involved ]
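
For reference, usage of the proposed directive would look something like this
(section name and timer values are illustrative):

```
resolvers sysdns
    # pull the nameserver entries from /etc/resolv.conf at startup
    parse-resolv-conf
    resolve_retries 3
    timeout resolve 1s
    timeout retry   1s
```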

On Fri, Apr 27, 2018 at 10:59 PM, Willy Tarreau  wrote:

> On Fri, Apr 27, 2018 at 08:58:52PM -0600, Ben Draut wrote:
> > > >   newnameserver->addr = *sk;
> > > >   }
> > > > + else if (strcmp(args[0], "parse-resolv-conf") == 0) {
> > >
> > > I think you should register a config keyword and parse this in its own
> > > function if at all possible, but I don't know if resolvers can use
> > > registered config keywords, so if it's not possible, please ignore this
> > > comment.
> > >
> >
> > The resolvers section isn't registering any config keywords at the
> moment,
> > so I'm going to leave it the way it is to be consistent.
>
> OK.
>
> > > > + free(sk);
> > > > + free(resolv_line);
> > > > + if (fclose(f) != 0) {
> > > > + ha_warning("parsing [%s:%d] : failed to close
> > > handle to /etc/resolv.conf.\n",
> > > > +file, linenum);
> > > > + err_code |= ERR_WARN;
> > > > + }
> > >
> > > In practice you don't need to run this check on a read-only file, as it
> > > cannot fail, and if it really did, the user couldn't do anything about
> > > it anyway.
> > >
> >
> > Great, removed.
> >
> > I also fixed the memory leaks that you pointed out. (I think) But I did
> > notice that
> > valgrind reports that the 'newnameserver' allocation is being leaked
> > anyway, both
> > when using parse-resolv-conf as well as the regular nameserver
> > directive...Let
> > me know if I should do something about that. To me it seems the resolvers
> > code
> > should be freeing that.
>
> It means there's nothing in the deinit() function to take care of the
> nameservers. It would be better to do it just to avoid the warnings
> you're seeing. Do not hesitate to propose a patch for this if you want,
> and please mark it for backporting.
>
> > +resolv_out:
> > + if (sk != NULL)
> > + free(sk);
>
> Here you don't need the test because free(NULL) is a NOP.
>
> > + if (resolv_line != NULL)
> > + free(resolv_line);
>
> Same here.
>
> If you want I can take care of them when merging. Let's wait for Baptiste's
> ACK now.
>
> Thanks,
> Willy
>
>


Re: Haproxy 1.8 with OpenSSL 1.1.1-pre4 stops working after 1 hour

2018-05-23 Thread Jim Freeman
Or kludge around it with eg; http://www.issihosts.com/haveged/ ?

On Wed, May 23, 2018 at 1:48 PM, Lukas Tribus  wrote:

> Hello,
>
>
> On 23 May 2018 at 18:29, Emeric Brun  wrote:
> > This issue was due to openssl-1.1.1 which re-seed after an elapsed time
> or number of request.
> >
> > If /dev/urandom is used as seeding source when haproxy is chrooted it
> fails to re-open /dev/urandom 
> >
> > By defaut the openssl-1.1.1 configure script uses the syscall getrandom
> as seeding source and fallback on /dev/urandom if not available.
> >
> > So you don't face the issue if your openssl-1.1.1 is compiled to use
> getrandom
> >
> > But getrandom syscall is available only since kernel > 3.17 and the main
> point: for glibc > 2.25.
> >
> > With openssl-1.1.1 you can check this this way:
> > # ./openssl-1.1.1/openssl version -r
> > Seeding source: getrandom-syscall
>
> I have glibc 2.23 (Ubuntu 16.04) and openssl shows "os-specific", even
> if kernel headers are installed while compiling, yet -pre6 does not
> hang for me in chroot (-pre4 did):
>
> lukas@dev:~/libsslbuild/bin$ uname -r
> 4.4.0-109-generic
> lukas@dev:~/libsslbuild/bin$ ./openssl version
> OpenSSL 1.1.1-pre6 (beta) 1 May 2018
> lukas@dev:~/libsslbuild/bin$ ./openssl version -r
> Seeding source: os-specific
> lukas@dev:~/libsslbuild/bin$
>
>
> But, stracing haproxy shows that the library IS ACTUALLY using
> getrandom(). So the "Seeding source" output of the executable is
> wrong. Gonna dig into this as well, but seeing how my haproxy
> executable uses getrandom() calls, this perfectly explains why I did
> not see this in -pre6 (which has the build-workaround for < libc 2.25,
> while pre4 did not, so it did not use the getrandom() call).
>
>
> @Sander it looks like openssl folks won't change their mind about
> this. You have to either upgrade to a kernel more recent than 3.17 so
> that getrandom() can be used, or make /dev/urandom available within
> your chroot.
>
>
>
> Lukas
>
>


Re: DNS resolver and mixed case responses

2018-04-12 Thread Jim Freeman
It will be important to know which behavior AWS's Route53/DNS servers use ?

Using stock Debian/Stretch BIND9 (1:9.10.3.dfsg.P4-12.3+deb9u4), we see
haproxy downing backend servers with
"Server is going DOWN for maintenance (unspecified DNS error)."
https://github.com/haproxy/haproxy/search?q=unspecified+dns+error

We're expecting/testing to see if bind9's "no-case-compress { any; }"
directive
addresses this, but many folks do not control their DNS services (and as
requisite
AWS/Route53 capabilities mature, neither will we).
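
In case it helps others testing the same workaround, the knob sits in
named.conf's options block, roughly like this (a sketch; check the BIND ARM
for your version before deploying):

```
options {
    // answer with the same case as the question name, rather than
    // re-using the (possibly differently-cased) cached form
    no-case-compress { any; };
};
```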


On Tue, Apr 10, 2018 at 3:11 PM, Ben Draut  wrote:

> It's interesting that the default behavior of HAProxy resolvers can
> conflict with the default behavior of bind. (If you're unlucky with
> whatever bind has cached)
>
> By default, bind uses case-insensitive compression, which can cause it to
> use a different case in the ANSWER than in the QUESTION. (See
> 'no-case-compress': https://ftp.isc.org/isc/bind9/cur/9.9/doc/arm/Bv9ARM.ch06.html)
> We were impacted by this recently.
>
> Also interesting: https://indico.dns-oarc.net/event/20/session/2/contribution/12/material/slides/0.pdf
>
>
> On Mon, Apr 9, 2018 at 2:12 AM, Baptiste  wrote:
>
>> So, it seems that responses that does not match the case should be
>> dropped:
>> https://twitter.com/PowerDNS_Bert/status/983254222694240257
>>
>> Baptiste
>>
>
>


Re: 1.8 resolvers - start vs. run

2018-01-08 Thread Jim Freeman
Your proposal aligns with what I was thinking over the weekend.

I'll try to be clean/small enough to tempt a back-port to 1.8 :-)

On Mon, Jan 8, 2018 at 1:17 PM, Baptiste  wrote:

> Hi Jim,
>
> I very welcome this feature. Actually, I wanted to add it myself for some
> time now.
> I currently work it around using init script, whenever I want to use name
> servers provided by resolv.conf.
>
> I propose the following: if no nameserver directives are found in the
> resolvers section, then we fallback to resolv.conf parsing.
>
> If you fill comfortable enough, please send me / the ml a patch and I can
> review it.
> If you have any questions on the design, don't hesitate to ask.
>
> Baptiste
>
>
> On Mon, Jan 8, 2018 at 1:56 PM, Jim Freeman  wrote:
>
>> No new libs needed.
>>
>> libc/libresolv 's res_ninit() suffices ...
>>
>> http://man7.org/linux/man-pages/man3/resolver.3.html
>>
>> On Fri, Dec 29, 2017 at 2:26 PM, Lukas Tribus  wrote:
>>
>>> Hi Jim,
>>>
>>>
>>> On Fri, Dec 29, 2017 at 10:14 PM, Jim Freeman  wrote:
>>> > Looks like libresolv 's res_ninit() parses out /etc/resolv.conf 's
>>> > nameservers [resolv.h], so haproxy won't have to parse it either ...
>>> >
>>> > Will keep poking.
>>>
>>> Do give it some time to discuss the implementation here first though,
>>> before you invest a lot of time in a specific direction (especially if
>>> you link to new libraries).
>>>
> CC'ing Baptiste and Willy.
>>>
>>>
>>>
>>> cheers,
>>> lukas
>>>
>>
>>
>


Re: 1.8 resolvers - start vs. run

2018-01-08 Thread Jim Freeman
No new libs needed.

libc/libresolv 's res_ninit() suffices ...

http://man7.org/linux/man-pages/man3/resolver.3.html

On Fri, Dec 29, 2017 at 2:26 PM, Lukas Tribus  wrote:

> Hi Jim,
>
>
> On Fri, Dec 29, 2017 at 10:14 PM, Jim Freeman  wrote:
> > Looks like libresolv 's res_ninit() parses out /etc/resolv.conf 's
> > nameservers [resolv.h], so haproxy won't have to parse it either ...
> >
> > Will keep poking.
>
> Do give it some time to discuss the implementation here first though,
> before you invest a lot of time in a specific direction (especially if
> you link to new libraries).
>
> CC'ing Baptiste and Willy.
>
>
>
> cheers,
> lukas
>


Re: 1.8 resolvers - start vs. run

2017-12-29 Thread Jim Freeman
I'm not proposing use of /etc/resolv.conf *instead* of haproxy's other
configs, only as *a* (default) config [the same default that is good enough
for haproxy to use at start-time].

So if that config suffices (as I suspect it usually does), config is
simplified.

Attached is a trivial program that prints IPv4 nameservers listed in
/etc/resolv.conf (with libresolv doing the parsing).


On Fri, Dec 29, 2017 at 2:56 PM, Andrew Smalley 
wrote:

> Hello Jim.
>
> I've seen the thread and that you're "befuddled" a little about the use of
> DNS.,
>
> Think of it this way, with the resolvers in HAProxy you can resolve
> the real server names of real server pool, this may be very dynamic in
> nature and separate to /etc/resolv.conf
>
> Now imagine a farm of Haproxy servers with different resolves
> configured internally, but you want the Haproxy instance to have
> public DNS resolved while there may be many split horizon dns
> available and maybe not public. Haproxy then ensures it uses the DNS
> servers you want it to and not the system resolver
>
> Personally and this is just an opinion I think the Haproxy resolver is
> and should be separate to /etc/resolv.conf
>
>
> Andruw Smalley
>
> Loadbalancer.org Ltd.
>
> www.loadbalancer.org
> +1 888 867 9504 / +44 (0)330 380 1064
> asmal...@loadbalancer.org
>
> Leave a Review | Deployment Guides | Blog
>
>
> On 29 December 2017 at 21:26, Lukas Tribus  wrote:
> > Hi Jim,
> >
> >
> > On Fri, Dec 29, 2017 at 10:14 PM, Jim Freeman  wrote:
> >> Looks like libresolv 's res_ninit() parses out /etc/resolv.conf 's
> >> nameservers [resolv.h], so haproxy won't have to parse it either ...
> >>
> >> Will keep poking.
> >
> > Do give it some time to discuss the implementation here first though,
> > before you invest a lot of time in a specific direction (especially if
> > you link to new libraries).
> >
> > CC'ing Baptiste and Willy.
> >
> >
> >
> > cheers,
> > lukas
> >
>
#include <stdio.h>
#include <stdlib.h>
#include <resolv.h>
#include <arpa/inet.h>

int main(int argc, char** argv)
{
res_state res = malloc(sizeof(struct __res_state));
res_ninit(res);

int i;
char buf[INET_ADDRSTRLEN];
for (i = 0; i < res->nscount; i++) {
fprintf(stderr, "ns%d: %s\n", i+1,
inet_ntop(AF_INET, &res->nsaddr_list[i].sin_addr, buf, sizeof buf)
);
}
}


Re: 1.8 resolvers - start vs. run

2017-12-29 Thread Jim Freeman
Looks like libresolv 's res_ninit() parses out /etc/resolv.conf 's
nameservers [resolv.h], so haproxy won't have to parse it either ...

Will keep poking.



On Fri, Dec 29, 2017 at 12:59 PM, Lukas Tribus  wrote:

> Hello,
>
>
> On Fri, Dec 29, 2017 at 7:00 PM, Jim Freeman  wrote:
> > I'm a bit befuddled by the different nameserver config 'twixt these 2
> modes?
> > [ Methinks I grok the need for an internal non-libc/libresolv resolver ]
> >
> > Why isn't the /etc/resolv.conf start-time config used (or at least
> > available) as a default run-time config (chroot notwithstanding)?
> > Under what circumstances do nameservers/settings need to be different in
> > start vs. run modes?
>
> Haproxy never reads in /etc/resolv.conf; libc does it for us (for libc
> based resolution).
>
>
>
> > I'd expect that for most installations, the run-time config could/should
> be
> > the same as the start-time config ?  Having to create a run-time config
> that
> > will just be the same as the start-time gets in the way of automating of
> > config across different environments ...
>
> I can see it wouldn't scale if you have a large number of different
> nameserver sets. I guess that is not usually a problem and people have
> the same name servers sets or at least provisioning groups using the
> same nameserver sets, so automation can handle it in a scalable way.
> Or they automate it away in other ways, like with placeholders in
> haproxy.cfg and scripts that replace the placeholders locally.
>
> I can certainly see how this would simplify things, but writing a
> /etc/resolv.conf parser in userspace is something that I would
> consider a specific feature for which someone has to write actual code
> for it.
>
>
> nginx does not parse resolv.conf either, btw:
> http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
>
>
>
> Lukas
>


Re: 1.8 resolvers - start vs. run

2017-12-29 Thread Jim Freeman
Great feedback - thanks !

I'll take a look at the code ...

On Fri, Dec 29, 2017 at 12:59 PM, Lukas Tribus  wrote:

> Hello,
>
>
> On Fri, Dec 29, 2017 at 7:00 PM, Jim Freeman  wrote:
> > I'm a bit befuddled by the different nameserver config 'twixt these 2
> modes?
> > [ Methinks I grok the need for an internal non-libc/libresolv resolver ]
> >
> > Why isn't the /etc/resolv.conf start-time config used (or at least
> > available) as a default run-time config (chroot notwithstanding)?
> > Under what circumstances do nameservers/settings need to be different in
> > start vs. run modes?
>
> Haproxy never reads in /etc/resolv.conf; libc does it for us (for libc
> based resolution).
>
>
>
> > I'd expect that for most installations, the run-time config could/should
> be
> > the same as the start-time config ?  Having to create a run-time config
> that
> > will just be the same as the start-time gets in the way of automating of
> > config across different environments ...
>
> I can see it wouldn't scale if you have a large number of different
> nameserver sets. I guess that is not usually a problem and people have
> the same name servers sets or at least provisioning groups using the
> same nameserver sets, so automation can handle it in a scalable way.
> Or they automate it away in other ways, like with placeholders in
> haproxy.cfg and scripts that replace the placeholders locally.
>
> I can certainly see how this would simplify things, but writing a
> /etc/resolv.conf parser in userspace is something that I would
> consider a specific feature for which someone has to write actual code
> for it.
>
>
> nginx does not parse resolv.conf either, btw:
> http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
>
>
>
> Lukas
>


1.8 resolvers - start vs. run

2017-12-29 Thread Jim Freeman
I'm a bit befuddled by the different nameserver config 'twixt these 2 modes?
[ Methinks I grok the need for an internal non-libc/libresolv resolver ]

Why isn't the /etc/resolv.conf start-time config used (or at least
available) as a default run-time config (chroot notwithstanding)?
Under what circumstances do nameservers/settings need to be different in
start vs. run modes?

I'd expect that for most installations, the run-time config could/should be
the same as the start-time config ?  Having to create a run-time config
that will just be the same as the start-time gets in the way of automating
of config across different environments ...

Or am I just not reading the docs right ?


Re: in-house vulnerability scan vs. stats socket

2017-06-19 Thread Jim Freeman
Dunno - not in my purview ...

On Mon, Jun 19, 2017 at 1:40 PM, Gibson, Brian (IMS)  wrote:
> What scanner did you use?
>
> -Original Message-
> From: Jim Freeman [sovr...@gmail.com]
> Received: Monday, 19 Jun 2017, 3:36PM
> To: HAProxy [haproxy@formilux.org]
> Subject: in-house vulnerability scan vs. stats socket
>
> FWIW / FYI -
>
> # haproxy -v
> HA-Proxy version 1.5.18 2016/05/10
>
> An in-house vulnerability scanner found our haproxy stats sockets and
> started probing, sending bogus requests, HTTP_* methods, etc.
>
> The many requests, even though the request paths were not valid at the
> stats socket, made for a DoS attack (with haproxy's CPU consumption
> often pegging at 100% generating stats pages).
>
> Since it looks like the only valid stats socket requests are GETs to
> '/' (with possible ';', '#', and '?' modifiers), we ameliorated the
> in-house DoS using these 2 lines in the cfg for the stats socket :
>
>   http-request tarpit unless { path_reg ^/($|\?|\#|\;) }
>   http-request tarpit unless METH_GET # silent-drop > 1.5
>
>


in-house vulnerability scan vs. stats socket

2017-06-19 Thread Jim Freeman
FWIW / FYI -

# haproxy -v
HA-Proxy version 1.5.18 2016/05/10

An in-house vulnerability scanner found our haproxy stats sockets and
started probing, sending bogus requests, HTTP_* methods, etc.

The many requests, even though the request paths were not valid at the
stats socket, made for a DoS attack (with haproxy's CPU consumption
often pegging at 100% generating stats pages).

Since it looks like the only valid stats socket requests are GETs to
'/' (with possible ';', '#', and '?' modifiers), we ameliorated the
in-house DoS using these 2 lines in the cfg for the stats socket :

  http-request tarpit unless { path_reg ^/($|\?|\#|\;) }
  http-request tarpit unless METH_GET # silent-drop > 1.5
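
In context, the two rules live in the stats listener, along these lines (the
bind address and stats uri are placeholders for ours):

```
listen stats
    bind 127.0.0.1:8404
    mode http
    stats enable
    stats uri /
    # reject anything that isn't a plain GET of the stats page
    http-request tarpit unless { path_reg ^/($|\?|\#|\;) }
    http-request tarpit unless METH_GET # silent-drop > 1.5
```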



Re: resolvers default nameservers ?

2017-04-18 Thread Jim Freeman
Bingo - that's exactly what I'd hope for.  The default default could
be /etc/resolv.conf's nameservers (or eg; chroot context's
equivalent),

I grok that the runtime is different than parsetime, which makes
parsetime the right time to get at the system's info as a default.
Dunno if there are any system/resolver calls that report the default
nameservers (looked a bit, didn't find) which could be leveraged to
save parsing or possible resolvconf/... complexities - what the system
"knows" (and will tell) is handiest.

My preference is to use the stock init file, and have other machinery
for auto-tweaking/configuring, so sensible inherent default cfg
mechanisms rock.

World class system software - thanks, and thanks, and thanks again !

On Tue, Apr 18, 2017 at 1:03 AM, Baptiste  wrote:
>
>
> On Fri, Apr 14, 2017 at 4:58 PM, Jim Freeman  wrote:
>>
>> The "resolvers" section doc discusses default values for all its
>> parameters except "nameservers".
>>
>> If I have a one-line "resolvers" eg;
>>
>>   "resolvers default"
>>
>> with no parameters listed, are the system (or context eg; chroot)
>> /etc/resolv.conf nameservers used?  [ this would be a boon to cfg
>> automation ]
>>
>> Thanks,
>> ...jfree
>
>
> Hi Jim,
>
> Name resolution performed at runtime is not the same as the one performed
> when parsing the configuration file.
> At runtime, HAProxy uses the IP addresses provided by the nameserver
> directives. At configuration parsing time, HAProxy uses the libc, hence
> resolv.conf.
> The runtime resolver doesn't read the resolv.conf file. As a workaround, your
> init script may be able to update the cfg file quite easily.
>
> This gave me an idea, since you speak about automation :)
> We could improve the "resolvers" section parser with a couple of new
> features:
> - parsing a 'resolv.conf' file style (you provide a path to the file) to
> read the nameserver directives only (for now)
> - using environment variables
>
> Baptiste
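
For reference, a minimal explicit resolvers section of the kind discussed above might look like the sketch below; the nameserver addresses and timings are illustrative assumptions, and later releases reportedly grew a parse-resolv-conf keyword along the lines Baptiste describes:

```
# Sketch only; addresses and timings are examples, not recommendations.
resolvers default
    nameserver dns1 10.0.0.2:53
    nameserver dns2 10.0.0.3:53
    resolve_retries 3
    timeout retry 1s
    hold valid 10s
```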



[PATCH]: CLEANUP

2017-04-15 Thread Jim Freeman
trivial typo in log.c
From 6b51be4bc3b71eda400a7c0012a4642f393acaae Mon Sep 17 00:00:00 2001
From: Jim Freeman 
Date: Sat, 15 Apr 2017 08:01:59 -0600
Subject: [PATCH] CLEANUP - typo: simgle => single

---
 src/log.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/log.c b/src/log.c
index be1ebdc..003e42b 100644
--- a/src/log.c
+++ b/src/log.c
@@ -575,7 +575,7 @@ int parse_logformat_string(const char *fmt, struct proxy *curproxy, struct list
 cformat = LF_TEXT;
 pformat = LF_TEXT; /* finally we include the previous char as well */
 sp = str - 1; /* send both the '%' and the current char */
-memprintf(err, "unexpected variable name near '%c' at position %d line : '%s'. Maybe you want to write a simgle '%%', use the syntax ''",
+memprintf(err, "unexpected variable name near '%c' at position %d line : '%s'. Maybe you want to write a single '%%', use the syntax ''",
   *str, (int)(str - backfmt), fmt);
 return 0;
 
-- 
2.1.4



resolvers default nameservers ?

2017-04-14 Thread Jim Freeman
The "resolvers" section doc discusses default values for all its
parameters except "nameservers".

If I have a one-line "resolvers" eg;

  "resolvers default"

with no parameters listed, are the system (or context eg; chroot)
/etc/resolv.conf nameservers used?  [ this would be a boon to cfg
automation ]

Thanks,
...jfree



typo nits @doc

2017-04-10 Thread Jim Freeman
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html
s/formated/formatted/g



simgle ?

2017-04-10 Thread Jim Freeman
https://github.com/haproxy/haproxy/search?q=simgle

single ?
simple ?



Re: 100% cpu , epoll_wait()

2016-04-21 Thread Jim Freeman
[ Apologies for consuming yet more vertical space ]

With this in .cfg :
log-format 
{"date":"%t","lbtype":"haproxy","lbname":"%H","cip":"%ci","pid":"%pid","name_f":"%f","name_b":"%b","name_s":"%s","time_cr":"%Tq","time_dq":"%Tw","time_sc":"%Tc","time_sr":"%Tr","time_t":"%Tt","scode":"%ST","bytes_c":"%U","bytes_s":"%B","termstat":"%ts","con_act":"%ac","con_frnt":"%fc","con_back":"%bc","con_srv":"%sc","rtry":"%rc","queue_s":"%sq","queue_b":"%bq","rqst":"%r","hdrs":"%hr"}

, these requests were logged with a large %Tt (one request for favicon.ico,
which gets answered?):
=
4/21/16
3:06:36.268 PM
{ [-]
bytes_c:  578
bytes_s:  2485558
cip:  10.107.152.81
con_act:  43
con_back:  0
con_frnt:  0
con_srv:  0
date:  21/Apr/2016:21:06:36.268
hdrs:  {Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.}
lbname:  haproxy01
lbtype:  haproxy
name_b:  haproxy_stats
name_f:  haproxy_stats
name_s:  
pid:  20030
queue_b:  0
queue_s:  0
rqst:  GET /favicon.ico HTTP/1.1
rtry:  0
scode:  200
termstat:  LR
time_cr:  5874
time_dq:  0
time_sc:  0
time_sr:  0
time_t:  992288
}
host = haproxy01.a source = /logs/haproxy.log sourcetype = haproxy

4/21/16
3:06:36.268 PM
{ [-]
bytes_c:  577
bytes_s:  3091670
cip:  10.107.152.81
con_act:  198
con_back:  0
con_frnt:  1
con_srv:  0
date:  21/Apr/2016:21:06:36.268
hdrs:  {Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.}
lbname:  haproxy01
lbtype:  haproxy
name_b:  haproxy_stats
name_f:  haproxy_stats
name_s:  
pid:  20030
queue_b:  0
queue_s:  0
rqst:  GET / HTTP/1.1
rtry:  0
scode:  200
termstat:  LR
time_cr:  107
time_dq:  0
time_sc:  0
time_sr:  0
time_t:  2493
}
host = haproxy01.a source = /logs/haproxy.log sourcetype = haproxy

4/21/16
3:05:06.722 PM
{ [-]
bytes_c:  577
bytes_s:  2448514
cip:  10.107.152.81
con_act:  1133
con_back:  0
con_frnt:  0
con_srv:  0
date:  21/Apr/2016:21:05:06.722
hdrs:  {Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.}
lbname:  haproxy01
lbtype:  haproxy
name_b:  haproxy_stats
name_f:  haproxy_stats
name_s:  
pid:  20030
queue_b:  0
queue_s:  0
rqst:  GET / HTTP/1.1
rtry:  0
scode:  200
termstat:  LR
time_cr:  126
time_dq:  0
time_sc:  0
time_sr:  0
time_t:  88490
}
host = haproxy01.a source = /logs/haproxy.log sourcetype = haproxy

On Thu, Apr 21, 2016 at 5:10 PM, Jim Freeman  wrote:
> Another alert+followup :
>
> Cpu pegged again - connected to host and ran :
> ==
> # netstat -pantu | egrep "(^Proto|:5)"
> Proto Recv-Q Send-Q Local Address   Foreign Address
> State   PID/Program name
> tcp0  0 0.0.0.0:5   0.0.0.0:*
> LISTEN  7944/haproxy
> tcp0  0 10.33.176.98:5  10.34.157.166:53155
> TIME_WAIT   -
> tcp0 191520 10.33.176.98:5  10.107.152.81:59029
> ESTABLISHED 20030/haproxy
> tcp0  0 10.33.176.98:5  10.34.155.182:43154
> TIME_WAIT   -
> tcp0  0 10.33.176.98:5  10.34.157.165:37806
> TIME_WAIT   -
>
> # the request with un-ACK'd Send-Q data looks suspicious - kill it
> # ./killcx 10.107.152.81:59029
> killcx v1.0.3 - (c)2009-2011 Jerome Bruandet - http://killcx.sourceforge.net/
> [PARENT] checking connection with [10.107.152.81:59029]
> [PARENT] found connection with [10.33.176.98:5] (ESTABLISHED)
> [PARENT] forking child
> [CHILD]  interface not defined, will use [eth0]
> [CHILD]  setting up filter to sniff ACK on [eth0] for 5 seconds
> [PARENT] sending spoofed SYN to [10.33.176.98:5] with bogus SeqNum
> [CHILD]  hooked ACK from [10.33.176.98:5]
> [CHILD]  found AckNum [2424084881] and SeqNum [2973703732]
> [CHILD]  sending spoofed RST to [10.33.176.98:5] with SeqNum [2424084881]
> [CHILD]  sending RST to remote host as well with SeqNum [2973703732]
> [CHILD]  all done, sending USR1 signal to parent [8077] and exiting
> [PARENT] received child signal, checking results...
>  => success : connection has been closed !
> ==
>
> Right after that, cpu/latency show normal.
>
> I'm unsure if this is a leading or 

Re: 100% cpu , epoll_wait()

2016-04-21 Thread Jim Freeman
Another alert+followup :

Cpu pegged again - connected to host and ran :
==
# netstat -pantu | egrep "(^Proto|:5)"
Proto Recv-Q Send-Q Local Address   Foreign Address
State   PID/Program name
tcp0  0 0.0.0.0:5   0.0.0.0:*
LISTEN  7944/haproxy
tcp0  0 10.33.176.98:5  10.34.157.166:53155
TIME_WAIT   -
tcp0 191520 10.33.176.98:5  10.107.152.81:59029
ESTABLISHED 20030/haproxy
tcp0  0 10.33.176.98:5  10.34.155.182:43154
TIME_WAIT   -
tcp0  0 10.33.176.98:5  10.34.157.165:37806
TIME_WAIT   -

# the request with un-ACK'd Send-Q data looks suspicious - kill it
# ./killcx 10.107.152.81:59029
killcx v1.0.3 - (c)2009-2011 Jerome Bruandet - http://killcx.sourceforge.net/
[PARENT] checking connection with [10.107.152.81:59029]
[PARENT] found connection with [10.33.176.98:5] (ESTABLISHED)
[PARENT] forking child
[CHILD]  interface not defined, will use [eth0]
[CHILD]  setting up filter to sniff ACK on [eth0] for 5 seconds
[PARENT] sending spoofed SYN to [10.33.176.98:5] with bogus SeqNum
[CHILD]  hooked ACK from [10.33.176.98:5]
[CHILD]  found AckNum [2424084881] and SeqNum [2973703732]
[CHILD]  sending spoofed RST to [10.33.176.98:5] with SeqNum [2424084881]
[CHILD]  sending RST to remote host as well with SeqNum [2973703732]
[CHILD]  all done, sending USR1 signal to parent [8077] and exiting
[PARENT] received child signal, checking results...
 => success : connection has been closed !
==

Right after that, cpu/latency show normal.

I'm unsure if this is a leading or lagging anomaly - it seems to
follow another strangeness, where ~5 minutes prior, the cpu usage
across several haproxy hosts drops by 40 %-points [graph attached]
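
As an aside, the Send-Q triage shown in the netstat output above can be expressed as a small filter. The threshold, and the assumption that Send-Q is column 3 and the state column 6 of Linux `netstat -pantu` output, are mine:

```shell
# Sketch: print local addr, peer addr and Send-Q for ESTABLISHED sockets
# whose Send-Q (column 3 of `netstat -pantu` output) exceeds a threshold.
flag_big_sendq() {
  awk -v limit="${1:-100000}" \
    '$6 == "ESTABLISHED" && $3+0 > limit { print $4, $5, $3 }'
}
# Typical (illustrative) use:
#   netstat -pantu | flag_big_sendq 100000
```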

On Thu, Apr 21, 2016 at 11:44 AM, Jim Freeman  wrote:
> Followup: alert triggered this AM - I'll provide what bits I was able
> to glean.  [ HA-Proxy version 1.5.17 ]
>
> A proxy's CPU1 pegged @10:21.  To isolate the connections to a
> non-listening nanny proc, did a '-sf' reload at 10:24.
>
> After the reload, latencies on the proxy of interest rose by an order
> of magnitude (historically, when this condition lingers, request
> timings across all proxies/system often suffer substantially).
>
> At about  10:35 the pegged CPU resolved spontaneously (connections on
> the nanny process were finishing - a connection triggering the
> epoll_wait() busyloop terminated?), and timings returned to normal.
>
> Splunk graphs attached (if they're allowed through).
> cpuBusy.png (y-axis => %cpuBusy)
> latency.png (y-axis => Td = Tt - (Tq + Tw + Tc + Tr)
>
> If it's of any use, here's the splunk search that triggers the alert :
> index=os sourcetype=cpu host=haproxy0* | multikv | search CPU=1 | eval
> cpuBusy=100-pctIdle | anomalousvalue pthresh=0.02 maxanofreq=0.2
> minsupcount=50 action=annotate cpuBusy | search cpuBusy=100
> Anomaly_Score_Num\(cpuBusy\)>0 | stats count dc(host) as hosts | where
> count > hosts
>
> On Fri, Apr 15, 2016 at 3:20 PM, Jim Freeman  wrote:
>> I have haproxy slaved to 2d cpu (CPU1), with frequent config changes
>> and a '-sf' soft-stop with the now-old non-listening process nannying
>> old connections.
>>
>> Sometimes CPU1 goes to 100%, and then a few minutes later request
>> latencies suffer across multiple haproxy peers.
>>
>> An strace of the nanny haproxy process shows a tight loop of :
>>
>> epoll_wait(0, {}, 200, 0)   = 0
>> epoll_wait(0, {}, 200, 0)   = 0
>> epoll_wait(0, {}, 200, 0)   = 0
>>
>> I've searched the archives and found similar but old-ish complaints
>> about similar circumstances, but with fixes/patches mentioned.
>>
>> This has happened with both 1.5.3 and 1.5.17.
>>
>> Insights ?
>>
>> ===
>>
>> # cat  /proc/version
>> Linux version 3.16.0-0.bpo.4-amd64 (debian-ker...@lists.debian.org)
>> (gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian
>> 3.16.7-ckt25-1~bpo70+1 (2016-04-02)
>>
>> # haproxy -vv
>> HA-Proxy version 1.5.17 2016/04/13
>> Copyright 2000-2016 Willy Tarreau 
>>
>> Build options :
>>   TARGET  = linux2628
>>   CPU = generic
>>   CC  = gcc
>>   CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4
>> -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2
>>   OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1
>>
>> Default settings :
>>   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
>>
>> Encrypted password support via crypt(3): yes
>> Built with zlib version : 1.2.7
>> Compres

Re: 100% cpu , epoll_wait()

2016-04-15 Thread Jim Freeman
Did a bit more digging on the most recent instance, and found that the
haproxy pid doing the hogging was handling a connection to the stats
port :

listen haproxy_stats :5
stats enable
stats uri /
no log

, with this 'netstat -pantlu' entry :
tcp0  99756 10.34.176.98:5  10.255.247.189:54484
ESTABLISHED 9499/haproxy

I'm suspecting that a connection to the stats port goes wonky with a
'-sf' reload, but I'll have to wait for it to re-appear to poke
further.  I'll look first for a stats port connection handled by the
pegged process, then use 'tcpkill' to kill just that connection
(rather than the whole process, which may be handling other
connections).

Its been happening 2 to 3 times a week, and I now have alerting around
the event - I'll post more info as I get it ...


On Fri, Apr 15, 2016 at 4:28 PM, Cyril Bonté  wrote:
> Hi Jim,
>
> Le 15/04/2016 23:20, Jim Freeman a écrit :
>>
>> I have haproxy slaved to 2d cpu (CPU1), with frequent config changes
>> and a '-sf' soft-stop with the now-old non-listening process nannying
>> old connections.
>>
>> Sometimes CPU1 goes to 100%, and then a few minutes later request
>> latencies suffer across multiple haproxy peers.
>>
>> An strace of the nanny haproxy process shows a tight loop of :
>>
>> epoll_wait(0, {}, 200, 0)   = 0
>> epoll_wait(0, {}, 200, 0)   = 0
>> epoll_wait(0, {}, 200, 0)   = 0
>>
>> I've searched the archives and found similar but old-ish complaints
>> about similar circumstances, but with fixes/patches mentioned.
>>
>> This has happened with both 1.5.3 and 1.5.17.
>>
>> Insights ?
>
>
> Can you provide your configuration (without sensible data) ?
> Are you using peers ?
>
> Also, do you have a reproductible testcase that we can play with, or is it
> absolutely random ?
>
>
>
>>
>> ===
>>
>> # cat  /proc/version
>> Linux version 3.16.0-0.bpo.4-amd64 (debian-ker...@lists.debian.org)
>> (gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian
>> 3.16.7-ckt25-1~bpo70+1 (2016-04-02)
>>
>> # haproxy -vv
>> HA-Proxy version 1.5.17 2016/04/13
>> Copyright 2000-2016 Willy Tarreau 
>>
>> Build options :
>>TARGET  = linux2628
>>CPU = generic
>>CC  = gcc
>>CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4
>> -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2
>>OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1
>>
>> Default settings :
>>maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
>>
>> Encrypted password support via crypt(3): yes
>> Built with zlib version : 1.2.7
>> Compression algorithms supported : identity, deflate, gzip
>> Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
>> Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
>> OpenSSL library supports TLS extensions : yes
>> OpenSSL library supports SNI : yes
>> OpenSSL library supports prefer-server-ciphers : yes
>> Built with PCRE version : 8.30 2012-02-04
>> PCRE library supports JIT : no (USE_PCRE_JIT not set)
>> Built with transparent proxy support using: IP_TRANSPARENT
>> IPV6_TRANSPARENT IP_FREEBIND
>>
>> Available polling systems :
>>epoll : pref=300,  test result OK
>> poll : pref=200,  test result OK
>>   select : pref=150,  test result OK
>> Total: 3 (3 usable), will use epoll.
>>
>
>
> --
> Cyril Bonté



100% cpu , epoll_wait()

2016-04-15 Thread Jim Freeman
I have haproxy slaved to 2d cpu (CPU1), with frequent config changes
and a '-sf' soft-stop with the now-old non-listening process nannying
old connections.

Sometimes CPU1 goes to 100%, and then a few minutes later request
latencies suffer across multiple haproxy peers.

An strace of the nanny haproxy process shows a tight loop of :

epoll_wait(0, {}, 200, 0)   = 0
epoll_wait(0, {}, 200, 0)   = 0
epoll_wait(0, {}, 200, 0)   = 0

I've searched the archives and found similar but old-ish complaints
about similar circumstances, but with fixes/patches mentioned.

This has happened with both 1.5.3 and 1.5.17.

Insights ?

===

# cat  /proc/version
Linux version 3.16.0-0.bpo.4-amd64 (debian-ker...@lists.debian.org)
(gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian
3.16.7-ckt25-1~bpo70+1 (2016-04-02)

# haproxy -vv
HA-Proxy version 1.5.17 2016/04/13
Copyright 2000-2016 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4
-Wformat -Werror=format-security -D_FORTIFY_SOURCE=2
  OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.30 2012-02-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.



Re: METH_CONNECT, HTTPS forward proxy

2016-03-22 Thread Jim Freeman
http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#4.2-option%20http_proxy

is probably the answer to my question, but does the system's
libresolv() get used to dynamically map name to IP?  (no resolvers
list needed?)

http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.3.2-resolvers

On Tue, Mar 22, 2016 at 11:21 PM, Jim Freeman  wrote:
> a la squid, but w/o caching, and a syntax I'm more comfortable with,
> and a resolvers mechanism that can handle AWS ELB elasticity.
>
> http://wiki.squid-cache.org/Features/HTTPS#CONNECT_tunnel
> http://gc-taylor.com/blog/2011/11/10/nginx-aws-elb-name-resolution-resolvers
>
> On Tue, Mar 22, 2016 at 11:03 PM, Jim Freeman  wrote:
>> I see METH_CONNECT as a pre-defined acl, but much googling leaves me
>> without a clue as to how to use it.
>>
>> I hope to have haproxy act as a forward proxy target for browsers
>> using a proxy.pac file.  I believe proxied traffic (both HTTP and
>> HTTPS) usually goes to the same proxy port, with HTTPS wrapped as a
>> CONNECT HTTP method, but from there, I can't see clearly how to tunnel
>> it on to the remote HTTPS endpoint.
>>
>> I'd have thought that if someone was successfully doing this I'd see
>> an illuminating idiom floating around, but no such luck (or insight).
>>
>> Thanks,
>> ...jfree



Re: METH_CONNECT, HTTPS forward proxy

2016-03-22 Thread Jim Freeman
a la squid, but w/o caching, and a syntax I'm more comfortable with,
and a resolvers mechanism that can handle AWS ELB elasticity.

http://wiki.squid-cache.org/Features/HTTPS#CONNECT_tunnel
http://gc-taylor.com/blog/2011/11/10/nginx-aws-elb-name-resolution-resolvers

On Tue, Mar 22, 2016 at 11:03 PM, Jim Freeman  wrote:
> I see METH_CONNECT as a pre-defined acl, but much googling leaves me
> without a clue as to how to use it.
>
> I hope to have haproxy act as a forward proxy target for browsers
> using a proxy.pac file.  I believe proxied traffic (both HTTP and
> HTTPS) usually goes to the same proxy port, with HTTPS wrapped as a
> CONNECT HTTP method, but from there, I can't see clearly how to tunnel
> it on to the remote HTTPS endpoint.
>
> I'd have thought that if someone was successfully doing this I'd see
> an illuminating idiom floating around, but no such luck (or insight).
>
> Thanks,
> ...jfree



METH_CONNECT, HTTPS forward proxy

2016-03-22 Thread Jim Freeman
I see METH_CONNECT as a pre-defined acl, but much googling leaves me
without a clue as to how to use it.

I hope to have haproxy act as a forward proxy target for browsers
using a proxy.pac file.  I believe proxied traffic (both HTTP and
HTTPS) usually goes to the same proxy port, with HTTPS wrapped as a
CONNECT HTTP method, but from there, I can't see clearly how to tunnel
it on to the remote HTTPS endpoint.

I'd have thought that if someone was successfully doing this I'd see
an illuminating idiom floating around, but no such luck (or insight).

Thanks,
...jfree
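
For the record, a hedged sketch of what the 1.5/1.6-era building blocks could and could not do here. As far as I can tell, 'option http_proxy' forwards to the address in the request URI only when it is already an IP:port (no DNS lookup is performed), and it does not tunnel CONNECT, so HTTPS forward-proxying still needed something like squid. Port and names below are illustrative:

```
# Sketch only; 'option http_proxy' does no DNS resolution and (to my
# knowledge) does not handle CONNECT tunnels.
listen forward_proxy
    bind :3128
    mode http
    option http_proxy
    # CONNECT requests (HTTPS) can at least be identified and handled:
    acl is_connect method CONNECT
    http-request deny if is_connect
```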



Re: case @req.hdr puzzlement

2016-03-19 Thread Jim Freeman
Indeed - I hardcode the frontend_name in the .cfg (instead of using
%f), and it works.

Thanks much!

On Fri, Mar 18, 2016 at 3:30 PM, Cyril Bonté  wrote:
> Hi Jim,
>
> Le 18/03/2016 21:52, Jim Freeman a écrit :
>>
>> I'm trying to add a header only if the last occurrence of it is not
>> the frontend_name (%f), but the header field name comparison seems to
>> be case sensitive when it should not be ?
>
>
> The analysis is not correct.
>
>> haproxy.cfg
>> 
>> listen foo.bar
>>bind  :10001
>>mode  http
>>log   127.0.0.1:514 local2 debug info
>>
>>acl XOH_OK req.hdr(X-Orig-Host,-1) -m str -i %f
>
>
> The issue is here : this is not supposed to work (well, not as you thought).
> ACLs don't support log-format variables for string comparison.
> Here, you are asking to compare the "X-Orig-Host" header with the string
> "%f".
>
>>http-request add-header X-Orig-Host %f unless XOH_OK
>># http-request add-header X-Orig-Host %f if !{
>> req.hdr(x-orig-host,-1) -m str -i %f }
>>
>>capture request header X-Orig-HoST len 64
>>
>>server local localhost:80
>>
>> curl test
>> ===
>> curl -I -H 'X-Orig-Host: baz' -H 'x-oRiG-hOsT: foo.bar' -H 'Host:
>> foo.bar' localhost:10001/
>>
>> headers as seen by lighttpd
>> =
>> 2016-03-18 14:45:26: (request.c.311) fd: 7 request-len: 135
>> HEAD / HTTP/1.1
>> User-Agent: curl/7.38.0
>> Accept: */*
>> X-Orig-Host: baz
>> x-oRiG-hOsT: foo.bar
>> Host: foo.bar
>> X-Orig-Host: foo.bar
>>
>> haproxy with this config should *not* have added the last header ???
>
>
> To illustrate what I previously wrote, you can try :
> curl -I -H 'X-Orig-Host: baz' -H 'x-oRiG-hOsT: %f' -H 'Host: foo.bar'
> localhost:10001/
>
> You'll probably see that haproxy will not add a third header.
>
>
>
>>
>> System:
>> root@jfree:~# haproxy -v
>> HA-Proxy version 1.6.3 2015/12/25, Debian Jessie, haproxy from backports
>> https://packages.debian.org/jessie-backports/haproxy
>>
>> BTW - haproxy well and truly rocks ...
>>
>
>
> --
> Cyril Bonté
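
Putting Cyril's point together with the hardcoded-name fix, a working sketch looks like this; the frontend name is written out literally, since ACL patterns are plain strings and log-format variables like %f are not expanded there:

```
listen foo.bar
    bind  :10001
    mode  http

    acl XOH_OK req.hdr(X-Orig-Host,-1) -m str -i foo.bar
    http-request add-header X-Orig-Host foo.bar unless XOH_OK

    capture request header X-Orig-Host len 64

    server local localhost:80
```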



case @req.hdr puzzlement

2016-03-18 Thread Jim Freeman
I'm trying to add a header only if the last occurrence of it is not
the frontend_name (%f), but the header field name comparison seems to
be case sensitive when it should not be ?

haproxy.cfg

listen foo.bar
  bind  :10001
  mode  http
  log   127.0.0.1:514 local2 debug info

  acl XOH_OK req.hdr(X-Orig-Host,-1) -m str -i %f
  http-request add-header X-Orig-Host %f unless XOH_OK
  # http-request add-header X-Orig-Host %f if !{
req.hdr(x-orig-host,-1) -m str -i %f }

  capture request header X-Orig-HoST len 64

  server local localhost:80

curl test
===
curl -I -H 'X-Orig-Host: baz' -H 'x-oRiG-hOsT: foo.bar' -H 'Host:
foo.bar' localhost:10001/

headers as seen by lighttpd
=
2016-03-18 14:45:26: (request.c.311) fd: 7 request-len: 135
HEAD / HTTP/1.1
User-Agent: curl/7.38.0
Accept: */*
X-Orig-Host: baz
x-oRiG-hOsT: foo.bar
Host: foo.bar
X-Orig-Host: foo.bar

haproxy with this config should *not* have added the last header ???

System:
root@jfree:~# haproxy -v
HA-Proxy version 1.6.3 2015/12/25
Debian Jessie, haproxy from backports
https://packages.debian.org/jessie-backports/haproxy

BTW - haproxy well and truly rocks ...



acl's re-calculated after reqrep ?

2016-02-23 Thread Jim Freeman
[ using 1.6.3 on Debian8 ]

Are acl's re-calculated after a 'reqrep' of the request line?

I'm seeing evidence that they are, but am finding no mention in the
docs/google, and am somewhat taken aback.

...jfree



DOC: set-log-level in Logging section preamble

2015-05-26 Thread Jim Freeman
As best I can tell, no mention is made of "set-log-level" in the Logging
[Section 8] of the doc.

Something akin to the following in the doc would have saved a good chunk of
time/angst in addressing a logging issue I encountered :


diff --git a/doc/configuration.txt b/doc/configuration.txt
index 9a04200..95ab0e8 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -12546,6 +12546,8 @@ send logs to different sources at the same time
with dif
   - per-instance external troubles (servers up/down, max connections)
   - per-instance activity (client connections), either at the
establishment or
 at the termination.
+  - per-request control of log-level, eg;
+ http-request set-log-level silent if sensitive_request

 The ability to distribute different levels of logs to different log servers
 allow several production teams to interact and to fix their problems as
soon


Endless kudos/thanks to the haproxy team for your truly impressive and
useful software.
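
A minimal sketch of the directive the patch documents; the frontend name, path, and backend are illustrative:

```
frontend www
    bind :80
    acl sensitive_request path_beg /internal/
    http-request set-log-level silent if sensitive_request
    default_backend app
```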


rspitarpit ?

2015-01-07 Thread Jim Freeman
We're getting some congestion from blind-shooting (or maybe just
stupid-shooting) scrapers who make (mostly bad) requests, with
occasional successes.

We'd like to tarpit unsuccessful responses.

Any experience on how to accomplish that ?

( A rspitarpit directive would be awesome )


Kudos on an awesome tool,
...jfree
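
There is no rspitarpit, but a hedged sketch of one way to approximate the idea on 1.5+: track per-client HTTP error (4xx) rates in a stick-table, and tarpit subsequent requests from offenders. Table size, expiry, and thresholds below are illustrative:

```
# Sketch only; sizes and thresholds are made-up examples.
backend web
    stick-table type ip size 100k expire 10m store http_err_rate(60s)
    http-request track-sc0 src
    timeout tarpit 20s
    http-request tarpit if { sc0_http_err_rate gt 20 }
```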



puzzled : timeout tarpit

2014-11-04 Thread Jim Freeman
We have :
defaults
  ...
  timeout connect 5000
  timeout client 30
  timeout server 30
...
backend foo
  ...
timeout tarpit 29s
acl SRC_abuser hdr_ip(X-Forwarded-For,-1)  1.2.3.4
acl busy be_sess_rate gt 10
reqitarpit . if SRC_abuser busy

Our logs are telling us that the tarpitted connections are sending an
http status of 500, but after 30 ms ?

It should be 29s (if the 'timeout tarpit 29s' governed, or if not,
then the 5000 ms from 'timeout connect' per the docs), but it seems to
be taking from the client or server timeout setting ?

==

root@va-dlb01.a|ssldmz:~# haproxy -vv
HA-Proxy version 1.5.3 2014/07/25
Copyright 2000-2014 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = native
  CC  = gcc
  CFLAGS  = -O2 -march=native -g -fno-strict-aliasing
  OPTIONS = USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built without zlib support (USE_ZLIB not set)
Compression algorithms supported : identity
Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.30 2012-02-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.



Re: acl set as side effect of reqrep ?

2014-08-27 Thread Jim Freeman
On Wed, Aug 27, 2014 at 4:39 AM, Baptiste  wrote:
> On Tue, Aug 26, 2014 at 5:31 PM, Jim Freeman  wrote:
>> Is there an easy/efficient way to set an acl as a direct side-effect
>> of a reqrep (not) matching/replacing ?
>>
>> Thanks,
>> ...jfree
>
> Hi Jim,
>
> Please clearly explain us what you want to do; step by step.
> And we'll be able to help you.
>
> Baptiste

I'd hope for something like (near-trivial case):

acl req_dynamic ! reqrep ^([^\ :]*)\ /static/(.*) \1\ /\2
...
use_backend dynamic_content   req_dynamic

where the acl criterion is met if the request regexp is found/replaced.

IOW - does(/could) reqrep constitute a fetch (where the fetched data
has now been replaced)?  The fact that the data was there (though it
is now replaced) is latched in the acl.
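
As a hedged aside, later versions (1.7+, where set-var and the regsub converter exist) let you latch the pre-rewrite state explicitly instead of relying on a reqrep side effect: test the path first, remember the result in a transaction variable, then rewrite. Names below are illustrative:

```
# Sketch only; assumes 1.7+ for regsub, 1.6+ for set-var/set-path.
frontend www
    bind :80
    acl req_static path_beg /static/
    http-request set-var(txn.was_static) int(1) if req_static
    http-request set-path %[path,regsub(^/static/,/)] if req_static
    use_backend static_content if { var(txn.was_static) -m int 1 }
    default_backend dynamic_content
```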



acl set as side effect of reqrep ?

2014-08-26 Thread Jim Freeman
Is there an easy/efficient way to set an acl as a direct side-effect
of a reqrep (not) matching/replacing ?

Thanks,
...jfree



'observe layer4' - passive healthcheck ?

2014-08-13 Thread Jim Freeman
In a 2-tier haproxy setup (tier1 instances/VMs do domain steering (and some
SSL termination) across many 10s of backends to tier2 path-steering
instances).

I'd like to scale either/both tiers horizontally without compounding

  <#tier1 instances> * <#backends> * <#tier2 instances>

healthcheck overhead.

The 'observe' keyword intrigues as a possible passive/implicit healthcheck
mode?
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#observe

My read of the docs (and googling for more insight or sample configs, with
a code perusal imminent) leaves me not-yet-enlightened as to "observe
layer4"'s fitness for my purpose.  Am I barking up a non-existent tree?  If
it would work, how would haproxy know to bring a backend back into rotation
?

...
backend BACKEND_025
balance roundrobin
server dlb01 dlb01:10025 check port 6 inter 5s rise 2 fall 1
[ ... ]
server dlb07 dlb07:10025 check port 6 inter 5s rise 2 fall 1
...

would become
...
backend BACKEND_025
balance roundrobin
server dlb01 dlb01:10025 check observe layer4 inter 5s rise 2 fall 1
[ ... ]
server dlb22 dlb22:10025 check observe layer4 inter 5s rise 2 fall 1

?

Thanks,
...jfree
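
From my reading of the docs (hedged, not tested): 'observe layer4' only demotes a server, after 'error-limit' consecutive failures, in the way 'on-error' specifies; an active check is still what brings it back up. So the win is making the active checks much less frequent rather than removing them entirely. A sketch with illustrative numbers:

```
# Sketch only; interval/limits are made-up examples.
backend BACKEND_025
    balance roundrobin
    server dlb01 dlb01:10025 check inter 30s observe layer4 error-limit 10 on-error mark-down
```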


Re: SIGQUIT, silence

2014-01-25 Thread Jim Freeman
Ah - tragic :-) - it's been handy for us to search/share our system stuff
using log analytics ...

Thanks again, and again, and again ...


On Sat, Jan 25, 2014 at 10:18 AM, Willy Tarreau  wrote:

> On Sat, Jan 25, 2014 at 10:15:30AM -0700, Jim Freeman wrote:
> > Since the man page description says the output for both goes to the
> logs, I
> > thought that would be the place to look:
> >
> > - SIGHUP
> > Dumps the status of all proxies and servers into the logs. Mostly used
> for
> > trouble-shooting purposes.
> >
> > - SIGQUIT
> > Dumps information about memory pools into the logs. Mostly used for
> > debugging purposes.
>
> Ah sorry, I didn't notice. It is possible that this has been true in a
> distant past, or that the doc was written by copy-paste. I'll fix the
> doc to reflect reality.
>
> Thanks,
> Willy
>
>


Re: SIGQUIT, silence

2014-01-25 Thread Jim Freeman
Since the man page description says the output for both goes to the logs, I
thought that would be the place to look:

- SIGHUP
Dumps the status of all proxies and servers into the logs. Mostly used for
trouble-shooting purposes.

- SIGQUIT
Dumps information about memory pools into the logs. Mostly used for
debugging purposes.

[ BTW - many, many thanks for this insanely great and useful software ]

...jfree

On Sat, Jan 25, 2014 at 3:49 AM, Willy Tarreau  wrote:

> On Thu, Jan 23, 2014 at 04:19:35PM -0700, Jim Freeman wrote:
> > Using haproxy-1.5-dev19 on Debian/Wheezy, and haproxy-1.5-dev21 on
> > CentOS6.2, killing haproxy with SIGQUIT gets me nothing in the system
> logs.
> >
> > SIGHUP gets proxy/server status info into the logs just fine.
> > I'm using them the same way, but SIGQUIT seems to just do ... nothing?
>
> SIGQUIT only dumps to stderr and when not in daemon nor quiet mode.
> Maybe you're looking for the output at the wrong place :-)
>
> Willy
>


SIGQUIT, silence

2014-01-23 Thread Jim Freeman
Using haproxy-1.5-dev19 on Debian/Wheezy, and haproxy-1.5-dev21 on
CentOS6.2, killing haproxy with SIGQUIT gets me nothing in the system logs.

SIGHUP gets proxy/server status info into the logs just fine.
I'm using them the same way, but SIGQUIT seems to just do ... nothing?

Nut loose on the keyboard?

Thanks,
...jfree