options to set-log-level silent for failed or error connections

2022-11-02 Thread mihe...@gmx.de



Hi everyone,

I wanted to ask for help regarding error logs and log silencing.

I played around with silencing some monitoring hosts with the
"set-log-level silent" option.
During testing I noticed that silencing the logs worked, but only on
"successful" connections. As soon as the connection is regarded as some
sort of failure, the silencing does not work and the log gets written.

For example, when you are receiving TCP checks on a TLS listener, all
of them are regarded as failures and hence don't get silenced.

I can't find a solution in the descriptions provided in the
cbonte.github.io configuration.html docs.

Is there a reason error/failed connections are regarded differently when
it comes to "set-log-level" ?
Does it have to do with the connection not being in the "tcp-request
content" stage yet, but in "tcp-request connection" when the failure occurs?
Do I have other options of working around this?

Please find my configuration and information to reproduce the steps at
the end.

Thanks and best regards,
Micha



OS:

# grep PRETTY /etc/os-release
PRETTY_NAME="Ubuntu 20.04.5 LTS"

I tested with these two versions of HAProxy:

# haproxy -version
HAProxy version 2.5.9-1ppa1~focal 2022/09/24 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2023.
Known bugs: http://www.haproxy.org/bugs/bugs-2.5.9.html
Running on: Linux 5.4.0-131-generic #147-Ubuntu SMP Fri Oct 14 17:07:22
UTC 2022 x86_64

# haproxy -version
HAProxy version 2.6.6-1ppa1~focal 2022/09/22 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2
2027.
Known bugs: http://www.haproxy.org/bugs/bugs-2.6.6.html
Running on: Linux 5.4.0-131-generic #147-Ubuntu SMP Fri Oct 14 17:07:22
UTC 2022 x86_64


Here is the configuration I used (it should work out of the box once the
snakeoil certs are symlinked).
I used the openssl/nc commands at the end for testing (via localhost).


# cat haproxy.cfg.set-log-level
global
   log stdout  format raw  local0  info
   pidfile /var/run/haproxytest.pid
   crt-base /etc/ssl/private
defaults
    timeout connect 3s
    timeout client 3s
    timeout server 3s
frontend fend
    # ln -s /etc/ssl/certs/ssl-cert-snakeoil.pem /etc/ssl/private/ssl-cert-snakeoil.pem
    # ln -s /etc/ssl/private/ssl-cert-snakeoil.key /etc/ssl/private/ssl-cert-snakeoil.pem.key
    bind :1234 ssl crt ssl-cert-snakeoil.pem
    mode tcp
    log global
    no option dontlognull
    tcp-request content set-log-level silent if { src 127.0.0.1/32 }
    log-format "[%t] %ci:%cp > %fi:%fp %ft %b %s %Tw/%Tc/%Tt rtx:%U/%B %ts"
    #option log-separate-errors
    error-log-format 'ERROR: [%t] %ci:%cp > %fi:%fp %ft %b %s %Tw/%Tc/%Tt rtx:%U/%B %ts'
    default_backend bend
backend bend
    server local-nc-k-l8000 localhost:8000
# echo | openssl s_client -connect localhost:1234 -quiet
# echo | nc -vw1 localhost 1234
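
As a possible mitigation (not a real fix) while waiting for a better
answer: the "option log-separate-errors" line I commented out above raises
the level of failed sessions from "info" to "err", so those lines could be
routed or dropped separately. This is just a sketch, assuming logs go to a
syslog daemon instead of stdout, with rsyslog selector syntax and example
file paths:

```
# haproxy.cfg (frontend fend): log failed sessions at level 'err'
option log-separate-errors

# /etc/rsyslog.d/49-haproxy.conf: split by level (paths are examples)
local0.=info  /var/log/haproxy.log
local0.err    /var/log/haproxy-errors.log
```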



Re: payload inspection using req.payload

2020-02-12 Thread mihe...@gmx.de

Hey Mathias,

wow, brilliant! Made my day, really! - I was about to get frustrated
during troubleshooting :)
That was exactly what I needed. Thanks a bunch!
I had failed to find something like that because I didn't know exactly
what to search for.

> As a side note: In case you want to match the payload in a binary
> (non-HTTP) protocol, make sure you convert the payload to hex first,
> see section 7.1.3 in the newest configuration docs, here's the excerpt:

Yes, that's right. Luckily I already had some experience handling that
type of stuff from previous scripting jobs.

I wrote a bin2hex function for the Lua script I am testing. I am not
sure whether, in terms of performance, it makes more sense to leave that
to HAProxy's "payload(),hex" and just evaluate the converted result in
my script. I will have a look into that.

So far I got the impression that troubleshooting and testing patterns is
more "obvious" and debuggable when implemented in my own Lua script.
I felt a bit "blind" tracking the decision making when testing an
equivalent with HAProxy ACLs (maybe that's just my first impression).
I used "set-var" + "if <acl>" and printed that via log-format; I am not
sure if there is a better way to test ACLs?

Thanks again, BR
Micha



On 12.02.2020 12:09, Mathias Weiersmüller (cyberheads GmbH) wrote:

Hi Micha,


My problem is that the "req.payload(0,10)" fetch, which I am using for
that purpose, does not seem to reliably have access to the payload at
all times.

The problem is not the fetch per se, it is the timing of the evaluation
of the rule: tcp-request content rules are evaluated very early - there's
a high probability the payload buffer is empty at this moment.

If you add a condition to check whether there is already any content
present, it will always match (checked using your config, thanks!):

Example:

tcp-request content set-var(txn.rawPayload) req.payload(0,2),hex if { req_len gt 0 }
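
In a minimal tcp frontend that would look roughly like this (the 5 second
delay is just an example value; it is an upper bound, and evaluation
continues as soon as the condition can be decided):

```
frontend fend
    mode tcp
    bind :2345
    # allow up to 5s for the first payload bytes to arrive before the
    # content rules give up waiting
    tcp-request inspect-delay 5s
    tcp-request content set-var(txn.rawPayload) req.payload(0,2),hex if { req_len gt 0 }
    default_backend bend
```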

As a side note: In case you want to match the payload in a binary
(non-HTTP) protocol, make sure you convert the payload to hex first,
see section 7.1.3 in the newest configuration docs, here's the excerpt:

Do not use string matches for binary fetches which might contain null bytes
(0x00), as the comparison stops at the occurrence of the first null byte.
Instead, convert the binary fetch to a hex string with the hex converter first.

Example:

# matches if the string "<tag>" is present in the binary sample
acl tag_found req.payload(0,0),hex -m sub 3C7461673E


Best regards

Mathias




payload inspection using req.payload

2020-02-12 Thread mihe...@gmx.de

Hi everyone,

I am writing to get some help on a setup I am building with HAProxy.

Part of the setup is content inspection of the TCP payload (a binary
stream), on which the load balancing will be based.
I am testing content inspection based on simple ACL pattern matches, but
I have also tried evaluating the payload in Lua scripts; the latter is
my personal preference.
In the end, incoming requests should be accepted/rejected based on the
payload evaluation result.
My target is to process multiple hundreds of simultaneous requests at
peak times, *ALL* of which should undergo payload inspection for the
initial request. The scenario will also terminate TLS later on, but this
should make no difference for the inspection (at least to my understanding).

My problem is that the "req.payload(0,10)" fetch, which I am using for
that purpose, does not seem to reliably have access to the payload at
all times.
So far I have not been able to find out what the cause of that could be.
There were several mitigation hints for that problem, but somehow I am
failing to get it to work.

For troubleshooting I got down to a very simplistic setup, which just
accesses the payload and prints it to the logfile.

I am using Apache Benchmark ("ab") to generate ingress traffic in larger
batches. An Apache server acts as a test backend.
Please note this is just for testing purposes; the final protocol is
*NOT* HTTP.
I think this is negligible at the moment(?), as the part I am focusing
on is the actual inbound/eval stuff, before the backend is contacted.

So out of 100 requests sent with "ab", about 10-50% of the requests
fail to display payload content.
I also noticed that locally generated ab requests have a much higher
chance of failing to print the payload.

I have the strong feeling that the payload is being accessed before it
is fully available to haproxy - even if it is just a few bytes (I am
testing with 2-8).

I am kind of lost at the moment and would really be grateful for any
suggestions and help on this one.
Is there a reasonable way to reliably "wait" for incoming requests
without delaying them too much in the end?

Best Regards
Micha


Below you can find the setup I came up with:



# VERSIONS

$ grep VERSION= /etc/os-release
VERSION="18.04.4 LTS (Bionic Beaver)"

$ grep 2.1 /etc/apt/sources.list.d/vbernat-ubuntu-haproxy-2_0-bionic.list
deb http://ppa.launchpad.net/vbernat/haproxy-2.1/ubuntu bionic main


$ haproxy -vv
HA-Proxy version 2.1.2-1ppa1~bionic 2019/12/21 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2021.
Known bugs: http://www.haproxy.org/bugs/bugs-2.1.2.html
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -O2
-fdebug-prefix-map=/build/haproxy-HuTwKZ/haproxy-2.1.2=.
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time
-D_FORTIFY_SOURCE=2 -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wno-implicit-fallthrough
-Wno-stringop-overflow -Wtype-limits -Wshift-negative-value
-Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_REGPARM=1 USE_OPENSSL=1
USE_LUA=1 USE_ZLIB=1 USE_SYSTEMD=1
Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE
-PCRE_JIT +PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD
-PTHREAD_PSHARED +REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY
+LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO
+OPENSSL +LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO
+NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER
+PRCTL +THREAD_DUMP -EVPORTS
Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
Built with multi-threading support (MAX_THREADS=64, default=2).
Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE2 version : 10.31 2018-02-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with the Prometheus exporter as a service
Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.
Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTTP   side=FE|BE mux=H2