Hello,

If you want "definitive proof" that you're not using AES-NI instructions during your benchmark, you could simply compile OpenSSL (and then HAProxy, linking it against that OpenSSL build) while passing the "-mno-aes" flag to GCC.
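One caveat worth hedging on: OpenSSL's AES-NI code paths are mostly hand-written assembly selected at run time, so a compiler flag alone may not keep them out of the binary. OpenSSL's own `no-asm` Configure option does. A sketch of such a build, assuming you are in an unpacked OpenSSL source tree (the install prefix is only an example):

```shell
# Build OpenSSL without any assembly, so the AES-NI routines
# are omitted entirely (surer than compiler flags alone):
./Configure linux-x86_64 no-asm --prefix=/opt/openssl-noasm
make -j"$(nproc)"
make install_sw

# Then point the HAProxy build at it, e.g.:
#   make TARGET=linux-glibc USE_OPENSSL=1 \
#        SSL_INC=/opt/openssl-noasm/include \
#        SSL_LIB=/opt/openssl-noasm/lib
```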
Then, to make sure your compilation succeeded, check both resulting binaries (haproxy + libcrypto.so) for any AES-NI instructions:

$ objdump --disassemble-all libcrypto.so | grep -E 'aes(enc|dec)'

There should be none.

BR.,
Emerson

On Fri., Oct. 29, 2021 at 00:09, Shawn Heisey <elyog...@elyograg.org> wrote:
> On 10/28/21 2:11 PM, Lukas Tribus wrote:
> > You would have to run a single request causing a large download, and
> > run haproxy through a cpu profiler, like perf, and compare outputs.
>
> I am learning all sorts of useful things. I see evidence of acceleration
> when pulling a large file with curl! Average transfer speed is visibly
> lower with acceleration disabled. The first test is haproxy started
> normally; the second is haproxy started with the environment variable
> that disables the AES-NI CPU flag:
>
> root@sauron:~# curl --ciphers ECDHE-RSA-AES256-GCM-SHA384 https://server.domain.tld/4gbrandom > /dev/null
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100 4096M  100 4096M    0     0  63.4M      0  0:01:04  0:01:04 --:--:-- 63.5M
> root@sauron:~# curl --ciphers ECDHE-RSA-AES256-GCM-SHA384 https://server.domain.tld/4gbrandom > /dev/null
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100 4096M  100 4096M    0     0  52.2M      0  0:01:18  0:01:18 --:--:-- 61.4M
>
> The file I transferred is 4 GB in size, copied from /dev/urandom with
> dd. I did the pull from another machine on the same gigabit LAN. I
> picked the cipher by watching the TLS 1.2 ciphers shown by testssl.sh
> and choosing one that mentioned AES. The server has plenty of memory to
> cache that entire 4 GB file, so disk speed should be irrelevant.
>
> Thank you for hanging onto enough patience to help me navigate this
> rabbit hole.
>
> Thanks,
> Shawn
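On the environment variable Shawn mentions: OpenSSL reads OPENSSL_ia32cap, a 64-bit capability vector whose low 32 bits mirror CPUID leaf 1 EDX and whose high 32 bits mirror ECX. AES-NI is ECX bit 25, i.e. bit 57 of the vector, and a value prefixed with "~" clears the given bits, disabling the feature at run time without any rebuild. A small sketch deriving that mask:

```shell
# AES-NI is CPUID leaf 1, ECX bit 25. In OPENSSL_ia32cap's 64-bit
# vector the low 32 bits are EDX and the high 32 bits are ECX,
# so the AES-NI bit is 32 + 25 = 57.
AESNI_BIT=$((32 + 25))
MASK=$(printf '~0x%X' $((1 << AESNI_BIT)))
echo "$MASK"    # prints ~0x200000000000000

# To compare throughput with and without the instructions
# (slow; run manually):
#   openssl speed -evp aes-256-gcm
#   OPENSSL_ia32cap="$MASK" openssl speed -evp aes-256-gcm
```

Exporting the same value in haproxy's environment before starting it is what makes libcrypto fall back to the portable AES implementation, which matches the slower second curl run above.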