Hi Gary,

On Fri, Jan 22, 2016 at 06:04:07PM -0800, Gary Barrueto wrote:
> > Do you have a way to ensure the same algorithms are
> > negotiated on both versions ? I've run a diff between 1.5.14 and 1.6.3
> > regarding SSL, and it's very limited. Most of the changes affect OpenSSL
> > 1.0.2 (you're on 1.0.1), or automatic DH params and in your case they're
> > already forced.
> >
> 
> That's exactly what I'm doing now: forcing the client to only negotiate the
> specific protocol/cipher. The largest difference we see is with
> ECDHE-RSA-AES256-SHA384/TLS1.2+keepalive, which is 16% slower compared to
> 1.5.14.

OK thank you.

> > There's something though, I'm seeing SSL_MODE_SMALL_BUFFERS being added
> > in 1.6. It only comes with a patch and is not standard, it allows openssl
> > to use less memory for small messages. Could you please run the following
> > command to see what SSL_MODE_* options are defined on your system :
> >
> >    $ grep -rF SSL_MODE_ /usr/include/openssl/
> >
> Here is the output from the command:
> 
> gary:~$ grep -rF SSL_MODE_ /usr/include/openssl/
> /usr/include/openssl/ssl.h:#define SSL_MODE_ENABLE_PARTIAL_WRITE 0x00000001L
> /usr/include/openssl/ssl.h:#define SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER 0x00000002L
> /usr/include/openssl/ssl.h:#define SSL_MODE_AUTO_RETRY 0x00000004L
> /usr/include/openssl/ssl.h:#define SSL_MODE_NO_AUTO_CHAIN 0x00000008L
> /usr/include/openssl/ssl.h:#define SSL_MODE_RELEASE_BUFFERS 0x00000010L
> /usr/include/openssl/ssl.h:#define SSL_MODE_SEND_FALLBACK_SCSV 0x00000080L

So that's totally standard, and the same as what we have on other systems.

> > > I have the 'haproxy -vv' output and hardware specs listed below. Also
> > > attaching the haproxy/nginx configs being used.
> >
> > Thank you, I'm really not seeing anything suspicious there. There's
> > something that you should definitely do if you're running on a kernel 3.9
> > or later, which is to use as many "bind" lines per frontend as you have
> > processes. That makes use of the kernel's SO_REUSEPORT mechanism to balance
> > the load across all processes much more evenly than when there's a single
> > queue. It might be possible that your load is imbalanced right now.
> >
> >
> I've just tested with a 3.13 kernel (backported from Ubuntu 14.04/trusty)
> and we see nearly the same results.

OK.
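
For reference, the multi-bind setup I mentioned could look like this. It's
only a sketch: the 4-process count, certificate path and backend name are
hypothetical and need to be adapted to your config:

```
global
    nbproc 4

frontend fe_ssl
    # One bind line per process: with kernel 3.9+, SO_REUSEPORT lets each
    # process own its own listening socket, and the kernel spreads incoming
    # connections evenly across them instead of using one shared queue.
    bind :443 ssl crt /etc/haproxy/site.pem process 1
    bind :443 ssl crt /etc/haproxy/site.pem process 2
    bind :443 ssl crt /etc/haproxy/site.pem process 3
    bind :443 ssl crt /etc/haproxy/site.pem process 4
    default_backend be_nginx
```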

> Here is a small sample of what we've seen with a 1M payload.
> 
> cipher                   protocol  mode           1.5.14 req/s  1.6.3 req/s  % diff
> ECDHE-RSA-AES256-SHA384  TLS1.2    non-keepalive  208.92        184.25       -13.39%
> ECDHE-RSA-AES256-SHA384  TLS1.2    keepalive      224.76        192.12       -16.99%
> ECDHE-RSA-AES128-SHA256  TLS1.2    keepalive      174.91        159.67        -9.54%
> ADH-AES128-SHA           TLS1.1    keepalive      363.38        336.24        -8.07%

OK so in short, in the worst case the performance dropped from 2 Gbps
to 1.7 Gbps. That's particularly low for a multi-process config. The
typical performance you should get on AES256 and keep-alive is around
3-5 Gbps per core depending on the CPU's frequency.

Could you possibly run the same test in a single-process config ? Please
just run the ECDHE-RSA-AES256-SHA384-keepalive test since it's the most
visible one.
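
In case it helps, a minimal single-process variant for that test could look
like this (again only a sketch, with hypothetical paths and names):

```
global
    # no "nbproc" line: a single process is the default

frontend fe_ssl
    bind :443 ssl crt /etc/haproxy/site.pem ciphers ECDHE-RSA-AES256-SHA384
    default_backend be_nginx
```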

Also, another test worth doing is to start a second load generator (I
don't know if you have another machine available) to make sure that
nothing in the middle, including the load generator itself, is limiting
the performance. Because quite frankly, these numbers are suspiciously
low. I've reached 19 Gbps of SSL traffic in keep-alive with 1M objects
on a quad-core. I'm not saying that you should have seen 80 Gbps, but
at least you should have seen much more than 2 Gbps...
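
One quick way to rule the hardware in or out, assuming the openssl
command-line tool is installed on both the haproxy box and the load
generator, is OpenSSL's built-in benchmark:

```shell
# Raw bulk-cipher throughput of this CPU; -evp selects the EVP code path,
# which uses AES-NI acceleration when available. The bulk cipher behind
# ECDHE-RSA-AES256-SHA384 is AES-256-CBC.
openssl speed -evp aes-256-cbc

# RSA-2048 private-key operations per second, which bound the full-handshake
# (non-keepalive) rate when using a 2048-bit certificate.
openssl speed rsa2048
```

If those numbers are already low on either machine, the bottleneck is the
hardware (or a missing AES-NI path), not haproxy.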

Regards,
Willy

