On 02/22/2017 10:34 AM, Niklas Edmundsson wrote:
To make results less confusing: are there any specific patches or a
branch I should test? My baseline is httpd-2.4.25 + httpd-2.4.25-deps
with --with-included-apr, FWIW.

2.4.25 is just fine. We'll have to make sure there's nothing substantially different about it performance-wise before we backport patches anyway, so it'd be good to start testing it now.

- The OpenSSL test server, writing from memory: 1.2 GiB/s
- httpd trunk with `EnableMMAP on` and serving from disk: 850 MiB/s
- httpd trunk with `EnableMMAP off`: 580 MiB/s
- httpd trunk with my no-mmap-64K-block file bucket: 810 MiB/s

At those speeds your results might be skewed by the latency of
processing 10 MiB GETs.

Maybe, but keep in mind I care more about the difference between the numbers than the absolute throughput ceiling here. (In any case, I don't see significantly different numbers between 10 MiB and 1 GiB files. Remember, I'm testing via loopback.)
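For reference, the measurement loop itself is nothing exotic. Here's a
minimal sketch of the idea (not the actual test server; it assumes an
SSL *ssl that was connected and handshaken elsewhere, and the function
names are made up for illustration):

    #include <time.h>
    #include <openssl/ssl.h>

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* Push total_bytes of zeroes through an established TLS connection
     * in 64 KiB chunks and return the observed throughput in MiB/s. */
    static double write_mibps(SSL *ssl, size_t total_bytes)
    {
        static char buf[64 * 1024];
        size_t sent = 0;
        double start = now_sec();

        while (sent < total_bytes) {
            int n = SSL_write(ssl, buf, sizeof(buf));
            if (n <= 0)
                return -1.0; /* real code would check SSL_get_error() */
            sent += (size_t)n;
        }
        return (sent / (1024.0 * 1024.0)) / (now_sec() - start);
    }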

Discard the results from the first warm-up
access and your results delivering from memory or disk (cache) shouldn't
differ.

Ah, but they *do*, as Yann pointed out earlier. We can't just hand the disk cache to OpenSSL for encryption; the data has to be addressable in userspace somewhere. That seems to be a major reason for the mmap() advantage: mmap() makes the page cache directly addressable, while a naive read() solution has to copy it into a small buffer over and over again.
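To make the copy difference concrete, here is a rough sketch of the two
delivery strategies (hypothetical helpers, not httpd's actual bucket
code; 'ssl' and 'fd' are assumed to be set up elsewhere, and error
handling is minimal):

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <openssl/ssl.h>

    /* read() path: the kernel copies page cache into buf, then OpenSSL
     * reads from buf to encrypt -- one extra copy per chunk. */
    static int send_with_read(SSL *ssl, int fd)
    {
        char buf[8192];
        ssize_t n;

        while ((n = read(fd, buf, sizeof(buf))) > 0) {
            if (SSL_write(ssl, buf, (int)n) <= 0)
                return -1;
        }
        return n < 0 ? -1 : 0;
    }

    /* mmap() path: the page cache itself becomes addressable, so
     * OpenSSL encrypts straight out of it, no intermediate copy. */
    static int send_with_mmap(SSL *ssl, int fd)
    {
        struct stat st;
        void *p;
        int ok;

        if (fstat(fd, &st) < 0 || st.st_size == 0)
            return -1;
        p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED)
            return -1;
        /* fine for 10 MiB test files; real code would chunk the write */
        ok = SSL_write(ssl, p, (int)st.st_size) > 0;
        munmap(p, st.st_size);
        return ok ? 0 : -1;
    }

(The no-mmap-64K-block file bucket in the numbers above is essentially
the read() variant with a larger chunk size.)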

(I am trying to set up Valgrind to confirm where the test server is spending most of its time, but it doesn't care for the large in-memory static buffer, or for OpenSSL's compressed debugging symbols, and crashes. :( )

As I said, our live server does 600 MB/s aes-128-gcm and can deliver 300
MB/s https without mmap. That's only a factor-of-2 difference between
aes-128-gcm speed and delivered speed.

Your results above are almost a factor of 4 off, so something's fishy :-)

Well, I can only report my methodology and numbers -- whether the numbers are actually meaningful has yet to be determined. ;D More testers are welcome!

--Jacob
