On Thu, 16 Feb 2017, Jacob Champion wrote:

> On 02/16/2017 02:49 AM, Yann Ylavic wrote:
>> +#define FILE_BUCKET_BUFF_SIZE (64 * 1024 - 64) /* > APR_BUCKET_BUFF_SIZE */

> So, I had already hacked my O_DIRECT bucket case to just be a copy of APR's file bucket, minus the mmap() logic. I tried making this change on top of it...

> ...and holy crap, for regular HTTP it's *faster* than our current mmap() implementation. HTTPS is still slower than with mmap, but faster than it was without the change. (And the HTTPS performance has been really variable.)

I'm guessing this is with a low-latency storage device, say a local SSD under low load? O_DIRECT on anything with real latency would need much bigger blocks to hide that latency... You really want the OS readahead in the generic case, simply because it performs reasonably well in most situations.
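
To illustrate: an O_DIRECT read loop ends up looking roughly like the
sketch below (the 64K block size, the 4096-byte alignment and the
function name are arbitrary picks for the example, not from the patch):

#define _GNU_SOURCE             /* O_DIRECT is a GNU extension on Linux */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define DIO_BLOCK (64 * 1024)   /* made-up read size for the sketch */

static void direct_read(const char *path)
{
    int fd = open(path, O_RDONLY | O_DIRECT);
    void *buf;
    ssize_t n;

    if (fd < 0)
        return;
    /* O_DIRECT requires buffer, offset and length to be aligned,
     * typically to the logical block size; 4096 is a common safe value. */
    if (posix_memalign(&buf, 4096, DIO_BLOCK) != 0) {
        close(fd);
        return;
    }
    while ((n = read(fd, buf, DIO_BLOCK)) > 0) {
        /* deliver n bytes; bypassing the page cache means no readahead,
         * so the device latency is paid on every single read() */
    }
    free(buf);
    close(fd);
}

With nothing running ahead of you, each read() blocks for a full device
round-trip, which is why block size matters so much more here than with
cached reads.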

Yes, you can avoid a memcpy using O_DIRECT, but compared to the SSL stuff a memcpy is rather cheap...

> Can you confirm that you see a major performance improvement with the new 64K file buffer? I'm pretty skeptical of my own results at this point... but if you see it too, I think we need to make *all* these hard-coded numbers tunable in the config.

I think the big win here is using appropriate block sizes: you do more useful work and less housekeeping. I have no clue when the block size choices were made, but it was likely a while ago. Assuming things will continue to evolve, I'd say making the hard-coded numbers tunable is a Good Thing to do.
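
For illustration, a tunable could look something like this config-module
sketch (the FileBucketBufferSize directive, the module name and the
default are all invented for the example, nothing here exists in httpd):

#include "httpd.h"
#include "http_config.h"
#include "apr_strings.h"

module AP_MODULE_DECLARE_DATA bufsize_module;

typedef struct {
    apr_size_t file_buf_size;   /* replaces the hard-coded constant */
} bufsize_srv_conf;

static void *create_srv_conf(apr_pool_t *p, server_rec *s)
{
    bufsize_srv_conf *conf = apr_pcalloc(p, sizeof(*conf));
    conf->file_buf_size = 64 * 1024 - 64;  /* default from Yann's patch */
    return conf;
}

static const char *set_file_buf_size(cmd_parms *cmd, void *dummy,
                                     const char *arg)
{
    bufsize_srv_conf *conf =
        ap_get_module_config(cmd->server->module_config, &bufsize_module);
    conf->file_buf_size = (apr_size_t)apr_atoi64(arg);
    return NULL;
}

static const command_rec bufsize_cmds[] = {
    AP_INIT_TAKE1("FileBucketBufferSize", set_file_buf_size, NULL, RSRC_CONF,
                  "File bucket read buffer size in bytes"),
    { NULL }
};

static void bufsize_hooks(apr_pool_t *p)
{
    /* no hooks needed; the bucket code would read file_buf_size itself */
}

module AP_MODULE_DECLARE_DATA bufsize_module = {
    STANDARD20_MODULE_STUFF,
    NULL,                  /* per-directory config create */
    NULL,                  /* per-directory config merge */
    create_srv_conf,       /* per-server config create */
    NULL,                  /* per-server config merge */
    bufsize_cmds,
    bufsize_hooks
};

httpd.conf would then just grow a line like "FileBucketBufferSize 65472".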

Is there interest in more real-life numbers with increasing FILE_BUCKET_BUFF_SIZE, or are you already on it? I have an older server that can do 600 MB/s aes-128-gcm per core, but it is only able to deliver 300 MB/s https single-stream via its 10 Gbps interface. My guess is that too-small blocks cause CPU cycles to be spent on housekeeping rather than on delivering data...
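
Something like this quick-and-dirty probe shows the raw read() cost per
block size outside of httpd (the file name and the size list are
placeholders; pick whatever matches your workload):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    static const size_t sizes[] = { 8192, 16384, 65536, 262144, 1048576 };
    size_t i;

    for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
        int fd = open("bigfile", O_RDONLY);   /* placeholder test file */
        char *buf = malloc(sizes[i]);
        struct timespec t0, t1;
        size_t total = 0;
        ssize_t n;
        double secs;

        if (fd < 0 || !buf)
            return 1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        while ((n = read(fd, buf, sizes[i])) > 0)
            total += (size_t)n;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%8zu-byte reads: %.1f MB/s\n",
               sizes[i], total / secs / 1e6);
        free(buf);
        close(fd);
    }
    return 0;
}

Remember to drop the page cache between runs (echo 3 >
/proc/sys/vm/drop_caches), otherwise the later sizes just replay the
cache and the numbers mean nothing.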


/Nikke
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se      |     ni...@acc.umu.se
---------------------------------------------------------------------------
 Fortunately... no one's in control.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
