Just to give some numbers from my laptop, testing against localhost:

httpd with the worker MPM, on OS X, client h2load, transfer rates in GByte/s

Protocol       #requests    2.4.18      2.4.x
1 GB resource
  h2c clear       1*1        0.017       1.0
                  8*1          -         2.2
  h2  tls         1*1        0.430       0.425
                  8*1          -         0.75
10 MB resource
  h2c clear       1*100      1.0         0.94
                  8*100      2.4         2.4
  h2  tls         1*100      0.425       0.418
                  8*100      0.8         0.8
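
(For reference, numbers like these can be gathered with h2load from the
nghttp2 distribution. The exact invocations are not part of this mail, so the
following is only an illustrative sketch with made-up resource paths, reading
"8*100" as 8 client connections with 100 requests each:

    h2load -c 8 -n 8 -m 1 http://localhost/1gb.bin          # "8*1", 1 GB resource
    h2load -c 8 -n 800 -m 100 http://localhost/10mb.bin     # "8*100", 10 MB resource

-c sets the number of clients, -n the total number of requests and -m the
maximum number of concurrent streams per connection.)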

There is a clear bug in 2.4.18 in the cleartext case: client and server stall 
and CPU usage sits at 0%. With TLS, or with smaller resources, this is not 
observable. This has been fixed for the upcoming 2.4.19 release.

-Stefan


> Am 11.03.2016 um 12:40 schrieb Molina <jose.mol...@cern.ch>:
> 
> Hello again,
> 
> I installed the newest version now (2.4.18), but I still get the same results.
> 
> Answering by parts:
> 1. I do not use SSL
> 2. I have tried now with h2o and nghttpd. Obtained results are explained below
> 3. Increasing or decreasing the number of concurrent streams does not 
> significantly change the transfer time, as long as it stays at a reasonable 
> number of files (say, at least 100)
> 4. Varying the size of each stream did not significantly modify the transfer 
> time
> 
> I will now explain the results I’ve observed when trying other servers and 
> clients:
> 
> nghttpd
> I started to suspect this issue is related to the window size, so I ran 
> nghttp with a window size of 2^16 (64 KB, the default). I got the 
> following sample output in the client when downloading a 1 GB file:
> 
> [ 12.087] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [ 12.089] recv DATA frame <length=16383, flags=0x00, stream_id=13>
> [ 12.089] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
>          (window_size_increment=32767)
> [ 12.089] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=13>
>          (window_size_increment=32767)
> 
> So it seems the transfer is frequently waiting for window updates, which 
> causes a huge delay with such a large distance between server and client. 
> The download speed was 450 KB/s on average.
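> 
> (A rough calculation, assuming a transatlantic RTT of about 130 ms, which is
> not stated anywhere in this thread: with a 64 KB receive window at most
> 65536 bytes can be in flight per round trip, so throughput is capped at about
> 
>     65536 bytes / 0.130 s  ≈  500 KB/s
> 
> which matches the observed ~450 KB/s. The flow-control window, not the link,
> is the bottleneck.)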
> 
> Then I increased the TCP buffer on both client and server to 2 MB and then 
> tried an HTTP/2 per-connection window size of 2^21 (2 MB). I got the 
> following (repeated many times until the file was downloaded):
> 
> [  8.005] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.006] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.095] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.096] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.102] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.128] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.130] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.131] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.134] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.139] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.140] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.142] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.143] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.231] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.232] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.237] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.260] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.266] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.267] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.269] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.276] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.277] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.278] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.280] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.367] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.370] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.371] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.397] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.402] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.403] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.405] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.413] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.414] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.416] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.418] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.502] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.505] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.506] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.511] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.537] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.543] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.544] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.549] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.550] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.551] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.553] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.637] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.641] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.643] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.646] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.668] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.678] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.680] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  8.680] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
>          (window_size_increment=1048576)
> [  8.680] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=13>
>          (window_size_increment=1048576)
> 
> This allows the server to send much more data before waiting for a window 
> update, increasing the download speed to 50 MB/s. Concretely, it sends 53 
> DATA frames of ~16 KB, which is ~1 MB, i.e. half of the window. The same 
> happened in the previous case.
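> 
> (Since the client here is a C application, here is a minimal sketch of how a
> client built directly on libnghttp2 might enlarge its receive windows itself;
> the function name and the 1 MiB / 16 MiB values are purely illustrative, not
> taken from this thread:
> 
>     /* Illustrative only: widen the stream and connection receive windows
>      * so the peer is not throttled by frequent WINDOW_UPDATE round trips. */
>     #include <nghttp2/nghttp2.h>
> 
>     static int enlarge_windows(nghttp2_session *session, int32_t stream_id)
>     {
>         /* Advertise a 1 MiB initial window for newly opened streams. */
>         nghttp2_settings_entry iv[] = {
>             { NGHTTP2_SETTINGS_INITIAL_WINDOW_SIZE, 1 << 20 }
>         };
>         int rv = nghttp2_submit_settings(session, NGHTTP2_FLAG_NONE, iv, 1);
>         if (rv != 0)
>             return rv;
> 
>         /* Widen the connection-level window (stream 0) and this stream. */
>         rv = nghttp2_submit_window_update(session, NGHTTP2_FLAG_NONE, 0, 15 << 20);
>         if (rv != 0)
>             return rv;
>         return nghttp2_submit_window_update(session, NGHTTP2_FLAG_NONE,
>                                             stream_id, 1 << 20);
>     }
> 
> The same effect is what the -w/-W options of nghttp provide on the command
> line.)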
> 
> 
> h2o
> I performed the following test:
> 
> Download a big file (1GB): both protocols had the same download speed, which 
> was expected.
> Download in parallel 10000 10KB files: in HTTP/1.1 it took 14 seconds, while 
> in HTTP/2 it took 7.
> 
> Hence, over long distances and with many parallel downloads, HTTP/2 performed 
> better, as expected.
> However, for both tests it was necessary to use the nghttp utility to 
> increase the per-connection window size to 16 MB and the per-stream window 
> size to 1 MB, with 16 parallel streams.
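> 
> (For reference, those window sizes can be set from the nghttp command line;
> the URL is a placeholder and 2^24-1 / 2^20-1 are the closest values the
> options can express:
> 
>     nghttp -W 24 -w 20 -m 16 http://server.example/10KB/file
> 
> where -W/-w give the connection/stream window size as a power of two and
> -m requests the URI that many times over one connection.)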
> 
> 
> 
> Back to Apache: I tried nghttp with a per-connection window size of 2^17 and 
> then tried to download the 1 GB file used in the previous test. To my 
> surprise, the server (and client!) hung at some point, causing a timeout. 
> Here’s the full nghttp log:
> 
> [  0.137] Connected
> [  0.138] send SETTINGS frame <length=12, flags=0x00, stream_id=0>
>          (niv=2)
>          [SETTINGS_MAX_CONCURRENT_STREAMS(0x03):100]
>          [SETTINGS_INITIAL_WINDOW_SIZE(0x04):131071]
> [  0.138] send PRIORITY frame <length=5, flags=0x00, stream_id=3>
>          (dep_stream_id=0, weight=201, exclusive=0)
> [  0.138] send PRIORITY frame <length=5, flags=0x00, stream_id=5>
>          (dep_stream_id=0, weight=101, exclusive=0)
> [  0.138] send PRIORITY frame <length=5, flags=0x00, stream_id=7>
>          (dep_stream_id=0, weight=1, exclusive=0)
> [  0.138] send PRIORITY frame <length=5, flags=0x00, stream_id=9>
>          (dep_stream_id=7, weight=1, exclusive=0)
> [  0.138] send PRIORITY frame <length=5, flags=0x00, stream_id=11>
>          (dep_stream_id=3, weight=1, exclusive=0)
> [  0.138] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
>          (window_size_increment=65536)
> [  0.138] send HEADERS frame <length=69, flags=0x25, stream_id=13>
>          ; END_STREAM | END_HEADERS | PRIORITY
>          (padlen=0, dep_stream_id=11, weight=16, exclusive=0)
>          ; Open new stream
>          :method: GET
>          :path: /cvmfs/test.http2/test/1MB/FOO
>          :scheme: http
>          :authority: hcc-anvil-175-13.unl.edu
>          accept: */*
>          accept-encoding: gzip, deflate
>          user-agent: nghttp2/1.7.1
> [  0.275] recv SETTINGS frame <length=12, flags=0x00, stream_id=0>
>          (niv=2)
>          [SETTINGS_MAX_CONCURRENT_STREAMS(0x03):1000]
>          [SETTINGS_INITIAL_WINDOW_SIZE(0x04):2097151]
> [  0.275] recv WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
>          (window_size_increment=2145386496)
> [  0.275] recv SETTINGS frame <length=0, flags=0x01, stream_id=0>
>          ; ACK
>          (niv=0)
> [  0.275] send SETTINGS frame <length=0, flags=0x01, stream_id=0>
>          ; ACK
>          (niv=0)
> [  0.275] recv (stream_id=13) :status: 200
> [  0.275] recv (stream_id=13) date: Wed, 09 Mar 2016 11:54:44 GMT
> [  0.275] recv (stream_id=13) server: Apache/2.4.18 (Ubuntu)
> [  0.276] recv (stream_id=13) accept-ranges: bytes
> [  0.276] recv (stream_id=13) content-length: 1048576000
> [  0.276] recv (stream_id=13) cache-control: max-age=259200
> [  0.276] recv (stream_id=13) expires: Sat, 12 Mar 2016 11:54:44 GMT
> [  0.276] recv HEADERS frame <length=96, flags=0x04, stream_id=13>
>          ; END_HEADERS
>          (padlen=0)
>          ; First response header
> [  0.410] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  0.412] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  0.546] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  0.547] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  0.547] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
>          (window_size_increment=65536)
> [  0.547] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=13>
>          (window_size_increment=65536)
> [  0.548] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  0.680] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  0.681] recv DATA frame <length=16384, flags=0x00, stream_id=13>
> [  0.683] recv DATA frame <length=16383, flags=0x00, stream_id=13>
> [  0.683] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=0>
>          (window_size_increment=65535)
> [  0.683] send WINDOW_UPDATE frame <length=4, flags=0x00, stream_id=13>
>          (window_size_increment=65535)
> [ERROR] Timeout
> 
> 
> 
> And the apache debug log:
> 
> [Thu Mar 10 10:07:11.576838 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_conn.c(147): [client 128.141.155.247:51137] 
> h2_conn_process start
> [Thu Mar 10 10:07:11.577041 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_conn.c(172): [client 128.141.155.247:51137] 
> h2_session(8): starting on 127.0.0.1:80
> [Thu Mar 10 10:07:11.577129 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send SETTINGS[length=12, stream=0]
> [Thu Mar 10 10:07:11.577185 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send WINDOW_UPDATE[length=4, stream=0]
> [Thu Mar 10 10:07:11.577272 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_stream.c(168): [client 128.141.155.247:51137] 
> h2_stream(8-13): opened
> [Thu Mar 10 10:07:11.577374 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_stream.c(313): [client 128.141.155.247:51137] 
> h2_stream(8-13): scheduled GET 
> http://hcc-anvil-175-13.unl.edu/cvmfs/test.http2/test/1MB/FOO
> [Thu Mar 10 10:07:11.577711 2016] [http2:debug] [pid 5363:tid 
> 140323217999616] h2_mplx.c(703): [client 128.141.155.247:51137] 
> h2_mplx(8-13): write with trailers=no
> [Thu Mar 10 10:07:11.577800 2016] [http2:debug] [pid 5363:tid 
> 140323217999616] h2_mplx.c(703): [client 128.141.155.247:51137] 
> h2_mplx(8-13): write with trailers=no
> [Thu Mar 10 10:07:11.577853 2016] [http2:debug] [pid 5363:tid 
> 140323217999616] h2_mplx.c(749): [client 128.141.155.247:51137] 
> h2_mplx(8-13): close with trailers=no
> [Thu Mar 10 10:07:11.577925 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_stream.c(234): [client 128.141.155.247:51137] 
> h2_stream(8-13): set_response(200)
> [Thu Mar 10 10:07:11.578000 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(1231): [client 128.141.155.247:51137] 
> h2_stream(8-13): submit response 200
> [Thu Mar 10 10:07:11.578039 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send SETTINGS[ack=1, stream=0]
> [Thu Mar 10 10:07:11.578118 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send HEADERS[length=96, hend=1, stream=13, eos=0]
> [Thu Mar 10 10:07:11.578147 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:11.578171 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:11.578193 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:11.578215 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:11.578237 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:11.578259 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:11.578281 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:11.578416 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16383, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:11.983733 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:11.983885 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:11.983978 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:11.984034 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:12.119071 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:12.119156 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:12.119240 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16384, flags=0, stream=13, padlen=0]
> [Thu Mar 10 10:07:12.119282 2016] [http2:debug] [pid 5363:tid 
> 140322990438144] h2_session.c(590): [client 128.141.155.247:51137] 
> h2_session(8): frame_send DATA[length=16383, flags=0, stream=13, padlen=0]
> 
> 
> The timeout I used was 30 seconds, which should be more than enough.
> 
> Is there a bug in Apache, or do I have to tune the HTTP/2-related Apache 
> configuration?
> 
> Cheers,
> José
> 
> 
>> On 08 Mar 2016, at 14:34, Stefan Eissing <stefan.eiss...@greenbytes.de> 
>> wrote:
>> 
>> José,
>> 
>> there is a newer version 2.4.18 of Apache httpd which is better suited
>> for performance testing. 2.4.17 was the first release of the module and much
>> has happened since.
>> 
>> Testing throughput here on localhost, both http/1.1 and http/2 on Apache
>> reach several gigabytes/s, without any special configuration.
>> 
>> It would be interesting to know how latency affects this, and I would like
>> to see results on that.
>> 
>> As to the calibration of your test client, I recommend getting alternate
>> figures from servers such as h2o and nghttpd, and also using other clients
>> like h2load, which is part of the nghttp2 distribution. That way, you
>> can check where differences in client implementations might affect your
>> performance results.
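>> 
>> (For example, a cleartext nghttpd instance for such a cross-check could be
>> started roughly like this; the port and document root are placeholders:
>> 
>>     nghttpd --no-tls -d /var/www 8080
>>     h2load -n 1000 -c 1 -m 100 http://localhost:8080/some/file
>> 
>> h2o would instead be configured through its own configuration file.)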
>> 
>> Cheers,
>> 
>> Stefan
>> 
>>> Am 08.03.2016 um 14:05 schrieb Joshua Marantz <jmara...@google.com.INVALID>:
>>> 
>>> That's a great benchmark, José.  A couple of questions:
>>> 1. When using HTTP/1.1, is your connection over SSL?  I assume that your h2
>>> connection is over SSL, and that might account for a speed penalty.
>>> 2. Have you tried this on other h2 implementations (h2o, nginx)?
>>> 3. Can you take some other data points?  If you just have 100 shards rather
>>> than 1000, how does that affect the results?
>>> 4. Can you also vary the size of each shard?
>>> 
>>> Publishing your results in a more comprehensive study would be quite
>>> valuable.
>>> 
>>> -Josh
>>> 
>>> On Tue, Mar 8, 2016 at 6:33 AM, Molina <jose.mol...@cern.ch> wrote:
>>> 
>>>> Hello,
>>>> 
>>>> I’m testing HTTP/2 performance in order to add support for it to the
>>>> software I’m working on. My benchmark consists of an Apache 2.4.17 server
>>>> with the following http2-related configuration:
>>>> 
>>>> LoadModule http2_module modules/mod_http2.so
>>>> <IfModule http2_module>
>>>> Protocols h2c http/1.1
>>>> H2WindowSize 1460000000
>>>> H2StreamMaxMemSize 1460000000
>>>> H2MinWorkers 8
>>>> H2Direct on
>>>> LogLevel http2:info
>>>> H2MaxSessionStreams 100
>>>> H2SessionExtraFiles 1000
>>>> H2MaxWorkers 100
>>>> </IfModule>
>>>> 
>>>> The rest is simply the default Apache configuration.
>>>> My client is a C application that downloads 1024 files of exactly 1 MiB
>>>> each, so 1 GiB in total. When performing the test with plain HTTP/1.1 the
>>>> download speed is around 20 MiB/s, while with HTTP/2 it is 1.5 MiB/s. The
>>>> client uses exactly one multiplexed connection and works as expected.
>>>> Lastly, the client and server are located in central Europe and the central
>>>> USA respectively, so that latency amplifies the cost of opening many
>>>> HTTP/1.1 connections over such a large physical distance.
>>>> 
>>>> I tried increasing the Linux window size, as well as the TCP buffers, to
>>>> improve the download speed, but without success so far.
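>>>> 
>>>> (The kind of Linux tuning meant here is along these lines; the 4 MB values
>>>> are only an example, and both endpoints need them:
>>>> 
>>>>     sysctl -w net.core.rmem_max=4194304
>>>>     sysctl -w net.core.wmem_max=4194304
>>>>     sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
>>>>     sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"
>>>> 
>>>> The larger TCP buffers only pay off once the HTTP/2 flow-control window is
>>>> also large enough.)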
>>>> 
>>>> Is there some configuration parameter I’m missing to make it perform
>>>> better? Or perhaps the module itself needs a special configuration for
>>>> such a use case?
>>>> 
>>>> Thanks a lot in advance. Best regards,
>>>> José Molina
>>>> 
>>>> 
>> 
> 
