On 15/09/2020 12:46, Martin Grigorov wrote:
> On Tue, Sep 15, 2020 at 2:37 PM Martin Grigorov <mgrigo...@apache.org>
> wrote:
> 
>> Hi,
>>
>> I am running some load tests on Tomcat and I've noticed that when HTTP/2 is
>> enabled the throughput drops considerably.
>>
>> Here are the steps to reproduce:
>>
>> 1) Enable HTTP/2, e.g. by uncommenting this connector:
>>
>> https://github.com/apache/tomcat/blob/d381d87005fa89d1f19d9091c0954f317c135d9d/conf/server.xml#L103-L112
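
(For reference, the commented-out block at those lines looks roughly like
this in the stock server.xml; the certificate file names below are the
defaults and may differ in your setup. Removing the surrounding <!-- -->
enables a TLS connector on port 8443 with HTTP/2 via the UpgradeProtocol
element:)

    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="150" SSLEnabled="true">
        <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
        <SSLHostConfig>
            <Certificate certificateKeyFile="conf/localhost-rsa-key.pem"
                         certificateFile="conf/localhost-rsa-cert.pem"
                         certificateChainFile="conf/localhost-rsa-chain.pem"
                         type="RSA" />
        </SSLHostConfig>
    </Connector>
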
>>
>> 2) Download the Vegeta load testing tool from:
>> https://github.com/tsenart/vegeta/releases/
>>
>> 3) Run the load tests:
>>
>> 3.1) HTTP/1.1
>> echo -e '{"method": "GET", "url": "http://localhost:8080/examples/"}' |
>> vegeta attack -format=json  -rate=0 -max-workers=1000 -duration=10s |
>> vegeta encode > /tmp/http1.json; and vegeta report -type=json
>> /tmp/http1.json | jq .
>>
>> 3.2) HTTP/2
>> echo -e '{"method": "GET", "url": "https://localhost:8443/examples/"}' |
>> vegeta attack -format=json -http2 -rate=0 -max-workers=1000 -insecure
>> -duration=10s | vegeta encode > /tmp/http2.json; and vegeta report
>> -type=json /tmp/http2.json | jq .
>>
>> As explained at https://github.com/tsenart/vegeta#-rate, -rate=0 means
>> that Vegeta will try to send as many requests as possible with the
>> configured number of workers.
>> I use '-insecure' because I use a self-signed certificate.
>>
>> On my machine I get around 14-15K reqs/sec for HTTP/1.1, with all
>> responses returning status code 200.
>> But for HTTP/2 Tomcat starts returning errors like these:
>>
>>  "errors": [
>>     "Get \"https://localhost:8443/examples/\": http2: server sent GOAWAY
>> and closed the connection; LastStreamID=9259, ErrCode=PROTOCOL_ERROR,
>> debug=\"Stream [9,151] has been closed for some time\"",
>>     "http2: server sent GOAWAY and closed the connection;
>> LastStreamID=9259, ErrCode=PROTOCOL_ERROR, debug=\"Stream [9,151] has been
>> closed for some time\"",
>>     "Get \"https://localhost:8443/examples/\": http2: server sent GOAWAY
>> and closed the connection; LastStreamID=239, ErrCode=PROTOCOL_ERROR,
>> debug=\"Stream [49] has been closed for some time\""
>>   ]
>>
>> when I ask for more than 2000 reqs/sec, i.e. -rate=2000/1s

That indicates that the client has sent a frame associated with a stream
that the server closed previously and that that stream has been removed
from the Map of known streams to make room for new ones. See
Http2UpgradeHandler.pruneClosedStreams()

It looks like the client is making assumptions about server behaviour
that go beyond the requirements of RFC 7540, section 5.3.4.

>> All the access logs look like:
>>
>> 127.0.0.1 - - [15/Sep/2020:13:59:24 +0300] "GET /examples/ HTTP/2.0" 200
>> 1126
>> 127.0.0.1 - - [15/Sep/2020:13:59:24 +0300] "GET /examples/ HTTP/2.0" 200
>> 1126
>> 127.0.0.1 - - [15/Sep/2020:13:59:24 +0300] "GET /examples/ HTTP/2.0" 200
>> 1126
>> 127.0.0.1 - - [15/Sep/2020:13:59:24 +0300] "GET /examples/ HTTP/2.0" 200
>> 1126
>> 127.0.0.1 - - [15/Sep/2020:13:59:24 +0300] "GET /examples/ HTTP/2.0" 200
>> 1126
>> 127.0.0.1 - - [15/Sep/2020:13:59:24 +0300] "GET /examples/ HTTP/2.0" 200
>> 1126
>>
>> i.e. there are no error codes, just 200.
>> Vegeta reports the error with status code = 0. I think this just means
>> that it didn't get a proper HTTP response but just a TCP error.
>> There are no errors in catalina.out.
>>
>> Are there any settings I can tune to get better throughput with HTTP/2?
>>
>> Tomcat 10.0.0-M8.

If you really want to maximise throughput then you need to reduce the
number of concurrent requests to (or a little above) the number of cores
available on the server. Go higher and you'll start to see throughput
tail off due to context switching.

If you want to demonstrate throughput with a large number of clients
you'll probably need to experiment with maxThreads, maxConcurrentStreams
and maxConcurrentStreamExecution.

If I had to guess, I'd expect maxConcurrentStreams ==
maxConcurrentStreamExecution and low numbers for all of them to give the
best results.
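
As a sketch of where those knobs live in server.xml (the numbers below are
only illustrative starting points to experiment with, not recommendations):

    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="16" SSLEnabled="true">
        <!-- maxConcurrentStreams limits how many streams a client may have
             open per connection; maxConcurrentStreamExecution limits how
             many of them Tomcat processes concurrently. -->
        <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"
                         maxConcurrentStreams="16"
                         maxConcurrentStreamExecution="16" />
        <!-- SSLHostConfig / Certificate elements as in the default config -->
    </Connector>

Re-run the Vegeta tests while varying those three values together and
compare the reports.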

Mark


> Forgot to mention that I've also tested with JMeter +
> https://github.com/Blazemeter/jmeter-http2-plugin but there it fails with
> OOM if I use more than 2000 virtual users. Even with more memory it does
> not reach the results Vegeta gives for HTTP/1.1. Also, JMeter uses a
> sequential model.
> 
> For comparison, I've also tested with a simple Go-based HTTP/2 server:
> 
> http2-server.go:
> ==========================================================
> package main
> 
> import (
>     "fmt"
>     "log"
>     "net/http"
>     "os"
> )
> 
> func main() {
> 
>     // The listening port is read from the PORT environment variable.
>     port := os.Getenv("PORT")
>     if port == "" {
>       log.Fatal("Please specify the HTTP port as environment variable, e.g. env PORT=8081 go run http2-server.go")
>     }
> 
>     tls_root := "/path/to/certs/"
>     srv := &http.Server{Addr: ":" + port, Handler: http.HandlerFunc(handle)}
>     // ListenAndServeTLS enables HTTP/2 automatically for TLS clients that negotiate it via ALPN.
>     log.Fatal(srv.ListenAndServeTLS(tls_root + "server.crt", tls_root + "server.key"))
> }
> 
> func handle(w http.ResponseWriter, r *http.Request) {
>     // log.Printf("Got connection: %s", r.Proto) // prints HTTP/2.0
>     fmt.Fprintf(w, "Hello World")
> }
> ==========================================================
> 
> Here Vegeta makes around 13K reqs/sec without error responses.
> 
> To run this app do: env PORT=8080 go run http2-server.go
> 
> 
>> Regards,
>> Martin
>>
> 

