Hi,

Update: We found that Tomcat goes OOM when a client closes and opens new
connections every second. In the memory dump, we see a lot of RequestInfo
objects that are causing the memory spike.

After a while, Tomcat goes OOM and starts rejecting requests (I get
request timeouts on my client). This seems like a bug to me.
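
In case anyone wants to look at the heap data themselves, it can be captured
with standard JDK tooling; a rough sketch (the dump path and the Tomcat PID
below are placeholders):

    # in setenv.sh: dump the heap automatically when Tomcat hits OOM
    CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/tomcat-oom.hprof"

    # or take a live class histogram and look for org.apache.coyote.RequestInfo
    jmap -histo:live <tomcat-pid> | grep RequestInfo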

For better understanding, let me explain my use case again:

I have a Jetty client that sends HTTP/2 requests to Tomcat. My requirement
is to close a connection after a configurable number (say 5000) of
requests/streams and open a new connection that continues to send requests.
I close a connection by sending a GoAway frame.
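
To make the connection-recycling behavior concrete, here is a minimal sketch
of that client logic against the Jetty 9.4 low-level HTTP/2 API (host, port,
URI and the request count are placeholders; response handling, flow control
and error handling are left out):

import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.http.HttpFields;
import org.eclipse.jetty.http.HttpURI;
import org.eclipse.jetty.http.HttpVersion;
import org.eclipse.jetty.http.MetaData;
import org.eclipse.jetty.http2.ErrorCode;
import org.eclipse.jetty.http2.api.Session;
import org.eclipse.jetty.http2.api.Stream;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.frames.HeadersFrame;
import org.eclipse.jetty.util.Callback;
import org.eclipse.jetty.util.FuturePromise;
import org.eclipse.jetty.util.Promise;

public class RotatingHttp2Client {
    public static void main(String[] args) throws Exception {
        HTTP2Client client = new HTTP2Client();
        client.start();
        int requestsPerConnection = 5000; // configurable in the real client

        while (true) {
            // open a new HTTP/2 connection (session) to Tomcat
            FuturePromise<Session> promise = new FuturePromise<>();
            client.connect(new InetSocketAddress("localhost", 8080),
                    new Session.Listener.Adapter(), promise);
            Session session = promise.get(5, TimeUnit.SECONDS);

            // send the configured number of streams on this connection
            for (int i = 0; i < requestsPerConnection; i++) {
                MetaData.Request request = new MetaData.Request("GET",
                        new HttpURI("http://localhost:8080/test"),
                        HttpVersion.HTTP_2, new HttpFields());
                session.newStream(new HeadersFrame(request, null, true),
                        new Promise.Adapter<Stream>(), new Stream.Listener.Adapter());
            }

            // recycle the connection: close() sends a GOAWAY frame to Tomcat,
            // then the loop opens a fresh connection and keeps sending
            session.close(ErrorCode.NO_ERROR.code, "recycling connection", Callback.NOOP);
        }
    }
}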

When I execute this use case under load, I see that after ~2 hours my
requests fail and I get a series of errors like request timeouts (5 seconds),
invalid window update frames, and connection close exceptions on my client.
On further debugging, I found that it's a Tomcat memory problem and it goes
OOM after some time under heavy load with multiple connections being
re-established by the clients.

Is this a known issue? Or a known behavior with Tomcat?

Please let me know if you have any experience with such a situation. Thanks
in advance.

On Sun, Jun 14, 2020 at 11:30 AM Chirag Dewan <chirag.dewa...@gmail.com>
wrote:

> Hi,
>
> This is without load balancer actually. I am directly sending to Tomcat.
>
> Update:
>
> Part of the issue I found was specific to 9.0.29. I observed that when
> requests timed out on the client (2 seconds), the client would send an RST
> frame, and the GoAway from Tomcat in response was perhaps a bug. In 9.0.36,
> an RST frame is answered with an RST from Tomcat.
>
> Now the next part to troubleshoot is why, after about an hour or so,
> requests time out at Tomcat.
>
> Could close to 100 HTTP2 connections per second cause this on Tomcat?
>
> Thanks
>
> On Sun, 14 Jun, 2020, 12:27 AM Michael Osipov, <micha...@apache.org>
> wrote:
>
>> Am 2020-06-13 um 08:42 schrieb Chirag Dewan:
>> > Hi,
>> >
>> > We are observing that under high load, my clients start receiving a
>> > GoAway frame with error:
>> >
>> > *Connection[{id}], Stream[{id}] an error occurred during processing that
>> > was fatal to the connection.*
>> >
>> > Background: we have implemented our clients to close connections after
>> > every 500-1000 requests (streams). This is a load balancer requirement
>> > that we are working on, hence this behavior. So with a throughput of
>> > around 19k requests/second, almost 40 connections are closed and
>> > recreated every second.
>> >
>> > After we receive this frame, my clients start behaving erroneously.
>> > Before this as well, my clients start sending RST_STREAM with CANCEL for
>> > each request. Could this be due to the number of connections we open? Is
>> > it related to the version of Tomcat? Or are my clients misbehaving?
>> >
>> > Now since I only receive this under heavy load, I can't quite picture
>> > enough reasons for this to happen.
>> >
>> > Any possible clues on where I should start looking?
>> >
>> > My Stack:
>> > Server - Tomcat 9.0.29
>> > Client - Jetty 9.x
>>
>> Does the same happen w/o the load balancer?
>>
