Dave

Poked around a bit today, but I'm not sure I've reproduced anything as such or
found any smoking guns.

I ran a Fuseki instance with the same watch command you showed in your last 
message.  The JVM heap stays essentially static even after hours; there's some 
minor fluctuation up and down in used heap space, but the heap itself doesn't 
grow at all.  I did this with a couple of different versions of 4.x to see if 
there's any discernible difference, but nothing meaningful showed up.  I also 
tried 3.17.0 but again couldn't reproduce the behaviour you are describing.
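
In case it's useful for comparison, the heap figures I mean are the sort of 
thing the standard JDK tools report, e.g. (with <pid> as a placeholder for the 
Fuseki process id):

    # sample GC / heap occupancy every 5 seconds
    jstat -gc <pid> 5000
    # or take a one-off snapshot of the current heap
    jcmd <pid> GC.heap_info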

For reference, I'm on OS X 13.4.1 using OpenJDK 17.

The process memory (for all versions I tested) peaks at about 1.5 GB as 
reported by the vmmap tool.  Ongoing monitoring, i.e. OS X Activity Monitor, 
shows the memory usage of the process fluctuating over time, but I never see 
the unbounded growth that your original report suggested.  Also, I didn't set 
the heap explicitly at all, so I'm getting the default max heap of 4 GB, and my 
actual heap usage was around 100 MB.
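
For reference, the vmmap invocation was nothing special, just something like 
(again, <pid> is a placeholder):

    # full region listing; the summary at the end reports the process's
    # physical footprint and its peak
    vmmap <pid>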

I see from vmmap that most of the memory appears to be virtual memory related 
to the many shared native libraries that the JVM links against, which on a real 
OS is often swapped out as it's not under active use.

In a container, where swap is likely disabled, that's obviously more 
problematic as everything occupies physical memory, even if much of it is for 
native libraries that are never needed by anything Fuseki does.  Again, I don't 
see how that would lead to the apparently unbounded memory usage you're 
describing.
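
If you want to see how much of that the container is actually holding 
resident, something like this (container name is a placeholder) reports it 
directly:

    # resident memory usage as seen by the container runtime
    docker stats --no-stream <container>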

You could try using jlink to build a minimal image that contains only the 
parts of the JDK you actually need.  I found the following old Jena thread - 
https://lists.apache.org/thread/dmmkndmy2ds8pf95zvqbcxpv84bj7cz6 - which 
describes an apparently similar memory issue and links, at the start of the 
thread, to an example Dockerfile that builds just such a minimal JRE for 
Fuseki.
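
I haven't tried it myself, but the general shape of it is something like the 
following - the module list and output path here are illustrative guesses, not 
what that Dockerfile actually uses:

    # work out which JDK modules the Fuseki jar actually needs
    jdeps --print-module-deps --ignore-missing-deps fuseki-server.jar
    # build a trimmed runtime containing only those modules
    jlink --add-modules java.base,java.xml,java.naming,java.management \
          --strip-debug --no-header-files --no-man-pages \
          --output /opt/fuseki-jre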

Note that I also ran the leaks tool against the long-running Fuseki processes 
and it didn't find anything of note: 5.19 KB of leaked memory over a 3.5 hour 
run, so no smoking gun there.
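
(That was just the stock macOS tool pointed at the process, something like:

    # macOS heap/allocation leak check against the running process
    leaks <pid>

with <pid> being the Fuseki process id.)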

Regards,

Rob

From: Dave Reynolds <dave.e.reyno...@gmail.com>
Date: Friday, 7 July 2023 at 11:11
To: users@jena.apache.org <users@jena.apache.org>
Subject: Re: Mystery memory leak in fuseki
Hi Andy,

Thanks for looking.

Good thought on some issue with stacked requests causing a thread leak, but I
don't think that matches our data.

From the metrics, the number of threads and the total thread memory used are
not that great and are stable long term while the process size grows, at
least in our situation.

This is based both on the JVM metrics from the Prometheus scrape and on
switching on native memory tracking and using jcmd to do various low-level
dumps.
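
Roughly this sort of thing, in case you want to compare - the launch
arguments shown are illustrative and <pid> is a placeholder:

    # start the JVM with native memory tracking enabled, e.g.
    #   java -XX:NativeMemoryTracking=summary -jar fuseki-server.jar ...
    # then take a baseline and diff it once the process has grown
    jcmd <pid> VM.native_memory baseline
    jcmd <pid> VM.native_memory summary.diff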

In a test setup we can replicate the long-term (~3 hours) process
growth (while the heap, non-heap and threads stay stable) by just doing
something like:

watch -n 1 'curl -s http://localhost:3030/$/metrics'

With no other requests at all. So I think that makes it less likely that the
root cause is triggered by stacked concurrent requests. Certainly the curl
process has exited completely each time, though I guess there could still be
some connection cleanup going on in the Linux kernel.
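
One thing we could check is whether there's lingering socket state on the
Fuseki port after the curl exits, e.g.:

    # look for TIME_WAIT / CLOSE_WAIT sockets left on port 3030
    ss -tan | grep ':3030'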

 > Is the OOM kill the container runtime or Java exception?

We're not limiting the container memory, but the OOM error is from the Docker
runtime itself:
     fatal error: out of memory allocating heap arena map

We have replicated the memory growth outside a container but haven't left
that to soak on a small machine to provoke an OOM, so I'm not sure whether
the OOM killer would hit first or we'd get a Java OOM exception first.
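
If we do leave it to soak, the kernel log should tell us which one fired,
e.g.:

    # did the kernel OOM killer terminate the process?
    dmesg -T | grep -i 'out of memory'
    # or on a systemd host
    journalctl -k | grep -i oom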

One curiosity we've found in the recent tests is that, when the process
has grown to a dangerous level for the server, we randomly sometimes see
the JVM (Temurin 17.0.7) spit out a thread dump and heap summary as if
there were a low-level exception. However, there's no exception message at
all - just a timestamp, the thread dump and nothing else. The JVM seems to
just carry on and the process doesn't exit. We're not setting any debug
flags and not requesting any thread dump, and there's no obvious triggering
event. This is before the server gets completely out of memory and causes
the Docker runtime to barf.
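
If it would help to narrow things down, we could trigger a dump deliberately
and compare it against the spontaneous ones, e.g. (<pid> is a placeholder):

    # request a thread dump on demand
    jcmd <pid> Thread.print
    # or equivalently send SIGQUIT
    kill -3 <pid>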

Dave


On 07/07/2023 09:56, Andy Seaborne wrote:
> I tried running without any datasets. I get the same heap effect of
> growing slowly then dropping back.
>
> Fuseki Main (fuseki-server did the same but the figures are from main -
> there is less going on)
> Version 4.8.0
>
> fuseki -v --ping --empty    # No datasets
>
> 4G heap.
> 71M allocated
> 4 threads (+ Daemon system threads)
> 2 are not parked (i.e. they are blocked)
> The heap grows slowly to 48M then a GC runs then drops to 27M
> This repeats.
>
> Run one ping.
> Heap now 142M, 94M/21M GC cycle
> and 2 more threads at least for a while. They seem to go away after time.
> 2 are not parked.
>
> Now pause the JVM process, queue 100 pings and continue the process.
> Heap now 142M, 80M/21M GC cycle
> and no more threads.
>
> Thread stacks are not heap so there may be something here.
>
> Same except -Xmx500M
> RSS is 180M
> Heap is 35M actual.
> 56M/13M heap cycle
> and after one ping:
> I saw 3 more threads, and one quickly exited.
> 2 are not parked
>
> 100 concurrent ping requests.
> Maybe 15 more threads. 14 parked. One is marked "running" by visualvm.
> RSS is 273M
>
> With -Xmx250M -Xss170k
> The Fuseki command failed below 170k during classloading.
>
> 1000 concurrent ping requests.
> Maybe 15 more threads. 14 parked. One is marked "running" by visualvm.
> The threads aren't being gathered.
> RSS is 457M.
>
> So a bit of speculation:
>
> Is the OOM kill the container runtime or Java exception?
>
> There aren't many moving parts.
>
> Maybe under some circumstances, the metrics gatherer or ping caller
> causes more threads. This could be bad timing, several operations
> arriving at the same time, or it could be the client end isn't releasing
> the HTTP connection in a timely manner or is delayed/failing to read the
> entire response (HTTP/1.1) -- HTTP/2 probably isn't at risk.
>
> Together with a dataset, memory mapped files etc, it is pushing the
> process size up and on a small machine that might become a problem
> especially if the container host is limiting RAM.
>
> But speculation.
>
>      Andy
>
