I tried running without any datasets. I get the same heap effect of growing slowly then dropping back.

Fuseki Main (fuseki-server did the same, but the figures below are from Fuseki Main - there is less going on)
Version 4.8.0

fuseki -v --ping --empty    # No datasets

4G heap.
71M allocated
4 threads (+ Daemon system threads)
2 are not parked (i.e. they are blocked)
The heap grows slowly to 48M, then a GC runs and it drops back to 27M.
This repeats.
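
For anyone reproducing this: the GC cycle is easy to watch with jstat
from the JDK, sampling once a second (<pid> being the Fuseki JVM's
process id):

jstat -gc <pid> 1000    # heap and GC counters, sampled every 1000ms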

Run one ping.
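
A ping is a plain HTTP GET on the ping endpoint; with the default port:

curl 'http://localhost:3030/$/ping'
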
Heap now 142M, with a 94M/21M GC cycle,
and 2 more threads, at least for a while - they seem to go away over time.
2 are not parked.

Now pause the JVM process, queue 100 pings, and continue the process.
Heap now 142M, with an 80M/21M GC cycle,
and no more threads.
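
Roughly what that looks like (a sketch, assuming a Linux shell, with
PID set to the Fuseki process id):

kill -STOP $PID        # pause the JVM
for i in $(seq 100); do
  curl -s 'http://localhost:3030/$/ping' > /dev/null &    # these block while the JVM is stopped
done
kill -CONT $PID        # resume; the queued pings are then answered
wait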

Thread stacks are not heap, so there may be something here: each platform thread reserves its own stack outside the Java heap (often 1M each by default on 64-bit Linux), so extra threads show up in RSS but not in the heap figures.

Same again, except with -Xmx500M:
RSS is 180M.
Heap actually in use is 35M,
with a 56M/13M GC cycle,
and after one ping:
I saw 3 more threads, and one quickly exited.
2 are not parked
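
For the record, the heap limit goes to the JVM. With the Fuseki scripts
that is normally done via the JVM_ARGS environment variable (an
assumption about how this instance was launched):

JVM_ARGS="-Xmx500M" fuseki -v --ping --empty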

100 concurrent ping requests.
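
Fired in parallel with xargs (a sketch; any HTTP load tool would do the same):

seq 100 | xargs -P 100 -I{} curl -s -o /dev/null 'http://localhost:3030/$/ping'
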
Maybe 15 more threads: 14 parked, one marked "running" by VisualVM.
RSS is 273M
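
An alternative to watching the thread list in VisualVM is to count
live threads from the command line:

jcmd $PID Thread.print | grep -c '^"'    # one "..." header line per thread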

With -Xmx250M -Xss170k:
The fuseki command failed during class loading with stack sizes below 170k.
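
For comparison, the JVM's built-in default stack size can be read with:

java -XX:+PrintFlagsFinal -version | grep ThreadStackSize    # reported in KB; typically 1024 on 64-bit Linux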

1000 concurrent ping requests.
Again maybe 15 more threads: 14 parked, one marked "running" by VisualVM.
The extra threads aren't being gathered up (reclaimed) afterwards.
RSS is 457M.
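
The RSS figures are as reported by the OS, e.g.:

ps -o rss= -p $PID    # resident set size, in KB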

So a bit of speculation:

Is the OOM kill coming from the container runtime, or is it a Java OutOfMemoryError exception?
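
The two cases can be told apart: a container/kernel OOM kill leaves a
trace in the kernel log, while a Java-level OOM raises OutOfMemoryError
inside the JVM and can be made to leave a heap dump (a standard HotSpot
flag; assuming a Linux host for dmesg):

dmesg | grep -i 'killed process'    # kernel/cgroup OOM killer at work

JVM_ARGS="-Xmx500M -XX:+HeapDumpOnOutOfMemoryError" fuseki -v --ping --empty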

There aren't many moving parts.

Maybe, under some circumstances, the metrics gatherer or the ping caller
causes more threads. This could be bad timing (several operations arriving
at the same time), or it could be that the client end isn't releasing the
HTTP connection in a timely manner, or is delayed or failing to read the
entire response. That's HTTP/1.1; HTTP/2 probably isn't at risk.

Together with a dataset, memory-mapped files, etc., this pushes the process size up, and on a small machine that might become a problem, especially if the container host is limiting RAM.
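
Those memory-mapped files don't show up in the heap figures at all;
they do appear in the process map (on Linux):

pmap -x $PID    # per-mapping RSS; mmap'd dataset files show up here, outside the heap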

But speculation.

    Andy
