Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2024-02-16 Thread Christopher Schultz

Chuck and Brian,

On 2/15/24 10:53, Chuck Caldarale wrote:



On Feb 15, 2024, at 09:04, Brian Braun  wrote:

I discovered the JCMD command to perform the native memory tracking. When
running it, after 3-4 days since I started Tomcat, I found out that the
compiler was using hundreds of MB and that is exactly why the Tomcat
process starts abusing the memory! This is what I saw when executing "sudo
jcmd <pid> VM.native_memory scale=MB":

Compiler (reserved=340MB, committed=340MB)
(arena=340MB #10)

Then I discovered the Jemalloc tool (http://jemalloc.net) and its jeprof
tool, so I started launching Tomcat using it. Then, after 3-4 days after
Tomcat starts I was able to create some GIF images from the dumps that
Jemalloc creates. The GIF files show the problem: 75-90% of the memory is
being used by some weird activity in the compiler! It seems that something
called "The C2 compile/JIT compiler" starts doing something after 3-4 days,
and that creates the leak. Why after 3-4 days and not sooner? I don't know.



There have been numerous bugs filed with OpenJDK for C2 memory leaks over the 
past few years, mostly related to recompiling certain methods. The C2 compiler 
kicks in when fully optimizing methods, and it may recompile methods after 
internal instrumentation shows that additional performance can be obtained by 
doing so.



I am attaching the GIF in this email.



Attachments are stripped on this mailing list.


:(

I'd love to see these.


Does anybody know how to deal with this?



You could disable the C2 compiler temporarily, and just let C1 handle your 
code. Performance will be somewhat degraded, but may well still be acceptable. 
Add the following to the JVM options when you launch Tomcat:

-XX:TieredStopAtLevel=1



By the way, I'm running my website using Tomcat 9.0.58, Java
"11.0.21+9-post-Ubuntu-0ubuntu122.04", Ubuntu 22.04.03. And I am developing
using Eclipse and compiling my WAR file with a "Compiler compliance
level:11".



You could try a more recent JVM version; JDK 11 was first released over 5 years 
ago, although it is still being maintained.


There is an 11.0.22 -- just a patch-release away from what you appear to 
have. I'm not sure if it's offered through your package-manager, but you 
could give it a try directly from e.g. Eclipse Adoptium / Temurin.


Honestly, if your code runs on Java 11, it's very likely that it will 
run just fine on Java 17 or Java 21. Debian has packages for Java 17 for 
sure, so I suspect Ubuntu will have them available as well.


Debian-based distros will allow you to install and run multiple 
JDKs/JREs in parallel, so you can install Java 17 (or 21) without 
cutting off access to Java 11 if you still want it.
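
On Ubuntu that is usually just a package install plus the alternatives 
tool; a rough sketch (package and alternative names are from memory, so 
double-check them on your box):

sudo apt-get install openjdk-17-jdk
update-java-alternatives --list
sudo update-java-alternatives --set java-1.17.0-openjdk-amd64

Or leave the system default alone and point only Tomcat at the new JDK, 
e.g. by setting JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 in 
bin/setenv.sh.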


-chris




Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2024-02-15 Thread Chuck Caldarale

> On Feb 15, 2024, at 09:04, Brian Braun  wrote:
> 
> I discovered the JCMD command to perform the native memory tracking. When
> running it, after 3-4 days since I started Tomcat, I found out that the
> compiler was using hundreds of MB and that is exactly why the Tomcat
> process starts abusing the memory! This is what I saw when executing "sudo
> jcmd <pid> VM.native_memory scale=MB":
> 
> Compiler (reserved=340MB, committed=340MB)
> (arena=340MB #10)
> 
> Then I discovered the Jemalloc tool (http://jemalloc.net) and its jeprof
> tool, so I started launching Tomcat using it. Then, after 3-4 days after
> Tomcat starts I was able to create some GIF images from the dumps that
> Jemalloc creates. The GIF files show the problem: 75-90% of the memory is
> being used by some weird activity in the compiler! It seems that something
> called "The C2 compile/JIT compiler" starts doing something after 3-4 days,
> and that creates the leak. Why after 3-4 days and not sooner? I don't know.


There have been numerous bugs filed with OpenJDK for C2 memory leaks over the 
past few years, mostly related to recompiling certain methods. The C2 compiler 
kicks in when fully optimizing methods, and it may recompile methods after 
internal instrumentation shows that additional performance can be obtained by 
doing so.


> I am attaching the GIF in this email.


Attachments are stripped on this mailing list.


> Does anybody know how to deal with this?


You could disable the C2 compiler temporarily, and just let C1 handle your 
code. Performance will be somewhat degraded, but may well still be acceptable. 
Add the following to the JVM options when you launch Tomcat:

-XX:TieredStopAtLevel=1
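
For example, if you use a setenv.sh for your JVM options, something along 
these lines (just a sketch; adapt it to however you currently set JAVA_OPTS):

# $CATALINA_BASE/bin/setenv.sh
CATALINA_OPTS="$CATALINA_OPTS -XX:TieredStopAtLevel=1"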


> By the way, I'm running my website using Tomcat 9.0.58, Java
> "11.0.21+9-post-Ubuntu-0ubuntu122.04", Ubuntu 22.04.03. And I am developing
> using Eclipse and compiling my WAR file with a "Compiler compliance
> level:11".


You could try a more recent JVM version; JDK 11 was first released over 5 years 
ago, although it is still being maintained.


  - Chuck

Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2024-02-15 Thread Brian Braun
Hello,

It has been a long time since I received the last suggestions to my issue
here on this support list. Since then I decided to stop asking for help and
to "do my homework". To read, to watch YouTube presentations, to spend time
on StackOverflow, etc. So I have spent a lot of time on this and I think I
have learned a lot, which is nice.
This is what I have learned lately:

I definitely don't have a leak in my code (or in the libraries I am using,
as far as I understand). And my code is not creating a significant amount
of objects that would use too much memory.
The heap memory (the 3 G1s) and non-heap memory (3 CodeHeaps + compressed
class space + metaspace) together use just a few hundred MBs and
their usage is steady and normal.
I discovered the JCMD command to perform the native memory tracking. When
running it, after 3-4 days since I started Tomcat, I found out that the
compiler was using hundreds of MB and that is exactly why the Tomcat
process starts abusing the memory! This is what I saw when executing "sudo
jcmd <pid> VM.native_memory scale=MB":

Compiler (reserved=340MB, committed=340MB)
(arena=340MB #10)

All the other categories (Class, Thread, Code, GC, Internal, Symbol, etc)
look normal since they use a low amount of memory and they don't grow.
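
For reference, the tracking itself is just the flag and commands below 
(the PID is a placeholder, and a baseline/diff pair makes the growth 
easier to compare between day 1 and day 4):

# added to JAVA_OPTS before starting Tomcat
-XX:NativeMemoryTracking=summary

# then against the running process:
sudo jcmd <pid> VM.native_memory summary scale=MB

# baseline right after startup, diff later:
sudo jcmd <pid> VM.native_memory baseline
sudo jcmd <pid> VM.native_memory summary.diff scale=MB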

Then I discovered the Jemalloc tool (http://jemalloc.net) and its jeprof
tool, so I started launching Tomcat using it. Then, 3-4 days after Tomcat
starts, I was able to create some GIF images from the dumps that Jemalloc
creates. The GIF files show the problem: 75-90% of the memory is being used
by some weird activity in the compiler! It seems that something called
"the C2/JIT compiler" starts doing something after 3-4 days, and that
creates the leak. Why after 3-4 days and not sooner? I don't know.
I am attaching the GIF in this email.
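
In case it helps someone else reproduce this, launching Tomcat under 
jemalloc with profiling on is roughly the following (the library path and 
the dump prefix are only examples and depend on the distro; it also needs 
a jemalloc build with profiling enabled):

# e.g. in bin/setenv.sh
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
export MALLOC_CONF="prof:true,lg_prof_interval:30,prof_prefix:/tmp/jeprof"

# later, turn the accumulated dumps into a picture:
jeprof --show_bytes --gif $(which java) /tmp/jeprof.*.heap > /tmp/profile.gif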

Does anybody know how to deal with this? I have been struggling with this
issue for 3 months already. At least now I know that this is a native
memory leak, but at this point I feel lost.

By the way, I'm running my website using Tomcat 9.0.58, Java
"11.0.21+9-post-Ubuntu-0ubuntu122.04", Ubuntu 22.04.03. And I am developing
using Eclipse and compiling my WAR file with a "Compiler compliance
level:11".

Thanks in advance!

Brian

On Mon, Jan 8, 2024 at 10:05 AM Christopher Schultz <
ch...@christopherschultz.net> wrote:

> Brian,
>
> On 1/5/24 17:21, Brian Braun wrote:
> > Hello Christopher,
> >
> > First of all: thanks a lot for your responses!
> >
> > On Wed, Jan 3, 2024 at 9:25 AM Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> >> Brian,
> >>
> >> On 12/30/23 15:42, Brian Braun wrote:
> >>> At the beginning, this was the problem: The OOM-killer (something that
> I
> >>> never knew existed) killing Tomcat unexpectedly and without any
> >>> explanation
> >>
> >> The explanation is always the same: some application requests memory
> >> from the kernel, which always grants the request(!). When the
> >> application tries to use that memory, the kernel scrambles to physically
> >> allocate the memory on-demand and, if all the memory is gone, it will
> >> pick a process and kill it.
>  >
> > Yes, that was happening to me until I set up the SWAP file and now at
> least
> > the Tomcat process is not being killed anymore.
>
> Swap can get you out of a bind like this, but it will ruin your
> performance. If you care more about stability (and believe me, it's a
> reasonable decision), then leave the swap on. But swap will kill (a)
> performance (b) SSD lifetime and (c) storage/transaction costs depending
> upon your environment. Besides, you either need the memory or you do
> not. It's rare to "sometimes" need the memory.
>
> >> Using a swap file is probably going to kill your performance. What
> >> happens if you make your heap smaller?
>  >
> > Yes, in fact the performance is suffering and that is why I don't
> consider
> > the swap file as a solution.
>
> :D
>
> > I have assigned to -Xmx both small amounts (as small as 300MB) and high
> > amounts (as high as 1GB) and the problem is still present (the Tomcat
> > process grows in memory usage up to 1.5GB combining real memory and swap
> > memory).
>
> Okay, that definitely indicates a problem that needs to be solved.
>
> I've seen things like native ZIP handling code leaking native memory,
> but I know that Tomcat does not leak like that. If you do anything in
> your application that might leave file handles open, it could be
> contributing to the problem.
>
> > As I have explained in another email recently, I think that neither heap
> > usage nor non-heap usage are the problem. I have been monitoring them and
> > their requirements have always stayed low enough, so I could leave the
> -Xms
> > parameter with about 300-400 MB and that would be enough.
>
> Well, between heap and non-heap, that's all the memory. There is no
> non-heap-non-non-heap memory to be counted. Technically stack space is
> the same as "native memory" but usually 

Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2024-01-08 Thread Christopher Schultz

Brian,

On 1/5/24 17:21, Brian Braun wrote:

Hello Christopher,

First of all: thanks a lot for your responses!

On Wed, Jan 3, 2024 at 9:25 AM Christopher Schultz <
ch...@christopherschultz.net> wrote:


Brian,

On 12/30/23 15:42, Brian Braun wrote:

At the beginning, this was the problem: The OOM-killer (something that I
never knew existed) killing Tomcat unexpectedly and without any
explanation


The explanation is always the same: some application requests memory
from the kernel, which always grants the request(!). When the
application tries to use that memory, the kernel scrambles to physically
allocate the memory on-demand and, if all the memory is gone, it will
pick a process and kill it.


Yes, that was happening to me until I set up the SWAP file and now at least
the Tomcat process is not being killed anymore.


Swap can get you out of a bind like this, but it will ruin your 
performance. If you care more about stability (and believe me, it's a 
reasonable decision), then leave the swap on. But swap will kill (a) 
performance (b) SSD lifetime and (c) storage/transaction costs depending 
upon your environment. Besides, you either need the memory or you do 
not. It's rare to "sometimes" need the memory.



Using a swap file is probably going to kill your performance. What
happens if you make your heap smaller?


Yes, in fact the performance is suffering and that is why I don't consider
the swap file as a solution.


:D


I have assigned to -Xmx both small amounts (as small as 300MB) and high
amounts (as high as 1GB) and the problem is still present (the Tomcat
process grows in memory usage up to 1.5GB combining real memory and swap
memory).


Okay, that definitely indicates a problem that needs to be solved.

I've seen things like native ZIP handling code leaking native memory, 
but I know that Tomcat does not leak like that. If you do anything in 
your application that might leave file handles open, it could be 
contributing to the problem.



As I have explained in another email recently, I think that neither heap
usage nor non-heap usage are the problem. I have been monitoring them and
their requirements have always stayed low enough, so I could leave the -Xms
parameter with about 300-400 MB and that would be enough.


Well, between heap and non-heap, that's all the memory. There is no 
non-heap-non-non-heap memory to be counted. Technically stack space is 
the same as "native memory" but usually you experience other problems if 
you have too many threads and they are running out of stack space.



There is something else in the JVM that is using all that memory and I
still don't know what it is. And I think it doesn't care about the value I
give to -Xmx, it uses all the memory it wants. Doing what? I don't know.


It might be time to start digging into those native memory-tracking tools.


Maybe I am not understanding your suggestion.
I have assigned to -Xmx both small amounts (as small as 300MB) and high
amounts (as high as 1GB) and the problem is still present. In fact the
problem started with a low amount for -Xmx.


No, you are understanding my suggestion(s). But if you are hitting Linux 
oom-killer with a 300MiB heap and a process size that is growing to 1.5G 
then getting killed... it's time to dig deeper.


-chris


On Sat, Dec 30, 2023 at 12:44 PM Christopher Schultz <
ch...@christopherschultz.net> wrote:


Brian,

On 12/29/23 20:48, Brian Braun wrote:

Hello,

First of all:
Christopher Schultz: You answered an email from me 6 weeks ago. You helped
me a lot with your suggestions. I have done a lot of research and have
learnt a lot since then, so I have been able to rule out a lot of potential
roots for my issue. Because of that I am able to post a new more specific
email. Thanks a lot!!!

Now, this is my stack:

- Ubuntu 22.04.3 on x86/64 with 2GM of physical RAM that has been enough
for years.
- Java 11.0.20.1+1-post-Ubuntu-0ubuntu122.04 / openjdk 11.0.20.1 2023-08-24
- Tomcat 9.0.58 (JAVA_OPTS="-Djava.awt.headless=true -Xmx1000m -Xms1000m
..")
- My app, which I developed myself, and has been running without any
problems for years

Well, a couple of months ago my website/Tomcat/Java started eating more and
more memory after about 4-7 days. The previous days it uses just a few
hundred MB and is very steady, but then after a few days the memory usage
suddenly grows up to 1.5GB (and then stops growing at that point, which is
interesting). Between these anomalies the RAM usage is fine and very steady
(as it has been for years) and it uses just about 40-50% of the "Max
memory" (according to what the Tomcat Manager server status shows).
The 3 components of G1GC heap memory are steady and low, before and after
the usage grows to 1.5GB, so it is definitely not that the heap starts
requiring more and more memory. I have been using several tools to monitor
that (New Relic, VisualVM and JDK Mission Control) so I'm sure that the
memory usage by the heap is

Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2024-01-06 Thread Stefan Mayr

Hi,

Am 05.01.2024 um 23:21 schrieb Brian Braun:

Tracking native memory usage can be tricky depending upon your
environment. I would only look into that if there were somethng very odd
going on, like your process memory space seems to be more than 50% taken
by non-java-heap memory.



Well, actually that is my case. The heap memory (the 3 G1s) and non-heap
memory (3 CodeHeaps + compressed class space + metaspace) together use just
a few hundred MBs. I can see that using Tomcat Manager as well as the other
monitoring tools. And the rest of the memory (about 1GB) is being used by
the JVM but I don't know why or how, and that started 2 months ago. In your
case you have just 20-25% extra memory used in a way that you don't
understand, in my case it is about 200%.


Have you tried limiting native memory (-XX:MaxDirectMemorySize)? If not
set, this can be as large as your maximum heap size, according to
https://github.com/openjdk/jdk/blob/ace010b38a83e0c9b43aeeb6bc5c92d0886dc53f/src/java.base/share/classes/jdk/internal/misc/VM.java#L130-L136


From what I know:

total memory ~ heap + metaspace + code cache + (#threads * thread stack 
size) + direct memory


So if you set -Xmx to 1GB this should also allow 1GB of native memory,
which may result in more than 2GB of memory used by the JVM.
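
As a rough back-of-the-envelope example (the numbers below are plausible 
defaults, not measurements from your system):

  1000 MB  heap (-Xmx1000m)
+ ~100 MB  metaspace
+ ~240 MB  reserved code cache (JDK 11 default with tiered compilation)
+  ~70 MB  thread stacks (~70 threads x 1 MB default -Xss on 64-bit Linux)
+ 1000 MB  direct memory ceiling (defaults to -Xmx when
           -XX:MaxDirectMemorySize is not set)
= ~2.4 GB  worst case for the whole process

Setting something like -XX:MaxDirectMemorySize=256m would at least rule 
direct buffers in or out.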


Regards,

   Stefan Mayr




Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2024-01-05 Thread Chuck Caldarale

> On Jan 5, 2024, at 16:21, Brian Braun  wrote:
>> 
>> Tracking native memory usage can be tricky depending upon your
>> environment. I would only look into that if there were somethng very odd
>> going on, like your process memory space seems to be more than 50% taken
>> by non-java-heap memory.
>> 
> Well, actually that is my case. The heap memory (the 3 G1s) and non-heap
> memory (3 CodeHeaps + compressed class space + metaspace) together use just
> a few hundred MBs. I can see that using Tomcat Manager as well as the other
> monitoring tools. And the rest of the memory (about 1GB) is being used by
> the JVM but I don't know why or how, and that started 2 months ago. In your
> case you have just 20-25% extra memory used in a way that you don't
> understand, in my case it is about 200%.


The virtual map provided earlier doesn’t show any anomalies, but I really 
should have asked you to run the pmap utility on the active Tomcat process 
instead. The JVM heap that was active when you captured the data is this line:

c1800000-dda00000 rw-p 00000000 00:00 0

which works out to 115,200 pages or almost 472 MB. However, we don’t know how 
much of that virtual space was actually allocated in real memory. The pmap 
utility would have shown that, as seen below for Tomcat running with a 512M 
heap on my small Linux box. Having pmap output from your system, both before 
and after the high-memory event occurs, might provide some insight on what’s 
using up the real memory.
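
Capturing that is a one-liner you can run now and again after the growth 
(the pgrep pattern is just one way to find the Tomcat PID):

PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
pmap -x $PID > pmap-before.txt    # repeat later as pmap-after.txt
diff pmap-before.txt pmap-after.txt | less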

Are you using the Tomcat manager app to show memory information? This is a 
quick way to display both maximum and used amounts of the various JVM memory 
pools.

Below is the sample pmap output for my test system; the Kbytes and RSS columns 
are of primary interest, notably the 527232 and 55092 for the JVM heap at 
address e000. Finding the actual offender won’t be easy, but having 
both before and after views may help.

  - Chuck


26608:   /usr/lib64/jvm/java-11-openjdk-11/bin/java 
-Djava.util.logging.config.file=/home/chuck/Downloads/apache-tomcat-9.0.84/conf/logging.properties
 -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager 
-Djdk.tls.ephemeralDHKeySize=2048 
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources 
-Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Xms512M -Xmx512M 
-Dignore.endorsed.dirs= -classpath 
/home/chuck/Downloads/apache-tomcat-9.0.84/bin/bootstrap.jar:/home/chuck/Downloads/apache-tomca
Address   Kbytes RSS PSS   DirtySwap Mode  Mapping
e000  527232   55092   55092   55092   0 rw-p-   [ anon ]
0001002e 1045632   0   0   0   0 ---p-   [ anon ]
561efff1e000   4   4   0   0   0 r--p- 
/usr/lib64/jvm/java-11-openjdk-11/bin/java
561efff1f000   4   4   0   0   0 r-xp- 
/usr/lib64/jvm/java-11-openjdk-11/bin/java
561efff2   4   4   0   0   0 r--p- 
/usr/lib64/jvm/java-11-openjdk-11/bin/java
561efff21000   4   4   4   4   0 r--p- 
/usr/lib64/jvm/java-11-openjdk-11/bin/java
561efff22000   4   4   4   4   0 rw-p- 
/usr/lib64/jvm/java-11-openjdk-11/bin/java
561f0095c000 264  68  68  68   0 rw-p-   [ anon ]
7f45d000 132  36  36  36   0 rw-p-   [ anon ]
7f45d0021000   65404   0   0   0   0 ---p-   [ anon ]
7f45d400 132  16  16  16   0 rw-p-   [ anon ]
7f45d4021000   65404   0   0   0   0 ---p-   [ anon ]
7f45d800 132  40  40  40   0 rw-p-   [ anon ]
7f45d8021000   65404   0   0   0   0 ---p-   [ anon ]
7f45dc00 132  84  84  84   0 rw-p-   [ anon ]
7f45dc021000   65404   0   0   0   0 ---p-   [ anon ]
7f45e000 132  16  16  16   0 rw-p-   [ anon ]
7f45e0021000   65404   0   0   0   0 ---p-   [ anon ]
7f45e400 132  16  16  16   0 rw-p-   [ anon ]
7f45e4021000   65404   0   0   0   0 ---p-   [ anon ]
7f45e8001340127612761276   0 rw-p-   [ anon ]
7f45e814f000   64196   0   0   0   0 ---p-   [ anon ]
7f45ec00 132  32  32  32   0 rw-p-   [ anon ]
7f45ec021000   65404   0   0   0   0 ---p-   [ anon ]
7f45f000 132  44  44  44   0 rw-p-   [ anon ]
7f45f0021000   65404   0   0   0   0 ---p-   [ anon ]
7f45f400 132  52  52  52   0 rw-p-   [ anon ]
7f45f4021000   65404   0   0   0   0 ---p-   [ anon ]
7f45f800 132  72  72  72   0 rw-p-   [ anon ]
7f45f8021000   65404   0   0   0   0 ---p-   [ anon ]
7f45fc00 132  52  52  52   0 rw-p-   [ anon 

Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2024-01-05 Thread Brian Braun
Hello Christopher,

First of all: thanks a lot for your responses!

On Wed, Jan 3, 2024 at 9:25 AM Christopher Schultz <
ch...@christopherschultz.net> wrote:

> Brian,
>
> On 12/30/23 15:42, Brian Braun wrote:
> > At the beginning, this was the problem: The OOM-killer (something that I
> > never knew existed) killing Tomcat unexpectedly and without any
> > explanation
>
> The explanation is always the same: some application requests memory
> from the kernel, which always grants the request(!). When the
> application tries to use that memory, the kernel scrambles to physically
> allocate the memory on-demand and, if all the memory is gone, it will
> pick a process and kill it.
>
>
Yes, that was happening to me until I set up the SWAP file and now at least
the Tomcat process is not being killed anymore.


> There are ways to prevent this from happening, but the best way to not
> to over-commit your memory.
>
> > Not knowing how much memory would I need to satisfy the JVM, and not
> > willing to migrate to more expensive Amazon instances just because I
> > don't know why this is happening. And not knowing if the memory
> > requirement would keep growing and growing and growing.
> It might. But if your symptom is Linux oom-killer and not JVM OOME, then
> the better technique is to *reduce* your heap space in the JVM.
>
> > Then I activated the SWAP file, and I discovered that this problem stops
> at
> > 1.5GB of memory used by the JVM. At least I am not getting more crashes
> > anymore. But I consider the SWAP file as a palliative and I really want
> to
> > know what is the root of this problem. If I don't, then maybe I should
> > consider another career. I don't enjoy giving up.
>
> Using a swap file is probably going to kill your performance. What
> happens if you make your heap smaller?
>
>
Yes, in fact the performance is suffering and that is why I don't consider
the swap file as a solution.
I have assigned to -Xmx both small amounts (as small as 300MB) and high
amounts (as high as 1GB) and the problem is still present (the Tomcat
process grows in memory usage up to 1.5GB combining real memory and swap
memory).
As I have explained in another email recently, I think that neither heap
usage nor non-heap usage are the problem. I have been monitoring them and
their requirements have always stayed low enough, so I could leave the -Xms
parameter with about 300-400 MB and that would be enough.
There is something else in the JVM that is using all that memory and I
still don't know what it is. And I think it doesn't care about the value I
give to -Xmx, it uses all the memory it wants. Doing what? I don't know.

> Yes, the memory used by the JVM started to grow suddenly one day, after
> > several years running fine. Since I had not made any changes to my app, I
> > really don't know the reason. And I really think this should not be
> > happening without an explanation.
> >
> > I don't have any Java OOME exceptions, so it is not that my objects don't
> > fit. Even if I supply 300MB to the -Xmx parameter. In fact, as I wrote, I
> > don't think the Heap and non-heap usage is the problem. I have been
> > inspecting those and their usage seems to be normal/modest and steady. I
> > can see that using the Tomcat Manager as well as several other tools (New
> > Relic, VisualVM, etc).
>
> Okay, so what you've done then is to allow a very large heap that you
> mostly don't need. If/when the heap grows a lot -- possibly suddenly --
> the JVM is lazy and just takes more heap space from the OS and
> ultimately you run out of main memory.
>
> The solution is to reduce the heap size.
>
>
Maybe I am not understanding your suggestion.
I have assigned to -Xmx both small amounts (as small as 300MB) and high
amounts (as high as 1GB) and the problem is still present. In fact the
problem started with a low amount for -Xmx.


> > Regarding the 1GB I am giving now to the -Xms parameter: I was giving
> just
> > a few hundreds and I already had the problem. Actually I think it is the
> > same if I give a few hundreds of MBs or 1GB, the JVM still starts using
> > more memory after 3-4 days of running until it takes 1.5GB. But during
> the
> > first 1-4 days it uses just a few hundred MBs.
> >
> > My app has been "static" as you say, but probably I have upgraded Tomcat
> > and/or Java recently. I don't really remember. Maybe one of those
> upgrades
> > brought this issue as a result. Actually, If I knew that one of those
> > upgrades causes this huge pike in memory consumption and there is no way
> to
> > avoid it, then I would accept it as a fact of life and move on. But
> since I
> > don't know, it really bugs me.
> >
> > I have the same amount of users and traffic as before. I also know how
> much
> > memory a session takes and it is fine.  I have also checked the HTTP(S)
> > requests to see if somehow I am getting any attempts to hack my instance
> > that could be the root of this problem. Yes, I get hacking attempts by
> > those bots all 

Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2024-01-03 Thread Christopher Schultz

Brian,

On 12/30/23 15:42, Brian Braun wrote:

At the beginning, this was the problem: The OOM-killer (something that I
never knew existed) killing Tomcat unexpectedly and without any
explanation


The explanation is always the same: some application requests memory 
from the kernel, which always grants the request(!). When the 
application tries to use that memory, the kernel scrambles to physically 
allocate the memory on-demand and, if all the memory is gone, it will 
pick a process and kill it.


There are ways to prevent this from happening, but the best way to not 
to over-commit your memory.



Not knowing how much memory would I need to satisfy the JVM, and not
willing to migrate to more expensive Amazon instances just because I
don't know why this is happening. And not knowing if the memory
requirement would keep growing and growing and growing.
It might. But if your symptom is Linux oom-killer and not JVM OOME, then 
the better technique is to *reduce* your heap space in the JVM.



Then I activated the SWAP file, and I discovered that this problem stops at
1.5GB of memory used by the JVM. At least I am not getting more crashes
anymore. But I consider the SWAP file as a palliative and I really want to
know what is the root of this problem. If I don't, then maybe I should
consider another career. I don't enjoy giving up.


Using a swap file is probably going to kill your performance. What 
happens if you make your heap smaller?



Yes, the memory used by the JVM started to grow suddenly one day, after
several years running fine. Since I had not made any changes to my app, I
really don't know the reason. And I really think this should not be
happening without an explanation.

I don't have any Java OOME exceptions, so it is not that my objects don't
fit. Even if I supply 300MB to the -Xmx parameter. In fact, as I wrote, I
don't think the Heap and non-heap usage is the problem. I have been
inspecting those and their usage seems to be normal/modest and steady. I
can see that using the Tomcat Manager as well as several other tools (New
Relic, VisualVM, etc).


Okay, so what you've done then is to allow a very large heap that you 
mostly don't need. If/when the heap grows a lot -- possibly suddenly -- 
the JVM is lazy and just takes more heap space from the OS and 
ultimately you run out of main memory.


The solution is to reduce the heap size.


Regarding the 1GB I am giving now to the -Xms parameter: I was giving just
a few hundreds and I already had the problem. Actually I think it is the
same if I give a few hundreds of MBs or 1GB, the JVM still starts using
more memory after 3-4 days of running until it takes 1.5GB. But during the
first 1-4 days it uses just a few hundred MBs.

My app has been "static" as you say, but probably I have upgraded Tomcat
and/or Java recently. I don't really remember. Maybe one of those upgrades
brought this issue as a result. Actually, if I knew that one of those
upgrades causes this huge spike in memory consumption and there is no way to
avoid it, then I would accept it as a fact of life and move on. But since I
don't know, it really bugs me.

I have the same amount of users and traffic as before. I also know how much
memory a session takes and it is fine.  I have also checked the HTTP(S)
requests to see if somehow I am getting any attempts to hack my instance
that could be the root of this problem. Yes, I get hacking attempts by
those bots all the time, but I don't see anything relevant there. No news.

I agree with what you say now regarding the GC. I should not need to use
those switches since I understand it should work fine without using them.
And I don't know how to use them. And since I have never cared about using
them for about 15 years using Java+Tomcat, why should I start now?

I have also checked all my long-lasting objects. I have optimized my DB
queries recently as you suggest now, so they don't create huge amounts of
objects in a short period of time that the GC would have to deal with. The
same applies to my scheduled tasks. They all run very quickly and use
modest amounts of memory. All the other default Tomcat threads create far
more objects.

I have already activated the GC log. Is there a tool that you would suggest
to analyze it? I haven't even opened it. I suspect that the root of my
problem comes from the GC process indeed.


The GC logs are just text, so you can eyeball them if you'd like, but to 
really get a sense of what's happening you should use some kind of 
visualization tool.


It's not pretty, but gcviewer (https://github.com/chewiebug/GCViewer) 
gets the job done.
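
On Java 11, the unified-logging flag below writes a rotating GC log that 
GCViewer can read directly (the log location and the GCViewer jar version 
are placeholders):

-Xlog:gc*:file=/var/log/tomcat/gc.log:time,uptime,level,tags:filecount=5,filesize=10m

java -jar gcviewer-1.36.jar /var/log/tomcat/gc.log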


If you run with a 500MiB heap and everything looks good and you have no 
crashes (Linux oom-killer or Java OOME), I'd stick with that. Remember 
that your total OS memory requirements will be Java heap + JVM overhead 
+ whatever native memory is required by native libraries.


In production, I have an application with a 2048MiB heap whose "resident 
size" in `ps` shows as 

Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2023-12-30 Thread Stefan Mayr

Hi Brian,

Am 30.12.2023 um 21:42 schrieb Brian Braun:

I don't have any Java OOME exceptions, so it is not that my objects don't
fit. Even if I supply 300MB to the -Xmx parameter. In fact, as I wrote, I
don't think the Heap and non-heap usage is the problem. I have been
inspecting those and their usage seems to be normal/modest and steady. I
can see that using the Tomcat Manager as well as several other tools (New
Relic, VisualVM, etc).

Regarding the 1GB I am giving now to the -Xms parameter: I was giving just
a few hundreds and I already had the problem. Actually I think it is the
same if I give a few hundreds of MBs or 1GB, the JVM still starts using
more memory after 3-4 days of running until it takes 1.5GB. But during the
first 1-4 days it uses just a few hundred MBs.


So if this is not heap memory (-Xmx) it must be some other memory the 
JVM uses.


I guess we can rule out reserved code cache (-XX:ReservedCodeCacheSize) 
and stack size (-Xss) because they should have fixed sizes and you've 
written earlier that you checked the number of threads.


This leaves us with meta space (-XX:MaxMetaspaceSize) and native/direct 
memory (-XX:MaxDirectMemorySize). You can try to limit that or use the 
Java flight recorder and tools like Mission Control or VisualVM to make 
that kind of memory usage visible.
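
For example, a recording can be started on the running process with jcmd 
and opened later in Mission Control (the names and paths below are only 
examples):

jcmd <pid> JFR.start name=nativeleak settings=profile maxsize=250m disk=true
# ... wait until the memory has grown ...
jcmd <pid> JFR.dump name=nativeleak filename=/tmp/nativeleak.jfr
jcmd <pid> JFR.stop name=nativeleak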


Regards,

  Stefan




Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2023-12-30 Thread Brian Braun
Hi Chris,

Thanks a lot for your very detailed response!
Here are my answers, comments and questions.

At the beginning, this was the problem: The OOM-killer (something that I
never knew existed) killing Tomcat unexpectedly and without any
explanation, many times during the night while I should be sleeping
peacefully (as long as  I wear my Apnea mask). Not knowing how much memory
would I need to satisfy the JVM, and not willing to migrate to more
expensive Amazon instances just because I don't know why this is happening.
And not knowing if the memory requirement would keep growing and growing
and growing.
Then I activated the SWAP file, and I discovered that this problem stops at
1.5GB of memory used by the JVM. At least I am not getting more crashes
anymore. But I consider the SWAP file as a palliative and I really want to
know what is the root of this problem. If I don't, then maybe I should
consider another career. I don't enjoy giving up.

Yes, the memory used by the JVM started to grow suddenly one day, after
several years running fine. Since I had not made any changes to my app, I
really don't know the reason. And I really think this should not be
happening without an explanation.

I don't have any Java OOME exceptions, so it is not that my objects don't
fit. Even if I supply 300MB to the -Xmx parameter. In fact, as I wrote, I
don't think the Heap and non-heap usage is the problem. I have been
inspecting those and their usage seems to be normal/modest and steady. I
can see that using the Tomcat Manager as well as several other tools (New
Relic, VisualVM, etc).

Regarding the 1GB I am giving now to the -Xms parameter: I was giving just
a few hundreds and I already had the problem. Actually I think it is the
same if I give a few hundreds of MBs or 1GB, the JVM still starts using
more memory after 3-4 days of running until it takes 1.5GB. But during the
first 1-4 days it uses just a few hundred MBs.

My app has been "static" as you say, but probably I have upgraded Tomcat
and/or Java recently. I don't really remember. Maybe one of those upgrades
brought this issue as a result. Actually, if I knew that one of those
upgrades causes this huge spike in memory consumption and there is no way to
avoid it, then I would accept it as a fact of life and move on. But since I
don't know, it really bugs me.

I have the same amount of users and traffic as before. I also know how much
memory a session takes and it is fine.  I have also checked the HTTP(S)
requests to see if somehow I am getting any attempts to hack my instance
that could be the root of this problem. Yes, I get hacking attempts by
those bots all the time, but I don't see anything relevant there. No news.

I agree with what you say now regarding the GC. I should not need to use
those switches since I understand it should work fine without using them.
And I don't know how to use them. And since I have never cared about using
them for about 15 years using Java+Tomcat, why should I start now?

I have also checked all my long-lasting objects. I have optimized my DB
queries recently as you suggest now, so they don't create huge amounts of
objects in a short period of time that the GC would have to deal with. The
same applies to my scheduled tasks. They all run very quickly and use
modest amounts of memory. All the other default Tomcat threads create far
more objects.

I have already activated the GC log. Is there a tool that you would suggest
to analyze it? I haven't even opened it. I suspect that the root of my
problem comes from the GC process indeed.

Thanks again!

Brian



On Sat, Dec 30, 2023 at 12:44 PM Christopher Schultz <
ch...@christopherschultz.net> wrote:

> Brian,
>
> On 12/29/23 20:48, Brian Braun wrote:
> > Hello,
> >
> > First of all:
> > Christopher Schultz: You answered an email from me 6 weeks ago. You
> helped
> > me a lot with your suggestions. I have done a lot of research and have
> > learnt a lot since then, so I have been able to rule out a lot of
> potential
> > roots for my issue. Because of that I am able to post a new more specific
> > email. Thanks a lot!!!
> >
> > Now, this is my stack:
> >
> > - Ubuntu 22.04.3 on x86/64 with 2GM of physical RAM that has been enough
> > for years.
> > - Java 11.0.20.1+1-post-Ubuntu-0ubuntu122.04 / openjdk 11.0.20.1
> 2023-08-24
> > - Tomcat 9.0.58 (JAVA_OPTS="-Djava.awt.headless=true -Xmx1000m -Xms1000m
> > ..")
> > - My app, which I developed myself, and has been running without any
> > problems for years
> >
> > Well, a couple of months ago my website/Tomcat/Java started eating more
> and
> > more memory about after about 4-7 days. The previous days it uses just a
> > few hundred MB and is very steady, but then after a few days the memory
> > usage suddenly grows up to 1.5GB (and then stops growing at that point,
> > which is interesting). Between these anomalies the RAM usage is fine and
> > very steady (as it has been for years) and it uses just about 40-50% of
> the
> > "Max memory" 

Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2023-12-30 Thread Christopher Schultz

Brian,

On 12/29/23 20:48, Brian Braun wrote:

Hello,

First of all:
Christopher Schultz: You answered an email from me 6 weeks ago. You helped
me a lot with your suggestions. I have done a lot of research and have
learnt a lot since then, so I have been able to rule out a lot of potential
roots for my issue. Because of that I am able to post a new more specific
email. Thanks a lot!!!

Now, this is my stack:

- Ubuntu 22.04.3 on x86/64 with 2GM of physical RAM that has been enough
for years.
- Java 11.0.20.1+1-post-Ubuntu-0ubuntu122.04 / openjdk 11.0.20.1 2023-08-24
- Tomcat 9.0.58 (JAVA_OPTS="-Djava.awt.headless=true -Xmx1000m -Xms1000m
..")
- My app, which I developed myself, and has been running without any
problems for years

Well, a couple of months ago my website/Tomcat/Java started eating more and
more memory after about 4-7 days. The previous days it uses just a
few hundred MB and is very steady, but then after a few days the memory
usage suddenly grows up to 1.5GB (and then stops growing at that point,
which is interesting). Between these anomalies the RAM usage is fine and
very steady (as it has been for years) and it uses just about 40-50% of the
"Max memory" (according to what the Tomcat Manager server status shows).
The 3 components of G1GC heap memory are steady and low, before and after
the usage grows to 1.5GB, so it is definitely not that the heap starts
requiring more and more memory. I have been using several tools to monitor
that (New Relic, VisualVM and JDK Mission Control) so I'm sure that the
memory usage by the heap is not the problem.
The Non-heaps memory usage is not the problem either. Everything there is
normal, the usage is humble and even more steady.

And there are no leaks, I'm sure of that. I have inspected the JVM using
several tools.

There are no peaks in the number of threads either. The peak is the same
when the memory usage is low and when it requires 1.5GB. It stays the same
all the time.

I have also reviewed all the scheduled tasks in my app and lowered the
amount of objects they create, which was nice and entertaining. But that is
not the problem, I have analyzed the object creation by all the threads
(and there are many) and the threads created by my scheduled tasks are very
humble in their memory usage, compared to many other threads.

And I haven't made any relevant changes to my app in the 6-12 months before
this problem started occurring. It is weird that I started having this
problem. Could it be that I received an update in the java version or the
Tomcat version that is causing this problem?

If neither the heap memory or the Non-heaps memory is the source of the
growth of the memory usage, what could it be? Clearly something is
happening inside the JVM that raises the memory usage. And every time it
grows, it doesn't decrease. It is as if something suddenly starts
"pushing" the memory usage more and more, until it stops at 1.5GB.

I think that maybe the source of the problem is the garbage collector. I
haven't used any of the switches that we can use to optimize that,
basically because I don't know what I should do there (if I should at all).
I have also activated the GC log, but I don't know how to analyze it.

I have also increased and decreased the value of "-Xms" parameter and it is
useless.

Finally, maybe I should add that I activated 4GB of SWAP memory in my
Ubuntu instance so at least my JVM would not be killed by the OS anymore
(since the real memory is just 1.8GB). That worked and now the memory usage
can grow up to 1.5GB without crashing, by using the much slower SWAP
memory, but I still think that this is an abnormal situation.

Thanks in advance for your suggestions!


First of all: what is the problem? Are you just worried that the number 
of bytes taken by your JVM process is larger than it was ... sometime in 
the past? Or are you experiencing Java OOME or Linux oom-killer or 
anything like that?


Not all JVMs behave this way, but most of them do: once memory is 
"appropriated" by the JVM from the OS, it will never be released. It's 
just too expensive of an operation to shrink the heap... plus, you told 
the JVM "feel free to use up to 1GiB of heap" so it's taking you at your 
word. Obviously, the native heap plus stack space for every thread plus 
native memory for any native libraries takes up more space than just the 
1GiB you gave for the heap, so ... things just take up space.


Lowering the -Xms will never reduce the maximum memory the JVM ever 
uses. Only lowering -Xmx can do that. I always recommend setting Xms == 
Xmx because otherwise you are lying to yourself about your needs.


You say you've been running this application "for years". Has it been in 
a static environment, or have you been doing things such as upgrading 
Java and/or Tomcat during that time? There are things that Tomcat does 
now that it did not do in the past that sometimes require more memory to 
manage, sometimes only at startup and sometimes 

Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2023-12-29 Thread Brian Braun
Hello Chuck,

On Fri, Dec 29, 2023 at 11:00 PM Chuck Caldarale  wrote:

>
> > On Dec 29, 2023, at 19:48, Brian Braun  wrote:
> >
> > First of all:
> > Christopher Schultz: You answered an email from me 6 weeks ago. You
> helped
> > me a lot with your suggestions. I have done a lot of research and have
> > learnt a lot since then, so I have been able to rule out a lot of
> potential
> > roots for my issue. Because of that I am able to post a new more specific
> > email. Thanks a lot!!!
> >
> > Now, this is my stack:
> >
> > - Ubuntu 22.04.3 on x86/64 with 2GM of physical RAM that has been enough
> > for years.
>
>
> I presume the “2GM” above should be “2GB”.
>


Yes, sorry, I meant to write "2GB".


>
>
> - Java 11.0.20.1+1-post-Ubuntu-0ubuntu122.04 / openjdk 11.0.20.1
> 2023-08-24
> > - Tomcat 9.0.58 (JAVA_OPTS="-Djava.awt.headless=true -Xmx1000m -Xms1000m
> > ..")
> > - My app, which I developed myself, and has been running without any
> > problems for years
> >
> > Well, a couple of months ago my website/Tomcat/Java started eating more
> and
> > more memory about after about 4-7 days. The previous days it uses just a
> > few hundred MB and is very steady, but then after a few days the memory
> > usage suddenly grows up to 1.5GB (and then stops growing at that point,
> > which is interesting). Between these anomalies the RAM usage is fine and
> > very steady (as it has been for years) and it uses just about 40-50% of
> the
> > "Max memory" (according to what the Tomcat Manager server status shows).
> > The 3 components of G1GC heap memory are steady and low, before and after
> > the usage grows to 1.5GB, so it is definitely not that the heap starts
> > requiring more and more memory. I have been using several tools to
> monitor
> > that (New Relic, VisualVM and JDK Mission Control) so I'm sure that the
> > memory usage by the heap is not the problem.
> > The Non-heaps memory usage is not the problem either. Everything there is
> > normal, the usage is humble and even more steady.
>
>
> What does the /proc/<pid>/maps file show, both before and after the
> problem occurs? This should give you some idea of what .so library is
> grabbing the extra memory. (I only have Tomcat installed on macOS at the
> moment, so I can’t show you an example; I should be able to bring up Tomcat
> on a Linux box tomorrow.) The output may be long, depending on how
> fragmented the virtual memory allocations are.
>
>
This is the first time I hear about the "/proc/id/maps" file and how to see
the content with "cat". The content is very long and now I suspect that all
those strange lines that don't seem to be files are the source of my
problem. Are those Linux threads or something like that? At least from
the point of view of the JVM there are just 67 threads which I think is
normal considering that I am running "New Relic" and also JMX (and the peak
was 72 threads). I have reviewed all those java threads and all of them
look normal and necessary.
I will paste the content at the end of this email.


> > And there are no leaks, I'm sure of that. I have inspected the JVM using
> > several tools.
> >
> > There are no peaks in the number of threads either. The peak is the same
> > when the memory usage is low and when it requires 1.5GB. It stays the
> same
> > all the time.
> >
> > I have also reviewed all the scheduled tasks in my app and lowered the
> > amount of objects they create, which was nice and entertaining. But that
> is
> > not the problem, I have analyzed the object creation by all the threads
> > (and there are many) and the threads created by my scheduled tasks are
> very
> > humble in their memory usage, compared to many other threads.
> >
> > And I haven't made any relevant changes to my app in the 6-12 months
> before
> > this problem started occurring. It is weird that I started having this
> > problem. Could it be that I received an update in the java version or the
> > Tomcat version that is causing this problem?
> >
> > If neither the heap memory or the Non-heaps memory is the source of the
> > growth of the memory usage, what could it be? Clearly something is
> > happening inside the JVM that raises the memory usage. And everytime it
> > grows, it doesn't decrease.  It is like if something suddenly starts
> > "pushing" the memory usage more and more, until it stops at 1.5GB.
> >
> > I think that maybe the source of the problem is the garbage collector. I
> > haven't used any of the switches that we can use to optimize that,
> > basically because I don't know what I should do there (if I should at
> all).
> > I have also activated the GC log, but I don't know how to analyze it.
>
>
> I doubt that GC is the problem; if it were, it should show up in the GC
> data, which you say is essentially the same before and after the problem
> manifests itself..
>
>
> > I have also increased and decreased the value of "-Xms" parameter and it
> is
> > useless.
>
>
> Unrelated to your problem, but for server processes, -Xms should 

Re: Tomcat/Java starts using too much memory and not by the heap or non-heap memory

2023-12-29 Thread Chuck Caldarale

> On Dec 29, 2023, at 19:48, Brian Braun  wrote:
> 
> First of all:
> Christopher Schultz: You answered an email from me 6 weeks ago. You helped
> me a lot with your suggestions. I have done a lot of research and have
> learnt a lot since then, so I have been able to rule out a lot of potential
> roots for my issue. Because of that I am able to post a new more specific
> email. Thanks a lot!!!
> 
> Now, this is my stack:
> 
> - Ubuntu 22.04.3 on x86/64 with 2GM of physical RAM that has been enough
> for years.


I presume the “2GM” above should be “2GB”.


> - Java 11.0.20.1+1-post-Ubuntu-0ubuntu122.04 / openjdk 11.0.20.1 2023-08-24
> - Tomcat 9.0.58 (JAVA_OPTS="-Djava.awt.headless=true -Xmx1000m -Xms1000m
> ..")
> - My app, which I developed myself, and has been running without any
> problems for years
> 
> Well, a couple of months ago my website/Tomcat/Java started eating more and
> more memory about after about 4-7 days. The previous days it uses just a
> few hundred MB and is very steady, but then after a few days the memory
> usage suddenly grows up to 1.5GB (and then stops growing at that point,
> which is interesting). Between these anomalies the RAM usage is fine and
> very steady (as it has been for years) and it uses just about 40-50% of the
> "Max memory" (according to what the Tomcat Manager server status shows).
> The 3 components of G1GC heap memory are steady and low, before and after
> the usage grows to 1.5GB, so it is definitely not that the heap starts
> requiring more and more memory. I have been using several tools to monitor
> that (New Relic, VisualVM and JDK Mission Control) so I'm sure that the
> memory usage by the heap is not the problem.
> The Non-heaps memory usage is not the problem either. Everything there is
> normal, the usage is humble and even more steady.


What does the /proc/<pid>/maps file show, both before and after the 
problem occurs? This should give you some idea of what .so library is grabbing 
the extra memory. (I only have Tomcat installed on macOS at the moment, so I 
can’t show you an example; I should be able to bring up Tomcat on a Linux box 
tomorrow.) The output may be long, depending on how fragmented the virtual 
memory allocations are.
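
Capturing it is just a copy, for example (the pgrep pattern is merely one 
way to find the Tomcat PID):

PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
cat /proc/$PID/maps  > maps-before.txt
cat /proc/$PID/smaps > smaps-before.txt   # smaps adds per-mapping RSS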


> And there are no leaks, I'm sure of that. I have inspected the JVM using
> several tools.
> 
> There are no peaks in the number of threads either. The peak is the same
> when the memory usage is low and when it requires 1.5GB. It stays the same
> all the time.
> 
> I have also reviewed all the scheduled tasks in my app and lowered the
> amount of objects they create, which was nice and entertaining. But that is
> not the problem, I have analyzed the object creation by all the threads
> (and there are many) and the threads created by my scheduled tasks are very
> humble in their memory usage, compared to many other threads.
> 
> And I haven't made any relevant changes to my app in the 6-12 months before
> this problem started occurring. It is weird that I started having this
> problem. Could it be that I received an update in the java version or the
> Tomcat version that is causing this problem?
> 
> If neither the heap memory or the Non-heaps memory is the source of the
> growth of the memory usage, what could it be? Clearly something is
> happening inside the JVM that raises the memory usage. And everytime it
> grows, it doesn't decrease.  It is like if something suddenly starts
> "pushing" the memory usage more and more, until it stops at 1.5GB.
> 
> I think that maybe the source of the problem is the garbage collector. I
> haven't used any of the switches that we can use to optimize that,
> basically because I don't know what I should do there (if I should at all).
> I have also activated the GC log, but I don't know how to analyze it.


I doubt that GC is the problem; if it were, it should show up in the GC data, 
which you say is essentially the same before and after the problem manifests 
itself.


> I have also increased and decreased the value of "-Xms" parameter and it is
> useless.


Unrelated to your problem, but for server processes, -Xms should be set to the 
same value as -Xmx; no sense in thrashing between the two.
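
For example (a sketch, sized to whatever heap you eventually settle on):

JAVA_OPTS="-Djava.awt.headless=true -Xms512m -Xmx512m ..."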


> Finally, maybe I should add that I activated 4GB of SWAP memory in my
> Ubuntu instance so at least my JVM would not be killed my the OS anymore
> (since the real memory is just 1.8GB). That worked and now the memory usage
> can grow up to 1.5GB without crashing, by using the much slower SWAP
> memory, but I still think that this is an abnormal situation.


At least you have a workaround, as undesirable as it may be.

  - Chuck