Hello,

It has been a long time since I received the last suggestions for my issue
here on this support list. Since then I decided to stop asking for help and
to "do my homework": to read, to watch YouTube presentations, to spend time
on StackOverflow, etc. I have spent a lot of time on this and I think I
have learned a lot, which is nice.
This is what I have learned lately:

I definitely don't have a leak in my code (or in the libraries I am using,
as far as I can tell). And my code is not creating a significant number of
objects that would use too much memory.
The heap memory (the 3 G1 pools) and non-heap memory (the 3 CodeHeaps +
compressed class space + metaspace) together use just a few hundred MB, and
their usage is steady and normal.
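
For anyone who wants to double-check the same pools from the command line,
something like this should show them (just a sketch; the process ID is a
placeholder):

  sudo jcmd <TomcatProcessID> GC.heap_info    # G1 heap and region occupancy
  sudo jcmd <TomcatProcessID> VM.metaspace    # metaspace / compressed class space
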
I discovered the jcmd command for performing native memory tracking. When I
ran it, 3-4 days after starting Tomcat, I found out that the compiler was
using hundreds of MB, and that is exactly why the Tomcat process starts
overusing memory! This is what I saw when executing "sudo
jcmd <TomcatProcessID> VM.native_memory scale=MB":

Compiler (reserved=340MB, committed=340MB)
         (arena=340MB #10)

All the other categories (Class, Thread, Code, GC, Internal, Symbol, etc.)
look normal: they use a small amount of memory and they don't grow.
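
In case someone wants to reproduce this: native memory tracking has to be
enabled before the JVM starts. Something along these lines should work (the
process ID is a placeholder, and the baseline/diff step is optional but
makes the growth easy to see):

  # added to JAVA_OPTS / setenv.sh before starting Tomcat:
  -XX:NativeMemoryTracking=summary

  # right after startup, record a baseline:
  sudo jcmd <TomcatProcessID> VM.native_memory baseline

  # days later, show only what has grown since the baseline:
  sudo jcmd <TomcatProcessID> VM.native_memory summary.diff scale=MB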

Then I discovered jemalloc (http://jemalloc.net) and its jeprof tool, so I
started launching Tomcat with it. 3-4 days after Tomcat started, I was able
to create some GIF images from the dumps that jemalloc creates. The GIF
files show the problem: 75-90% of the memory is being used by some weird
activity in the compiler! It seems that the C2/JIT compiler starts doing
something after 3-4 days, and that creates the leak. Why after 3-4 days and
not sooner? I don't know. I am attaching the GIF to this email.
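
This is roughly how I launched Tomcat under jemalloc and turned its dumps
into a graph, in case it is useful to someone. The library path and dump
prefix below are just examples from my box, and jemalloc needs to be built
with profiling support (--enable-prof) for this to work:

  # in setenv.sh (or the Tomcat service environment):
  export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
  export MALLOC_CONF="prof:true,lg_prof_interval:30,lg_prof_sample:17,prof_prefix:/tmp/jeprof"

  # a few days later, convert the accumulated heap dumps into a GIF:
  jeprof --show_bytes --gif $(which java) /tmp/jeprof*.heap > /tmp/jemalloc-profile.gif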

Does anybody know how to deal with this? I have been struggling with this
issue for 3 months already. At least now I know that this is a native
memory leak, but at this point I feel lost.

By the way, I'm running my website using Tomcat 9.0.58, Java
"11.0.21+9-post-Ubuntu-0ubuntu122.04", and Ubuntu 22.04.3. I am developing
using Eclipse and compiling my WAR file with a "Compiler compliance
level" of 11.

Thanks in advance!

Brian

On Mon, Jan 8, 2024 at 10:05 AM Christopher Schultz <
ch...@christopherschultz.net> wrote:

> Brian,
>
> On 1/5/24 17:21, Brian Braun wrote:
> > Hello Christopher,
> >
> > First of all: thanks a lot for your responses!
> >
> > On Wed, Jan 3, 2024 at 9:25 AM Christopher Schultz <
> > ch...@christopherschultz.net> wrote:
> >
> >> Brian,
> >>
> >> On 12/30/23 15:42, Brian Braun wrote:
> >>> At the beginning, this was the problem: The OOM-killer (something that
> >>> I never knew existed) killing Tomcat unexpectedly and without any
> >>> explanation
> >>
> >> The explanation is always the same: some application requests memory
> >> from the kernel, which always grants the request(!). When the
> >> application tries to use that memory, the kernel scrambles to physically
> >> allocate the memory on-demand and, if all the memory is gone, it will
> >> pick a process and kill it.
>  >
> > Yes, that was happening to me until I set up the SWAP file and now at
> > least the Tomcat process is not being killed anymore.
>
> Swap can get you out of a bind like this, but it will ruin your
> performance. If you care more about stability (and believe me, it's a
> reasonable decision), then leave the swap on. But swap will kill (a)
> performance (b) SSD lifetime and (c) storage/transaction costs depending
> upon your environment. Besides, you either need the memory or you do
> not. It's rare to "sometimes" need the memory.
>
> >> Using a swap file is probably going to kill your performance. What
> >> happens if you make your heap smaller?
>  >
> > Yes, in fact the performance is suffering and that is why I don't
> > consider the swap file as a solution.
>
> :D
>
> > I have assigned to -Xmx both small amounts (as small as 300MB) and high
> > amounts (as high as 1GB) and the problem is still present (the Tomcat
> > process grows in memory usage up to 1.5GB combining real memory and swap
> > memory).
>
> Okay, that definitely indicates a problem that needs to be solved.
>
> I've seen things like native ZIP handling code leaking native memory,
> but I know that Tomcat does not leak like that. If you do anything in
> your application that might leave file handles open, it could be
> contributing to the problem.
>
> > As I have explained in another email recently, I think that neither heap
> > usage nor non-heap usage are the problem. I have been monitoring them and
> > their requirements have always stayed low enough, so I could leave the
> > -Xms parameter with about 300-400 MB and that would be enough.
>
> Well, between heap and non-heap, that's all the memory. There is no
> non-heap-non-non-heap memory to be counted. Technically stack space is
> the same as "native memory" but usually you experience other problems if
> you have too many threads and they are running out of stack space.
>
> > There is something else in the JVM that is using all that memory and I
> > still don't know what it is. And I think it doesn't care about the value
> > I give to -Xmx, it uses all the memory it wants. Doing what? I don't know.
>
> It might be time to start digging into those native memory-tracking tools.
>
> > Maybe I am not understanding your suggestion.
> > I have assigned to -Xmx both small amounts (as small as 300MB) and high
> > amounts (as high as 1GB) and the problem is still present. In fact the
> > problem started with a low amount for -Xmx.
>
> No, you are understanding my suggestion(s). But if you are hitting Linux
> oom-killer with a 300MiB heap and a process size that is growing to 1.5G
> then getting killed... it's time to dig deeper.
>
> -chris
>
> >>> On Sat, Dec 30, 2023 at 12:44 PM Christopher Schultz <
> >>> ch...@christopherschultz.net> wrote:
> >>>
> >>>> Brian,
> >>>>
> >>>> On 12/29/23 20:48, Brian Braun wrote:
> >>>>> Hello,
> >>>>>
> >>>>> First of all:
> >>>>> Christopher Schultz: You answered an email from me 6 weeks ago. You
> >>>>> helped me a lot with your suggestions. I have done a lot of research
> >>>>> and have learnt a lot since then, so I have been able to rule out a
> >>>>> lot of potential roots for my issue. Because of that I am able to
> >>>>> post a new more specific email. Thanks a lot!!!
> >>>>>
> >>>>> Now, this is my stack:
> >>>>>
> >>>>> - Ubuntu 22.04.3 on x86/64 with 2GB of physical RAM that has been
> >>>>> enough for years.
> >>>>> - Java 11.0.20.1+1-post-Ubuntu-0ubuntu122.04 / openjdk 11.0.20.1
> >>>>> 2023-08-24
> >>>>> - Tomcat 9.0.58 (JAVA_OPTS="-Djava.awt.headless=true -Xmx1000m
> >>>>> -Xms1000m ......")
> >>>>> - My app, which I developed myself, and has been running without any
> >>>>> problems for years
> >>>>>
> >>>>> Well, a couple of months ago my website/Tomcat/Java started eating
> >>>>> more and more memory after about 4-7 days. The previous days it uses
> >>>>> just a few hundred MB and is very steady, but then after a few days
> >>>>> the memory usage suddenly grows up to 1.5GB (and then stops growing
> >>>>> at that point, which is interesting). Between these anomalies the
> >>>>> RAM usage is fine and very steady (as it has been for years) and it
> >>>>> uses just about 40-50% of the "Max memory" (according to what the
> >>>>> Tomcat Manager server status shows).
> >>>>> The 3 components of G1GC heap memory are steady and low, before and
> >>>>> after the usage grows to 1.5GB, so it is definitely not that the
> >>>>> heap starts requiring more and more memory. I have been using
> >>>>> several tools to monitor that (New Relic, VisualVM and JDK Mission
> >>>>> Control) so I'm sure that the memory usage by the heap is not the
> >>>>> problem.
> >>>>> The Non-heaps memory usage is not the problem either. Everything
> >>>>> there is normal, the usage is humble and even more steady.
> >>>>>
> >>>>> And there are no leaks, I'm sure of that. I have inspected the JVM
> >>>>> using several tools.
> >>>>>
> >>>>> There are no peaks in the number of threads either. The peak is the
> >>>>> same when the memory usage is low and when it requires 1.5GB. It
> >>>>> stays the same all the time.
> >>>>>
> >>>>> I have also reviewed all the scheduled tasks in my app and lowered
> >>>>> the amount of objects they create, which was nice and entertaining.
> >>>>> But that is not the problem; I have analyzed the object creation by
> >>>>> all the threads (and there are many) and the threads created by my
> >>>>> scheduled tasks are very humble in their memory usage, compared to
> >>>>> many other threads.
> >>>>>
> >>>>> And I haven't made any relevant changes to my app in the 6-12
> >>>>> months before this problem started occurring. It is weird that I
> >>>>> started having this problem. Could it be that I received an update
> >>>>> in the Java version or the Tomcat version that is causing this
> >>>>> problem?
> >>>>>
> >>>>> If neither the heap memory nor the Non-heaps memory is the source
> >>>>> of the growth of the memory usage, what could it be? Clearly
> >>>>> something is happening inside the JVM that raises the memory usage.
> >>>>> And every time it grows, it doesn't decrease. It is as if something
> >>>>> suddenly starts "pushing" the memory usage more and more, until it
> >>>>> stops at 1.5GB.
> >>>>>
> >>>>> I think that maybe the source of the problem is the garbage
> >>>>> collector. I haven't used any of the switches that we can use to
> >>>>> optimize that, basically because I don't know what I should do
> >>>>> there (if I should at all).
> >>>>> I have also activated the GC log, but I don't know how to analyze it.
> >>>>>
> >>>>> I have also increased and decreased the value of the "-Xms"
> >>>>> parameter and it is useless.
> >>>>>
> >>>>> Finally, maybe I should add that I activated 4GB of SWAP memory in
> >>>>> my Ubuntu instance so at least my JVM would not be killed by the OS
> >>>>> anymore (since the real memory is just 1.8GB). That worked and now
> >>>>> the memory usage can grow up to 1.5GB without crashing, by using
> >>>>> the much slower SWAP memory, but I still think that this is an
> >>>>> abnormal situation.
> >>>>>
> >>>>> Thanks in advance for your suggestions!
> >>>>
> >>>> First of all: what is the problem? Are you just worried that the
> >>>> number of bytes taken by your JVM process is larger than it was ...
> >>>> sometime in the past? Or are you experiencing Java OOME or Linux
> >>>> oom-killer or anything like that?
> >>>>
> >>>> Not all JVMs behave this way, but most of them do: once memory is
> >>>> "appropriated" by the JVM from the OS, it will never be released.
> >>>> It's just too expensive of an operation to shrink the heap... plus,
> >>>> you told the JVM "feel free to use up to 1GiB of heap" so it's
> >>>> taking you at your word. Obviously, the native heap plus stack space
> >>>> for every thread plus native memory for any native libraries takes
> >>>> up more space than just the 1GiB you gave for the heap, so ...
> >>>> things just take up space.
> >>>>
> >>>> Lowering the -Xms will never reduce the maximum memory the JVM ever
> >>>> uses. Only lowering -Xmx can do that. I always recommend setting
> >>>> Xms == Xmx because otherwise you are lying to yourself about your
> >>>> needs.
> >>>>
> >>>> You say you've been running this application "for years". Has it
> >>>> been in a static environment, or have you been doing things such as
> >>>> upgrading Java and/or Tomcat during that time? There are things that
> >>>> Tomcat does now that it did not do in the past that sometimes
> >>>> require more memory to manage, sometimes only at startup and
> >>>> sometimes for the lifetime of the server. There are some things that
> >>>> the JVM is doing that require more memory than their previous
> >>>> versions.
> >>>>
> >>>> And then there is the usage of your web application. Do you have the
> >>>> same number of users? I've told this (short) story a few times on
> >>>> this list, but we had a web application that ran for 10 years with
> >>>> only 64MiB of heap and one day we started getting OOMEs. At first we
> >>>> just bounced the service and tried looking for bugs, leaks, etc. but
> >>>> the heap dumps were telling us everything was fine.
> >>>>
> >>>> The problem was user load. We simply outgrew the heap we had
> >>>> allocated because we had more simultaneous logged-in users than we
> >>>> did in the past, and they all had sessions, etc. We had plenty of
> >>>> RAM available, we were just being stingy with it.
> >>>>
> >>>> The G1 garbage collector doesn't have very many switches to mess
> >>>> around with compared to older collectors. The whole point of G1 was
> >>>> to "make garbage collection easy". Feel free to read 30 years of
> >>>> lies and confusion about how to best configure Java garbage
> >>>> collectors. At the end of the day, if you don't know exactly what
> >>>> you are doing and/or you don't have a specific problem you are
> >>>> trying to solve, you are better off leaving everything with default
> >>>> settings.
> >>>>
> >>>> If you want to reduce the amount of RAM your application uses, set a
> >>>> lower heap size. If that causes OOMEs, audit your application for
> >>>> wasted memory such as too-large caches (which presumably live a long
> >>>> time) or too-large single transactions such as loading 10k records
> >>>> all at once from a database. Sometimes a single request can require
> >>>> a whole lot of memory RIGHT NOW which is only used temporarily.
> >>>>
> >>>> I was tracking-down something in our own application like this
> >>>> recently: a page-generation process was causing an OOME
> >>>> periodically, but the JVM was otherwise very healthy. It turns out
> >>>> we had an administrative action in our application that had no
> >>>> limits on the amount of data that could be requested from the
> >>>> database at once. So naive administrators were able to essentially
> >>>> cause a query to be run that returned a huge number of rows from the
> >>>> db, then every row was being converted into a row in an HTML table
> >>>> in a web page. Our page-generation process builds the whole page in
> >>>> memory before returning it, instead of streaming it back out to the
> >>>> user, which means a single request can use many MiBs of memory just
> >>>> for in-memory strings/byte arrays.
> >>>>
> >>>> If something like that happens in your application, it can pressure
> >>>> the heap to jump from e.g. 256MiB way up to 1.5GiB and -- as I said
> >>>> before -- the JVM is never gonna give that memory back to the OS.
> >>>>
> >>>> So even though everything "looks good", your heap and native memory
> >>>> spaces are very large until you terminate the JVM.
> >>>>
> >>>> If you haven't already done so, I would recommend that you enable GC
> >>>> logging. How to do that is very dependent on your JVM, version, and
> >>>> environment. This writes GC activity details to a series of files
> >>>> during the JVM execution. There are freely-available tools you can
> >>>> use to view those log files in a meaningful way and draw some
> >>>> conclusions. You might even be able to see when that "memory event"
> >>>> took place that caused your heap memory to shoot-up. (Or maybe it's
> >>>> your native memory, which isn't logged by the GC logger.) If you are
> >>>> able to see when it happened, you may be able to correlate that with
> >>>> your application log to see what happened in your application. Maybe
> >>>> you need a fix.
> >>>>
> >>>> Then again, maybe everything is totally fine and there is nothing to
> >>>> worry about.
> >>>>
> >>>> -chris
> >>>>
> >>>
> >>
> >
>
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
