> On 29 Aug 2017, at 12:29, Suvendu Sekhar Mondal <suv3...@gmail.com> wrote:
> 
> On Tue, Aug 29, 2017 at 2:54 PM, Ing. Andrea Vettori
> <a.vett...@b2bires.com> wrote:
>> - with a freshly started Tomcat instance, the task takes around 0.8 
>> seconds. Most of the time is spent on the two RMI calls the task makes.
>> - with an instance that has been running for some time, the time can reach 
>> 2-3 seconds, occasionally 5-6 seconds. Most of the time is still spent on 
>> the RMI calls. I.e. what slows down are the RMI calls.
> 
> Silly question: do you mean RMI calls originating from Tomcat are getting
> slower over time? Or is JBoss taking longer to return the response?

Thanks for your help.
What I see is that the HTTP requests are slower. Looking at the details of that 
specific request (putting a product in the cart) using System.nanoTime, I can 
see that most of the time is spent in the RMI calls. I’m pretty sure it’s 
not JBoss that is slower, because the same calls made at the same time from 
fresh Tomcats or from a Java client are not slow.
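For reference, the client-side measurement described above can be sketched like this; the remote call is stubbed out here with a `Thread.sleep`, and `addToCart` is a hypothetical name for the real RMI invocation:

```java
// Minimal sketch of timing a remote call with System.nanoTime.
// addToCart() is a stand-in for the real RMI call (hypothetical name).
public class RmiTiming {

    // Simulates the latency of the remote invocation.
    static void addToCart() throws InterruptedException {
        Thread.sleep(50);
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        addToCart();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("RMI call took " + elapsedMs + " ms");
    }
}
```

Logging this per call (fresh instance vs. long-running instance) is what isolates the slowdown to the RMI leg rather than to local processing.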

> 
>> - restarting the jvm fixes the issue
>> - ***it seems*** but I’m still testing this since it seems there’s no 
>> ‘meatspace gc trigger command available', that when Metaspace is garbage 
>> collected, tomcat then performs like a fresh instance.
>> 
> 
> Yes, there is no command to garbage collect only the Metaspace area. Without
> recycling Tomcat, you can trigger a Full GC using either VisualVM or
> jcmd's GC.run command.

I think that an explicit GC is not doing a metaspace GC… that’s what I observed 
when triggering a GC from the Java console.
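This can be checked programmatically with the standard `java.lang.management` API, which exposes Metaspace as a memory pool. A minimal sketch (note that `System.gc()` is only a hint, and whether it actually shrinks Metaspace depends on whether any class loaders have become unreachable):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Sketch: read Metaspace usage before and after an explicit GC request.
// The same full GC can be requested externally with: jcmd <pid> GC.run
public class MetaspaceCheck {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if ("Metaspace".equals(pool.getName())) {
                System.out.printf("Metaspace used before GC: %d KB%n",
                        pool.getUsage().getUsed() / 1024);
                // A hint only; ignored entirely with -XX:+DisableExplicitGC.
                System.gc();
                System.out.printf("Metaspace used after GC:  %d KB%n",
                        pool.getUsage().getUsed() / 1024);
            }
        }
    }
}
```

Comparing the two numbers over time would show whether explicit GCs are reclaiming Metaspace at all, or whether only the occasional natural collection does.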

> 
>> Since we’re using more than one Tomcat instance (2 in production for this 
>> website, 1 for development) I can see that the issue is isolated to one 
>> Tomcat or the JVM/host where it runs, because the other Tomcat instances 
>> behave well at the same time one is slow. The same JBoss/Postgres backend is 
>> used by Java batches and fat clients, and it works well with consistent times.
>> 
>> To clarify: at the moment one production Tomcat that has been running for 
>> some time finishes the task in 3 seconds, the development Tomcat or a freshly 
>> started production Tomcat instance does the same task in less than one second. 
>> Note that repeating the task always gives consistent results, i.e. the 
>> instance that has been running for some time is always slow, and the fresh 
>> instance is always fast.
>> 
>> Tomcat is running with these VM options:
>> 
>> -Xms20480m -Xmx20480m -Dsun.rmi.dgc.client.gcInterval=3600000 
>> -Dsun.rmi.dgc.server.gcInterval=3600000 -Xloggc:/tmp/gc.txt 
>> -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9012 
>> -Dcom.sun.management.jmxremote.ssl=false 
>> -Dcom.sun.management.jmxremote.authenticate=false -XX:+PrintGCDetails 
>> -XX:+PrintGCTimeStamps -XX:+UseG1GC -XX:ReservedCodeCacheSize=1g 
>> -XX:InitialCodeCacheSize=256m -XX:+UseHugeTLBFS -XX:MetaspaceSize=1g 
>> -XX:MaxMetaspaceSize=2g
>> 
>> Some of the options have been added recently (for example the increase in 
>> code cache size) but they seem to have had no impact on the issue.
>> 
> 
> Is there any specific reason to trigger a Full GC every 60 min
> (-Dsun.rmi.dgc.client.gcInterval=3600000)? I think that's
> overkill.

That’s a value I saw suggested “everywhere” (the default is too short). Note 
that in the GC logs I don’t see full GCs every hour, so maybe that option is 
not triggering a full GC?


>> Metaspace grows to 1.6 GB before being collected; after collection it is 
>> around 200 MB. Heap usage is variable: it usually stays under 10 GB and is 
>> around 1 GB after a collection.
>> CPU usage rarely goes over 10%. Loaded classes between 20k and 40k. Active 
>> sessions around 100-120 per instance.
>> 
> 
> That sounds like healthy behavior to me. It means GC is occurring
> and Metaspace is getting cleaned up. Why do you think Metaspace is the
> problem?

Because I can observe that when the Metaspace is collected, the requests become 
fast. I observed that a few hours ago, looking at the Metaspace graph in the 
Java console and making requests just after the collection.

> From the given data, my first impression is that the JVM is not under any
> memory pressure. If you can post your gc.txt file to this group, I
> would be happy to check it.

Thanks, here’s a link to the GCeasy report: 
http://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTcvMDgvMjkvLS1nY3N0YXRzLnR4dC5nei0tMTEtNDktMzQ=
 
and the txt log file: http://ge.tt/1Un0DLm2
Please consider that I manually triggered a Full GC twice on this server.


> 
> Another thing: is there any way you can measure the processing time in
> JBoss for each RMI call?
> 

At the moment I’m not measuring the time on the JBoss side, but I’m considering 
using a telemetry product in the future. I’m measuring the time on the client 
side (Tomcat or fat client).

Thank you!
—
Andrea Vettori
