Re: JVM job for Tomcat taking lots and lots of CPU

2020-02-13 Thread James H. H. Lampert
Last night, we switched the customer box in question from running Tomcat 
under Java 7 (64-bit) to Java 8 (64-bit), and all day today the 
processor load from the Tomcat server has been dramatically lighter. I 
haven't seen the overall CPU above about 70% today, nor the Tomcat JVM 
job's usage above 30% (as I type this, the overall is at 48.1%, of which 
the JVM job is a mere 7.8%), whereas before it was peaking at full 
saturation, with the Tomcat JVM job alone accounting for as much as 90% 
or more.


I am cautious about this, but I think the switch to Java 8 alone may 
have solved the problem.


--
JHHL




Re: JVM job for Tomcat taking lots and lots of CPU

2020-02-13 Thread Jäkel, Guido
Dear James, Dear others,

I have just caught up with this long thread, so sorry if this suggestion has 
already been made. But James: if it's possible, you may use (J)VisualVM with 
the "VisualGC" plugin to get some live impression of what's going on with the 
heaps. In my experience, that visual presentation is much easier to comprehend 
than a long list of numbers.

If JMX is enabled for remote use, you can even run the VisualVM tool on a 
different machine.

Maybe some JVM options like

-XX:+PrintGCDetails
-XX:+PrintGCApplicationStoppedTime
-XX:+PrintAdaptiveSizePolicy
-XX:+PrintTenuringDistribution
-XX:+PrintStringDeduplicationStatistics

may help you capture a log of GC activity.
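
For illustration, the whole thing might look like the lines below (a sketch 
only: the file path and port number are made up, the Print* options above are 
HotSpot-style, and the AS/400 ships IBM's J9 JVM, so please verify each option 
against the JVM actually in use):

-Xloggc:/path/to/gc.log                          (HotSpot-style GC log file)
-verbose:gc -Xverbosegclog:/path/to/gc.log       (rough IBM J9 equivalent)
-Dcom.sun.management.jmxremote.port=9010         (JMX for remote VisualVM)
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false

The two "false" settings are only acceptable on a trusted network.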


with greetings

Guido 





Re: JVM job for Tomcat taking lots and lots of CPU

2020-02-12 Thread Christopher Schultz

James,

On 2/12/20 4:53 PM, James H. H. Lampert wrote:
> Here is a "Display Java Threads" from the Tomcat server's JVM job:
> 
>> ThreadName  Status  Total CPU  Aux I/O
>> 0011  main  SELW   1.100   130
>> 0014  JIT Compilatio >  THDW  61.932    89
>> 0015  JIT Compilatio >  THDW  96.846    77
>> 0016  JIT Compilatio >  THDW  11.640    10
>> 0017  JIT Compilatio >  THDW   1.350     2
>> 0018  JIT-SamplerThr >  THDW  23.866     0
[...]
>> 001D  GC Slave  THDW 187.726   157
>> 001E  GC Slave  THDW 187.534   420
>> 001F  GC Slave  THDW 187.862   217
>> 0020  GC Slave  THDW 187.165   362
>> 0021  GC Slave  THDW 187.572   340
>> 0022  GC Slave  THDW 187.686   333
>> 0023  GC Slave  THDW 187.870   161

I don't know up from down on an AS/400, but that certainly looks like
both GC and JIT are tearing up your CPUs.

What do those numbers in the "Total CPU" column mean? Is that %-CPU,
as in "thread 001D is using nearly 2 CPUs solid"? Or does it mean
"thread 001D has had 187.726 seconds of activity since the process
launched"?

If this is a somewhat freshly launched JVM, you will see two things
happen "early" in the process's lifetime:

1. Lots of heap re-sizing (because -Xms != -Xmx) before you get to a
steady-state heap

2. Lots of JIT compilation, because most of your Java code is being
compiled for the "first time" by the JVM (most JVMs don't cache
JIT-compiled code between process invocations ... except for the core
classes in some more recent versions)

So maybe everything above looks fine. *shrug*
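
(One way to take item 1 out of the picture, as an experiment, is to pin 
the heap by setting -Xms equal to -Xmx, e.g. "-Xms5120m -Xmx5120m" 
wherever this Tomcat picks up its JVM options; on most platforms that 
would be CATALINA_OPTS in setenv.sh, though I have no idea where it 
lives on an AS/400. That's a sketch of an experiment, not a 
recommendation.)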

-chris




Re: JVM job for Tomcat taking lots and lots of CPU

2020-02-12 Thread James H. H. Lampert

From the :

How many VPs does your partition have?
Hardware:  Power7 vs Power8/Power9?
How many GC threads?


I wish I knew how to determine the answers to these. What I *do* know is 
that the box in question self-IDs as model "22A" (which I've never heard 
of, which in itself suggests Power9), with 22(!) LPARs.


Here is a "Display Java Threads" from the Tomcat server's JVM job:

> ThreadName  Status  Total CPU  Aux I/O
> 0011  main  SELW   1.100   130
> 0014  JIT Compilatio >  THDW  61.932    89
> 0015  JIT Compilatio >  THDW  96.846    77
> 0016  JIT Compilatio >  THDW  11.640    10
> 0017  JIT Compilatio >  THDW   1.350 2
> 0018  JIT-SamplerThr >  THDW  23.866 0
> 0019  IProfiler THDW   5.481 3
> 001A  Signal Dispatc >  THDW   0.983 1
> 001C  Concurrent Mar >  THDW   0.001 0
> 001D  GC Slave  THDW 187.726   157
> 001E  GC Slave  THDW 187.534   420
> 001F  GC Slave  THDW 187.862   217
> 0020  GC Slave  THDW 187.165   362
> 0021  GC Slave  THDW 187.572   340
> 0022  GC Slave  THDW 187.686   333
> 0023  GC Slave  THDW 187.870   161
> 0024  i5/OS informat >  JVAW   0.192 3
> 0026  Attach API wai >  SEMW   0.002    44
> 0027  MemoryPoolMXBe >  THDW   0.001 0
> 0028  GC Daemon THDW   0.001 0
> 002B  Finalizer thread  THDW   5.939  1475
> 005B  HSQLDB Timer @ >  THDW   0.676 1
> 005E  ContainerBackg >  THDW   0.704   508
> 005F  http-bio-443-A >  SELW   1.634    42
> 0060  http-bio-443-A >  THDW   0.251 0
> 0061  ajp-bio-8009-A >  SELW   0.001 0
> 0062  ajp-bio-8009-A >  THDW   0.243 2
> 009B  HSQLDB Timer @ >  THDW   2.741    185854
> 00DC  Timer-6   THDW   0.001 1
> 01FD  MappedByteBuff >  THDW   0.169 10361
> 0AE2  Thread-2330   THDW   1.111 21159
> 0EA8  Timer-59  THDW   0.002 2
> 0EA9  Timer-60  THDW   0.245 0
> 1C03  Timer-98  THDW   0.002 2
> 1C04  Timer-99  THDW   0.251 0
> 465C  Timer-201 THDW   0.001 0
> 465D  Timer-202 THDW   0.252 0
> 61BC  Timer-244 THDW   0.001 0
> 61BD  Timer-245 THDW   0.178 0
> 6C30  Timer-254 THDW   0.001 1
> 6C31  Timer-255 THDW   0.215 0
> 8C71  Timer-294 THDW   0.001 0
> 8C72  Timer-295 THDW   0.173 0
> 994B  Timer-302 THDW   0.001 2
> 994C  Timer-303 THDW   0.150 0
> C2BD  Thread-47354  THDW   0.001 0
> C2C4  Okio Watchdog THDW   0.001 0
> C4E7  Timer-338 THDW   0.001 1
> C4E8  Timer-339 THDW   0.143 0
> D061  Timer-354 THDW   0.001 3
> D062  Timer-355 THDW   0.141 0
> D09E  Timer-356 THDW   0.002 0
> D09F  Timer-357 THDW   0.150 0
> 00010124  Timer-426 THDW   0.001 0
> 0001012D  Timer-427 THDW   0.120 0
> 00011E2B  Timer-484 THDW   0.001 0
> 00011E2C  Timer-485 THDW   0.103 0
> 000126C4  Timer-507 THDW   0.001 3
> 000126C6  Timer-508 THDW   0.105 0
> 00013C51  Timer-553 THDW   0.001 0
> 00013C52  Timer-554 THDW   0.094 0
> 00013E8D  Timer-556 THDW   0.001 0
> 00013E93  Timer-557 THDW   0.093 0
> 000155C5  Timer-597 THDW   0.001 0
> 000155C7  Timer-598 THDW   0.105 0
> 00015911  Timer-605 THDW   0.001 2
> 00015913  Timer-606 THDW   0.084 0
> 00015A4A  Heart beat che >  THDW   0.013 1
> 00015AAF  Heart beat che >  THDW   0.012 0
> 00016531  Timer-627 THDW   0.001 1
> 00016532  Timer-628 THDW   0.087 0
> 00017039  Timer-645 THDW   0.001 0
> 0001703A  Timer-646 THDW   0.105 0
> 00017363  Timer-651 THDW   0.001 0
> 00017364  Timer-652 THDW   0.092 0
> 000189A0  Timer-702 THDW   0.001 1
> 000189A1  Timer-703 THDW   0.083 0
> 00018FBB  Timer-714 THDW   0.001 1
> 00018FBC  Timer-715 THDW   0.088

RE: JVM job for Tomcat taking lots and lots of CPU

2020-02-12 Thread John.E.Gregg
James,



> -Original Message-
> From: James H. H. Lampert 
> Sent: Wednesday, February 12, 2020 12:47 PM
> To: Tomcat Users List ; Java 400 List  l...@lists.midrange.com>
> Subject: Re: JVM job for Tomcat taking lots and lots of CPU
> 
> I've got some more detailed GC stats. The full report runs 600 pages for just
> the last 300 GC cycles, so I've just included three cycles worth of
> data:
> 
> > GC Cycle Number  :   1801
> > Basic GC Cycle Information:
> >   Current GC cycle time  . . . . . . . :399
> >   GC reason  . . . . . . . . . . . . . :   Allocation Failure
> >   GC area  . . . . . . . . . . . . . . :   None specified
> >   GC compaction reason . . . . . . . . :   None specified
> >   Number of internal cycles  . . . . . :  1
> >   Time spent in excessive GC time  . . :  0
> >   Number of objects moved  . . . . . . :  0
> >   Amount of space consumed by moved
> > objects  . . . . . . . . . . . . . :  0
> >   Number of classes unloaded . . . . . :  0
> > GC Time Information:
> >   Cycle start time . . . . . . . . . . :02/12/20 11:58:08.104
> >   Cycle end time . . . . . . . . . . . :02/12/20 11:58:08.503
> >   Mark start time  . . . . . . . . . . : 0
> >   Mark end time  . . . . . . . . . . . : 0
> >   Sweep start time . . . . . . . . . . : 0
> >   Sweep end time . . . . . . . . . . . : 0
> >   Compact start time . . . . . . . . . : 0
> >   Compact end time . . . . . . . . . . : 0
> > Nursery Area Information:
> >   Free space at start  . . . . . . . . :  0
> >   Allocated space at start . . . . . . :1179648
> >   Total size at start  . . . . . . . . :1179648
> >   Free space at end  . . . . . . . . . :1140219
> >   Allocated space at end . . . . . . . :  39429
> >   Total size at end  . . . . . . . . . :1179648
> > Tenured Area:
> >   Free space at start  . . . . . . . . :1119550
> >   Allocated space at start . . . . . . :1717250
> >   Total size at start  . . . . . . . . :2836800
> >   Free space at end  . . . . . . . . . :1087674
> >   Allocated space at end . . . . . . . :1749126
> >   Total size at end  . . . . . . . . . :2836800
> > Large Object Tenured Area:
> >   Free space at start  . . . . . . . . :  0
> >   Allocated space at start . . . . . . :  0
> >   Total size at start  . . . . . . . . :  0
> >   Free space at end  . . . . . . . . . :  0
> >   Allocated space at end . . . . . . . :  0
> >   Total size at end  . . . . . . . . . :  0
> > Small Object Tenured Area:
> >   Free space at start  . . . . . . . . :1119550
> >   Allocated space at start . . . . . . :1717250
> >   Total size at start  . . . . . . . . :2836800
> >   Free space at end  . . . . . . . . . :1087674
> >   Allocated space at end . . . . . . . :1749126
> >   Total size at end  . . . . . . . . . :2836800
> > Weak Object References:
> >   Number at start  . . . . . . . . . . :  0
> >   Number at end  . . . . . . . . . . . :  0
> >   Number cleared . . . . . . . . . . . :  0
> > Finalizer Object References:
> >   Number at start  . . . . . . . . . . :  0
> >   Number at end  . . . . . . . . . . . :763
> >   Number cleared . . . . . . . . . . . : 4294966533
> > Soft Object References:
> >   Number at start  . . . . . . . . . . :  0
> >   Number at end  . . . . . . . . . . . :  0
> >   Number cleared . . . . . . . . . . . :  0
> > Phantom Object References:
> >   Number at start  . . . . . . . . . . :  0
> >   Number at end  . . . . . . . . . . . :  0
> >   Number cleared . . . . . . . . . . . :  0
> >
> >
> > GC Cycle Number  :   1802
> > Basic GC Cycle Information:
> >   Current GC cycle time  . . . . . . . :181
> >   GC reason  . . . . . . . . . . . . . :   Allocation Failure
> >   GC area  . . . . . . . . . . . . . . :   None specified
> >   GC compaction reason . . . . . . . . :   None specified
> >   Number of internal cycles  . . . . . :   

Re: JVM job for Tomcat taking lots and lots of CPU

2020-02-12 Thread James H. H. Lampert
I've got some more detailed GC stats. The full report runs 600 pages for 
just the last 300 GC cycles, so I've just included three cycles worth of 
data:


GC Cycle Number  :   1801 
Basic GC Cycle Information:   
  Current GC cycle time  . . . . . . . :399   
  GC reason  . . . . . . . . . . . . . :   Allocation Failure 
  GC area  . . . . . . . . . . . . . . :   None specified 
  GC compaction reason . . . . . . . . :   None specified 
  Number of internal cycles  . . . . . :  1   
  Time spent in excessive GC time  . . :  0   
  Number of objects moved  . . . . . . :  0   
  Amount of space consumed by moved   
objects  . . . . . . . . . . . . . :  0   
  Number of classes unloaded . . . . . :  0   
GC Time Information:  
  Cycle start time . . . . . . . . . . :02/12/20 11:58:08.104 
  Cycle end time . . . . . . . . . . . :02/12/20 11:58:08.503 
  Mark start time  . . . . . . . . . . : 0
  Mark end time  . . . . . . . . . . . : 0 
  Sweep start time . . . . . . . . . . : 0 
  Sweep end time . . . . . . . . . . . : 0 
  Compact start time . . . . . . . . . : 0 
  Compact end time . . . . . . . . . . : 0 
Nursery Area Information:  
  Free space at start  . . . . . . . . :  0
  Allocated space at start . . . . . . :1179648
  Total size at start  . . . . . . . . :1179648
  Free space at end  . . . . . . . . . :1140219
  Allocated space at end . . . . . . . :  39429
  Total size at end  . . . . . . . . . :1179648
Tenured Area:  
  Free space at start  . . . . . . . . :1119550
  Allocated space at start . . . . . . :1717250
  Total size at start  . . . . . . . . :2836800
  Free space at end  . . . . . . . . . :1087674
  Allocated space at end . . . . . . . :1749126
  Total size at end  . . . . . . . . . :2836800
Large Object Tenured Area:  
  Free space at start  . . . . . . . . :  0 
  Allocated space at start . . . . . . :  0 
  Total size at start  . . . . . . . . :  0 
  Free space at end  . . . . . . . . . :  0 
  Allocated space at end . . . . . . . :  0 
  Total size at end  . . . . . . . . . :  0 
Small Object Tenured Area: 
  Free space at start  . . . . . . . . :1119550
  Allocated space at start . . . . . . :1717250
  Total size at start  . . . . . . . . :2836800
  Free space at end  . . . . . . . . . :1087674
  Allocated space at end . . . . . . . :1749126
  Total size at end  . . . . . . . . . :2836800
Weak Object References:
  Number at start  . . . . . . . . . . :  0
  Number at end  . . . . . . . . . . . :  0
  Number cleared . . . . . . . . . . . :  0
Finalizer Object References:   
  Number at start  . . . . . . . . . . :  0
  Number at end  . . . . . . . . . . . :763
  Number cleared . . . . . . . . . . . : 4294966533
Soft Object References:
  Number at start  . . . . . . . . . . :  0
  Number at end  . . . . . . . . . . . :  0
  Number cleared . . . . . . . . . . . :  0
Phantom Object References: 
  Number at start  . . . . . . . . . . :  0
  Number at end  . . . . . . . . . . . :  0
  Number cleared . . . . . . . . . . . :  0


GC Cycle Number  :   1802  
Basic GC Cycle Information:
  Current GC cycle time  . . . . . . . :181
  GC reason  . . . . . . . . . . . . . :   Allocation Failure
  GC area  . . . . . . . . . . . . . . :   None specified
  GC compaction reason . . . . . . . . :   None specified
  Number of internal cycles  . . . . . :  1  
  Time spent in excessive GC time  . . :  0  
  Number of objects moved  . . . . . . :  0  
  Amount of space consumed by moved  
objects  . . . . . . . . . . . . . :  0  
  Number of classes unloaded . . . . . :  0  
GC Time Information: 

Re: JVM job for Tomcat taking lots and lots of CPU

2020-02-12 Thread James H. H. Lampert

On 2/12/20 5:37 AM, Niranjan Babu Bommu wrote:

Have you turned on the codecache? If so, can you monitor the codecache?
There is a known issue with the codecache in Java 7: when the codecache
fills up, the compiler may not get restarted even after codecache
occupancy drops back down to half following the emergency flushing, and
this can cause high CPU usage by the compiler threads.


I've never heard of "codecache"; what is it?

If memory serves correctly, on all AS/400 JVMs (from the very beginning 
of Java on the AS/400), the first time any Java class or JAR is 
executed, the Java bytecode gets internally compiled into native code 
(that isn't visible to the user, but it somehow remains persistently 
attached).


This is somewhat analogous to how non-Java programs work on the AS/400: 
the program gets compiled to a pseudo-assembler level called "MI," and 
then goes through a second compilation, called "encapsulation," that 
takes it down to a level the processor actually executes. So long as the 
MI level remains present, AS/400 programs will automatically 
re-encapsulate when moved to a system with a different physical 
processor; that's why, a quarter of a century ago, you could take AS/400 
programs that were compiled on component-CPU boxes (e.g. models like 
D02, F20, or 200), install them on PowerPC-based boxes (e.g., models 
like 40s, 170, or 270), and they would port themselves to the new processor.


--
JHHL




RE: JVM job for Tomcat taking lots and lots of CPU

2020-02-12 Thread John.E.Gregg
James,

> -Original Message-
> From: James H. H. Lampert 
> Sent: Tuesday, February 11, 2020 6:41 PM
> To: Tomcat Users List 
> Subject: JVM job for Tomcat taking lots and lots of CPU
> 
> Ladies and Gentlemen:
> 
> We have a customer installation in which the JVM job for our Tomcat server
> is frequently using massive amounts of CPU.
> 
> It's Tomcat 7.0.67, running on an AS/400, in a 64-bit Java 7 JVM, with -
> Xms3096m and -Xmx5120m JVM arguments.
> 
> GC information on the JVM job shows:
> > Garbage collected heap:
> >   Initial heap size  . . . . . . . . . :  3096.000M
> >   Maximum heap size  . . . . . . . . . :  5120.000M
> >   Current heap size  . . . . . . . . . :  4458.562M
> >   Heap in use  . . . . . . . . . . . . :  1907.673M
> > Other memory:
> >   Internal (break) memory size . . . . :   504.982M
> >   JIT memory size  . . . . . . . . . . :74.000M
> >   Shared classes memory size . . . . . : 0.000M
> > General GC information:
> >   Current GC cycle . . . . . . . . . . :   2184
> >   GC policy type . . . . . . . . . . . : GENCON
> >   Current GC cycle time  . . . . . . . :552
> >   Accumulated GC time  . . . . . . . . :5108241
> 
> It seems to be doing a lot of garbage-collecting.
> 
> Would switching to Java 8 help? Would switching to 7.0.93 help?
> 
> --
> James H. H. Lampert

I haven't worked with Java on the AS/400.

You said high CPU is the symptom.  Frequent GC is certainly one possible cause, 
but there are others.  If possible, take several thread dumps at short 
intervals (5-10 seconds) and review them to see what they have in common.  If 
you're not a Java developer, ask one for help.  On Linux, it's possible to 
correlate the thread ids with the process ids obtained from top to see how much 
CPU each one is using/has used.
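
For example, on Linux the sequence is roughly this (illustrative only; it 
won't map directly onto the AS/400):

top -H -p <tomcat-pid>    # per-thread CPU for the Tomcat process; note the hot thread ids
printf '%x\n' <tid>       # convert a hot thread id to hex
jstack <tomcat-pid>       # in the dump, find the thread whose nid=0x... matches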

I can't tell whether GC might be the culprit from the data you provided.  The 
first thing I always look at is the throughput, which is 1 - (gc time/total 
time).  You want that to be as close to 100% as possible.  Take that 
accumulated GC time on the last line and divide by the time the app has been 
running and subtract from 1.  Hopefully that number is up around .98 or .99.  
You also have to keep the time range in mind.  If the app sits idle at night, 
then the total throughput will look good because there were so few GCs during 
that idle time.  However if there is a surge in activity over the last few 
minutes, that number could be very different over that short time range.
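
To make that concrete with made-up numbers: the stats quoted above show an 
accumulated GC time of 5108241.  If that is milliseconds, and if the JVM had 
been up for, say, 24 hours (86,400,000 ms), throughput would be 
1 - 5108241/86400000, or roughly 0.94, i.e. about 6% of wall-clock time spent 
in GC, which I would want to dig into.  The 24-hour uptime is purely an 
assumed figure for the arithmetic.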

I assume "current GC cycles" means that this is the 2184th GC since the JVM 
started.  Ok, but how frequent are they?  I've been busy apps do 5-20 minor GCs 
per minute, so 2000 total isn't a scary number to me.

Does "gencon" mean it's collecting the old generation at that moment?  If there 
are really 2000 of those, I would be mildly concerned.  The generations should 
be sized so that the old generation grows slowly.  Some of the apps I work with 
only do 1 or 2 of those per day.  This doesn't necessarily indicate a bug, 
however.  It might just mean that the young generation needs to be increased.
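
(If it does come to that, the nursery can be sized explicitly.  On HotSpot 
that's -Xmn; on IBM's gencon collector the equivalent knobs are -Xmn, or 
-Xmns/-Xmnx for the initial and maximum nursery size.  A value like -Xmn1024m 
is purely illustrative, not a recommendation.)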

The fact that your used heap is so much lower than your total heap is a good 
sign.

Is JIT memory size how much space is allocated for compiled code?  How much is 
actually used?  By default, HotSpot allocates 240MB and I regularly see apps 
that use more than 74MB.  I don't know what your JVM does when that fills up.  
HotSpot used to puke but is better behaved now.  If you've filled up the space 
allocated for compiled code, I could definitely see this being a contributor to 
high CPU because of a) code running in interpreted mode and b) the JIT compiler 
having to run.

John




Re: JVM job for Tomcat taking lots and lots of CPU

2020-02-12 Thread Suvendu Sekhar Mondal
Hello James,

On Wed, Feb 12, 2020, 6:10 AM James H. H. Lampert 
wrote:

> Ladies and Gentlemen:
>
> We have a customer installation in which the JVM job for our Tomcat
> server is frequently using massive amounts of CPU.
>
> It's Tomcat 7.0.67, running on an AS/400, in a 64-bit Java 7 JVM, with
> -Xms3096m and -Xmx5120m JVM arguments.
>

Is this a snippet from the GC log, or separately collected data?

>
> GC information on the JVM job shows:
> > Garbage collected heap:
> >   Initial heap size  . . . . . . . . . :  3096.000M
> >   Maximum heap size  . . . . . . . . . :  5120.000M
> >   Current heap size  . . . . . . . . . :  4458.562M
> >   Heap in use  . . . . . . . . . . . . :  1907.673M
>

If I am reading it correctly, actual heap usage was lower than the initial
heap size. The JVM still has ~3GB of space left for its use. I don't see any
reason for the JVM to do frequent GCs unless the heap is fragmented and
unusable to the JVM.

> Other memory:
> >   Internal (break) memory size . . . . :   504.982M
> >   JIT memory size  . . . . . . . . . . :74.000M
> >   Shared classes memory size . . . . . : 0.000M
> > General GC information:
> >   Current GC cycle . . . . . . . . . . :   2184
> >   GC policy type . . . . . . . . . . . : GENCON
>

Is this a J9 JVM? HotSpot VMs do not have that GC policy.

>   Current GC cycle time  . . . . . . . :552

>   Accumulated GC time  . . . . . . . . :5108241
>

Are those values in milliseconds or seconds? If in ms, the average collection
time is about 2.3 seconds (5108241/2184), which is a bit high. But I don't
know anything about the application. Also, I don't know if there were sudden
spikes which might have skewed the average.

>
> It seems to be doing a lot of garbage-collecting.
>
> Would switching to Java 8 help? Would switching to 7.0.93 help?
>

I suggest analyzing the garbage collection logs thoroughly before making such
decisions. You can share them with us. Upgrading to Java 8 takes some effort,
as PermGen was removed and Metaspace, which lives in native memory, was
introduced. So some testing needs to be done to make sure the app is not
leaking in Metaspace.
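
(If you do go that route, one hedge while testing, assuming a HotSpot-based
Java 8, is to cap it with something like -XX:MaxMetaspaceSize=256m so a
native-memory leak surfaces as an OutOfMemoryError instead of growing
silently; the 256m figure is just an example value.)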

>
> --
> James H. H. Lampert
>
>
>


Re: JVM job for Tomcat taking lots and lots of CPU

2020-02-12 Thread Niranjan Babu Bommu
Have you turned on the codecache? If so, can you monitor the codecache?
There is a known issue with the codecache in Java 7: when the codecache
fills up, the compiler may not get restarted even after codecache
occupancy drops back down to half following the emergency flushing, and
this can cause high CPU usage by the compiler threads.
If all of the above holds true, here are the solutions:
1. increase the codecache, maybe to 128MB (see the example below)
2. upgrade the JDK to 8
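
For example (these are HotSpot option names, so treat them as assumptions to
verify; the AS/400 ships IBM's J9 JVM rather than HotSpot):

-XX:ReservedCodeCacheSize=128m
-XX:+PrintCodeCache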


On Tue, Feb 11, 2020 at 7:40 PM James H. H. Lampert <
jam...@touchtonecorp.com> wrote:

> Ladies and Gentlemen:
>
> We have a customer installation in which the JVM job for our Tomcat
> server is frequently using massive amounts of CPU.
>
> It's Tomcat 7.0.67, running on an AS/400, in a 64-bit Java 7 JVM, with
> -Xms3096m and -Xmx5120m JVM arguments.
>
> GC information on the JVM job shows:
> > Garbage collected heap:
> >   Initial heap size  . . . . . . . . . :  3096.000M
> >   Maximum heap size  . . . . . . . . . :  5120.000M
> >   Current heap size  . . . . . . . . . :  4458.562M
> >   Heap in use  . . . . . . . . . . . . :  1907.673M
> > Other memory:
> >   Internal (break) memory size . . . . :   504.982M
> >   JIT memory size  . . . . . . . . . . :74.000M
> >   Shared classes memory size . . . . . : 0.000M
> > General GC information:
> >   Current GC cycle . . . . . . . . . . :   2184
> >   GC policy type . . . . . . . . . . . : GENCON
> >   Current GC cycle time  . . . . . . . :552
> >   Accumulated GC time  . . . . . . . . :5108241
>
> It seems to be doing a lot of garbage-collecting.
>
> Would switching to Java 8 help? Would switching to 7.0.93 help?
>
> --
> James H. H. Lampert
>
>
>

-- 
*Thanks*
*Niranjan*


Re: JVM job for Tomcat taking lots and lots of CPU

2020-02-11 Thread calder
On Tue, Feb 11, 2020, 18:40 James H. H. Lampert 
wrote:

> Ladies and Gentlemen:
>
> We have a customer installation in which the JVM job for our Tomcat
> server is frequently using massive amounts of CPU.
>
> It's Tomcat 7.0.67, running on an AS/400, in a 64-bit Java 7 JVM, with
> -Xms3096m and -Xmx5120m JVM arguments.
>
> GC information on the JVM job shows:
> > Garbage collected heap:
> >   Initial heap size  . . . . . . . . . :  3096.000M
> >   Maximum heap size  . . . . . . . . . :  5120.000M
> >   Current heap size  . . . . . . . . . :  4458.562M
> >   Heap in use  . . . . . . . . . . . . :  1907.673M
> > Other memory:
> >   Internal (break) memory size . . . . :   504.982M
> >   JIT memory size  . . . . . . . . . . :74.000M
> >   Shared classes memory size . . . . . : 0.000M
> > General GC information:
> >   Current GC cycle . . . . . . . . . . :   2184
> >   GC policy type . . . . . . . . . . . : GENCON
> >   Current GC cycle time  . . . . . . . :552
> >   Accumulated GC time  . . . . . . . . :5108241
>
> It seems to be doing a lot of garbage-collecting.
>
> Would switching to Java 8 help? Would switching to 7.0.93 help?


Well, End of Public Updates for Java 7 was Apr 2015 (Extended support to
2022).  EOPU for Java 8 was Jan 2019 (Ext support to 2025).

Is this a new client install?  Or has the app been running with no issue
until some time recently?  Does the issue surface with one user; or 20
users;  or 100 users?  How many instances of Tomcat? (on the same machine?)

Have y'all run a profiler (JVisualVM or Mission Control or YourKit, etc) to
narrow down where in the app it's spiking  and to gather more granular GC
stats?

In my experience, these types of issues are usually not Tomcat; it's the
app, or something's misconfigured.


JVM job for Tomcat taking lots and lots of CPU

2020-02-11 Thread James H. H. Lampert

Ladies and Gentlemen:

We have a customer installation in which the JVM job for our Tomcat 
server is frequently using massive amounts of CPU.


It's Tomcat 7.0.67, running on an AS/400, in a 64-bit Java 7 JVM, with 
-Xms3096m and -Xmx5120m JVM arguments.


GC information on the JVM job shows:
Garbage collected heap:
  Initial heap size  . . . . . . . . . :  3096.000M
  Maximum heap size  . . . . . . . . . :  5120.000M
  Current heap size  . . . . . . . . . :  4458.562M
  Heap in use  . . . . . . . . . . . . :  1907.673M
Other memory:
  Internal (break) memory size . . . . :   504.982M
  JIT memory size  . . . . . . . . . . :74.000M
  Shared classes memory size . . . . . : 0.000M
General GC information:
  Current GC cycle . . . . . . . . . . :   2184
  GC policy type . . . . . . . . . . . : GENCON
  Current GC cycle time  . . . . . . . :552
  Accumulated GC time  . . . . . . . . :5108241


It seems to be doing a lot of garbage-collecting.

Would switching to Java 8 help? Would switching to 7.0.93 help?

--
James H. H. Lampert
