Re: suspected memory leak in io.netty.buffer.PoolChunk

2020-04-29 Thread Mark Thomas
On 29/04/2020 14:23, Ragavendhiran Bhiman (rabhiman) wrote:
> This question is from 8.5.29 version.
> 
> On 29/04/20, 6:52 PM, "Ragavendhiran Bhiman (rabhiman)" 
>  wrote:
> 
> Hi,
> 
>     I am seeing a continuous memory leak from io.netty.buffer.PoolChunk.
> Kindly advise how to go ahead and trace the problem?

Use a profiler.

Where is this in the Tomcat code base?

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: suspected memory leak in io.netty.buffer.PoolChunk

2020-04-29 Thread Ragavendhiran Bhiman (rabhiman)
This question is from 8.5.29 version.

On 29/04/20, 6:52 PM, "Ragavendhiran Bhiman (rabhiman)" 
 wrote:

Hi,

I am seeing a continuous memory leak from io.netty.buffer.PoolChunk. Kindly
advise how to go ahead and trace the problem?

Thanks a lot.

Regards,
Raghavendran
+91 8220757651.



suspected memory leak in io.netty.buffer.PoolChunk

2020-04-29 Thread Ragavendhiran Bhiman (rabhiman)
Hi,

I am seeing a continuous memory leak from io.netty.buffer.PoolChunk. Kindly
advise how to go ahead and trace the problem?

Thanks a lot.

Regards,
Raghavendran
+91 8220757651.


Re: JNI memory leak?

2020-04-26 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Mark,

On 4/24/20 17:46, Mark Boon wrote:
> Thanks Chris for taking the time.
>
> As you point out, from the threads I can tell we're not using APR,
> as the names all start with "jsse". As far as I could find out,
> BouncyCastle is a pure Java implementation, so that also can't be the
> cause.
>
> Someone suggested PAMLibrary may be the culprit. So I started a
> thread that makes continuous auth calls to the PAM library. Now there
> does seem to be an indication memory is leaking very, very slowly. It
> seems to be roughly in line with the number of auth failures. It looks
> like PAM throttles auth failures though, hence it's taking such a long
> time for the evidence to mount.
>
> So nothing to see here for this group. Just wanted to give a heads
> up.

Actually, this is great feedback, even if it means there is no action
to take by the Tomcat team. 312 bytes leaked per allocation is enough
to be dangerous, but not so large a problem that it is easy to discover.

Identifying that it is an authentication library that isn't commonly
used with Java (or maybe it is?!) at least puts it into the archives
of this list in case someone else is having a similar problem.

Thanks,
- -chris

> On 4/6/20, 12:12 PM, "Christopher Schultz"
 wrote:
>
> Mark,
>
> On 4/3/20 21:48, Mark Boon wrote:
>> For the past few months we’ve been trying to trace what looks
>> like gradual memory creep. After some long-running experiments it
>> seems due to memory leaking when jni_invoke_static(JNIEnv_*,
>> JavaValue*, _jobject*, JNICallType, _jmethodID*,
>> JNI_ArgumentPusher*, Thread*) is invoked. Somewhere.
>
>> My environment is Tomcat running a proxy webapp. It does TLS
>> termination,  authentication and then forwards the call to local
>> services. It doesn’t do much else, it’s a relatively small
>> application.
>
>> Some (possibly relevant) versions and config parameters: Tomcat
>> 8.5 Java 8u241 (Oracle) Heap size = 360Mb MAX_ALLOC_ARENA=2
>> MALLOC_TRIM_THRESHOLD_=250048 jdk.nio.maxCachedBufferSize=25600
>
>> We couldn’t find any proof of memory leaking on the Java side.
>> When we turn on NativeMemoryTracking=detail and we take a
>> snapshot shortly after starting, we see (just one block shown):
>
>> [0x03530e462f9a]
>> JNIHandleBlock::allocate_block(Thread*)+0xaa [0x03530e3f759a]
>> JavaCallWrapper::JavaCallWrapper(methodHandle, Handle,
>> JavaValue*, Thread*)+0x6a [0x03530e3fa000]
>> JavaCalls::call_helper(JavaValue*, methodHandle*,
>> JavaCallArguments*, Thread*)+0x8f0 [0x03530e4454a1]
>> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType,
>> _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96]
>> [clone .constprop.117]+0x1e1 (malloc=33783KB type=Internal
>> #110876)
>
>> Then we run it under heavy load for a few weeks and take another
>> snapshot:
>
>> [0x03530e462f9a]
>> JNIHandleBlock::allocate_block(Thread*)+0xaa [0x03530e3f759a]
>> JavaCallWrapper::JavaCallWrapper(methodHandle, Handle,
>> JavaValue*, Thread*)+0x6a [0x03530e3fa000]
>> JavaCalls::call_helper(JavaValue*, methodHandle*,
>> JavaCallArguments*, Thread*)+0x8f0 [0x03530e4454a1]
>> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType,
>> _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96]
>> [clone .constprop.117]+0x1e1 (malloc=726749KB type=Internal
>> #2385226)
>
>> While other blocks also show some variation, none show growth
>> like this one. When I do some math on the number (726749KB -
>> 33783KB) / (2385226 – 110876) it comes down to a pretty even 312
>> bytes per allocation. And we leaked just under 700Mb. While not
>> immediately problematic, this does not bode well for our
>> customers who run this service for months.
>
>> I’d like to avoid telling them they need to restart this service
>> every two weeks to reclaim memory. Has anyone seen something
>> like this? Any way it could be avoided?
>
> That was some very good sleuthing on your part. 312 bytes per
> allocation will indeed be very difficult to detect unless you are
> really looking hard for it.
>
> On 4/4/20 13:02, Mark Boon wrote:
>> The connector of the webapp uses Http11NioProtocol. My
>> understanding is it uses direct-byte-buffers backed by native
>> memory for the Nio channels. I don't know for sure if that gets
>> allocated through a JNI call, but that was my assumption.
>
> This will definitely use Tomcat's NIO protocol which doesn't use
> the APR connector. However, you still might be using tcnative to
> get the crypto engine. Can you confirm the thread-naming convention
> of your request-processing threads? They will tell you if JSSE or
> OpenSSL (tcnative) is being used.
>
> A few data points:
>
> * No Tomcat code directly invokes jni_invoke_static(), but it might
> do so indirectly through a variety of means.
>
> * NIO does use buffers, but those buffers tend to be (a) fairly
> large --  on the order of kilobytes -- and (b) re-used for the life
> of the request-processor 

Re: JNI memory leak?

2020-04-24 Thread Mark Boon
Thanks Chris for taking the time.

As you point out, from the threads I can tell we're not using APR, as the names 
all start with "jsse". As far as I could find out, BouncyCastle is a pure Java 
implementation, so that also can't be the cause.

Someone suggested PAMLibrary may be the culprit. So I started a thread that 
makes continuous auth calls to the PAM library. Now there does seem to be an 
indication memory is leaking very, very slowly. It seems to be roughly in line 
with the number of auth failures. It looks like PAM throttles auth failures 
though, hence it's taking such a long time for the evidence to mount.
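
(As an aside for the archives: a minimal sketch of such a continuous-auth-failure 
loop, assuming the libpam4j binding -- the actual PAM wrapper in use here may 
differ:)

    // Sketch only; assumes libpam4j (org.jvnet.libpam.PAM), which may not be
    // the PAM binding actually in use.
    Thread pamLoop = new Thread(() -> {
        while (!Thread.currentThread().isInterrupted()) {
            org.jvnet.libpam.PAM pam = null;
            try {
                pam = new org.jvnet.libpam.PAM("login");         // PAM service name is illustrative
                pam.authenticate("testuser", "wrong-password");  // deliberate auth failure
            } catch (org.jvnet.libpam.PAMException expected) {
                // failures are the point; we only watch native memory growth
            } finally {
                if (pam != null) {
                    pam.dispose();                               // release the native PAM handle
                }
            }
        }
    });
    pamLoop.setDaemon(true);
    pamLoop.start();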

So nothing to see here for this group. Just wanted to give a heads up.

Mark


On 4/6/20, 12:12 PM, "Christopher Schultz"  
wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Mark,

On 4/3/20 21:48, Mark Boon wrote:
> For the past few months we’ve been trying to trace what looks like
> gradual memory creep. After some long-running experiments it seems
> due to memory leaking when jni_invoke_static(JNIEnv_*, JavaValue*,
> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)
> is invoked. Somewhere.
>
> My environment is Tomcat running a proxy webapp. It does TLS
> termination,  authentication and then forwards the call to local
> services. It doesn’t do much else, it’s a relatively small
> application.
>
> Some (possibly relevant) versions and config parameters: Tomcat
> 8.5 Java 8u241 (Oracle) Heap size = 360Mb MAX_ALLOC_ARENA=2
> MALLOC_TRIM_THRESHOLD_=250048 jdk.nio.maxCachedBufferSize=25600
>
> We couldn’t find any proof of memory leaking on the Java side. When
> we turn on NativeMemoryTracking=detail and we take a snapshot
> shortly after starting, we see (just one block shown):
>
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
> Handle, JavaValue*, Thread*)+0x6a [0x03530e3fa000]
> JavaCalls::call_helper(JavaValue*, methodHandle*,
> JavaCallArguments*, Thread*)+0x8f0 [0x03530e4454a1]
> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType,
> _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96] [clone
> .constprop.117]+0x1e1 (malloc=33783KB type=Internal #110876)
>
> Then we run it under heavy load for a few weeks and take another
> snapshot:
>
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
> Handle, JavaValue*, Thread*)+0x6a [0x03530e3fa000]
> JavaCalls::call_helper(JavaValue*, methodHandle*,
> JavaCallArguments*, Thread*)+0x8f0 [0x03530e4454a1]
> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType,
> _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96] [clone
> .constprop.117]+0x1e1 (malloc=726749KB type=Internal #2385226)
>
> While other blocks also show some variation, none show growth like
> this one. When I do some math on the number (726749KB - 33783KB) /
> (2385226 – 110876) it comes down to a pretty even 312 bytes per
> allocation. And we leaked just under 700Mb. While not immediately
> problematic, this does not bode well for our customers who run this
> service for months.
>
> I’d like to avoid telling them they need to restart this service
> every two weeks to reclaim memory. Has anyone seen something like
> this? Any way it could be avoided?

That was some very good sleuthing on your part. 312 bytes per
allocation will indeed be very difficult to detect unless you are
really looking hard for it.

On 4/4/20 13:02, Mark Boon wrote:
> The connector of the webapp uses Http11NioProtocol. My
> understanding is it uses direct-byte-buffers backed by native
> memory for the Nio channels. I don't know for sure if that gets
> allocated through a JNI call, but that was my assumption.

This will definitely use Tomcat's NIO protocol which doesn't use the
APR connector. However, you still might be using tcnative to get the
crypto engine. Can you confirm the thread-naming convention of your
request-processing threads? They will tell you if JSSE or OpenSSL
(tcnative) is being used.

A few data points:

* No Tomcat code directly invokes jni_invoke_static(), but it might do
so indirectly through a variety of means.

* NIO does use buffers, but those buffers tend to be (a) fairly large
- --  on the order of kilobytes -- and (b) re-used for the life of the
request-processor thread.

It is very possible that there is a very small leak in Tomcat's
handling of NIO buffers. I think it's equally likely that there is a
bug in the JVM itself.

Are you able to try different JVM versions in your test? I would
recommend major-version changes, 

Re: JNI memory leak?

2020-04-06 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Mark,

On 4/3/20 21:48, Mark Boon wrote:
> For the past few months we’ve been trying to trace what looks like
> gradual memory creep. After some long-running experiments it seems
> due to memory leaking when jni_invoke_static(JNIEnv_*, JavaValue*,
> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)
> is invoked. Somewhere.
>
> My environment is Tomcat running a proxy webapp. It does TLS
> termination,  authentication and then forwards the call to local
> services. It doesn’t do much else, it’s a relatively small
> application.
>
> Some (possibly relevant) versions and config parameters: Tomcat
> 8.5 Java 8u241 (Oracle) Heap size = 360Mb MAX_ALLOC_ARENA=2
> MALLOC_TRIM_THRESHOLD_=250048 jdk.nio.maxCachedBufferSize=25600
>
> We couldn’t find any proof of memory leaking on the Java side. When
> we turn on NativeMemoryTracking=detail and we take a snapshot
> shortly after starting, we see (just one block shown):
>
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
> Handle, JavaValue*, Thread*)+0x6a [0x03530e3fa000]
> JavaCalls::call_helper(JavaValue*, methodHandle*,
> JavaCallArguments*, Thread*)+0x8f0 [0x03530e4454a1]
> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType,
> _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96] [clone
> .constprop.117]+0x1e1 (malloc=33783KB type=Internal #110876)
>
> Then we run it under heavy load for a few weeks and take another
> snapshot:
>
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
> Handle, JavaValue*, Thread*)+0x6a [0x03530e3fa000]
> JavaCalls::call_helper(JavaValue*, methodHandle*,
> JavaCallArguments*, Thread*)+0x8f0 [0x03530e4454a1]
> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType,
> _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96] [clone
> .constprop.117]+0x1e1 (malloc=726749KB type=Internal #2385226)
>
> While other blocks also show some variation, none show growth like
> this one. When I do some math on the number (726749KB - 33783KB) /
> (2385226 – 110876) it comes down to a pretty even 312 bytes per
> allocation. And we leaked just under 700Mb. While not immediately
> problematic, this does not bode well for our customers who run this
> service for months.
>
> I’d like to avoid telling them they need to restart this service
> every two weeks to reclaim memory. Has anyone seen something like
> this? Any way it could be avoided?

That was some very good sleuthing on your part. 312 bytes per
allocation will indeed be very difficult to detect unless you are
really looking hard for it.

On 4/4/20 13:02, Mark Boon wrote:
> The connector of the webapp uses Http11NioProtocol. My
> understanding is it uses direct-byte-buffers backed by native
> memory for the Nio channels. I don't know for sure if that gets
> allocated through a JNI call, but that was my assumption.

This will definitely use Tomcat's NIO protocol which doesn't use the
APR connector. However, you still might be using tcnative to get the
crypto engine. Can you confirm the thread-naming convention of your
request-processing threads? They will tell you if JSSE or OpenSSL
(tcnative) is being used.
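
A quick way to check (a sketch; the PID placeholder and port are illustrative,
but the "jsse"/"openssl" segments match Tomcat's connector naming):

    jcmd <tomcat-pid> Thread.print | grep -oE 'https?-[a-z0-9-]+-exec-[0-9]+' | sort -u
    #   https-jsse-nio-8443-exec-1     -> JSSE crypto engine
    #   https-openssl-nio-8443-exec-1  -> OpenSSL via tcnative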

A few data points:

* No Tomcat code directly invokes jni_invoke_static(), but it might do
so indirectly through a variety of means.

* NIO does use buffers, but those buffers tend to be (a) fairly large
- --  on the order of kilobytes -- and (b) re-used for the life of the
request-processor thread.

It is very possible that there is a very small leak in Tomcat's
handling of NIO buffers. I think it's equally likely that there is a
bug in the JVM itself.

Are you able to try different JVM versions in your test? I would
recommend major-version changes, here. I thought I read somewhere that
Oracle re-wrote the implementation of the NIO API in a somewhat recent
Java release (Java 9?), but I can't seem to find that reference now.

Are you able to try:

- - Java 8
- - Java 9/10/11/12
- - Java 13

- -chris

PS This bug report may be relevant:
https://bugs.openjdk.java.net/browse/JDK-8190395

The bug report says it's closed/incomplete, but they do mention a
312-byte leak with certain invocations.

Re: JNI memory leak?

2020-04-06 Thread calder
> On Sat, Apr 4, 2020 at 10:39 AM Thomas Meyer  wrote:
> > On 4 April 2020 14:53:17 CEST, calder wrote:

[ snip ]
> >So, ultimately, I'm confused why we think Tomcat is "to blame" as
> >there is no evidence it uses JNI.
> >It's my experience JNI memory issues are related to the Java JNI or
> >proprietary native code.
>

> I think jni is used via apr in tomcat.
>
> Do you use apr http connector?

Thomas - thanks for correcting my oversight - I obviously wasn't
thinking about the Native Library.

user@stimpy:~/Desktop/tomcat-source/tomcat-native-1.2.23-src> find .
-name "*jni*" -ls
818614714  0 drwxr-xr-x   2 user  users 138 Jun 26  2019
./examples/org/apache/tomcat/jni
544916739  8 -rwxr-xr-x   1 user  users7639 Jun 26  2019
./jnirelease.sh
21107212 12 -rw-r--r--   1 user  users   11352 Jun 26  2019
./native/src/jnilib.c
812313638  0 drwxr-xr-x   2 user  users 150 Jun 26  2019
./test/org/apache/tomcat/jni
25339941  4 drwxr-xr-x   2 user  users4096 Jun 26  2019
./java/org/apache/tomcat/jni

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: JNI memory leak?

2020-04-05 Thread calder
On Sat, Apr 4, 2020, 12:02 Mark Boon  wrote:

> I don't have 'proof' Tomcat is to blame. Hence the question mark. All I
> have managed is to narrow it down to this NMT data, which is not very
> informative. I hoped someone could give me an idea of how or where to
> investigate further. Or if someone had run into this before.
>
> The connector of the webapp uses Http11NioProtocol. My understanding is it
> uses direct-byte-buffers backed by native memory for the Nio channels. I
> don't know for sure if that gets allocated through a JNI call, but that was
> my assumption.
>
> I did not consider trying Mission Control or jvisualvm. Isn't Mission
> Control for embedded Java? And AFAIK, jvisualvm is for profiling Java
> memory usage and underneath uses tools like jmap, jstat and jcmd. Through
> GC logs and jmap heap-dumps I can confidently say there's no memory leak on
> the Java side. The NMT data shown comes from jcmd. No type grows beyond
> control and full GC always returns to the same baseline for the heap.
> Anyway, the Java heap is only 360Mb and this memory-block created by
> jni_invoke_static has grown to 700Mb by itself. And I see no out-of-memory
> messages. The only hint of this happening is that the RES memory of the
> Tomcat process keeps growing over time, as shown by 'top'. And it seems GC
> is getting slower over time, but the customers haven't noticed it yet.
> (This is after we switched to ParallelGC. We did see considerable slow-down
> when using G1GC in the ref-processing, but we couldn't figure out why. It
> would slow to a crawl before the memory leak became obvious.)
>
> Anyway, I was mostly fishing for hints or tips that could help me figure
> this out or avoid it.
>
> The application is simple to the point I'm hard-pressed to think of any
> other part making JNI calls. The only library I can think of using JNI is
> BouncyCastle doing the SSL encryption/decryption, so maybe I'll switch my
> focus there.
>

Something else to consider - we should keep in mind that a JVM is loaded
for the native code, but won't be obvious in a process table  : )


[OT] Re: JNI memory leak?

2020-04-04 Thread Mark Thomas
On April 4, 2020 7:26:05 PM UTC, calder  wrote:
>m
>
>On Sat, Apr 4, 2020, 14:14 Frank Tornack  wrote:
>
>> Good evening,
>> I have a question about your e-mail address. Why does the address end
>> in com.INVALID? How do you get such an address?
>>
>
>That question is off topic.

Subject line adjusted accordingly.

>The invalid is to avoid spam email

No it isn't. And to side-track for a moment, it is very unhelpful to state 
something as a fact when it is, at best, an educated guess. Especially when, as 
in this case, that guess is wrong. Guesses can be acceptable responses to 
questions on this list, but it must be made clear to readers that it is a guess.

The .INVALID suffix is added by the ASF mailing list software (strictly, a custom 
extension written by the ASF does this) when the originator posts from a domain 
that has a strict SPF record. If the ASF didn't do this, recipients that check 
SPF records would reject the mail, because the originator's domain does not list 
the ASF mail servers as permitted senders.

In short, .INVALID is added to make sure the message is received by all 
subscribers.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: JNI memory leak?

2020-04-04 Thread calder
m

On Sat, Apr 4, 2020, 14:14 Frank Tornack  wrote:

> Good evening,
> I have a question about your e-mail address. Why does the address end
> in com.INVALID? How do you get such an address?
>

That question is off topic.

The invalid is to avoid spam email


Re: JNI memory leak?

2020-04-04 Thread Frank Tornack
Good evening,
I have a question about your e-mail address. Why does the address end
in com.INVALID? How do you get such an address?

Sorry for the interposed question,

On Saturday, 04.04.2020, at 01:48 +0000, Mark Boon wrote:
> For the past few months we’ve been trying to trace what looks like
> gradual memory creep. After some long-running experiments it seems
> due to memory leaking when
> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType,
> _jmethodID*, JNI_ArgumentPusher*, Thread*) is invoked. Somewhere.
> 
> My environment is Tomcat running a proxy webapp. It does TLS
> termination,  authentication and then forwards the call to local
> services. It doesn’t do much else, it’s a relatively small
> application.
> 
> Some (possibly relevant) versions and config parameters:
> Tomcat 8.5
> Java 8u241 (Oracle)
> Heap size = 360Mb
> MAX_ALLOC_ARENA=2
> MALLOC_TRIM_THRESHOLD_=250048
> jdk.nio.maxCachedBufferSize=25600
> 
> We couldn’t find any proof of memory leaking on the Java side.
> When we turn on NativeMemoryTracking=detail and we take a snapshot
> shortly after starting, we see (just one block shown):
> 
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
> Handle, JavaValue*, Thread*)+0x6a
> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*,
> methodHandle*, JavaCallArguments*, Thread*)+0x8f0
> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*,
> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)
> [clone .isra.96] [clone .constprop.117]+0x1e1
>  (malloc=33783KB type=Internal #110876)
> 
> Then we run it under heavy load for a few weeks and take another
> snapshot:
> 
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
> Handle, JavaValue*, Thread*)+0x6a
> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*,
> methodHandle*, JavaCallArguments*, Thread*)+0x8f0
> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*,
> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)
> [clone .isra.96] [clone .constprop.117]+0x1e1
>  (malloc=726749KB type=Internal #2385226)
> 
> While other blocks also show some variation, none show growth like
> this one. When I do some math on the number (726749KB - 33783KB) /
> (2385226 – 110876) it comes down to a pretty even 312 bytes per
> allocation.
> And we leaked just under 700Mb. While not immediately problematic,
> this does not bode well for our customers who run this service for
> months.
> 
> I’d like to avoid telling them they need to restart this service
> every two weeks to reclaim memory. Has anyone seen something like
> this? Any way it could be avoided?
> 
> Mark Boon
> 
> 
> 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: JNI memory leak?

2020-04-04 Thread Mark Boon
I don't have 'proof' Tomcat is to blame. Hence the question mark. All I have 
managed is to narrow it down to this NMT data, which is not very informative. I 
hoped someone could give me an idea of how or where to investigate further. Or if 
someone had run into this before.

The connector of the webapp uses Http11NioProtocol. My understanding is it uses 
direct-byte-buffers backed by native memory for the Nio channels. I don't know 
for sure if that gets allocated through a JNI call, but that was my assumption.

I did not consider trying Mission Control or jvisualvm. Isn't Mission Control 
for embedded Java? And AFAIK, jvisualvm is for profiling Java memory usage and 
underneath uses tools like jmap, jstat and jcmd. Through GC logs and jmap 
heap-dumps I can confidently say there's no memory leak on the Java side. The 
NMT data shown comes from jcmd. No type grows beyond control and full GC always 
returns to the same baseline for the heap. Anyway, the Java heap is only 360Mb 
and this memory-block created by jni_invoke_static has grown to 700Mb by 
itself. And I see no out-of-memory messages. The only hint of this happening is 
that the RES memory of the Tomcat process keeps growing over time, as shown by 
'top'. And it seems GC is getting slower over time, but the customers haven't 
noticed it yet. (This is after we switched to ParallelGC. We did see 
considerable slow-down when using G1GC in the ref-processing, but we couldn't 
figure out why. It would slow to a crawl before the memory leak became obvious.)

Anyway, I was mostly fishing for hints or tips that could help me figure this 
out or avoid it.

The application is simple to the point I'm hard-pressed to think of any other 
part making JNI calls. The only library I can think of using JNI is 
BouncyCastle doing the SSL encryption/decryption, so maybe I'll switch my focus 
there.

Thanks for taking the time to think along.

Mark
  
On 4/4/20, 5:50 AM, "calder"  wrote:

On Fri, Apr 3, 2020 at 8:48 PM Mark Boon  wrote:
>
> For the past few months we’ve been trying to trace what looks like 
gradual memory creep. After some long-running experiments it seems due to 
memory leaking when
> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType, 
_jmethodID*, JNI_ArgumentPusher*, Thread*) is invoked. Somewhere.
>
> My environment is Tomcat running a proxy webapp. It does TLS termination, 
 authentication and then forwards the call to local services. It doesn’t do 
much else, it’s a relatively small application.
>
> Some (possibly relevant) versions and config parameters:
> Tomcat 8.5
> Java 8u241 (Oracle)
> Heap size = 360Mb
> MAX_ALLOC_ARENA=2
> MALLOC_TRIM_THRESHOLD_=250048
> jdk.nio.maxCachedBufferSize=25600
>
> We couldn’t find any proof of memory leaking on the Java side.
> When we turn on NativeMemoryTracking=detail and we take a snapshot 
shortly after starting, we see (just one block shown):
>
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle, 
Handle, JavaValue*, Thread*)+0x6a
> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*, methodHandle*, 
JavaCallArguments*, Thread*)+0x8f0
> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96] [clone 
.constprop.117]+0x1e1
>  (malloc=33783KB type=Internal #110876)
>
> Then we run it under heavy load for a few weeks and take another snapshot:
>
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle, 
Handle, JavaValue*, Thread*)+0x6a
> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*, methodHandle*, 
JavaCallArguments*, Thread*)+0x8f0
> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96] [clone 
.constprop.117]+0x1e1
>  (malloc=726749KB type=Internal #2385226)
>
> While other blocks also show some variation, none show growth like this 
one. When I do some math on the number (726749KB - 33783KB) / (2385226 – 
110876) it comes down to a pretty even 312 bytes per allocation.
> And we leaked just under 700Mb. While not immediately problematic, this 
does not bode well for our customers who run this service for months.
>
> I’d like to avoid telling them they need to restart this service every 
two weeks to reclaim memory. Has anyone seen something like this? Any way it 
could be avoided?

I'm a bit confused. Your stated title is "JNI Memory Leak?"
Tomcat, to my intimate knowledge, does not use JNI (correct m

Re: JNI memory leak?

2020-04-04 Thread Thomas Meyer
On 4 April 2020 14:53:17 CEST, calder wrote:
>On Fri, Apr 3, 2020 at 8:48 PM Mark Boon 
>wrote:
>>
>> For the past few months we’ve been trying to trace what looks like
>gradual memory creep. After some long-running experiments it seems due
>to memory leaking when
>> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType,
>_jmethodID*, JNI_ArgumentPusher*, Thread*) is invoked. Somewhere.
>>
>> My environment is Tomcat running a proxy webapp. It does TLS
>termination,  authentication and then forwards the call to local
>services. It doesn’t do much else, it’s a relatively small application.
>>
>> Some (possibly relevant) versions and config parameters:
>> Tomcat 8.5
>> Java 8u241 (Oracle)
>> Heap size = 360Mb
>> MAX_ALLOC_ARENA=2
>> MALLOC_TRIM_THRESHOLD_=250048
>> jdk.nio.maxCachedBufferSize=25600
>>
>> We couldn’t find any proof of memory leaking on the Java side.
>> When we turn on NativeMemoryTracking=detail and we take a snapshot
>shortly after starting, we see (just one block shown):
>>
>> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
>> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
>Handle, JavaValue*, Thread*)+0x6a
>> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*,
>methodHandle*, JavaCallArguments*, Thread*)+0x8f0
>> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*,
>_jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)
>[clone .isra.96] [clone .constprop.117]+0x1e1
>>  (malloc=33783KB type=Internal #110876)
>>
>> Then we run it under heavy load for a few weeks and take another
>snapshot:
>>
>> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
>> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle,
>Handle, JavaValue*, Thread*)+0x6a
>> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*,
>methodHandle*, JavaCallArguments*, Thread*)+0x8f0
>> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*,
>_jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)
>[clone .isra.96] [clone .constprop.117]+0x1e1
>>  (malloc=726749KB type=Internal #2385226)
>>
>> While other blocks also show some variation, none show growth like
>this one. When I do some math on the number (726749KB - 33783KB) /
>(2385226 – 110876) it comes down to a pretty even 312 bytes per
>allocation.
>> And we leaked just under 700Mb. While not immediately problematic,
>this does not bode well for our customers who run this service for
>months.
>>
>> I’d like to avoid telling them they need to restart this service
>every two weeks to reclaim memory. Has anyone seen something like this?
>Any way it could be avoided?
>
>I'm a bit confused. Your stated title is "JNI Memory Leak?"
>Tomcat, to my intimate knowledge, does not use JNI (correct me if I'm
>wrong)
>( quick check
> user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
>-name *.c -ls
> user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
>-name *.cpp -ls
> user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
>-name *.asm -ls
> user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
>-name *.pas -ls
>}
>
>a) for the "snapshots" provided, there is NO reference to their
>association, ie, "what" code are those related to?
>b) could you run Mission Control or jvisualvm to locate a stack trace
>for this?
>
>We have two apps that use JNI and run via Tomcat (and another app
>server) - one is "so old" that it is limited to 32-bit . the one
>memory leak we have encountered was related to the "native side" (for
>us, the native-compiled Pascal side of things (we also use Assembly
>code) via Java's JNI code).
>
>So, ultimately, I'm confused why we think Tomcat is "to blame" as
>there is no evidence it uses JNI.
>It's my experience JNI memory issues are related to the Java JNI or
>proprietary native code.
>
>-
>To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
>For additional commands, e-mail: users-h...@tomcat.apache.org

Hi,

I think jni is used via apr in tomcat.

Do you use apr http connector?
-- 
This message was sent from my Android device with K-9 Mail.

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: JNI memory leak?

2020-04-04 Thread calder
On Fri, Apr 3, 2020 at 8:48 PM Mark Boon  wrote:
>
> For the past few months we’ve been trying to trace what looks like gradual 
> memory creep. After some long-running experiments it seems due to memory 
> leaking when
> jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, 
> JNI_ArgumentPusher*, Thread*) is invoked. Somewhere.
>
> My environment is Tomcat running a proxy webapp. It does TLS termination,  
> authentication and then forwards the call to local services. It doesn’t do 
> much else, it’s a relatively small application.
>
> Some (possibly relevant) versions and config parameters:
> Tomcat 8.5
> Java 8u241 (Oracle)
> Heap size = 360Mb
> MAX_ALLOC_ARENA=2
> MALLOC_TRIM_THRESHOLD_=250048
> jdk.nio.maxCachedBufferSize=25600
>
> We couldn’t find any proof of memory leaking on the Java side.
> When we turn on NativeMemoryTracking=detail and we take a snapshot shortly 
> after starting, we see (just one block shown):
>
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle, Handle, 
> JavaValue*, Thread*)+0x6a
> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x8f0
> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
> JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96] 
> [clone .constprop.117]+0x1e1
>  (malloc=33783KB type=Internal #110876)
>
> Then we run it under heavy load for a few weeks and take another snapshot:
>
> [0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
> [0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle, Handle, 
> JavaValue*, Thread*)+0x6a
> [0x03530e3fa000] JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x8f0
> [0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
> JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96] 
> [clone .constprop.117]+0x1e1
>  (malloc=726749KB type=Internal #2385226)
>
> While other blocks also show some variation, none show growth like this one. 
> When I do some math on the number (726749KB - 33783KB) / (2385226 – 110876) 
> it comes down to a pretty even 312 bytes per allocation.
> And we leaked just under 700Mb. While not immediately problematic, this does 
> not bode well for our customers who run this service for months.
>
> I’d like to avoid telling them they need to restart this service every two 
> weeks to reclaim memory. Has anyone seen something like this? Any way it 
> could be avoided?

I'm a bit confused. Your stated title is "JNI Memory Leak?"
Tomcat, to my intimate knowledge, does not use JNI (correct me if I'm wrong)
( quick check
 user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
-name *.c -ls
 user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
-name *.cpp -ls
 user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
-name *.asm -ls
 user@stimpy:~/Desktop/tomcat-source/apache-tomcat-8.5.53-src> find .
-name *.pas -ls
}

a) for the "snapshots" provided, there is NO reference to their
association, ie, "what" code are those related to?
b) could you run Mission Control or jvisualvm to locate a stack trace for this?

We have two apps that use JNI and run via Tomcat (and another app
server) - one is "so old" that it is limited to 32-bit . the one
memory leak we have encountered was related to the "native side" (for
us, the native-compiled Pascal side of things (we also use Assembly
code) via Java's JNI code).

So, ultimately, I'm confused why we think Tomcat is "to blame" as
there is no evidence it uses JNI.
It's my experience JNI memory issues are related to the Java JNI or
proprietary native code.

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



JNI memory leak?

2020-04-03 Thread Mark Boon
For the past few months we’ve been trying to trace what looks like gradual 
memory creep. After some long-running experiments it seems due to memory 
leaking when
jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, 
JNI_ArgumentPusher*, Thread*) is invoked. Somewhere.

My environment is Tomcat running a proxy webapp. It does TLS termination,  
authentication and then forwards the call to local services. It doesn’t do much 
else, it’s a relatively small application.

Some (possibly relevant) versions and config parameters:
Tomcat 8.5
Java 8u241 (Oracle)
Heap size = 360Mb
MAX_ALLOC_ARENA=2
MALLOC_TRIM_THRESHOLD_=250048
jdk.nio.maxCachedBufferSize=25600

We couldn’t find any proof of memory leaking on the Java side.
When we turn on NativeMemoryTracking=detail and we take a snapshot shortly 
after starting, we see (just one block shown):

[0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
[0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle, Handle, 
JavaValue*, Thread*)+0x6a
[0x03530e3fa000] JavaCalls::call_helper(JavaValue*, methodHandle*, 
JavaCallArguments*, Thread*)+0x8f0
[0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96] [clone 
.constprop.117]+0x1e1
 (malloc=33783KB type=Internal #110876)
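
(For reference, a sketch of how such snapshots are typically produced; the exact
flags and commands below are assumptions, not quoted from our setup:)

    # restart the JVM with native memory tracking enabled (adds some overhead)
    CATALINA_OPTS="$CATALINA_OPTS -XX:NativeMemoryTracking=detail"
    # record a baseline shortly after startup
    jcmd <tomcat-pid> VM.native_memory baseline
    # later, report per-call-site growth relative to that baseline
    jcmd <tomcat-pid> VM.native_memory detail.diff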

Then we run it under heavy load for a few weeks and take another snapshot:

[0x03530e462f9a] JNIHandleBlock::allocate_block(Thread*)+0xaa
[0x03530e3f759a] JavaCallWrapper::JavaCallWrapper(methodHandle, Handle, 
JavaValue*, Thread*)+0x6a
[0x03530e3fa000] JavaCalls::call_helper(JavaValue*, methodHandle*, 
JavaCallArguments*, Thread*)+0x8f0
[0x03530e4454a1] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.96] [clone 
.constprop.117]+0x1e1
 (malloc=726749KB type=Internal #2385226)

While other blocks also show some variation, none show growth like this one. 
When I do some math on the number (726749KB - 33783KB) / (2385226 – 110876) it 
comes down to a pretty even 312 bytes per allocation.
And we leaked just under 700Mb. While not immediately problematic, this does 
not bode well for our customers who run this service for months.

I’d like to avoid telling them they need to restart this service every two 
weeks to reclaim memory. Has anyone seen something like this? Any way it could 
be avoided?

Mark Boon





Re: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-10-02 Thread Mark Thomas
On 02/10/2019 01:28, Chen Levy wrote:
>> -Original Message-
>> From: Mark Thomas 
>> Sent: Tuesday, October 1, 2019 17:43
>> To: users@tomcat.apache.org
>> Subject: Re: Tomcat 9.0.24/9.0.26 suspected memory leak
>>
>> Found it.
>>
>> HTTP/2 on NIO is affected.
>> HTTP/2 on APR/native is not affected.
>>
>> Need to check on NIO2 but I suspect it is affected.
>>
>> Patch to follow shortly.
>>
>> Mark
> 
> 
> Good, here's some more corroborating info:
> Mark, I followed your suggestion to test without HTTP/2, and one of my servers 
> (v9.0.26) has been running without it for a day now, showing no memory 
> accumulation.
> I do not use APR/Native.

This has been fixed and the fix will be included in 9.0.27 onwards.

8.5.x was not affected.

NIO2 was affected.

You should also be able to avoid the memory leak with NIO by setting
useAsyncIO="false" on the Connector.
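
As an illustration only (the port and the HTTP/2 upgrade element below are
assumptions, not taken from any particular configuration), such a connector
would look roughly like:

    <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
               useAsyncIO="false">
        <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"/>
    </Connector>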

There isn't an easy way to avoid it with NIO2. For those users using
NIO2, I'd recommend switching to NIO as a workaround until 9.0.27 is
released.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-10-01 Thread Chen Levy
> -Original Message-
> From: Mark Thomas 
> Sent: Tuesday, October 1, 2019 17:43
> To: users@tomcat.apache.org
> Subject: Re: Tomcat 9.0.24/9.0.26 suspected memory leak
> 
> Found it.
> 
> HTTP/2 on NIO is affected.
> HTTP/2 on APR/native is not affected.
> 
> Need to check on NIO2 but I suspect it is affected.
> 
> Patch to follow shortly.
> 
> Mark


Good, here's some more corroborating info:
Mark, I followed your suggestion to test without HTTP/2, and one of my servers 
(v9.0.26) has been running without it for a day now, showing no memory 
accumulation.
I do not use APR/Native.

Chen

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-10-01 Thread Mark Thomas
Found it.

HTTP/2 on NIO is affected.
HTTP/2 on APR/native is not affected.

Need to check on NIO2 but I suspect it is affected.

Patch to follow shortly.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-10-01 Thread Mark Thomas
On 30/09/2019 14:12, Rémy Maucherat wrote:



> I added debug code in
> AbstractProtocol.ConnectionHandler.release(SocketWrapperBase) to check
> if the processor considered was present in the waitingProcessors map. The
> result is the following:
> TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
> PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@77b16580
> TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
> PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@1d902704
> TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
> PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@610c4fc8
> TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
> PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@1a3a3cb6
> TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
> PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@336f552d
> TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
> PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@3cd94f25
> TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
> PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@66e24762
> TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
> PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@7c7a1c3c
> TEST-org.apache.coyote.http11.TestHttp11Processor.NIO.txt:CHECK PROCESSOR
> FAILED org.apache.coyote.http11.Http11Processor@55a44822
> TEST-org.apache.coyote.http11.upgrade.TestUpgradeInternalHandler.NIO.txt:CHECK
> PROCESSOR FAILED
> org.apache.coyote.http11.upgrade.UpgradeProcessorInternal@6e55ff60
> TEST-org.apache.coyote.http11.upgrade.TestUpgrade.NIO.txt:CHECK PROCESSOR
> FAILED org.apache.coyote.http11.upgrade.UpgradeProcessorExternal@37d98b7f
> TEST-org.apache.tomcat.websocket.server.TestShutdown.NIO.txt:CHECK
> PROCESSOR FAILED
> org.apache.coyote.http11.upgrade.UpgradeProcessorInternal@6be9bd85
> TEST-org.apache.tomcat.websocket.TestWsRemoteEndpoint.NIO.txt:CHECK
> PROCESSOR FAILED
> org.apache.coyote.http11.upgrade.UpgradeProcessorInternal@3bd4e02f
> TEST-org.apache.tomcat.websocket.TestWsRemoteEndpoint.NIO.txt:CHECK
> PROCESSOR FAILED
> org.apache.coyote.http11.upgrade.UpgradeProcessorInternal@4bb23a77
> TEST-org.apache.tomcat.websocket.TestWsRemoteEndpoint.NIO.txt:CHECK
> PROCESSOR FAILED
> org.apache.coyote.http11.upgrade.UpgradeProcessorInternal@32e20d65
> TEST-org.apache.tomcat.websocket.TestWsRemoteEndpoint.NIO.txt:CHECK
> PROCESSOR FAILED
> org.apache.coyote.http11.upgrade.UpgradeProcessorInternal@16abf52f
> 
> All instances of processors that were not removed are either async or upgraded
> processors (the internal kind), as expected. I have verified the processor
> instances above are never removed, so it might be more robust to simply call
> proto.removeWaitingProcessor(processor); in
> AbstractProtocol.ConnectionHandler.release(SocketWrapperBase) (after all,
> the socket is closed and done with after that point). There could be a more
> fine-grained solution, of course.
> 
> However, this does not match the leak scenario described by the user; this
> doesn't happen without async or websockets being used.

I'm not sure those are leaks. I've started to check them and it looks
like Tomcat is shutting down while an async request is still waiting to
timeout. In those circumstances you would expect to see a Processor in
waiting processors.

A separate question is what is the correct error handling for async
requests. There was some discussion on that topic on the Jakarta Servlet
list but it didn't reach any definitive conclusions. I have some patches
I need to get back to that should help but they are still a work in
progress.

I'll keep checking but my sense is that we haven't found the root cause
of this leak yet.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-09-30 Thread Rémy Maucherat
On Sat, Sep 28, 2019 at 9:05 PM Mark Thomas  wrote:

> On 27/09/2019 22:39, Chen Levy wrote:
> > -Original Message-
> > From: Mark Thomas 
> > Sent: Friday, September 27, 2019 15:34
> > To: users@tomcat.apache.org
> > Subject: Re: Tomcat 9.0.24/9.0.26 suspected memory leak
> >
> > On 27/09/2019 16:34, Chen Levy wrote:
> >> On 26/09/2019 18:22, Chen Levy wrote:
> >
> > 
> >
> >>> The HashMap referenced in the report appears to be "waitingProcessors"
> inside AbstractProtocol which contain 262K entries.
> >>
> >> OK. Those are asynchronous Servlets that are still in async mode.
> >
> > 
> >
> >> * I do not employ async servlets in my application
> >
> > OK. Do you use WebSocket? There is a code path to add Processors to the
> waitingProcessors Map for WebSocket as well.
> >
> > Mark
> >
> >
> > No, no WebSocket either; just plain old Servlets, Filters and the
> occasional JSP
>
> OK. That narrows down where/how this might be happening.
>
> What about if you disable HTTP/2. Do you still see the issue then?
>

I added debug code in
AbstractProtocol.ConnectionHandler.release(SocketWrapperBase) to check
if the processor considered was present in the waitingProcessors map. The
result is the following:
TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@77b16580
TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@1d902704
TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@610c4fc8
TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@1a3a3cb6
TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@336f552d
TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@3cd94f25
TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@66e24762
TEST-javax.servlet.http.TestHttpServletResponseSendError.NIO.txt:CHECK
PROCESSOR FAILED org.apache.coyote.http11.Http11Processor@7c7a1c3c
TEST-org.apache.coyote.http11.TestHttp11Processor.NIO.txt:CHECK PROCESSOR
FAILED org.apache.coyote.http11.Http11Processor@55a44822
TEST-org.apache.coyote.http11.upgrade.TestUpgradeInternalHandler.NIO.txt:CHECK
PROCESSOR FAILED
org.apache.coyote.http11.upgrade.UpgradeProcessorInternal@6e55ff60
TEST-org.apache.coyote.http11.upgrade.TestUpgrade.NIO.txt:CHECK PROCESSOR
FAILED org.apache.coyote.http11.upgrade.UpgradeProcessorExternal@37d98b7f
TEST-org.apache.tomcat.websocket.server.TestShutdown.NIO.txt:CHECK
PROCESSOR FAILED
org.apache.coyote.http11.upgrade.UpgradeProcessorInternal@6be9bd85
TEST-org.apache.tomcat.websocket.TestWsRemoteEndpoint.NIO.txt:CHECK
PROCESSOR FAILED
org.apache.coyote.http11.upgrade.UpgradeProcessorInternal@3bd4e02f
TEST-org.apache.tomcat.websocket.TestWsRemoteEndpoint.NIO.txt:CHECK
PROCESSOR FAILED
org.apache.coyote.http11.upgrade.UpgradeProcessorInternal@4bb23a77
TEST-org.apache.tomcat.websocket.TestWsRemoteEndpoint.NIO.txt:CHECK
PROCESSOR FAILED
org.apache.coyote.http11.upgrade.UpgradeProcessorInternal@32e20d65
TEST-org.apache.tomcat.websocket.TestWsRemoteEndpoint.NIO.txt:CHECK
PROCESSOR FAILED
org.apache.coyote.http11.upgrade.UpgradeProcessorInternal@16abf52f

All instances of processors that were not removed are either async or upgraded
processors (the internal kind), as expected. I have verified the processor
instances above are never removed, so it might be more robust to simply call
proto.removeWaitingProcessor(processor); in
AbstractProtocol.ConnectionHandler.release(SocketWrapperBase) (after all,
the socket is closed and done with after that point). There could be a more
fine-grained solution, of course.

However, this does not match the leak scenario described by the user; this
doesn't happen without async or websockets being used.

Rémy


Re: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-09-28 Thread Mark Thomas
On 27/09/2019 22:39, Chen Levy wrote:
> -Original Message-
> From: Mark Thomas  
> Sent: Friday, September 27, 2019 15:34
> To: users@tomcat.apache.org
> Subject: Re: Tomcat 9.0.24/9.0.26 suspected memory leak
> 
> On 27/09/2019 16:34, Chen Levy wrote:
>> On 26/09/2019 18:22, Chen Levy wrote:
> 
> 
> 
>>> The HashMap referenced in the report appears to be "waitingProcessors" 
>>> inside AbstractProtocol which contain 262K entries.
>>
>> OK. Those are asynchronous Servlets that are still in async mode.
> 
> 
> 
>> * I do not employ async servlets in my application
> 
> OK. Do you use WebSocket? There is a code path to add Processors to the 
> waitingProcessors Map for WebSocket as well.
> 
> Mark
> 
> 
> No, no WebSocket either; just plain old Servlets, Filters and the occasional 
> JSP

OK. That narrows down where/how this might be happening.

What about if you disable HTTP/2. Do you still see the issue then?

Thanks,

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-09-27 Thread Chen Levy
-Original Message-
From: Mark Thomas  
Sent: Friday, September 27, 2019 15:34
To: users@tomcat.apache.org
Subject: Re: Tomcat 9.0.24/9.0.26 suspected memory leak

On 27/09/2019 16:34, Chen Levy wrote:
> On 26/09/2019 18:22, Chen Levy wrote:



>> The HashMap referenced in the report appears to be "waitingProcessors" 
>> inside AbstractProtocol which contain 262K entries.
> 
> OK. Those are asynchronous Servlets that are still in async mode.



> * I do not employ async servlets in my application

OK. Do you use WebSocket? There is a code path to add Processors to the 
waitingProcessors Map for WebSocket as well.

Mark


No, no WebSocket either; just plain old Servlets, Filters and the occasional JSP

Chen
-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-09-27 Thread Mark Thomas
On 27/09/2019 16:34, Chen Levy wrote:
> On 26/09/2019 18:22, Chen Levy wrote:



>> The HashMap referenced in the report appears to be "waitingProcessors" 
>> inside AbstractProtocol which contain 262K entries.
> 
> OK. Those are asynchronous Servlets that are still in async mode.



> * I do not employ async servlets in my application

OK. Do you use WebSocket? There is a code path to add Processors to the
waitingProcessors Map for WebSocket as well.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-09-27 Thread Chen Levy


-Original Message-
From: Mark Thomas  
Sent: Thursday, September 26, 2019 15:50
To: users@tomcat.apache.org
Subject: Re: Tomcat 9.0.24/9.0.26 suspected memory leak

On 26/09/2019 18:22, Chen Levy wrote:
> Hello Experts
> 
> Several of my production servers were recently upgraded from Tomcat 9.0.14 to 
> 9.0.24; immediately after the upgrade the servers started accumulating memory 
> in a steady trend that was not observed before. In addition, CPU utilization 
> that used to hover around 2% now sits at 8%.
> For now the servers are still serving but I suspect they'll become 
> unresponsive in a few hours.
> I loaded a heap dump from one of the servers into MAT and received the 
> following Leak Suspect:
> 
> One instance of "org.apache.coyote.http11.Http11NioProtocol" loaded by 
> "java.net.URLClassLoader @ 0x503f02c40" occupies 9,282,972,608 (96.88%) 
> bytes. The memory is accumulated in one instance of 
> "java.util.concurrent.ConcurrentHashMap$Node[]" loaded by " loader>".
> 
> The HashMap referenced in the report appears to be "waitingProcessors" inside 
> AbstractProtocol which contain 262K entries.

OK. Those are asynchronous Servlets that are still in async mode.

While it is possible for an application to deliberately get itself into a state 
like this (infinite async timeouts and don't complete/dispatch the async 
requests) given that it doesn't happen with 9.0.14 but does with 9.0.24 (and 
.26) that suggests a Tomcat bug.

> The same issue was reproduced using v9.0.26 as well
> 
> Please let me know whether I should provide additional information

Can you do a binary search to determine which Tomcat 9.0.x release this problem 
was introduced in?

How easily can you reproduce this? Do you have something approaching a test 
case we could use to repeat the issue?

Meanwhile, I'll take a look at the changelog and see if anything jumps out as a 
possible cause.

Thanks,

Mark


> 
> Current setup of the production servers:
> AdoptOpenJDK (build 11.0.3+7)
> Amazon Linux 2
> 
> <Connector maxHttpHeaderSize="16384"
>            maxThreads="500" minSpareThreads="25"
>            enableLookups="false" disableUploadTimeout="true"
>            connectionTimeout="1"
>            compression="on"
>            SSLEnabled="true" scheme="https" secure="true">
>   <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"
>                    keepAliveTimeout="2"
>                    overheadDataThreadhold="0"/>
>   <SSLHostConfig>
>     <Certificate certificateKeyAlias="tomcat"
>                  certificateKeystorePassword=""
>                  certificateKeystoreType="PKCS12"/>
>   </SSLHostConfig>
> </Connector>
> Thanks
> Chen
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 


Thanks for the attention Mark, here is some additional information along with answers:
* Once the memory was completely consumed, the servers stopped responding with 
CPU stuck at 100%
* I do not employ async servlets in my application
* I cannot do a binary search for a version because of this change: 
https://github.com/apache/tomcat/commit/c16d9d810a1f64cd768ff33058936cf8907e3117
 which caused another memory leak and server failure between v9.0.16 and v9.0.21 
and was fixed in v9.0.24 (as far as I know)
* This is easily reproduced with the traffic in my farm and all the servers 
suffer the same. In a development environment it's trickier, so currently I 
don't have a test case.

Thanks
Chen


Re: Tomcat 9.0.24/9.0.26 suspected memory leak

2019-09-26 Thread Mark Thomas
On 26/09/2019 18:22, Chen Levy wrote:
> Hello Experts
> 
> Several of my production servers were recently upgraded from Tomcat 9.0.14 to 
> 9.0.24; immediately after the upgrade the servers started accumulating memory 
> in a steady trend that was not observed before. In addition, CPU utilization 
> that used to hover around 2% now sits at 8%.
> For now the servers are still serving but I suspect they'll become 
> unresponsive in a few hours.
> I loaded a heap dump from one of the servers into MAT and received the 
> following Leak Suspect:
> 
> One instance of "org.apache.coyote.http11.Http11NioProtocol" loaded by 
> "java.net.URLClassLoader @ 0x503f02c40" occupies 9,282,972,608 (96.88%) 
> bytes. The memory is accumulated in one instance of 
> "java.util.concurrent.ConcurrentHashMap$Node[]" loaded by " loader>".
> 
> The HashMap referenced in the report appears to be "waitingProcessors" inside 
> AbstractProtocol which contains 262K entries.

OK. Those are asynchronous Servlets that are still in async mode.

While it is possible for an application to deliberately get itself into
a state like this (infinite async timeouts and don't complete/dispatch
the async requests) given that it doesn't happen with 9.0.14 but does
with 9.0.24 (and .26) that suggests a Tomcat bug.

> The same issue was reproduced using v9.0.26 as well
> 
> Please let me know whether I should provide additional information

Can you do a binary search to determine which Tomcat 9.0.x release this
problem was introduced in?

How easily can you reproduce this? Do you have something approaching a
test case we could use to repeat the issue?

Meanwhile, I'll take a look at the changelog and see if anything jumps
out as a possible cause.

Thanks,

Mark


> 
> Current setup of the production servers:
> AdoptOpenJDK (build 11.0.3+7) 
> Amazon Linux 2
> 
> maxHttpHeaderSize="16384"
>maxThreads="500" minSpareThreads="25"
>enableLookups="false" disableUploadTimeout="true"
>connectionTimeout="1"
>compression="on"
>SSLEnabled="true" scheme="https" secure="true">
>keepAliveTimeout="2"
>  overheadDataThreadhold="0"/>
> 
>   certificateKeyAlias="tomcat"
>  certificateKeystorePassword=""
>  certificateKeystoreType="PKCS12"/>
> 
> 
> 
> Thanks
> Chen
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Tomcat 9.0.24/9.0.26 suspected memory leak

2019-09-26 Thread Chen Levy
Hello Experts

Several of my production servers were recently upgraded from Tomcat 9.0.14 to 
9.0.24; immediately after the upgrade the servers started accumulating memory 
in a steady trend that was not observed before. In addition, CPU utilization 
that used to hover around 2% now sits at 8%.
For now the servers are still serving but I suspect they'll become unresponsive 
in a few hours.
I loaded a heap dump from one of the servers into MAT and received the 
following Leak Suspect:

One instance of "org.apache.coyote.http11.Http11NioProtocol" loaded by 
"java.net.URLClassLoader @ 0x503f02c40" occupies 9,282,972,608 (96.88%) bytes. 
The memory is accumulated in one instance of 
"java.util.concurrent.ConcurrentHashMap$Node[]" loaded by "".

The HashMap referenced in the report appears to be "waitingProcessors" inside 
AbstractProtocol which contains 262K entries.

The same issue was reproduced using v9.0.26 as well

Please let me know whether I should provide additional information

Current setup of the production servers:
AdoptOpenJDK (build 11.0.3+7) 
Amazon Linux 2








Thanks
Chen

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: Tomcat JDBC Pool memory leak when using StatementFinalizer interceptor

2018-07-14 Thread Caldarale, Charles R
> From: Felix Schumacher [mailto:felix.schumac...@internetallee.de] 
> Subject: Re: Tomcat JDBC Pool memory leak when using StatementFinalizer
interceptor

> On 11.07.2018 at 16:22, Martin Knoblauch wrote:
> >   Now it might be, that we are just using the StatementFinalizer in a wrong
> > manner. And what we see is expected behavior. Below is our pool
> > configuration. Maybe something is just missing :-)

> The docs for the interceptor say one has to call close on the
> connection that created the statements. Does your application call
> close on the connection?

This section of the doc includes a decent model that your webapp code should
be following:
http://tomcat.apache.org/tomcat-8.0-doc/jndi-datasource-examples-howto.html#Random_Connection_Closed_Exceptions

Proper use of a finally block is critical.
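A bare-bones sketch of the pattern that page describes (dataSource here is
assumed to be whatever your JNDI lookup returned):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class PoolUsageExample {
    void query(DataSource dataSource) throws SQLException {
        Connection conn = null;
        Statement stmt = null;
        ResultSet rs = null;
        try {
            conn = dataSource.getConnection();
            stmt = conn.createStatement();
            rs = stmt.executeQuery("select 1");
            while (rs.next()) {
                // use the row
            }
        } finally {
            // close in reverse order; each close is guarded so one failure
            // cannot stop the connection from going back to the pool
            if (rs != null) { try { rs.close(); } catch (SQLException ignored) { } }
            if (stmt != null) { try { stmt.close(); } catch (SQLException ignored) { } }
            if (conn != null) { try { conn.close(); } catch (SQLException ignored) { } }
        }
    }
}

On Java 7 and later a try-with-resources block gives the same guarantee with
less ceremony; the point is simply that close() must run on every path.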

  - Chuck






Re: Tomcat JDBC Pool memory leak when using StatementFinalizer interceptor

2018-07-14 Thread Felix Schumacher




On 11.07.2018 at 16:22, Martin Knoblauch wrote:

Hi,

  while analyzing some heap dump for other reasons, I found that our
application is apparently aggregating a considerable amount of memory in
"org.apache.tomcat.jdbc.pool.TrapException", which is never cleaned by GC.
Digging deeper, it seems that the entries of the "statements" linked list
in the StatementFinalizer are never removed from the list, so after three
weeks of lifetime one ends up with a list of 7 million entries, each 80
bytes.

  Now it might be, that we are just using the StatementFinalizer in a wrong
manner. And what we see is expected behavior. Below is our pool
configuration. Maybe something is just missing :-)
The docs for the interceptor say one has to call close on the
connection that created the statements. Does your application call
close on the connection?


Regards,
 Felix



We are at Tomcat 8.0.36 (yeah, I know, but that is the version we have to
use) and Java 8 (1.8.0_171). Underlying DB is Oracle 12.1.0.2 and we are
using the latest "ojdbc7.jar" from Oracle.


 

Thanks
Martin



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Tomcat JDBC Pool memory leak when using StatementFinalizer interceptor

2018-07-11 Thread Martin Knoblauch
Hi,

 while analyzing some heap dump for other reasons, I found that our
application is apparently aggregating a considerable amount of memory in
"org.apache.tomcat.jdbc.pool.TrapException", which is never cleaned by GC.
Digging deeper, it seems that the entries of the "statements" linked list
in the StatementFinalizer are never removed from the list, so after three
weeks of lifetime one ends up with a list of 7 million entries, each 80
bytes.

 Now it might be, that we are just using the StatementFinalizer in a wrong
manner. And what we see is expected behavior. Below is our pool
configuration. Maybe something is just missing :-)

We are at Tomcat 8.0.36 (yeah, I know, but that is the version we have to
use) and Java 8 (1.8.0_171). Underlying DB is Oracle 12.1.0.2 and we are
using the latest "ojdbc7.jar" from Oracle.




Thanks
Martin
-- 
--
Martin Knoblauch
email: k n o b i AT knobisoft DOT de
www: http://www.knobisoft.de


RE: Suspected memory leak of org.apache.coyote.AbstractProtocol$ConnectionHandler object while using Websocket

2018-02-02 Thread Serge Perepel
Any takers to tackle this issue?

-Original Message-
From: Serge Perepel [mailto:se...@american-data.com] 
Sent: Friday, January 26, 2018 2:33 PM
To: Tomcat Users List
Subject: RE: Suspected memory leak of 
org.apache.coyote.AbstractProtocol$ConnectionHandler object while using 
Websocket

-Original Message-
From: Mark Thomas [mailto:ma...@apache.org]

>This is likely to get looked at faster if you provide code that other people 
>can run that will reproduce the issue you are seeing rather than expecting 
>someone else to construct the test case for you.

Here is how you can reproduce it:

Server websocket:

package ad.ecs.websocket;

import java.io.IOException;

import javax.websocket.CloseReason;
import javax.websocket.OnClose;
import javax.websocket.OnError;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint(value = "/asyncMsg")
public class TestWebsocket {
@OnOpen
public void open(Session session) throws IOException{
session.getBasicRemote().sendText("Connection Established");
}

@OnMessage
public String login(String sessionID, Session session) {
//you can do this instead of exception and it will still leak memory
//try {
//session.close(new CloseReason(CloseCodes.GOING_AWAY, "Clearing 
session"));
//} catch (IOException e) {
//e.printStackTrace();
//}
throw new IllegalArgumentException("Testing Leak");
}

@OnClose
public void close(Session session, CloseReason reason) {
}

@OnError
public void error(Session session, Throwable error) {
error.printStackTrace();
}

}

Here is the front end html:



  


LEAK





Here is js file:

var webSocket = null;

function test() {
console.log("test button clicked");
openSocket();
}

function openSocket() {
console.log("open webSocket");
webSocket = new WebSocket("ws://" + window.location.host + 
"/memoryleak/asyncMsg");

webSocket.onopen = function(event){
console.log("webSocket open");
webSocket.send('test');
};

webSocket.onmessage = function(event){
console.log("webSocket onMessage", event);
};

webSocket.onclose = function(event){
webSocket = null;
console.log("webSocket connection closed", event);
openSocket();
};
}

function closeSocket(){
console.log("close webSocket");
if (webSocket)
webSocket.close();
}

You need to run it on multiple clients if you want it to leak fast. We did it with 
8 clients and it leaked 3GB within like 10 min.

Thank you
Serge Perepel
Software Developer | American Data
p. 608.643.8022 | tf. 800.464.9942 | f. 608.643.2314
se...@american-data.com | www.american-data.com

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: Suspected memory leak of org.apache.coyote.AbstractProtocol$ConnectionHandler object while using Websocket

2018-01-26 Thread Serge Perepel
-Original Message-
From: Mark Thomas [mailto:ma...@apache.org]

>This is likely to get looked at faster if you provide code that other people 
>can run that will reproduce the issue you are seeing rather than expecting 
>someone else to construct the test case for you.

Here is how you can reproduce it:

Server websocket:

package ad.ecs.websocket;

import java.io.IOException;

import javax.websocket.CloseReason;
import javax.websocket.OnClose;
import javax.websocket.OnError;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint(value = "/asyncMsg")
public class TestWebsocket {
@OnOpen
public void open(Session session) throws IOException{
session.getBasicRemote().sendText("Connection Established");
}

@OnMessage
public String login(String sessionID, Session session) {
//you can do this instead of exception and it will still leak memory
//try {
//session.close(new CloseReason(CloseCodes.GOING_AWAY, "Clearing 
session"));
//} catch (IOException e) {
//e.printStackTrace();
//}
throw new IllegalArgumentException("Testing Leak");
}

@OnClose
public void close(Session session, CloseReason reason) {
}

@OnError
public void error(Session session, Throwable error) {
error.printStackTrace();
}

}

Here is the front end html:








LEAK





Here is js file:

var webSocket = null;

function test() {
console.log("test button clicked");
openSocket();
}

function openSocket() {
console.log("open webSocket");
webSocket = new WebSocket("ws://" + window.location.host + 
"/memoryleak/asyncMsg");

webSocket.onopen = function(event){
console.log("webSocket open");
webSocket.send('test');
};

webSocket.onmessage = function(event){
console.log("webSocket onMessage", event);
};

webSocket.onclose = function(event){
webSocket = null;
console.log("webSocket connection closed", event);
openSocket();
};
}

function closeSocket(){
console.log("close webSocket");
if (webSocket)
webSocket.close();
}

You need to run it on multiple clients if you want it to leak fast. We did it with 
8 clients and it leaked 3GB within like 10 min.

Thank you
Serge Perepel
Software Developer | American Data
p. 608.643.8022 | tf. 800.464.9942 | f. 608.643.2314
se...@american-data.com | www.american-data.com

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Suspected memory leak of org.apache.coyote.AbstractProtocol$ConnectionHandler object while using Websocket

2018-01-26 Thread Mark Thomas
On 26/01/18 15:11, Serge Perepel wrote:
> When we run our app for 2-3 hours we experience this leak related to our web 
> sockets connections, here is the Memory Analyzer Suspect:
> 
> One instance of "org.apache.coyote.AbstractProtocol$ConnectionHandler" loaded 
> by "java.net.URLClassLoader @ 0x720029098" occupies 2,153,196,128 (88.10%) 
> bytes. The memory is accumulated in one instance of 
> "java.util.concurrent.ConcurrentHashMap$Node[]" loaded by " loader>".

A break down of that ~2Gb would be helpful. How many entries in the Map?
What is the distribution of sizes?

> Keywords
> org.apache.coyote.AbstractProtocol$ConnectionHandler
> java.util.concurrent.ConcurrentHashMap$Node[]
> java.net.URLClassLoader @ 0x720029098
> 
> 
> Also these bugs might be related to this one:
> 
> 
> 
> https://bz.apache.org/bugzilla/show_bug.cgi?id=57546
> 
> https://bz.apache.org/bugzilla/show_bug.cgi?id=57750
> 
> 
> 
> The 57546 bug looks very similar to what we are experiencing. We tested on 
> Linux and so far we do not see the same behavior. Also this is our web socket 
> code:
> 
> 
> 
> package ad.ecs.async.websocket;
> 
> 
> 
> import java.io.IOException;
> 
> import java.util.Arrays;
> 
> 
> 
> import javax.websocket.CloseReason;
> 
> import javax.websocket.OnClose;
> 
> import javax.websocket.OnError;
> 
> import javax.websocket.OnMessage;
> 
> import javax.websocket.OnOpen;
> 
> import javax.websocket.Session;
> 
> import javax.websocket.server.ServerEndpoint;
> 
> 
> 
> import ad.common.Global;
> 
> import ad.ecs.async.AsyncEngine;
> 
> import ad.ecs.async.AsyncResponse;
> 
> import ad.ecs.async.AsyncType;
> 
> import ad.ecs.db.DatabaseEngine;
> 
> import ad.ecs.db.paradox.User;
> 
> import ad.ecs.security.engine.SecurityEngine;
> 
> 
> 
> /**
> 
>  * @author Serge Perepel
> 
>  * @since Aug 23, 2017 12:23:32 PM
> 
>  */
> 
> @ServerEndpoint(value = "/asyncMsg", encoders = AsyncResponseEncoder.class)
> 
> public class ECSAsync {
> 
> 
> 
> @OnOpen
> 
> public void open(Session session) throws IOException{
> 
> session.getBasicRemote().sendText("Connection Established");
> 
> }
> 
> 
> 
> @OnMessage
> 
> public String login(String sessionID, Session session) {
> 
> AsyncEngine.INSTANCE.wsConnect(session, sessionID);
> 
> org.hibernate.Session dbSession = 
> DatabaseEngine.getSessionFactory().openSession();
> 
> try {
> 
>int userID = 
> SecurityEngine.INSTANCE.getUserIDBasedOnSessionID(sessionID);
> 
>User user = (User) dbSession.get(User.class, userID);
> 
>if (user != null) {
> 
>if (user.getNextLogin() == 1) {
> 
>AsyncResponse response = new AsyncResponse();
> 
>response.setType(AsyncType.None);
> 
>response.setData(Arrays.asList("PASSWORD"));
> 
>response.setObjData("PASSWORD");
> 
>
> AsyncEngine.INSTANCE.addTransientResult(sessionID, response);
> 
>}
> 
>}
> 
> } finally {
> 
>dbSession.close();
> 
> }
> 
> return "ok";
> 
> }
> 
> 
> 
> @OnClose
> 
> public void close(Session session, CloseReason reason) {
> 
> AsyncEngine.INSTANCE.wsDisconnect(session);
> 
> }
> 
> 
> 
> @OnError
> 
> public void error(Session session, Throwable error) {
> 
> Global.INSTANCE.getLogHelper().exception(error);
> 
> //session.close(new CloseReason(closeCode, reasonPhrase));
> 
> }
> 
> }
> 
> 
> 
> Front end opens connection and sends invalid sessionID which causes
> 
> the line `AsyncEngine.INSTANCE.wsConnect(session, sessionID);` to throw an
> exception, and after that a disconnect happens on the front end. At this point
> the front end opens a new connection and the process goes into a loop. I'm
> assuming that the connection handler is supposed to get freed after a
> disconnect happens, but it seems to accumulate. You can try this code and just
> replace the code in the login method to always throw the exception. On the
> front end, onDisconnect, try to open a new connection and send a random
> message to the web socket.

This is likely to get looked at faster if you provide code that other
people can run that will reproduce the issue you are seeing rather than
expecting someone else to construct the test case for you.

Mark


> 
> 
> 
> Thank you
> 
> Serge
> 

RE: Suspected memory leak of org.apache.coyote.AbstractProtocol$ConnectionHandler object while using Websocket

2018-01-26 Thread Serge Perepel
Forgot to mention that we are running Tomcat 8.5 on Windows 2012 Server

-Original Message-
From: Serge Perepel [mailto:se...@american-data.com] 
Sent: Friday, January 26, 2018 9:12 AM
To: users@tomcat.apache.org
Subject: Suspected memory leak of 
org.apache.coyote.AbstractProtocol$ConnectionHandler object while using 
Websocket

When we run our app for 2-3 hours we experience this leak related to our web 
sockets connections, here is the Memory Analyzer Suspect:

One instance of "org.apache.coyote.AbstractProtocol$ConnectionHandler" loaded 
by "java.net.URLClassLoader @ 0x720029098" occupies 2,153,196,128 (88.10%) 
bytes. The memory is accumulated in one instance of 
"java.util.concurrent.ConcurrentHashMap$Node[]" loaded by "".

Keywords
org.apache.coyote.AbstractProtocol$ConnectionHandler
java.util.concurrent.ConcurrentHashMap$Node[]
java.net.URLClassLoader @ 0x720029098


Also these bugs might be related to this one:



https://bz.apache.org/bugzilla/show_bug.cgi?id=57546

https://bz.apache.org/bugzilla/show_bug.cgi?id=57750



The 57546 bug looks very similar to what we are experiencing. We tested on 
Linux and so far we do not see the same behavior. Also this is our web socket 
code:



package ad.ecs.async.websocket;



import java.io.IOException;

import java.util.Arrays;



import javax.websocket.CloseReason;

import javax.websocket.OnClose;

import javax.websocket.OnError;

import javax.websocket.OnMessage;

import javax.websocket.OnOpen;

import javax.websocket.Session;

import javax.websocket.server.ServerEndpoint;



import ad.common.Global;

import ad.ecs.async.AsyncEngine;

import ad.ecs.async.AsyncResponse;

import ad.ecs.async.AsyncType;

import ad.ecs.db.DatabaseEngine;

import ad.ecs.db.paradox.User;

import ad.ecs.security.engine.SecurityEngine;



/**

 * @author Serge Perepel

 * @since Aug 23, 2017 12:23:32 PM

 */

@ServerEndpoint(value = "/asyncMsg", encoders = AsyncResponseEncoder.class)

public class ECSAsync {



@OnOpen

public void open(Session session) throws IOException{

session.getBasicRemote().sendText("Connection Established");

}



@OnMessage

public String login(String sessionID, Session session) {

AsyncEngine.INSTANCE.wsConnect(session, sessionID);

org.hibernate.Session dbSession = 
DatabaseEngine.getSessionFactory().openSession();

try {

   int userID = 
SecurityEngine.INSTANCE.getUserIDBasedOnSessionID(sessionID);

   User user = (User) dbSession.get(User.class, userID);

   if (user != null) {

   if (user.getNextLogin() == 1) {

   AsyncResponse response = new AsyncResponse();

   response.setType(AsyncType.None);

   response.setData(Arrays.asList("PASSWORD"));

   response.setObjData("PASSWORD");

   
AsyncEngine.INSTANCE.addTransientResult(sessionID, response);

   }

   }

} finally {

   dbSession.close();

}

return "ok";

}



@OnClose

public void close(Session session, CloseReason reason) {

AsyncEngine.INSTANCE.wsDisconnect(session);

}



@OnError

public void error(Session session, Throwable error) {

Global.INSTANCE.getLogHelper().exception(error);

//session.close(new CloseReason(closeCode, reasonPhrase));

}

}



Front end opens connection and sends invalid sessionID which causes

the line `AsyncEngine.INSTANCE.wsConnect(session, sessionID);` to throw an
exception, and after that a disconnect happens on the front end. At this point
the front end opens a new connection and the process goes into a loop. I'm
assuming that the connection handler is supposed to get freed after a disconnect
happens, but it seems to accumulate. You can try this code and just replace the
code in the login method to always throw the exception. On the front end,
onDisconnect, try to open a new connection and send a random message to the web
socket.



Thank you

Serge

Serge Perepel
Software Developer | American Data
p. 608.643.8022 | tf. 800.464.9942 | f. 608.643.2314
se...@american-data.com | www.american-data.com

Suspected memory leak of org.apache.coyote.AbstractProtocol$ConnectionHandler object while using Websocket

2018-01-26 Thread Serge Perepel
When we run our app for 2-3 hours we experience this leak related to our web 
sockets connections, here is the Memory Analyzer Suspect:

One instance of "org.apache.coyote.AbstractProtocol$ConnectionHandler" loaded 
by "java.net.URLClassLoader @ 0x720029098" occupies 2,153,196,128 (88.10%) 
bytes. The memory is accumulated in one instance of 
"java.util.concurrent.ConcurrentHashMap$Node[]" loaded by "".

Keywords
org.apache.coyote.AbstractProtocol$ConnectionHandler
java.util.concurrent.ConcurrentHashMap$Node[]
java.net.URLClassLoader @ 0x720029098


Also these bugs might be related to this one:



https://bz.apache.org/bugzilla/show_bug.cgi?id=57546

https://bz.apache.org/bugzilla/show_bug.cgi?id=57750



The 57546 bug looks very similar to what we are experiencing. We tested on 
Linux and so far we do not see the same behavior. Also this is our web socket 
code:



package ad.ecs.async.websocket;



import java.io.IOException;

import java.util.Arrays;



import javax.websocket.CloseReason;

import javax.websocket.OnClose;

import javax.websocket.OnError;

import javax.websocket.OnMessage;

import javax.websocket.OnOpen;

import javax.websocket.Session;

import javax.websocket.server.ServerEndpoint;



import ad.common.Global;

import ad.ecs.async.AsyncEngine;

import ad.ecs.async.AsyncResponse;

import ad.ecs.async.AsyncType;

import ad.ecs.db.DatabaseEngine;

import ad.ecs.db.paradox.User;

import ad.ecs.security.engine.SecurityEngine;



/**

 * @author Serge Perepel

 * @since Aug 23, 2017 12:23:32 PM

 */

@ServerEndpoint(value = "/asyncMsg", encoders = AsyncResponseEncoder.class)

public class ECSAsync {



@OnOpen

public void open(Session session) throws IOException{

session.getBasicRemote().sendText("Connection Established");

}



@OnMessage

public String login(String sessionID, Session session) {

AsyncEngine.INSTANCE.wsConnect(session, sessionID);

org.hibernate.Session dbSession = 
DatabaseEngine.getSessionFactory().openSession();

try {

   int userID = 
SecurityEngine.INSTANCE.getUserIDBasedOnSessionID(sessionID);

   User user = (User) dbSession.get(User.class, userID);

   if (user != null) {

   if (user.getNextLogin() == 1) {

   AsyncResponse response = new AsyncResponse();

   response.setType(AsyncType.None);

   response.setData(Arrays.asList("PASSWORD"));

   response.setObjData("PASSWORD");

   
AsyncEngine.INSTANCE.addTransientResult(sessionID, response);

   }

   }

} finally {

   dbSession.close();

}

return "ok";

}



@OnClose

public void close(Session session, CloseReason reason) {

AsyncEngine.INSTANCE.wsDisconnect(session);

}



@OnError

public void error(Session session, Throwable error) {

Global.INSTANCE.getLogHelper().exception(error);

//session.close(new CloseReason(closeCode, reasonPhrase));

}

}



Front end opens connection and sends invalid sessionID which causes

the line `AsyncEngine.INSTANCE.wsConnect(session, sessionID);` to throw an
exception, and after that a disconnect happens on the front end. At this point
the front end opens a new connection and the process goes into a loop. I'm
assuming that the connection handler is supposed to get freed after a disconnect
happens, but it seems to accumulate. You can try this code and just replace the
code in the login method to always throw the exception. On the front end,
onDisconnect, try to open a new connection and send a random message to the web
socket.



Thank you

Serge

Serge Perepel
Software Developer | American Data
p. 608.643.8022 | tf. 800.464.9942 | f. 608.643.2314
se...@american-data.com | www.american-data.com


Re: This is very likely to create a memory leak (Tomcat 8.5.8)

2017-01-22 Thread Mark Thomas
On 22/01/2017 19:52, Bheemanagouda A wrote:
> I am seeing these warnings in catalina.out while stopping the tomcat.
> Why are these errors appearing? Are these harmful? Is simplewebapp actually
> leaking memory by not stopping threads? Thanks in Advance.

This should answer those questions:
http://home.apache.org/~markt/presentations/2010-11-04-Memory-Leaks-60mins.pdf
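The specific warning quoted below, a [pool-N-thread-M] worker still running at
shutdown, almost always means an ExecutorService the webapp created was never
shut down. A rough sketch of the usual fix (class name and pool size are
invented):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class ExecutorLifecycle implements ServletContextListener {

    // the pool whose workers show up as [pool-N-thread-M] in the warning
    private ExecutorService pool;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        pool = Executors.newFixedThreadPool(5);
        sce.getServletContext().setAttribute("workerPool", pool);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        pool.shutdownNow(); // interrupts the idle workers parked in poll()
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}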

Mark

> 
> WARNING [localhost-startStop-2] org.apache.catalina.loader.
> WebappClassLoaderBase.clearReferencesThreads The web application
> [simplewebapp] appears to have started a thread named [pool-3-thread-5] but
> has failed to stop it. This is very likely to create a memory leak. Stack
> trace of thread:
>  sun.misc.Unsafe.park(Native Method)
>  java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>  java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(
> SynchronousQueue.java:460)
>  java.util.concurrent.SynchronousQueue$TransferStack.transfer(
> SynchronousQueue.java:362)
>  java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
>  java.util.concurrent.ThreadPoolExecutor.getTask(
> ThreadPoolExecutor.java:1066)
>  java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1127)
>  java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
>  java.lang.Thread.run(Thread.java:745)
> 
> 
> 
> Kind Regards,
> Bheemanagouda
> 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



This is very likely to create a memory leak (Tomcat 8.5.8)

2017-01-22 Thread Bheemanagouda A
I am seeing these warnings in catalina.out while stopping the tomcat.
Why are these errors appearing? Are these harmful? Is simplewebapp actually
leaking memory by not stopping threads? Thanks in Advance.

WARNING [localhost-startStop-2] org.apache.catalina.loader.
WebappClassLoaderBase.clearReferencesThreads The web application
[simplewebapp] appears to have started a thread named [pool-3-thread-5] but
has failed to stop it. This is very likely to create a memory leak. Stack
trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(
SynchronousQueue.java:460)
 java.util.concurrent.SynchronousQueue$TransferStack.transfer(
SynchronousQueue.java:362)
 java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
 java.util.concurrent.ThreadPoolExecutor.getTask(
ThreadPoolExecutor.java:1066)
 java.util.concurrent.ThreadPoolExecutor.runWorker(
ThreadPoolExecutor.java:1127)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(
ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)



Kind Regards,
Bheemanagouda


Re: help resolving memory leak error message

2017-01-06 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

To whom it may concern,

On 1/6/17 5:21 PM, modjkl...@comcast.net wrote:
> I'm porting an Apache Flex (with Apache BlazeDS) web app from 
> glassfish to tomcat 8.5.9, and observing the following severe
> error.
> 
> 06-Jan-2017 13:49:07.644 SEVERE 
> [ContainerBackgroundProcessor[StandardEngine[Catalina]]] 
> org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks
> The web application [myApp] created a ThreadLocal with key of type
> [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@4d6e6d7b])
> and a value of type [flex.messaging.io.SerializationContext]
> (value [flex.messaging.io.SerializationContext@66135428]) but
> failed to remove it when the web application was stopped. Threads
> are going to be renewed over time to try and avoid a probable
> memory leak.
> 
> Is this a warning or an actual error that is causing a memory
> leak?

This is probably a problem with the application that Glassfish never
detected, but has always been there.

This is a warning about a leak, but Tomcat will cycle-through the
request-processing threads, retiring them at intervals, in order to
mitigate the leak. You're welcome ;)

If all goes well, this leak will be handled by Tomcat and your service
won't suffer for it. That said, you should fix the problem because
cleaning-up messes is wasteful if the mess wasn't necessary in the
first place.

> Can anyone point me in the right direction to resolve this?

The leak itself is coming from your application or one of the
libraries it's using. The solution will be to find a fix that leak.

I would start by asking the Apache Flex people what the
SerializationContext is for, and how to remove ThreadLocal values from
shared threads (such as those in a servlet environment).

It's possible you are using the Flex framework in a way that is not
conducive to a servlet environment, but that a few changes could make
it safer to use.
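If the ThreadLocal is one your own code (or code you can reach) populates, the
usual approach is to clear it when each request finishes, before the container
hands the thread back to its pool. A rough sketch (the ThreadLocal below is a
stand-in for whatever holder actually carries the SerializationContext in your
setup):

import javax.servlet.ServletRequestEvent;
import javax.servlet.ServletRequestListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class ThreadLocalCleanupListener implements ServletRequestListener {

    // placeholder for the holder your code populates during a request
    public static final ThreadLocal<Object> REQUEST_CONTEXT = new ThreadLocal<>();

    @Override
    public void requestInitialized(ServletRequestEvent sre) {
        // nothing to do here
    }

    @Override
    public void requestDestroyed(ServletRequestEvent sre) {
        // drop the value so the pooled thread carries nothing into the next request
        REQUEST_CONTEXT.remove();
    }
}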

> I'm a new Tomcat user (please try to be explicit).

Welcome to the community.

- -chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



help resolving memory leak error message

2017-01-06 Thread modjklist
I'm porting an Apache Flex (with Apache BlazeDS) web app from glassfish to 
tomcat 8.5.9, and observing the following severe error.  

06-Jan-2017 13:49:07.644 SEVERE 
[ContainerBackgroundProcessor[StandardEngine[Catalina]]] 
org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks 
The web application [myApp] created a ThreadLocal with key of type 
[java.lang.ThreadLocal] (value [java.lang.ThreadLocal@4d6e6d7b]) and a value of 
type [flex.messaging.io.SerializationContext] (value 
[flex.messaging.io.SerializationContext@66135428]) but failed to remove it when 
the web application was stopped. Threads are going to be renewed over time to 
try and avoid a probable memory leak. 

Is this a warning or an actual error that is causing a memory leak?

Can anyone point me in the right direction to resolve this? I'm a new Tomcat 
user (please try to be explicit). 

Thanks for any advice/comments.

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Memory Leak Tomcat with Vaadin+Grails

2016-07-07 Thread Edwin Quijada
Hi! I am having a weird problem with memory in my Tomcat; when I check the log I 
am getting this

06-Jul-2016 12:11:47.583 SEVERE 
[ContainerBackgroundProcessor[StandardEngine[Catalina]]]
org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks
The web application [presales] created a ThreadLocal with key of type 
[java.lang.ThreadLocal]
(value [java.lang.ThreadLocal@536ad127]) and a value of type 
[com.googlecode.concurrentlinkedhashmap.ConcurrentHashMapV8.CounterHashCode]
(value 
[com.googlecode.concurrentlinkedhashmap.ConcurrentHashMapV8$CounterHashCode@1f2be536])
 but failed to remove it when the web application was stopped.
Threads are going to be renewed over time to try and avoid a probable memory 
leak.

Can somebody point me in the right direction to solve this issue? This server 
goes down without warning at any time because of memory problems.






AW: Memory Leak

2016-06-29 Thread Steffen Heil (Mailinglisten)
Hi


> > Here, the log.  I am quite sure how to go about troubleshooting it.
> > Any help is greatly appreciated.
> The application has a memory leak.  You need to get it fixed.
> > catalina.out.prob:SEVERE: The web application [] appears to have
> > started a thread named
> > [cluster-ClusterId{value='5745ebcecdb2e06579174645',
> > description='null'}-devnymongodb01.meridiancapital.com:27017] but has
> > failed to stop it. This is very likely to create a memory leak.

There MIGHT be a memory leak but this does NOT have to be one.
I have seen several libraries that tell their maintenance threads to stop and 
they actually DO, but the library itself does not join(), so the thread stops a 
little later (depending on the library, "little" was between a few milliseconds 
and 10 minutes).

But this message is only logged when the application is stopped, so if it runs 
out of memory during operation, this is a rather unrelated shutdown problem.

Anyway, check if there is a memory leak by taking a memory dump (probably using 
jmap) and analyze that.
Eclipse MAT has nice tools for this.
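If attaching jmap to the running process is awkward, the same dump can be
triggered from inside the JVM through the HotSpot diagnostic MBean; a small
sketch (the output path is up to you):

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void dump(String outputFile) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // live = true dumps only reachable objects, like jmap -dump:live,...
        bean.dumpHeap(outputFile, true);
    }
}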


> > MemTotal:8061448 kB
> > MemFree: 5399052 kB

This means that your server has 5.3 GB free memory.
Do you configure the amount of memory assigned to tomcat in any way?
If you have 5.3 GB of free memory while tomcat starves, you misconfigured that.
(Misconfiguration might as well include not configuring min/max heap sizes at 
all.)


Regards,
  Steffen






Re: Memory Leak

2016-06-28 Thread Felix Schumacher


On 29 June 2016 02:26:57 MESZ, Leo Donahue <donahu...@gmail.com> wrote:
>On Jun 28, 2016 4:57 PM, "Roman Gelfand" <rgelfa...@gmail.com> wrote:
>>
>> I am running a middleware application in .. tomcat...
>
>Ok.  This is something you wrote and deployed or it is a third party
>war
>file?
>
>>
>> catalina.out.prob:SEVERE: The web application [] appears to have
>started a
>> thread named [cluster-ClusterId{value='5745ebcecdb2e06579174645',
>> description='null'}-devnymongodb01.meridiancapital.com:27017] but has
>> failed to stop it. This is very likely to create a memory leak.
>>
>
>Basically that says either you intentionally created a thread local
>variable that you did not close, or the third party war file did.

To be pedantic, the warning is about a thread not being closed.

Regards, 
Felix 

>
>If not you then ask your vendor to fix their app.
>
>>
>> --
>> Thanks,
>> R. Gelfand


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Memory Leak

2016-06-28 Thread Roman Gelfand
It is a third-party REST server named espresso.  After looking further into the
memory leak message, I realized this is a thread that writes to mongodb.
I had also found a couple of posts relating to leaks in mongodb jdbc drivers.
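For what it's worth, if the Mongo client were under the application's control
(here it is the vendor's), the fix would be to close it when the context is
destroyed so its cluster monitor threads go away. A rough sketch with the
legacy Java driver (host and port taken from the warning, everything else
invented):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import com.mongodb.MongoClient;

@WebListener
public class MongoLifecycle implements ServletContextListener {

    private MongoClient client;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        client = new MongoClient("devnymongodb01.meridiancapital.com", 27017);
        sce.getServletContext().setAttribute("mongo", client);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // closing the client stops the cluster-ClusterId{...} monitor thread
        // named in the warning
        if (client != null) {
            client.close();
        }
    }
}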
On Jun 28, 2016 8:27 PM, "Leo Donahue" <donahu...@gmail.com> wrote:

> On Jun 28, 2016 4:57 PM, "Roman Gelfand" <rgelfa...@gmail.com> wrote:
> >
> > I am running a middleware application in .. tomcat...
>
> Ok.  This is something you wrote and deployed or it is a third party war
> file?
>
> >
> > catalina.out.prob:SEVERE: The web application [] appears to have started
> a
> > thread named [cluster-ClusterId{value='5745ebcecdb2e06579174645',
> > description='null'}-devnymongodb01.meridiancapital.com:27017] but has
> > failed to stop it. This is very likely to create a memory leak.
> >
>
> Basically that says either you intentionally created a thread local
> variable that you did not close, or the third party war file did.
>
> If not you then ask your vendor to fix their app.
>
> >
> > --
> > Thanks,
> > R. Gelfand
>


Re: Memory Leak

2016-06-28 Thread Leo Donahue
On Jun 28, 2016 4:57 PM, "Roman Gelfand" <rgelfa...@gmail.com> wrote:
>
> I am running a middleware application in .. tomcat...

Ok.  This is something you wrote and deployed or it is a third party war
file?

>
> catalina.out.prob:SEVERE: The web application [] appears to have started a
> thread named [cluster-ClusterId{value='5745ebcecdb2e06579174645',
> description='null'}-devnymongodb01.meridiancapital.com:27017] but has
> failed to stop it. This is very likely to create a memory leak.
>

Basically that says either you intentionally created a thread local
variable that you did not close, or the third party war file did.

If not you then ask your vendor to fix their app.

>
> --
> Thanks,
> R. Gelfand


Re: Memory Leak

2016-06-28 Thread David Kerber

On 6/28/2016 5:57 PM, Roman Gelfand wrote:

I am running a middleware application in the tomcat environment described
below.  After rebooting the server, the memory consumption is a couple of
gigs.  A couple of weeks later, I get a message that I am out of memory.
Moreover, I need to bounce the whole server to start fresh.

Here is the log.  I am not quite sure how to go about troubleshooting it.  Any
help is greatly appreciated.


The application has a memory leak.  You need to get it fixed.





catalina.out.prob:SEVERE: The web application [] appears to have started a
thread named [cluster-ClusterId{value='5745ebcecdb2e06579174645',
description='null'}-devnymongodb01.meridiancapital.com:27017] but has
failed to stop it. This is very likely to create a memory leak.


catalina.out.prob:SEVERE: Servlet.service() for servlet [API REST Handler]
in context with path [] threw exception [java.lang.OutOfMemoryError: unable
to create new native thread] with root cause
catalina.out.prob:java.lang.OutOfMemoryError: unable to create new native
thread





Here is my tomcat environment...

Server version: Apache Tomcat/7.0.69
Server built:   Apr 11 2016 07:57:09 UTC
Server number:  7.0.69.0
OS Name:Linux
OS Version: 2.6.32-573.12.1.el6.x86_64
Architecture:   amd64
JVM Version:1.8.0_91-b14
JVM Vendor: Oracle Corporation


uname -a

Linux  2.6.32-573.12.1.el6.x86_64 #1 SMP Tue Dec 15 21:19:08 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux


Mem info

MemTotal:8061448 kB
MemFree: 5399052 kB
Buffers:  150360 kB
Cached:   388604 kB
SwapCached:0 kB
Active:  2290720 kB
Inactive: 197764 kB
Active(anon):1949532 kB
Inactive(anon):  160 kB
Active(file): 341188 kB
Inactive(file):   197604 kB
Unevictable:   0 kB
Mlocked:   0 kB
SwapTotal:   4128764 kB
SwapFree:4128764 kB
Dirty:40 kB
Writeback: 0 kB
AnonPages:   1949572 kB
Mapped:35900 kB
Shmem:   176 kB
Slab:  87844 kB
SReclaimable:  27304 kB
SUnreclaim:60540 kB
KernelStack:5504 kB
PageTables: 9032 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit: 8159488 kB
Committed_AS:3091324 kB
VmallocTotal:   34359738367 kB
VmallocUsed:  158244 kB
VmallocChunk:   34359576456 kB
HardwareCorrupted: 0 kB
AnonHugePages:   1767424 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k:   10240 kB
DirectMap2M: 8378368 kB


CPU info

processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 44
model name  : Intel(R) Xeon(R) CPU   E5649  @ 2.53GHz
stepping: 2
microcode   : 29
cpu MHz : 2533.423
cache size  : 12288 KB
physical id : 0
siblings: 2
core id : 0
cpu cores   : 2
apicid  : 0
initial apicid  : 0
fpu : yes
fpu_exception   : yes
cpuid level : 11
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm
constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc
aperfmperf unfair_spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt
aes hypervisor lahf_lm ida arat epb dts
bogomips: 5066.84
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
cpu family  : 6
model   : 44
model name  : Intel(R) Xeon(R) CPU   E5649  @ 2.53GHz
stepping: 2
microcode   : 29
cpu MHz : 2533.423
cache size  : 12288 KB
physical id : 0
siblings: 2
core id : 1
cpu cores   : 2
apicid  : 1
initial apicid  : 1
fpu : yes
fpu_exception   : yes
cpuid level : 11
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm
constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc
aperfmperf unfair_spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt
aes hypervisor lahf_lm ida arat epb dts
bogomips: 5066.84
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:





-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Memory Leak

2016-06-28 Thread Roman Gelfand
I am running a middleware application in the tomcat environment described
below.  After rebooting the server, the memory consumption is a couple of
gigs.  A couple of weeks later, I get a message that I am out of memory.
Moreover, I need to bounce the whole server to start fresh.

Here is the log.  I am not quite sure how to go about troubleshooting it.  Any
help is greatly appreciated.


catalina.out.prob:SEVERE: The web application [] appears to have started a
thread named [cluster-ClusterId{value='5745ebcecdb2e06579174645',
description='null'}-devnymongodb01.meridiancapital.com:27017] but has
failed to stop it. This is very likely to create a memory leak.


catalina.out.prob:SEVERE: Servlet.service() for servlet [API REST Handler]
in context with path [] threw exception [java.lang.OutOfMemoryError: unable
to create new native thread] with root cause
catalina.out.prob:java.lang.OutOfMemoryError: unable to create new native
thread





Here is my tomcat environment...

Server version: Apache Tomcat/7.0.69
Server built:   Apr 11 2016 07:57:09 UTC
Server number:  7.0.69.0
OS Name:Linux
OS Version: 2.6.32-573.12.1.el6.x86_64
Architecture:   amd64
JVM Version:1.8.0_91-b14
JVM Vendor: Oracle Corporation


uname -a

Linux  2.6.32-573.12.1.el6.x86_64 #1 SMP Tue Dec 15 21:19:08 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux


Mem info

MemTotal:8061448 kB
MemFree: 5399052 kB
Buffers:  150360 kB
Cached:   388604 kB
SwapCached:0 kB
Active:  2290720 kB
Inactive: 197764 kB
Active(anon):1949532 kB
Inactive(anon):  160 kB
Active(file): 341188 kB
Inactive(file):   197604 kB
Unevictable:   0 kB
Mlocked:   0 kB
SwapTotal:   4128764 kB
SwapFree:4128764 kB
Dirty:40 kB
Writeback: 0 kB
AnonPages:   1949572 kB
Mapped:35900 kB
Shmem:   176 kB
Slab:  87844 kB
SReclaimable:  27304 kB
SUnreclaim:60540 kB
KernelStack:5504 kB
PageTables: 9032 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit: 8159488 kB
Committed_AS:3091324 kB
VmallocTotal:   34359738367 kB
VmallocUsed:  158244 kB
VmallocChunk:   34359576456 kB
HardwareCorrupted: 0 kB
AnonHugePages:   1767424 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k:   10240 kB
DirectMap2M: 8378368 kB


CPU info

processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 44
model name  : Intel(R) Xeon(R) CPU   E5649  @ 2.53GHz
stepping: 2
microcode   : 29
cpu MHz : 2533.423
cache size  : 12288 KB
physical id : 0
siblings: 2
core id : 0
cpu cores   : 2
apicid  : 0
initial apicid  : 0
fpu : yes
fpu_exception   : yes
cpuid level : 11
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm
constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc
aperfmperf unfair_spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt
aes hypervisor lahf_lm ida arat epb dts
bogomips: 5066.84
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
cpu family  : 6
model   : 44
model name  : Intel(R) Xeon(R) CPU   E5649  @ 2.53GHz
stepping: 2
microcode   : 29
cpu MHz : 2533.423
cache size  : 12288 KB
physical id : 0
siblings: 2
core id : 1
cpu cores   : 2
apicid  : 1
initial apicid  : 1
fpu : yes
fpu_exception   : yes
cpuid level : 11
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm
constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc
aperfmperf unfair_spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt
aes hypervisor lahf_lm ida arat epb dts
bogomips: 5066.84
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:


-- 
Thanks,
R. Gelfand


Aw: Re: memory-leak in org.apache.jasper.compiler.Mark|Node$TemplateText

2016-06-06 Thread devzero
> Then you haven't correctly set development to false or your measurement
> of used memory is not correct

you were right, i set development false in the wrong section of web.xml.

i put it under the org.apache.jasper.servlet.JspServlet servlet definition now, and
things behave like expected.
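for reference, the block in conf/web.xml that has to carry the flag is the jsp
servlet itself, roughly like this (the stock init-params such as fork and
xpoweredBy are left out here):

    <servlet>
        <servlet-name>jsp</servlet-name>
        <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
        <init-param>
            <param-name>development</param-name>
            <param-value>false</param-value>
        </init-param>
        <load-on-startup>3</load-on-startup>
    </servlet>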

thanks for help and sorry for the noise !


regards
roland

ps:
and indeed this should not be named a memory leak, as a memory leak means 
indefinite resource growth... 


> Sent: Sunday, 5 June 2016 at 20:14
> From: "Mark Thomas" <ma...@apache.org>
> To: "Tomcat Users List" <users@tomcat.apache.org>
> Subject: Re: memory-leak in org.apache.jasper.compiler.Mark|Node$TemplateText
>
> On 04/06/2016 09:22, devz...@web.de wrote:
> > thanks for help - but, are you really sure?
> 
> Yes.
> 
> > if i
> > 
> > - set development=false
> > - delete everything within work subdir to force recompile of every jsp
> > 
> > then for me, the initial crawl makes jvm consume the same amount of memory 
> > regardless development true or false - and thats what i'm wondering about.
> 
> Then you haven't correctly set development to false or your measurement
> of used memory is not correct.
> 
> > indeed, with development=false, subsequent jsp access does not consume 
> > memory as there is no recompile.
> > 
> > this is why this param helps workarounding the problem,
> 
> No, it isn't.
> 
> > but it does not make the memory consumption of the initial compile run go 
> > away and i`m curious, why the initial compile run permanently leaves 
> > millions of referenced objects in memory.
> 
> You will only see instances of Mark and TemplateText if you have not
> correctly set development to false.
> 
> Even if you set development to false, your test will still trigger
> significant memory consumption because it will trigger the loading of
> every single Servlet the JSPs have been converted into along with any
> supporting objects.
> 
> Mark
> 
> > 
> > is this to be expected?
> > 
> > regards
> > roland
> > 
> > 
> On 03.06.2016, 21:43, Mark Thomas <ma...@apache.org> wrote:
> > On 03/06/2016 17:14, devz...@web.de wrote:
> > 
> > You are NOT observing a memory leak.
> > 
> > 
> > 
> >> Regardless we have set "development" to true or false in
> >> conf/web.xml, , whenever i recursively crawl our website with wget
> >> (cleaning work dir before to make sure each page is being compiled
> >> again), i can easily trigger an out-of-memory condition in the JVM.
> >> When development=false, then i cannot trigger it when i did
> >> re-compile every jsp in several steps (with restarting tomcat).
> > 
> > You are not correctly configuring development to false. I have confirmed
> > the expected behaviour with a profiler when development is set to false.
> > 
> >> With VisualVM (part of jdk) i found that after wget -r crawl, there
> >> are 13 million instances of the following classes:
> >>
> >> org.apache.jasper.compiler.Mark
> >> org.apache.jasper.compiler.Node$TemplateText
> > 
> > That will only happen if development is true.
> > 
> >> My understanding from a compile run is, that it`s something which is
> >> done once and then it`s ready and done and nothing is left in
> >> memory.
> > 
> > That is not the case when development is false. The results of the
> > parsing are retained in memory to aid the generation of useful error
> > reports.
> > 
> >> We have some ten-thousands JSPs, i`m not sure how many being crawled
> >> with wget, but i don`t get the point why i see ressources being
> >> allocated from org.apache.jasper.compiler and not being freed after
> >> compile run.
> >>
> >> Does anybody have a clue ? Is this to be expected, and if yes - why
> >> ?
> > 
> > Mark
> > 
> > -
> > To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> > For additional commands, e-mail: users-h...@tomcat.apache.org
> > 
> > 
> > -
> > To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> > For additional commands, e-mail: users-h...@tomcat.apache.org
> > 
> 
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 
> 

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: memory-leak in org.apache.jasper.compiler.Mark|Node$TemplateText

2016-06-05 Thread Mark Thomas
On 04/06/2016 09:22, devz...@web.de wrote:
> thanks for help - but, are you really sure?

Yes.

> if i
> 
> - set development=false
> - delete everything within work subdir to force recompile of every jsp
> 
> then for me, the initial crawl makes jvm consume the same amount of memory 
> regardless development true or false - and thats what i'm wondering about.

Then you haven't correctly set development to false or your measurement
of used memory is not correct.

> indeed, with development=false, subsequent jsp access does not consume memory 
> as there is no recompile.
> 
> this is why this param helps workarounding the problem,

No, it isn't.

> but it does not make the memory consumption of the initial compile run go 
> away and i`m curious, why the initial compile run permanently leaves millions 
> of referenced objects in memory.

You will only see instances of Mark and TemplateText if you have not
correctly set development to false.

Even if you set development to false, your test will still trigger
significant memory consumption because it will trigger the loading of
every single Servlet the JSPs have been converted into along with any
supporting objects.

Mark

> 
> is this to be expected?
> 
> regards
> roland
> 
> 
> On 03.06.2016, 21:43, Mark Thomas <ma...@apache.org> wrote:
> On 03/06/2016 17:14, devz...@web.de wrote:
> 
> You are NOT observing a memory leak.
> 
> 
> 
>> Regardless we have set "development" to true or false in
>> conf/web.xml, , whenever i recursively crawl our website with wget
>> (cleaning work dir before to make sure each page is being compiled
>> again), i can easily trigger an out-of-memory condition in the JVM.
>> When development=false, then i cannot trigger it when i did
>> re-compile every jsp in several steps (with restarting tomcat).
> 
> You are not correctly configuring development to false. I have confirmed
> the expected behaviour with a profiler when development is set to false.
> 
>> With VisualVM (part of jdk) i found that after wget -r crawl, there
>> are 13 million instances of the following classes:
>>
>> org.apache.jasper.compiler.Mark
>> org.apache.jasper.compiler.Node$TemplateText
> 
> That will only happen if development is true.
> 
>> My understanding from a compile run is, that it`s something which is
>> done once and then it`s ready and done and nothing is left in
>> memory.
> 
> That is not the case when development is false. The results of the
> parsing are retained in memory to aid the generation of useful error
> reports.
> 
>> We have some ten-thousands JSPs, i`m not sure how many being crawled
>> with wget, but i don`t get the point why i see ressources being
>> allocated from org.apache.jasper.compiler and not being freed after
>> compile run.
>>
>> Does anybody have a clue ? Is this to be expected, and if yes - why
>> ?
> 
> Mark
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: memory-leak in org.apache.jasper.compiler.Mark|Node$TemplateText

2016-06-04 Thread devzero
thanks for help - but, are you really sure?

if i

- set development=false
- delete everything within work subdir to force recompile of every jsp

then for me, the initial crawl makes jvm consume the same amount of memory 
regardless development true or false - and thats what i'm wondering about. 

indeed, with development=false, subsequent jsp access does not consume memory 
as there is no recompile.

this is why this param helps workarounding the problem, but it does not make 
the memory consumption of the initial compile run go away and i`m curious, why 
the initial compile run permanently leaves millions of referenced objects in 
memory.

is this to be expected?

regards
roland


On 03.06.2016, 21:43, Mark Thomas <ma...@apache.org> wrote:
On 03/06/2016 17:14, devz...@web.de wrote:

You are NOT observing a memory leak.



> Regardless we have set "development" to true or false in
> conf/web.xml, , whenever i recursively crawl our website with wget
> (cleaning work dir before to make sure each page is being compiled
> again), i can easily trigger an out-of-memory condition in the JVM.
> When development=false, then i cannot trigger it when i did
> re-compile every jsp in several steps (with restarting tomcat).

You are not correctly configuring development to false. I have confirmed
the expected behaviour with a profiler when development is set to false.

> With VisualVM (part of jdk) i found that after wget -r crawl, there
> are 13 million instances of the following classes:
>
> org.apache.jasper.compiler.Mark
> org.apache.jasper.compiler.Node$TemplateText

That will only happen if development is true.

> My understanding from a compile run is, that it`s something which is
> done once and then it`s ready and done and nothing is left in
> memory.

That is not the case when development is false. The results of the
parsing are retaining in memory to aid the generation of useful error
reports.

> We have some ten-thousands JSPs, i`m not sure how many being crawled
> with wget, but i don`t get the point why i see ressources being
> allocated from org.apache.jasper.compiler and not being freed after
> compile run.
>
> Does anybody have a clue ? Is this to be expected, and if yes - why
> ?

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: memory-leak in org.apache.jasper.compiler.Mark|Node$TemplateText

2016-06-03 Thread Mark Thomas
On 03/06/2016 17:14, devz...@web.de wrote:

You are NOT observing a memory leak.



> Regardless we have set "development" to true or false in
> conf/web.xml, , whenever i recursively crawl our website with wget
> (cleaning work dir before to make sure each page is being compiled
> again), i can easily trigger an out-of-memory condition in the JVM.
> When development=false, then i cannot trigger it when i did
> re-compile every jsp in several steps (with restarting tomcat).

You are not correctly configuring development to false. I have confirmed
the expected behaviour with a profiler when development is  set to false.
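
For reference, a minimal sketch of how development is typically set to false, assuming
the standard Jasper JspServlet definition that ships in Tomcat's conf/web.xml:

    <servlet>
        <servlet-name>jsp</servlet-name>
        <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
        <init-param>
            <param-name>development</param-name>
            <param-value>false</param-value>
        </init-param>
    </servlet>

The setting only takes effect on the JspServlet definition Tomcat actually uses to
compile the JSPs.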

> With VisualVM (part of jdk) i found that after wget -r crawl, there
> are 13 million instances of the following classes:
> 
> org.apache.jasper.compiler.Mark 
> org.apache.jasper.compiler.Node$TemplateText

That will only happen if development is true.

> My understanding from a compile run is, that it`s something which is
> done once and then it`s ready and done and nothing is left in
> memory.

That is not the case when development is false. The results of the
parsing are retaining in memory to aid the generation of useful error
reports.

> We have some ten-thousands JSPs, i`m not sure how many being crawled
> with wget, but i don`t get the point why i see ressources being
> allocated from org.apache.jasper.compiler and not being freed after
> compile run.
> 
> Does anybody have a clue ? Is this to be expected, and if yes - why
> ?

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



memory-leak in org.apache.jasper.compiler.Mark|Node$TemplateText

2016-06-03 Thread devzero
hi, 

We have had a problem with our website for a while.

I tracked it down to a memory/resource issue caused by the memory requirements of
compiling.

We can throw memory at the problem to circumvent it, but it looks weird to me.

Regardless of whether we have set "development" to true or false in conf/web.xml,
whenever I recursively crawl our website with wget (cleaning the work dir beforehand to
make sure each page is compiled again), I can easily trigger an out-of-memory
condition in the JVM. With development=false I could not trigger it once I had
re-compiled every JSP in several steps (restarting Tomcat in between).

With VisualVM (part of the JDK) I found that after the wget -r crawl there are 13
million instances of the following classes:

org.apache.jasper.compiler.Mark
org.apache.jasper.compiler.Node$TemplateText

My understanding of a compile run is that it is something which is done once; then it
is ready and done and nothing is left in memory.

We have some tens of thousands of JSPs - I'm not sure how many are being crawled by
wget - but I don't get why I see resources being allocated by
org.apache.jasper.compiler and not being freed after the compile run.

Does anybody have a clue? Is this to be expected, and if yes - why?

Maybe the following bugreport is interesting in this context:

https://bz.apache.org/bugzilla/show_bug.cgi?id=44383

regards
Roland

ps:
Tomcat 7.0.42 and 8.0.32

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: memory leak in Tomcat 8.0.9

2016-05-20 Thread Mark Thomas
On 20/05/2016 20:07, Sanka, Ambica wrote:
> Hi Mark,
> Thanks for your response. Doesn't tomcat stop take care of shutting down all 
> the threads?

No. Read the Javadoc for Thread.stop() to find out why.

> Do we need to handle explicity this case?

Always.

Mark


> Thanks
> Ambica.
> 
> -Original Message-
> From: Mark Thomas [mailto:ma...@apache.org] 
> Sent: Friday, May 20, 2016 9:34 AM
> To: Tomcat Users List <users@tomcat.apache.org>
> Subject: Re: memory leak in Tomcat 8.0.9
> 
> First of all, the subject is wrong. There is no memory leak in Tomcat.
> There is a memory leak in the application you are running on Tomcat.
> 
> On 20/05/2016 14:21, Sanka, Ambica wrote:
>> 2016-05-19 14:03:31,161 [localhost-startStop-2] WARN  
>> org.apache.catalina.loader.WebappClassLoader- The web application 
>> [/fmDirectoryService] appears to have started a thread named [Thread-6] but 
>> has failed to stop it. This is very likely to create a memory leak. Stack 
>> trace of thread:
>>
>> java.lang.Thread.sleep(Native Method)
>>
>> net.atpco.cluster.support.BaseLocator$AdminTask.run(BaseLocator.java:1
>> 41)
> 
> What isn't clear in the message above?
> 
> Based on the Java package name and your e-mail address 
> net.atpco.cluster.support.BaseLocator is code that you control.
> BaseLocator starts a thread so it needs to stop that thread when the web 
> application shuts down. ServletContextListener.contextDestroyed() is usually 
> where such clean-up is performed.
> 
> Mark
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: memory leak in Tomcat 8.0.9

2016-05-20 Thread Sanka, Ambica
Hi Mark,
Thanks for your response. Doesn't a Tomcat stop take care of shutting down all
the threads? Do we need to handle this case explicitly?
Thanks
Ambica.

-Original Message-
From: Mark Thomas [mailto:ma...@apache.org] 
Sent: Friday, May 20, 2016 9:34 AM
To: Tomcat Users List <users@tomcat.apache.org>
Subject: Re: memory leak in Tomcat 8.0.9

First of all, the subject is wrong. There is no memory leak in Tomcat.
There is a memory leak in the application you are running on Tomcat.

On 20/05/2016 14:21, Sanka, Ambica wrote:
> 2016-05-19 14:03:31,161 [localhost-startStop-2] WARN  
> org.apache.catalina.loader.WebappClassLoader- The web application 
> [/fmDirectoryService] appears to have started a thread named [Thread-6] but 
> has failed to stop it. This is very likely to create a memory leak. Stack 
> trace of thread:
> 
> java.lang.Thread.sleep(Native Method)
> 
> net.atpco.cluster.support.BaseLocator$AdminTask.run(BaseLocator.java:1
> 41)

What isn't clear in the message above?

Based on the Java package name and your e-mail address 
net.atpco.cluster.support.BaseLocator is code that you control.
BaseLocator starts a thread so it needs to stop that thread when the web 
application shuts down. ServletContextListener.contextDestroyed() is usually 
where such clean-up is performed.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: memory leak in Tomcat 8.0.9

2016-05-20 Thread Mark Thomas
First of all, the subject is wrong. There is no memory leak in Tomcat.
There is a memory leak in the application you are running on Tomcat.

On 20/05/2016 14:21, Sanka, Ambica wrote:
> 2016-05-19 14:03:31,161 [localhost-startStop-2] WARN  
> org.apache.catalina.loader.WebappClassLoader- The web application 
> [/fmDirectoryService] appears to have started a thread named [Thread-6] but 
> has failed to stop it. This is very likely to create a memory leak. Stack 
> trace of thread:
> 
> java.lang.Thread.sleep(Native Method)
> 
> net.atpco.cluster.support.BaseLocator$AdminTask.run(BaseLocator.java:141)

What isn't clear in the message above?

Based on the Java package name and your e-mail address
net.atpco.cluster.support.BaseLocator is code that you control.
BaseLocator starts a thread so it needs to stop that thread when the web
application shuts down. ServletContextListener.contextDestroyed() is
usually where such clean-up is performed.
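
A minimal sketch of that pattern (the real BaseLocator/AdminTask code is not shown on
this list, so the thread body below is only a stand-in):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class WorkerLifecycleListener implements ServletContextListener {

    // Stand-in for whatever background thread the application starts.
    private Thread worker;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(60000); // periodic work would go here
                } catch (InterruptedException e) {
                    return; // exit promptly when interrupted at shutdown
                }
            }
        }, "app-admin-task");
        worker.setDaemon(true);
        worker.start();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Stop the thread so it does not pin the webapp class loader in memory.
        worker.interrupt();
        try {
            worker.join(5000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The listener is registered in web.xml, or with @WebListener on Servlet 3.0 and later.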

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



memory leak in Tomcat 8.0.9

2016-05-20 Thread Sanka, Ambica
Support Team,

We have been using Tomcat 8.0.9 for our applications. We noticed the memory leak
error below and our Tomcat could not be stopped; we had to kill the process manually.
I was reading articles on the internet and this was addressed after Tomcat 6, but we
found the error in higher versions. We are not sure where to fix this. Below is the
error we are getting:

2016-05-19 14:03:31,161 [localhost-startStop-2] WARN  org.apache.catalina.loader.WebappClassLoader- The web application [/fmDirectoryService] appears to have started a thread named [Thread-6] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:

java.lang.Thread.sleep(Native Method)

net.atpco.cluster.support.BaseLocator$AdminTask.run(BaseLocator.java:141)

2016-05-19 14:03:31,197 [localhost-startStop-2] INFO  org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/fmbootstrap]- Destroying Spring FrameworkServlet 'dispatcherServlet'

2016-05-19 14:03:31,210 [localhost-startStop-2] INFO  org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/fmbootstrap]- Closing Spring root WebApplicationContext

2016-05-19 14:03:31,210 [localhost-startStop-2] INFO  org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext- Closing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@5e119034: startup date [Thu May 19 08:17:39 EDT 2016]; root of context hierarchy

May 19, 2016 2:03:31 PM com.mongodb.util.management.jmx.JMXMBeanServer unregisterMBean

WARNING: Unable to register MBean org.mongodb.driver:type=ConnectionPool,clusterId=1,host=localhost,port=27017

javax.management.InstanceNotFoundException: org.mongodb.driver:type=ConnectionPool,clusterId=1,host=localhost,port=27017
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
        at com.mongodb.util.management.jmx.JMXMBeanServer.unregisterMBean(JMXMBeanServer.java:52)
        at com.mongodb.JMXConnectionPoolListener.connectionPoolClosed(JMXConnectionPoolListener.java:68)
        at com.mongodb.PooledConnectionProvider.close(PooledConnectionProvider.java:107)



Any kind of help is appreciated.

Thanks

Ambica.



Re: Tracking down memory leak

2015-10-21 Thread David kerber

On 10/21/2015 1:08 PM, Christopher Schultz wrote:

Rallavagu,

On 10/20/15 9:46 AM, Rallavagu wrote:

Please take a look at Memory Analyzer tool
(http://www.eclipse.org/mat/). Run the app and take the heap dump while
app is running and use the tool to analyze it. You could use VisualVM
with plugins to get instrumentation or you could use hprof
(http://docs.oracle.com/javase/7/docs/technotes/samples/hprof.html)


+1

If you have a huge number of a certain type of object, that can help you
understand what is going on. I use YourKit (I get a free license as an
ASF committer) and it can do things like find memory-consuming object
trees, like maybe a cache that is taking up 3GiB when you thought it
would maybe stop at 100MiB.


Thanks for the suggestion, guys!  I used visualvm and was able to get it 
straightened out.



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tracking down memory leak

2015-10-21 Thread Christopher Schultz
Rallavagu,

On 10/20/15 9:46 AM, Rallavagu wrote:
> Please take a look at Memory Analyzer tool
> (http://www.eclipse.org/mat/). Run the app and take the heap dump while
> app is running and use the tool to analyze it. You could use VisualVM
> with plugins to get instrumentation or you could use hprof
> (http://docs.oracle.com/javase/7/docs/technotes/samples/hprof.html)

+1

If you have a huge number of a certain type of object, that can help you
understand what is going on. I use YourKit (I get a free license as an
ASF committer) and it can do things like find memory-consuming object
trees, like maybe a cache that is taking up 3GiB when you thought it
would maybe stop at 100MiB.

-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Tracking down memory leak

2015-10-20 Thread David kerber
I'm trying to track down the source of a memory leak in one of my 
applications.  I have examined the code but have been unable to fix it, 
so am looking for some way of instrumenting my app while running on the 
server.  What is the easiest/best (I realize those two criteria may not 
give the same answer!) way?


Running TC 8.0.20-something, JRE 8.0.something recent, on windows Server 
2012 R2.


Thanks for any suggestions!
Dave

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tracking down memory leak

2015-10-20 Thread Rallavagu
Please take a look at Memory Analyzer tool 
(http://www.eclipse.org/mat/). Run the app and take the heap dump while 
app is running and use the tool to analyze it. You could use VisualVM 
with plugins to get instrumentation or you could use hprof 
(http://docs.oracle.com/javase/7/docs/technotes/samples/hprof.html)
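
If attaching VisualVM to the running server is inconvenient, a heap dump can also be
triggered from inside the JVM through the HotSpot diagnostic MXBean and then opened in
MAT or VisualVM; a small sketch (the file path is just an example):

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {

    public static void dump(String path) throws Exception {
        // Look up the HotSpot diagnostic MXBean on the platform MBean server.
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // live=true dumps only reachable objects, which keeps the file smaller.
        bean.dumpHeap(path, true);
    }

    public static void main(String[] args) throws Exception {
        dump("heap.hprof");
    }
}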


HTH

On 10/20/15 6:20 AM, David kerber wrote:

I'm trying to track down the source of a memory leak in one of my
applications.  I have examined the code but have been unable to fix it,
so am looking for some way of instrumenting my app while running on the
server.  What is the easiest/best (I realize those two criteria may not
give the same answer!) way?

Running TC 8.0.20-something, JRE 8.0.something recent, on windows Server
2012 R2.

Thanks for any suggestions!
Dave

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Slow memory leak in mod_jk on Windows

2014-07-16 Thread Wang, Andy
I've narrowed this down to JkOptions +FlushPackets in combination with certain
workers.properties configurations. I'm trying to pinpoint which combination.

Opened the following bug:
https://issues.apache.org/bugzilla/show_bug.cgi?id=56733

Andy


On Mon, 2014-07-07 at 19:58 +, Wang, Andy wrote:


On Mon, 2014-07-07 at 15:51 +, Wang, Andy wrote:
 We have a customer that's seeing a very slow memory leak under certain
 circumstances that we haven't yet been able to pinpoint.  I can
 reproduce it, but it requires a very particular method of downloading
 files that I don't quite understand yet. (not entirely sure how this
 would impact mod_jk either).  Our customer is still on an older mod_jk
 1.2.37 but I'm able to see the behavior on 1.2.40.

 They had Microsoft analyze a memory dump and Microsoft came to the
 analysis that mod_jk was leaking 8k at a time and this is the stack of
 each allocation:
 FunctionDestination
 libapr_1!apr_palloc+212 msvcrt!malloc
 libaprutil_1!apr_brigade_create+11  libapr_1!apr_palloc
 libhttpd!ap_rflush+19   libaprutil_1!apr_brigade_create
 mod_jk+3183alibhttpd!ap_rflush
 mod_jk+9729
 mod_jk+f3bd
 kernel32!TlsSetValueStub
 ntdll!_except_handler4
 msvcrt!_except_handler4
 ntdll!_except_handler4
 ntdll!FinalExceptionHandler
 msvcrt!_threadstartex

 I'm far far from a Windows developer :( so I'm slowly getting a system
 configured to do the debugdiag analysis that Microsoft did with the pdb
 files to get the actual symbols.  This will take me some time so I
 thought I'd mail here to see of anyone has seen anything similar or if
 there's any thoughts on what would be slowly leaking the 8k through
 mod_jk.

 If not, I'll try to get the real stack shortly once I get debugdiag
 figured out.

 Thanks,
 Andy


Managed to reproduce this on httpd 2.2.27 and mod_jk 1.2.40

Here's the stack I'm pulling from Microsoft's debugdiag:

Function   Destination
libapr_1!allocator_alloc+d6   msvcr100!malloc
libapr_1!apr_palloc+d9   libapr_1!allocator_alloc
libaprutil_1!apr_brigade_create+11   libapr_1!apr_palloc
libhttpd!ap_rflush+19   libaprutil_1!apr_brigade_create
mod_jk!ws_flush+1a   libhttpd!ap_rflush
mod_jk!ajp_process_callback+586
mod_jk!ajp_get_reply+c4   mod_jk!ajp_process_callback
mod_jk!ajp_service+60a   mod_jk!ajp_get_reply
mod_jk!service+82f
mod_jk!jk_handler+6ea
libhttpd!ap_run_handler+25
libhttpd!ap_invoke_handler+a2   libhttpd!ap_run_handler
libhttpd!ap_process_request+3e   libhttpd!ap_invoke_handler
libhttpd!ap_process_http_connection+52   libhttpd!ap_process_request
libhttpd!ap_run_process_connection+25
libhttpd!ap_process_connection+33   libhttpd!ap_run_process_connection
libhttpd!worker_main+a7   libhttpd!ap_process_connection
msvcr100!_callthreadstartex+1b
msvcr100!_threadstartex+64
kernel32!BaseThreadInitThunk+e
ntdll!__RtlUserThreadStart+70
ntdll!_RtlUserThreadStart+1b   ntdll!__RtlUserThreadStart
msvcr100!_threadstartex

I have a relatively large file I'm downloading through a servlet
(unfortunately I didn't write the servlet and am not completely familiar
with what they're doing) and memory slowly slowly grows and isn't
released.  300mb file grows by a few megs each download.

I can't reproduce with the default tomcat file servlet so not sure what
the deal here is.

Going to poke some more but hoping for some ideas.

Thanks,
Andy





Slow memory leak in mod_jk on Windows

2014-07-07 Thread Wang, Andy
We have a customer that's seeing a very slow memory leak under certain
circumstances that we haven't yet been able to pinpoint.  I can
reproduce it, but it requires a very particular method of downloading
files that I don't quite understand yet. (not entirely sure how this
would impact mod_jk either).  Our customer is still on an older mod_jk
1.2.37 but I'm able to see the behavior on 1.2.40.  

They had Microsoft analyze a memory dump and Microsoft came to the
analysis that mod_jk was leaking 8k at a time and this is the stack of
each allocation:
FunctionDestination
libapr_1!apr_palloc+212 msvcrt!malloc
libaprutil_1!apr_brigade_create+11  libapr_1!apr_palloc
libhttpd!ap_rflush+19   libaprutil_1!apr_brigade_create
mod_jk+3183alibhttpd!ap_rflush
mod_jk+9729
mod_jk+f3bd
kernel32!TlsSetValueStub
ntdll!_except_handler4
msvcrt!_except_handler4
ntdll!_except_handler4
ntdll!FinalExceptionHandler
msvcrt!_threadstartex

I'm far far from a Windows developer :( so I'm slowly getting a system
configured to do the debugdiag analysis that Microsoft did with the pdb
files to get the actual symbols.  This will take me some time so I
thought I'd mail here to see if anyone has seen anything similar or if
there are any thoughts on what would be slowly leaking the 8k through
mod_jk.

If not, I'll try to get the real stack shortly once I get debugdiag
figured out.

Thanks,
Andy



Re: Slow memory leak in mod_jk on Windows

2014-07-07 Thread Wang, Andy
On Mon, 2014-07-07 at 15:51 +, Wang, Andy wrote:
 We have a customer that's seeing a very slow memory leak under certain
 circumstances that we haven't yet been able to pinpoint.  I can
 reproduce it, but it requires a very particular method of downloading
 files that I don't quite understand yet. (not entirely sure how this
 would impact mod_jk either).  Our customer is still on an older mod_jk
 1.2.37 but I'm able to see the behavior on 1.2.40.  
 
 They had Microsoft analyze a memory dump and Microsoft came to the
 analysis that mod_jk was leaking 8k at a time and this is the stack of
 each allocation:
 FunctionDestination
 libapr_1!apr_palloc+212 msvcrt!malloc
 libaprutil_1!apr_brigade_create+11  libapr_1!apr_palloc
 libhttpd!ap_rflush+19   libaprutil_1!apr_brigade_create
 mod_jk+3183alibhttpd!ap_rflush
 mod_jk+9729
 mod_jk+f3bd
 kernel32!TlsSetValueStub
 ntdll!_except_handler4
 msvcrt!_except_handler4
 ntdll!_except_handler4
 ntdll!FinalExceptionHandler
 msvcrt!_threadstartex
 
 I'm far far from a Windows developer :( so I'm slowly getting a system
 configured to do the debugdiag analysis that Microsoft did with the pdb
 files to get the actual symbols.  This will take me some time so I
 thought I'd mail here to see of anyone has seen anything similar or if
 there's any thoughts on what would be slowly leaking the 8k through
 mod_jk.
 
 If not, I'll try to get the real stack shortly once I get debugdiag
 figured out.
 
 Thanks,
 Andy
 

Managed to reproduce this on httpd 2.2.27 and mod_jk 1.2.40

Here's the stack I'm pulling from Microsoft's debugdiag:

Function   Destination 
libapr_1!allocator_alloc+d6   msvcr100!malloc 
libapr_1!apr_palloc+d9   libapr_1!allocator_alloc 
libaprutil_1!apr_brigade_create+11   libapr_1!apr_palloc 
libhttpd!ap_rflush+19   libaprutil_1!apr_brigade_create 
mod_jk!ws_flush+1a   libhttpd!ap_rflush 
mod_jk!ajp_process_callback+586
mod_jk!ajp_get_reply+c4   mod_jk!ajp_process_callback 
mod_jk!ajp_service+60a   mod_jk!ajp_get_reply 
mod_jk!service+82f
mod_jk!jk_handler+6ea
libhttpd!ap_run_handler+25
libhttpd!ap_invoke_handler+a2   libhttpd!ap_run_handler 
libhttpd!ap_process_request+3e   libhttpd!ap_invoke_handler 
libhttpd!ap_process_http_connection+52   libhttpd!ap_process_request 
libhttpd!ap_run_process_connection+25
libhttpd!ap_process_connection+33   libhttpd!ap_run_process_connection 
libhttpd!worker_main+a7   libhttpd!ap_process_connection 
msvcr100!_callthreadstartex+1b
msvcr100!_threadstartex+64
kernel32!BaseThreadInitThunk+e
ntdll!__RtlUserThreadStart+70
ntdll!_RtlUserThreadStart+1b   ntdll!__RtlUserThreadStart 
msvcr100!_threadstartex 

I have a relatively large file I'm downloading through a servlet
(unfortunately I didn't write the servlet and am not completely familiar
with what they're doing) and memory slowly grows and isn't
released. With a roughly 300 MB file, usage grows by a few MB on each download.

I can't reproduce with the default tomcat file servlet so not sure what
the deal here is.
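
The actual servlet isn't available here, but the kind of code that exercises the
ws_flush/ap_rflush path in that stack is a download loop that flushes after every
buffer, roughly like this sketch (the resource path is only an example):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ChunkedDownloadServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("application/octet-stream");
        OutputStream out = resp.getOutputStream();
        try (InputStream in = getServletContext().getResourceAsStream("/big.bin")) {
            if (in == null) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
                // An explicit flush per buffer pushes a flush down through the connector.
                out.flush();
            }
        }
    }
}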

Going to poke some more but hoping for some ideas.

Thanks,
Andy



Re: Tomcat classloader memory leak when an object is stored into session

2014-02-10 Thread Michal Botka
On 07/02/2014, Mark Thomas wrote:

 There is no leak.
...

Hello Mark,
thank you very much for the help and your great presentation. You were
absolutely right, there was no memory leak :-)
Obviously there was a different issue in my application causing the leak...
I'm sorry for spamming.
Best regards
Michal

P.S. Regarding the WebappClassLoader instances, I'm surprised that there
is quite often an instance with started=false remaining after garbage
collection is performed. However, this instance is collected later as
the used PermGen memory approaches the maximum.

2014-02-07 11:46 GMT+01:00 Mark Thomas ma...@apache.org:
 On 07/02/2014 06:38, Michal Botka wrote:
 Is there a way how to avoid this leak?

 There is no leak.

 I would like to develop an application which can be safely
 deployed/undeployed without restarting the server.

 That is very much under your control. I'd suggest reading this:
 http://people.apache.org/~markt/presentations/2010-08-05-Memory-Leaks-JavaOne-60mins.pdf

 as it highlights much of what can go wrong.


 OK, now I know that my application cannot store it's objects into
 session, but that is very strong requirement which the most of the
 applications don't meet.

 There is no such requirement. Storing objects in the session does not
 trigger a memory leak on web application reload.

 Thanks for help.

 2014-02-06 22:58 GMT+01:00 David Kerber dcker...@verizon.net:
 On 2/6/2014 3:13 PM, Michal Botka wrote:

 When an application stores an object into the session and then the
 application is reloaded using Tomcat Web Application Manager, the
 classloader cannot be garbage collected. As a result, the
 OutOfMemoryError: PermGen space error occurs after several reloads.


 This is true.  What is your question?

 No, this is not true.

 To illustrate the issue, you can find an example below.
 Thanks in advance :-)

 I've taken the provided test code and confirmed - with a profiler - that
 there is no memory leak.

 There is something else that is triggering your memory leak. Follow the
 steps in the presentation above to find out exactly what it is that is
 pinning the web application class loader in memory.

 Mark


 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat classloader memory leak when an object is stored into session

2014-02-07 Thread Mark Thomas
On 07/02/2014 06:38, Michal Botka wrote:
 Is there a way how to avoid this leak?

There is no leak.

 I would like to develop an application which can be safely
 deployed/undeployed without restarting the server.

That is very much under your control. I'd suggest reading this:
http://people.apache.org/~markt/presentations/2010-08-05-Memory-Leaks-JavaOne-60mins.pdf

as it highlights much of what can go wrong.


 OK, now I know that my application cannot store it's objects into
 session, but that is very strong requirement which the most of the
 applications don't meet.

There is no such requirement. Storing objects in the session does not
trigger a memory leak on web application reload.

 Thanks for help.
 
 2014-02-06 22:58 GMT+01:00 David Kerber dcker...@verizon.net:
 On 2/6/2014 3:13 PM, Michal Botka wrote:

 When an application stores an object into the session and then the
 application is reloaded using Tomcat Web Application Manager, the
 classloader cannot be garbage collected. As a result, the
 OutOfMemoryError: PermGen space error occurs after several reloads.


 This is true.  What is your question?

No, this is not true.

 To illustrate the issue, you can find an example below.
 Thanks in advance :-)

I've taken the provided test code and confirmed - with a profiler - that
there is no memory leak.

There is something else that is triggering your memory leak. Follow the
steps in the presentation above to find out exactly what it is that is
pinning the web application class loader in memory.

Mark


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Tomcat classloader memory leak when an object is stored into session

2014-02-06 Thread Michal Botka
When an application stores an object into the session and then the
application is reloaded using Tomcat Web Application Manager, the
classloader cannot be garbage collected. As a result, the
OutOfMemoryError: PermGen space error occurs after several reloads.

To illustrate the issue, you can find an example below.
Thanks in advance :-)


1. The EvilClass class whose instances are stored into the session:

public class EvilClass implements Serializable {

    // Eat 100 MB from the JVM heap to see that the class is not garbage collected
    protected static final byte[] MEM = new byte[100 << 20];

    private String value;

    public String getValue() {
        return value;
    }

    public void setValue(String value) {
        this.value = value;
    }

}


2. Servlet which stores EvilClass instances into session

public class TestServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        EvilClass obj = new EvilClass();
        obj.setValue(req.getRequestURI());
        req.getSession().setAttribute("test", obj);
        getServletContext().log("Attribute stored to session " + obj);
    }

}


3. web.xml part which maps the servlet to an URL

<servlet>
    <servlet-name>TestServlet</servlet-name>
    <servlet-class>test.TestServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>TestServlet</servlet-name>
    <url-pattern>/*</url-pattern>
</servlet-mapping>


Steps to reproduce the issue:
1. Copy application WAR to the webapps directory.
2. Start Apache Tomcat.
3. Hit TestServlet.
4. Check Heap/PermGen size using Java VisualVM.
5. Reload the application thru Tomcat Web Application Manager.
6. Hit TestServlet again.
7. Perform GC and check Heap/PermGen size again.


Environment:
Apache Tomcat version: 7.0.50
OS: Windows 7 64
JVM: Java HotSpot(TM) 64-Bit Server VM (23.6-b04, mixed mode)
Java: version 1.7.0_10, vendor Oracle Corporation


Re: Tomcat classloader memory leak when an object is stored into session

2014-02-06 Thread David Kerber

On 2/6/2014 3:13 PM, Michal Botka wrote:

When an application stores an object into the session and then the
application is reloaded using Tomcat Web Application Manager, the
classloader cannot be garbage collected. As a result, the
OutOfMemoryError: PermGen space error occurs after several reloads.


This is true.  What is your question?




To illustrate the issue, you can find an example below.
Thanks in advance :-)


1. The EvilClass class whose instances are stored into the session:

public class EvilClass implements Serializable {

 // Eat 100 MB from the JVM heap to see that the class is not
garbage collected
 protected static final byte[] MEM = new byte[100  20];

 private String value;

 public String getValue() {
 return value;
 }

 public void setValue(String value) {
 this.value = value;
 }

}


2. Servlet which stores EvilClass instances into session

public class TestServlet extends HttpServlet {

 @Override
 protected void doGet(HttpServletRequest req, HttpServletResponse
resp) throws ServletException, IOException {
 EvilClass obj = new EvilClass();
 obj.setValue(req.getRequestURI());
 req.getSession().setAttribute(test, obj);
 getServletContext().log(Attribute stored to session  + obj);
 }

}


3. web.xml part which maps the servlet to an URL

servlet
servlet-nameTestServlet/servlet-name
servlet-classtest.TestServlet/servlet-class
/servlet
servlet-mapping
servlet-nameTestServlet/servlet-name
url-pattern/*/url-pattern
/servlet-mapping


Steps to reproduce the issue:
1. Copy application WAR to the webapps directory.
2. Start Apache Tomcat.
3. Hit TestServlet.
4. Check Heap/PermGen size using Java VisualVM.
5. Reload the application thru Tomcat Web Application Manager.
6. Hit TestServlet again.
7. Perform GC and check Heap/PermGen size again.


Environment:
Apache Tomcat version: 7.0.50
OS: Windows 7 64
JVM: Java HotSpot(TM) 64-Bit Server VM (23.6-b04, mixed mode)
Java: version 1.7.0_10, vendor Oracle Corporation




-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat classloader memory leak when an object is stored into session

2014-02-06 Thread Leon Rosenberg
On Thu, Feb 6, 2014 at 11:58 PM, David Kerber dcker...@verizon.net wrote:

 On 2/6/2014 3:13 PM, Michal Botka wrote:

 When an application stores an object into the session and then the
 application is reloaded using Tomcat Web Application Manager, the
 classloader cannot be garbage collected. As a result, the
 OutOfMemoryError: PermGen space error occurs after several reloads.


 This is true.  What is your question?


I think the OP states that this shouldn't be the case.

Personally I'm struggling with this one. But since I don't use the
reloading anyway I will relax and wait for enlightenment that is sure to
come from Chuck ;-)

regards
Leon


RE: Tomcat classloader memory leak when an object is stored into session

2014-02-06 Thread Caldarale, Charles R
 From: Leon Rosenberg [mailto:rosenberg.l...@gmail.com] 
 Subject: Re: Tomcat classloader memory leak when an object is stored into 
 session

  When an application stores an object into the session and then the
  application is reloaded using Tomcat Web Application Manager, the
  classloader cannot be garbage collected. As a result, the
  OutOfMemoryError: PermGen space error occurs after several reloads.

 I think the OP states, that this shouldn't be the case.

 Personally I'm struggling with this one. But since I don't use the
 reloading anyway I will relax and wait for enlightenment that is sure to
 come from Chuck ;-)

Since you insist...

Start with the Wiki:
http://wiki.apache.org/tomcat/MemoryLeakProtection

 - Chuck


THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY 
MATERIAL and is thus for use only by the intended recipient. If you received 
this in error, please contact the sender and delete the e-mail and its 
attachments from all computers.


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat classloader memory leak when an object is stored into session

2014-02-06 Thread Leon Rosenberg
On Fri, Feb 7, 2014 at 12:45 AM, Caldarale, Charles R 
chuck.caldar...@unisys.com wrote:

  From: Leon Rosenberg [mailto:rosenberg.l...@gmail.com]
  Subject: Re: Tomcat classloader memory leak when an object is stored
 into session

   When an application stores an object into the session and then the
   application is reloaded using Tomcat Web Application Manager, the
   classloader cannot be garbage collected. As a result, the
   OutOfMemoryError: PermGen space error occurs after several reloads.

  I think the OP states, that this shouldn't be the case.

  Personally I'm struggling with this one. But since I don't use the
  reloading anyway I will relax and wait for enlightenment that is sure to
  come from Chuck ;-)

 Since you insist...


Thank you!
I knew we can always count on you (now seriously) ;-)

best regards
Leon



 Start with the Wiki:
 http://wiki.apache.org/tomcat/MemoryLeakProtection

  - Chuck


 THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY
 MATERIAL and is thus for use only by the intended recipient. If you
 received this in error, please contact the sender and delete the e-mail and
 its attachments from all computers.


 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org




Re: Tomcat classloader memory leak when an object is stored into session

2014-02-06 Thread Michal Botka
Is there a way to avoid this leak?
I would like to develop an application which can be safely
deployed/undeployed without restarting the server.
OK, now I know that my application cannot store its objects into
the session, but that is a very strong requirement which most
applications don't meet.
Thanks for the help.

2014-02-06 22:58 GMT+01:00 David Kerber dcker...@verizon.net:
 On 2/6/2014 3:13 PM, Michal Botka wrote:

 When an application stores an object into the session and then the
 application is reloaded using Tomcat Web Application Manager, the
 classloader cannot be garbage collected. As a result, the
 OutOfMemoryError: PermGen space error occurs after several reloads.


 This is true.  What is your question?




 To illustrate the issue, you can find an example below.
 Thanks in advance :-)


 1. The EvilClass class whose instances are stored into the session:

 public class EvilClass implements Serializable {

  // Eat 100 MB from the JVM heap to see that the class is not
 garbage collected
  protected static final byte[] MEM = new byte[100  20];

  private String value;

  public String getValue() {
  return value;
  }

  public void setValue(String value) {
  this.value = value;
  }

 }


 2. Servlet which stores EvilClass instances into session

 public class TestServlet extends HttpServlet {

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse
 resp) throws ServletException, IOException {
  EvilClass obj = new EvilClass();
  obj.setValue(req.getRequestURI());
  req.getSession().setAttribute(test, obj);
  getServletContext().log(Attribute stored to session  + obj);
  }

 }


 3. web.xml part which maps the servlet to an URL

 servlet
 servlet-nameTestServlet/servlet-name
 servlet-classtest.TestServlet/servlet-class
 /servlet
 servlet-mapping
 servlet-nameTestServlet/servlet-name
 url-pattern/*/url-pattern
 /servlet-mapping


 Steps to reproduce the issue:
 1. Copy application WAR to the webapps directory.
 2. Start Apache Tomcat.
 3. Hit TestServlet.
 4. Check Heap/PermGen size using Java VisualVM.
 5. Reload the application thru Tomcat Web Application Manager.
 6. Hit TestServlet again.
 7. Perform GC and check Heap/PermGen size again.


 Environment:
 Apache Tomcat version: 7.0.50
 OS: Windows 7 64
 JVM: Java HotSpot(TM) 64-Bit Server VM (23.6-b04, mixed mode)
 Java: version 1.7.0_10, vendor Oracle Corporation



 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat classloader memory leak when an object is stored into session

2014-02-06 Thread Leon Rosenberg
On Fri, Feb 7, 2014 at 8:38 AM, Michal Botka mr.bo...@gmail.com wrote:

 Is there a way how to avoid this leak?
 I would like to develop an application which can be safely
 deployed/undeployed without restarting the server.
 OK, now I know that my application cannot store it's objects into
 session, but that is very strong requirement which the most of the
 applications don't meet.
 Thanks for help.


But do you have to serialize your sessions? Switching off session
serialization might help.
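
A sketch of one way to do that, assuming the default session manager: an empty
pathname on the Manager element in the application's context.xml stops sessions being
saved to disk across restarts and reloads.

    <Context>
        <!-- An empty pathname disables session persistence to disk -->
        <Manager pathname="" />
    </Context>
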
regards
Leon



 2014-02-06 22:58 GMT+01:00 David Kerber dcker...@verizon.net:
  On 2/6/2014 3:13 PM, Michal Botka wrote:
 
  When an application stores an object into the session and then the
  application is reloaded using Tomcat Web Application Manager, the
  classloader cannot be garbage collected. As a result, the
  OutOfMemoryError: PermGen space error occurs after several reloads.
 
 
  This is true.  What is your question?
 
 
 
 
  To illustrate the issue, you can find an example below.
  Thanks in advance :-)
 
 
  1. The EvilClass class whose instances are stored into the session:
 
  public class EvilClass implements Serializable {
 
   // Eat 100 MB from the JVM heap to see that the class is not
  garbage collected
   protected static final byte[] MEM = new byte[100  20];
 
   private String value;
 
   public String getValue() {
   return value;
   }
 
   public void setValue(String value) {
   this.value = value;
   }
 
  }
 
 
  2. Servlet which stores EvilClass instances into session
 
  public class TestServlet extends HttpServlet {
 
   @Override
   protected void doGet(HttpServletRequest req, HttpServletResponse
  resp) throws ServletException, IOException {
   EvilClass obj = new EvilClass();
   obj.setValue(req.getRequestURI());
   req.getSession().setAttribute(test, obj);
   getServletContext().log(Attribute stored to session  + obj);
   }
 
  }
 
 
  3. web.xml part which maps the servlet to an URL
 
  servlet
  servlet-nameTestServlet/servlet-name
  servlet-classtest.TestServlet/servlet-class
  /servlet
  servlet-mapping
  servlet-nameTestServlet/servlet-name
  url-pattern/*/url-pattern
  /servlet-mapping
 
 
  Steps to reproduce the issue:
  1. Copy application WAR to the webapps directory.
  2. Start Apache Tomcat.
  3. Hit TestServlet.
  4. Check Heap/PermGen size using Java VisualVM.
  5. Reload the application thru Tomcat Web Application Manager.
  6. Hit TestServlet again.
  7. Perform GC and check Heap/PermGen size again.
 
 
  Environment:
  Apache Tomcat version: 7.0.50
  OS: Windows 7 64
  JVM: Java HotSpot(TM) 64-Bit Server VM (23.6-b04, mixed mode)
  Java: version 1.7.0_10, vendor Oracle Corporation
 
 
 
  -
  To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
  For additional commands, e-mail: users-h...@tomcat.apache.org
 

 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org




Re: Tomcat classloader memory leak when an object is stored into session

2014-02-06 Thread Mark Thomas
Michal Botka mr.bo...@gmail.com wrote:
Is there a way how to avoid this leak?
I would like to develop an application which can be safely
deployed/undeployed without restarting the server.
OK, now I know that my application cannot store it's objects into
session, but that is very strong requirement which the most of the
applications don't meet.

The leak, as described, should not happen.

The sessions are serialized to disk. That disconnects the objects in the 
session from the web application class loader. When they are deseralized it 
will be with the class loader of the new web application.

Therefore, objects in the session should not be retaining references to the old 
web application class loader.

I need to run through the provided test case to see if there is something going 
wrong but at this point I strongly suspect that the source of the leak is 
something else entirely (most likely a leak in a 3rd party library or possibly 
an application bug).

Mark


Thanks for help.

2014-02-06 22:58 GMT+01:00 David Kerber dcker...@verizon.net:
 On 2/6/2014 3:13 PM, Michal Botka wrote:

 When an application stores an object into the session and then the
 application is reloaded using Tomcat Web Application Manager, the
 classloader cannot be garbage collected. As a result, the
 OutOfMemoryError: PermGen space error occurs after several
reloads.


 This is true.  What is your question?




 To illustrate the issue, you can find an example below.
 Thanks in advance :-)


 1. The EvilClass class whose instances are stored into the session:

 public class EvilClass implements Serializable {

  // Eat 100 MB from the JVM heap to see that the class is not
 garbage collected
  protected static final byte[] MEM = new byte[100  20];

  private String value;

  public String getValue() {
  return value;
  }

  public void setValue(String value) {
  this.value = value;
  }

 }


 2. Servlet which stores EvilClass instances into session

 public class TestServlet extends HttpServlet {

  @Override
  protected void doGet(HttpServletRequest req,
HttpServletResponse
 resp) throws ServletException, IOException {
  EvilClass obj = new EvilClass();
  obj.setValue(req.getRequestURI());
  req.getSession().setAttribute(test, obj);
  getServletContext().log(Attribute stored to session  +
obj);
  }

 }


 3. web.xml part which maps the servlet to an URL

 servlet
 servlet-nameTestServlet/servlet-name
 servlet-classtest.TestServlet/servlet-class
 /servlet
 servlet-mapping
 servlet-nameTestServlet/servlet-name
 url-pattern/*/url-pattern
 /servlet-mapping


 Steps to reproduce the issue:
 1. Copy application WAR to the webapps directory.
 2. Start Apache Tomcat.
 3. Hit TestServlet.
 4. Check Heap/PermGen size using Java VisualVM.
 5. Reload the application thru Tomcat Web Application Manager.
 6. Hit TestServlet again.
 7. Perform GC and check Heap/PermGen size again.


 Environment:
 Apache Tomcat version: 7.0.50
 OS: Windows 7 64
 JVM: Java HotSpot(TM) 64-Bit Server VM (23.6-b04, mixed mode)
 Java: version 1.7.0_10, vendor Oracle Corporation



 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-20 Thread Konstantin Kolinko
2013/5/19 Nick Williams nicho...@nicholaswilliams.net:

 On May 19, 2013, at 10:01 AM, Caldarale, Charles R wrote:

 From: Nick Williams [mailto:nicho...@nicholaswilliams.net]
 Subject: Re: LOG4J2-223: IllegalStateException thrown during Tomcat 
 shutdown (memory leak, it looks like)

 Log4j 1 never required a listener to be configured to be shut down
 properly when an application is undeployed.

It did.
E.g. this discussion is from March 2004:
https://issues.apache.org/bugzilla/show_bug.cgi?id=26372#c2


 What bearing does that have on a different logging mechanism?

 To be fair, Log4j 2 is not a different logging mechanism. It is a new version 
 of Log4j 1. My point was mostly philosophical; it feels wrong to have to 
 configure a listener just to support logging.


You can configure the listener from within a library either
a) by providing a javax.servlet.ServletContainerInitializer (starting
with Servlet 3.0) or
b) by configuring it in a TLD file of a tag library (starting with JSP 1.2).



 It should be possible to do this without a listener.

 Not easily.

 Could a `finalize` method be used instead of a shutdown hook/listener?

 Finalizers should be avoided like the plague.  The gyrations the JVM has to 
 go through to handle them result in continual run time impacts, and require 
 at least two GC passes to actually get rid of the objects.

 The extra performance impact is bad, yes, when you're talking about an object 
 who has many short-lived instances in memory that could be garbage collected 
 regularly. However, when you're talking about a lone singleton instance that 
 is created when the application starts and garbage collected when the 
 application shuts down, I would argue this is not a problem at all. Of 
 course, I'm open to the idea that I could be proven wrong.


 What I don't know is if it is guaranteed to be called in non-web 
 applications
 when the JVM just shuts down.

 Finalizers are not called at JVM termination, since the process exit is 
 expected to release resources automatically.  You cannot actually count on a 
 finalizer ever being invoked; it's one of those seemed like a good idea at 
 the time things that is now widely regretted by JVM implementers.

 After some experimentation, it would appear that it's not so much that 
 finalizers are not called at JVM termination as it is that finalizers are not 
 called if the garbage collector never runs, and the garbage collector isn't 
 guaranteed to run at JVM shutdown.

There is an API for this: JVM shutdown hooks. An issue with them, though,
is that if there are several hooks, all of them are started at
the same time and run in parallel.
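
For reference, registering such a hook is a one-liner; a minimal sketch:

public class ShutdownHookExample {
    public static void main(String[] args) {
        // Hooks run concurrently and in no defined order when the JVM shuts down.
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
                // e.g. flush and stop a logging framework here
                System.out.println("shutdown hook ran");
            }
        }, "example-shutdown-hook"));
    }
}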

When Tomcat is run with JULI it takes care of JULI shutdown: (a) it
disables JULI's own shutdown hook via
ClassLoaderLogManager.setUseShutdownHook(false), and (b) it shuts down JULI
from its own shutdown hook thread.

Best regards,
Konstantin Kolinko

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-20 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Nick,

On 5/19/13 11:25 AM, Nick Williams wrote:
 Unfortunately, requiring users to call System.gc() before shutdown
 for logging to work properly is no better than requiring users to
 register a listener in a web application for logging to work
 properly. Surely there's a better way...

Do you initialize your logging system in a ServletContextListener (or
similar)? If so, then you should destroy it at the same level.

If you aren't initializing your logging system in a
ServletContextListener... then how are you initializing it?

Long ago, I abandoned log4j's auto-initialization primarily because it
sometimes guesses wrong.

- -chris
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.17 (Darwin)
Comment: GPGTools - http://gpgtools.org
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCAAGBQJRmkekAAoJEBzwKT+lPKRY6uMP/3P72gxh2/wg3Jw5fNllv5PS
MX2gow3Fr+RnBXELnD/Mdtlq95j87tzKSiRIMT99FMXXHWXUWW9iHvA7ojye+SGd
mKaXJlPQsTGrLH7rRJzXX6CXH7xW2mQ3DEYCLQ/97pktn8SgO324BWz2MvJGGtDx
FwVB+rny0HS1JROADLFgzkLfNRRpnR7uvdUqE6G/vY85sbFBq7tWo6k9s6FdWvev
TqSo0WxbN7goHPcJH5mwcq8MATztRunOTMev6XrG7myqjs/wD5FGOcVyAM01j9qW
QgAwdAVd8z9Gkpw1c8FLb5BXKd6YwfjaS2DxDsojbd0MLHIgaVG8jqL8C4/Tdyxv
8IN9fubTKfWIKzj7uQCNGcXZWuAhAj1GWiK1GADZiuMm9Xj9Pdo1z1gqewoOPYqQ
tJnH69+62AcAU9dr/78Y7NvVqtor+fF49o1qzMqkEzT14x2S0fjhk79SmS3gDlyo
GBInETKqKBLycKpwKplcOoFRlXopXwSCsnpZmcuJQP2j2DuZHzwqoWYK5UgZaidu
xFRdTKmvGdX3UcksUDgxTjUQrtKUsqK1XxlOlHnbLYYuob2K21d6KTnLkDj3Sr+p
I5ZErRUX36j25jafGtDHZUv7dDA0QKA7ygrn4xqh1rSiewfz85NcxvZGCIi6XKV8
OxLz5ev4cxVMBGq4x2MC
=uLGt
-END PGP SIGNATURE-

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-20 Thread Nick Williams

On May 20, 2013, at 10:56 AM, Christopher Schultz wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256
 
 Nick,
 
 On 5/19/13 11:25 AM, Nick Williams wrote:
 Unfortunately, requiring users to call System.gc() before shutdown
 for logging to work properly is no better than requiring users to
 register a listener in a web application for logging to work
 properly. Surely there's a better way...
 
 Do you initialize your logging system in a ServletContextListener (or
 similar)? If so, then you should destroy it at the same level.
 
 If you aren't initializing your logging system in a
 ServletContextListener... then how are you initializing it?
 
 Long ago, I abandoned log4j's auto-initialization primarily because it
 sometimes guesses wrong.

First, remember that this is Log4j 2, so things are obviously different.

Log4j initializes with the first call to LogManager#getLogger(), whenever that 
occurs. In my case loggers are static, so it happens when the classes are 
initialized. In the specific case of the replication project attached to the 
issue, it happens on the first request to the only Servlet in the application.

Unfortunately, I've just about given up on it being possible to make logging 
work right without a ServletContextListener. Man oh man did I want to avoid 
that...

Nick
-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-20 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Nick,

On 5/20/13 12:48 PM, Nick Williams wrote:
 
 On May 20, 2013, at 10:56 AM, Christopher Schultz wrote:
 
 -BEGIN PGP SIGNED MESSAGE- Hash: SHA256
 
 Nick,
 
 On 5/19/13 11:25 AM, Nick Williams wrote:
 Unfortunately, requiring users to call System.gc() before 
 shutdown for logging to work properly is no better than
 requiring users to register a listener in a web application for
 logging to work properly. Surely there's a better way...
 
 Do you initialize your logging system in a
 ServletContextListener (or similar)? If so, then you should
 destroy it at the same level.
 
 If you aren't initializing your logging system in a 
 ServletContextListener... then how are you initializing it?
 
 Long ago, I abandoned log4j's auto-initialization primarily
 because it sometimes guesses wrong.
 
 First, remember that this is Log4j 2, so things are obviously 
 different.

It's different, but it's the same.

 Log4j initializes with the first call to LogManager#getLogger(), 
 whenever that occurs. In my case loggers are static, so it happens
  when the classes are initialized. In the specific case of the 
 replication project attached to the issue, it happens on the first
  request to the only Servlet in the application.

Right. What I'm saying is that you should take full control over the
initialization (and destruction) of the logging system. Your
ServletContextListeners should be invoked before your servlet classes
are loaded.

 Unfortunately, I've just about given up on it being possible to
 make logging work right without a ServletContextListener. Man oh
 man did I want to avoid that...

You act like a ServletContextListener is some evil hack that should be
avoided at all costs. Instead, it's exactly the right mechanism to do
what you are trying to do: configure something at webapp launch and
de-configure it when the webapp is stopped.
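
In outline, such a listener looks like the sketch below (shown with java.util.logging
so it is self-contained; a Log4j deployment would call whatever shutdown its version
actually provides):

import java.util.logging.LogManager;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class LoggingLifecycleListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Point the logging framework at its per-application configuration here,
        // before any servlet class is loaded and asks for a logger.
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Close handlers and drop references held for this application so the
        // webapp class loader can be collected on undeploy.
        LogManager.getLogManager().reset();
    }
}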

Some things just aren't appropriate to do with @Annotations. Sorry.

- -chris
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.17 (Darwin)
Comment: GPGTools - http://gpgtools.org
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCAAGBQJRmoCoAAoJEBzwKT+lPKRYEkgQALHbPr3SPqsDmIdYWa4Cb2VF
gZubG1MRsupGK2Kqlsq4HDTQMYjM5Twyig8McaVaxmqeUn9pWnSm2VgJCqeP0D8n
kGAsu9LZFoyEqkpO8+6xHwtvkPNCbj3qMrMRuqgXuV11VrlUL4N1q8pMYK3m0c5l
8iytqXUHk7R5MPjwZS4e3zC2jGnMhiIENWwfZa/ulNhmWCpLcC5tIU3Ka1s4VoFT
7S92vWG0CoveGkfVbtl9G9LPrdEYig0PFXeCvALFVE4Ff4rWP/jJiN+fE3GeTBSI
rR4eWpgvHM5BwvgFvSB6dzkaSQJaqX0GV1CJUdR3lvzh6jtRkeAlMzdA7DFFfQD3
pY/J/B+0ZeJzHDLrlYa528NaufA46vbhIr3l/fQqdMO5nHJePzv6bQIUOFu9zcHO
chwXohDvF9rQDAQE1H/DeVuDy7izQqn1k25PbsKDa/Ju86yk4V+ak/AcSzqKIm9o
zPLvHN4v3qPJ5QXElrX8aeXi7HUZEHjsVzvQmWpqpWd1aelNkn6FMSbrF59XpYkh
hTGXYz91gIbpshnwxpE5GQVv/1GwOFICi/HsT54ru0rEKmIDmTu9lu7ByzKp3Idv
U5dxMRUVzx+pJ2hfq2Mcqdy/LbYr9SX1uC3njtwdu2yuWWkbgC/Vnrns1hlDdpw/
+X1XT/+ZnfqzAQ66rger
=w+w1
-END PGP SIGNATURE-

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-20 Thread Nick Williams

On May 20, 2013, at 2:59 PM, Christopher Schultz wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256
 
 Nick,
 
 On 5/20/13 12:48 PM, Nick Williams wrote:
 
 On May 20, 2013, at 10:56 AM, Christopher Schultz wrote:
 
 -BEGIN PGP SIGNED MESSAGE- Hash: SHA256
 
 Nick,
 
 On 5/19/13 11:25 AM, Nick Williams wrote:
 Unfortunately, requiring users to call System.gc() before 
 shutdown for logging to work properly is no better than
 requiring users to register a listener in a web application for
 logging to work properly. Surely there's a better way...
 
 Do you initialize your logging system in a
 ServletContextListener (or similar)? If so, then you should
 destroy it at the same level.
 
 If you aren't initializing your logging system in a 
 ServletContextListener... then how are you initializing it?
 
 Long ago, I abandoned log4j's auto-initialization primarily
 because it sometimes guesses wrong.
 
 First, remember that this is Log4j 2, so things are obviously 
 different.
 
 It's different, but it's the same.
 
 Log4j initializes with the first call to LogManager#getLogger(), 
 whenever that occurs. In my case loggers are static, so it happens
 when the classes are initialized. In the specific case of the 
 replication project attached to the issue, it happens on the first
 request to the only Servlet in the application.
 
 Right. What I'm saying is that you should take full control over the
 initialization (and destruction) of the logging system. Your
 ServletContextListeners should be invoked before your servlet classes
 are loader.

And I'm saying you shouldn't /have/ to. It should just work without you 
having to do much thinking. See below.

 
 Unfortunately, I've just about given up on it being possible to
 make logging work right without a ServletContextListener. Man oh
 man did I want to avoid that...
 
 You act like a ServletContextListener is some evil hack that should be
 avoided at all costs. Instead, it's exactly the right mechanism to do
 what you are trying to do: configure something at webapp launch and
 de-configure it when the webapp is stopped.

Not what I'm saying at all. I love listeners. They are extremely helpful, and I 
use them all the time.

What I'm saying is that the concept of logging, philosophically, is supposed to 
be as unobtrusive as possible. Something you don't really have to think about 
how exactly it works; you just know to get a logger and put logging statements 
in your code and things just work. The act of having to set up a listener to 
initialize and deinitialize logging, to me, seems like more than Log4j users 
should have to worry about. Perhaps just as importantly, Log4j 1 worked without 
a listener to initialize/deinitialize, so this is yet again one more thing 
users are going to have to do to switch from Log4j 1 to Log4j 2.

Thankfully, we can use web-fragments in Servlet 3.0 and higher to configure the 
listener behind-the-scenes without the user even knowing. That's much more 
acceptable in my book. Users of Servlet 2.5 will still have to declare them 
manually, but I think they will probably be the minority users. So with a 
little more polishing of the Log4j 2 source code we can make this a little 
better. I just wish there was a solution that would work for both standalone 
applications /and/ web applications to initialize and deinitialize Log4j 
correctly without any users (including Servlet 2.5 users) having to think about 
it.

Nick
-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-20 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Nick,

On 5/20/13 4:10 PM, Nick Williams wrote:
 
 On May 20, 2013, at 2:59 PM, Christopher Schultz wrote:
 
 -BEGIN PGP SIGNED MESSAGE- Hash: SHA256
 
 Nick,
 
 On 5/20/13 12:48 PM, Nick Williams wrote:
 
 On May 20, 2013, at 10:56 AM, Christopher Schultz wrote:
 
 -BEGIN PGP SIGNED MESSAGE- Hash: SHA256
 
 Nick,
 
 On 5/19/13 11:25 AM, Nick Williams wrote:
 Unfortunately, requiring users to call System.gc() before 
 shutdown for logging to work properly is no better than 
 requiring users to register a listener in a web
 application for logging to work properly. Surely there's a
 better way...
 
 Do you initialize your logging system in a 
 ServletContextListener (or similar)? If so, then you should 
 destroy it at the same level.
 
 If you aren't initializing your logging system in a 
 ServletContextListener... then how are you initializing it?
 
 Long ago, I abandoned log4j's auto-initialization primarily 
 because it sometimes guesses wrong.
 
 First, remember that this is Log4j 2, so things are obviously 
 different.
 
 It's different, but it's the same.
 
 Log4j initializes with the first call to
 LogManager#getLogger(), whenever that occurs. In my case
 loggers are static, so it happens when the classes are
 initialized. In the specific case of the replication project
 attached to the issue, it happens on the first request to the
 only Servlet in the application.
 
 Right. What I'm saying is that you should take full control over 
 the initialization (and destruction) of the logging system. Your
  ServletContextListeners should be invoked before your servlet 
 classes are loaded.
 
 And I'm saying you shouldn't /have/ to. It should just work
 without you having to do much thinking. See below.
 
 
 Unfortunately, I've just about given up on it being possible to
  make logging work right without a ServletContextListener.
 Man oh man did I want to avoid that...
 
 You act like a ServletContextListener is some evil hack that
 should be avoided at all costs. Instead, it's exactly the right
 mechanism to do what you are trying to do: configure something at
 webapp launch and de-configure it when the webapp is stopped.
 
 Not what I'm saying at all. I love listeners. They are extremely 
 helpful, and I use them all the time.
 
 What I'm saying is that the concept of logging, philosophically,
 is supposed to be as unobtrusive as possible. Something you don't
 really have to think about how exactly it works; you just know to
 get a logger and put logging statements in your code and things
 just work. The act of having to set up a listener to initialize
 and deinitialize logging, to me, seems like more than Log4j users
 should have to worry about. Perhaps just as importantly, Log4j 1
 worked without a listener to initialize/deinitialize, so this is
 yet again one more thing users are going to have to do to switch
 from Log4j 1 to Log4j 2.

That's like saying that aspect-oriented programming should just work
without having to run the AOP compiler against the code, first.

This at least used to be a problem in log4j 1 as well: you had to call
LogManager.shutdown in order to free all the resources, flush all the
buffers, etc. when your webapp unloaded, otherwise you ran the risk of
pinning the old webapp's ClassLoader, etc. in memory. The only way to
run LogManager.shutdown() on webapp unload is to configure a
ServletContextListener.
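
For concreteness, a minimal sketch of such a listener (the class name here is
invented for illustration; LogManager.shutdown() is the only real API used, and
the listener still has to be registered in web.xml or via @WebListener):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.apache.log4j.LogManager;

public class Log4jShutdownListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        // Nothing required here: log4j 1.x auto-initializes on the first
        // getLogger() call, or you can configure it explicitly at this point.
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // Flush appenders and release resources so the webapp's ClassLoader
        // is not pinned in memory after the application is undeployed.
        LogManager.shutdown();
    }
}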

 Thankfully, we can use web-fragments in Servlet 3.0 and higher to 
 configure the listener behind-the-scenes without the user even 
 knowing. That's much more acceptable in my book.

While I agree, it increases the amount of magic that I generally
prefer to keep to a minimum. I know I'm apparently an old guy who just
doesn't get it, but I honestly prefer explicit configuration to
auto-configuration. You can diagnose problems much more easily with
explicit configuration than by attaching a debugger and
stepping-through the entire bootstrap process to figure out wtf is
going on.

 Users of Servlet 2.5 will still have to declare them manually, but
 I think they will probably be the minority users.

Ha ha ha. Don't you see the posts from people trying to figure out how
to move their Tomcat 3.x installations to a new Windows 2000 server?
Again, I'm apparently a dinosaur, but I don't have a single
servlet-3.0 webapp deployed in production anywhere. Not even in testing.

 So with a little more polishing of the Log4j 2 source code we can 
 make this a little better. I just wish there was a solution that 
 would work for both standalone applications /and/ web applications
 to initialize and deinitialize Log4j correctly without any users 
 (including Servlet 2.5 users) having to think about it.

The solution is to have a magic annotation-processor running all the
time in the JVM, right? Isn't that what OSGi is for?

- -chris

Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-20 Thread Nick Williams

On May 20, 2013, at 4:39 PM, Christopher Schultz wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256
 
 Nick,
 
 On 5/20/13 4:10 PM, Nick Williams wrote:
 
 On May 20, 2013, at 2:59 PM, Christopher Schultz wrote:
 
 -BEGIN PGP SIGNED MESSAGE- Hash: SHA256
 
 Nick,
 
 On 5/20/13 12:48 PM, Nick Williams wrote:
 
 On May 20, 2013, at 10:56 AM, Christopher Schultz wrote:
 
 -BEGIN PGP SIGNED MESSAGE- Hash: SHA256
 
 Nick,
 
 On 5/19/13 11:25 AM, Nick Williams wrote:
 Unfortunately, requiring users to call System.gc() before 
 shutdown for logging to work properly is no better than 
 requiring users to register a listener in a web
 application for logging to work properly. Surely there's a
 better way...
 
 Do you initialize your logging system in a 
 ServletContextListener (or similar)? If so, then you should 
 destroy it at the same level.
 
 If you aren't initializing your logging system in a 
 ServletContextListener... then how are you initializing it?
 
 Long ago, I abandoned log4j's auto-initialization primarily 
 because it sometimes guesses wrong.
 
 First, remember that this is Log4j 2, so things are obviously 
 different.
 
 It's different, but it's the same.
 
 Log4j initializes with the first call to
 LogManager#getLogger(), whenever that occurs. In my case
 loggers are static, so it happens when the classes are
 initialized. In the specific case of the replication project
 attached to the issue, it happens on the first request to the
 only Servlet in the application.
 
 Right. What I'm saying is that you should take full control over 
 the initialization (and destruction) of the logging system. Your
 ServletContextListeners should be invoked before your servlet 
 classes are loaded.
 
 And I'm saying you shouldn't /have/ to. It should just work
 without you having to do much thinking. See below.
 
 
 Unfortunately, I've just about given up on it being possible to
 make logging work right without a ServletContextListener.
 Man oh man did I want to avoid that...
 
 You act like a ServletContextListener is some evil hack that
 should be avoided at all costs. Instead, it's exactly the right
 mechanism to do what you are trying to do: configure something at
 webapp launch and de-configure it when the webapp is stopped.
 
 Not what I'm saying at all. I love listeners. They are extremely 
 helpful, and I use them all the time.
 
 What I'm saying is that the concept of logging, philosophically,
 is supposed to be as unobtrusive as possible. Something you don't
 really have to think about how exactly it works; you just know to
 get a logger and put logging statements in your code and things
 just work. The act of having to set up a listener to initialize
 and deinitialize logging, to me, seems like more than Log4j users
 should have to worry about. Perhaps just as importantly, Log4j 1
 worked without a listener to initialize/deinitialize, so this is
 yet again one more thing users are going to have to do to switch
 from Log4j 1 to Log4j 2.
 
 That's like saying that aspect-oriented programming should just work
 without having to run the AOP compiler against the code, first.
 
 This at least used to be a problem in log4j 1 as well: you had to call
 LogManager.shutdown in order to free all the resources, flush all the
 buffers, etc. when your webapp unloaded, otherwise you ran the risk of
 pinning the old webapp's ClassLoader, etc. in memory. The only way to
 run LogManager.shutdown() on webapp unload is to configure a
 ServletContextListener.
 
 Thankfully, we can use web-fragments in Servlet 3.0 and higher to 
 configure the listener behind-the-scenes without the user even 
 knowing. That's much more acceptable in my book.
 
 While I agree, it increases the amount of magic that I generally
 prefer to keep to a minimum. I know I'm apparently an old guy who just
 doesn't get it, but I honestly prefer explicit configuration to
 auto-configuration. You can diagnose problems much more easily with
 explicit configuration than by attaching a debugger and
 stepping-through the entire bootstrap process to figure out wtf is
 going on.
 
 Users of Servlet 2.5 will still have to declare them manually, but
 I think they will probably be the minority users.
 
 Ha ha ha. Don't you see the posts from people trying to figure out how
 to move their Tomcat 3.x installations to a new Windows 2000 server?
 Again, I'm apparently a dinosaur, but I don't have a single
 servlet-3.0 webapp deployed in production anywhere. Not even in testing.

Oh, I see the posts. I just figure if they're that far behind they won't be 
using Log4j 2 for at least another 10 years.

Nick

 
 So with a little more polishing of the Log4j 2 source code we can 
 make this a little better. I just wish there was a solution that 
 would work for both standalone applications /and/ web applications
 to initialize and deinitialize Log4j correctly without any users 
 (including Servlet 2.5 users) having to think about it.
 
 The 

Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-20 Thread Mark Eggers

On 5/20/2013 2:45 PM, Nick Williams wrote:


On May 20, 2013, at 4:39 PM, Christopher Schultz wrote:


-BEGIN PGP SIGNED MESSAGE- Hash: SHA256

Nick,

On 5/20/13 4:10 PM, Nick Williams wrote:


On May 20, 2013, at 2:59 PM, Christopher Schultz wrote:


-BEGIN PGP SIGNED MESSAGE- Hash: SHA256

Nick,

On 5/20/13 12:48 PM, Nick Williams wrote:


On May 20, 2013, at 10:56 AM, Christopher Schultz wrote:


-BEGIN PGP SIGNED MESSAGE- Hash: SHA256

Nick,

On 5/19/13 11:25 AM, Nick Williams wrote:

Unfortunately, requiring users to call System.gc()
before shutdown for logging to work properly is no better
than requiring users to register a listener in a web
application for logging to work properly. Surely there's
a better way...


Do you initialize your logging system in a
ServletContextListener (or similar)? If so, then you
should destroy it at the same level.

If you aren't initializing your logging system in a
ServletContextListener... then how are you initializing
it?

Long ago, I abandoned log4j's auto-initialization
primarily because it sometimes guesses wrong.


First, remember that this is Log4j 2, so things are
obviously different.


It's different, but it's the same.


Log4j initializes with the first call to
LogManager#getLogger(), whenever that occurs. In my case
loggers are static, so it happens when the classes are
initialized. In the specific case of the replication project
attached to the issue, it happens on the first request to
the only Servlet in the application.


Right. What I'm saying is that you should take full control
over the initialization (and destruction) of the logging
system. Your ServletContextListeners should be invoked before
your servlet classes are loaded.


And I'm saying you shouldn't /have/ to. It should just work
without you having to do much thinking. See below.




Unfortunately, I've just about given up on it being possible
to make logging work right without a
ServletContextListener. Man oh man did I want to avoid
that...


You act like a ServletContextListener is some evil hack that
should be avoided at all costs. Instead, it's exactly the
right mechanism to do what you are trying to do: configure
something at webapp launch and de-configure it when the webapp
is stopped.


Not what I'm saying at all. I love listeners. They are extremely
helpful, and I use them all the time.

What I'm saying is that the concept of logging, philosophically,
is supposed to be as unobtrusive as possible. Something you
don't really have to think about how exactly it works; you just
know to get a logger and put logging statements in your code and
things just work. The act of having to set up a listener to
initialize and deinitialize logging, to me, seems like more than
Log4j users should have to worry about. Perhaps just as
importantly, Log4j 1 worked without a listener to
initialize/deinitialize, so this is yet again one more thing
users are going to have to do to switch from Log4j 1 to Log4j 2.


That's like saying that aspect-oriented programming should just
work without having to run the AOP compiler against the code,
first.

This at least used to be a problem in log4j 1 as well: you had to
call LogManager.shutdown in order to free all the resources, flush
all the buffers, etc. when your webapp unloaded, otherwise you ran
the risk of pinning the old webapp's ClassLoader, etc. in memory.
The only way to run LogManager.shutdown() on webapp unload is to
configure a ServletContextListener.


Thankfully, we can use web-fragments in Servlet 3.0 and higher
to configure the listener behind-the-scenes without the user
even knowing. That's much more acceptable in my book.


While I agree, it increases the amount of magic that I generally
prefer to keep to a minimum. I know I'm apparently an old guy who
just doesn't get it, but I honestly prefer explicit configuration
to auto-configuration. You can diagnose problems much more easily
with explicit configuration than by attaching a debugger and
stepping-through the entire bootstrap process to figure out wtf is
going on.


Users of Servlet 2.5 will still have to declare them manually,
but I think they will probably be the minority users.


Ha ha ha. Don't you see the posts from people trying to figure out
how to move their Tomcat 3.x installations to a new Windows 2000
server? Again, I'm a apparently a dinosaur, but I don't have a
single servlet-3.0 webapp deployed in production anywhere. Not even
in testing.


Oh, I see the posts. I just figure if they're that far behind they
won't be using Log4j 2 for at least another 10 years.

Nick



Hopefully sometime this year (currently porting one of those types of 
applications).


I do have a Maven artifact with some utility listeners that I make use 
of (including a log4j listener). I just add the dependency, modify the 
web.xml accordingly, and I'm good to go.


Yes, annotations would make things a little less cumbersome. However, I 
still like seeing everything. Maybe once I get a 

Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-19 Thread Mark Thomas
On 19/05/2013 05:57, Nick Williams wrote:
 Can one of the very knowledgeable developers that have been
 discussing memory leaks in the last few days (re: Possible
 false-positive with JreMemoryLeakPreventionListener and Tomcat's JDBC
 Pool and OracleTimeoutPollingThread) chime in on this Log4j 2 bug
 [1]?
 
 Log4j 2 appears to be registering a shutdown hook that, I believe,
 will result in a memory leak in Tomcat. The
 JreMemoryLeakPreventionListener does not detect it (which might be a
 separate Tomcat bug, assuming I'm right that it's a memory leak). I
 don't know nearly as much about class loaders and memory leaks in a
 web application as some of the guys I've read talking on here the
 last few days, and it would be helpful for us to get the
 knowledgeable opinion of one or more Tomcat developers about how to
 solve this.

It looks like Ralph has already answered this [2].

If log4j2 is initialised by the webapp, it needs to be shutdown by the
webapp.

 (Note: I don't normally post to both lists, but since the memory leak
 topic was occurring on the user's list, and I also wanted to get the
 attention of as many developers as possible, I made an exception this
 time.)

No matter how important you think your issue is, please do not cross
post. As a general guide, if you aren't sure, use the users list. The
committers are all active or lurking here so they will all see it and
will move the discussion to the dev list if necessary.

Mark

[1] https://issues.apache.org/jira/browse/LOG4J2-223

[2]
https://issues.apache.org/jira/browse/LOG4J2-223?focusedCommentId=13661501page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13661501

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-19 Thread Nick Williams

On May 19, 2013, at 3:33 AM, Mark Thomas wrote:

 On 19/05/2013 05:57, Nick Williams wrote:
 Can one of the very knowledgeable developers that have been
 discussing memory leaks in the last few days (re: Possible
 false-positive with JreMemoryLeakPreventionListener and Tomcat's JDBC
 Pool and OracleTimeoutPollingThread) chime in on this Log4j 2 bug
 [1]?
 
 Log4j 2 appears to be registering a shutdown hook that, I believe,
 will result in a memory leak in Tomcat. The
 JreMemoryLeakPreventionListener does not detect it (which might be a
 separate Tomcat bug, assuming I'm right that it's a memory leak). I
 don't know nearly as much about class loaders and memory leaks in a
 web application as some of the guys I've read talking on here the
 last few days, and it would be helpful for us to get the
 knowledgeable opinion of one or more Tomcat developers about how to
 solve this.
 
 It looks like Ralph has already answered this [2].
 
 If log4j2 is initialised by the webapp, it needs to be shutdown by the
 webapp.

Ralph may have responded, but I don't believe it's the right answer. Log4j 1 
never required a listener to be configured to be shut down properly when an 
application is undeployed. It should be possible to do this without a listener. 
Just a thought, is there a way to detect if a ClassLoader is being shut 
down/unloaded? Could a `finalize` method be used instead of a shutdown 
hook/listener? I'm fairly confident the finalize method would always be called 
when the application is undeployed (though I could be wrong). What I don't know 
is if it is guaranteed to be called in non-web applications when the JVM just 
shuts down.

 
 (Note: I don't normally post to both lists, but since the memory leak
 topic was occurring on the user's list, and I also wanted to get the
 attention of as many developers as possible, I made an exception this
 time.)
 
 No matter how important you think your issue is, please do not cross
 post. As a general guide, if you aren't sure, use the users list. The
 committers are all active or lurking here so they will all see it and
 will move the discussion to the dev list of necessary.

My apologies.


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-19 Thread Caldarale, Charles R
 From: Nick Williams [mailto:nicho...@nicholaswilliams.net] 
 Subject: Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown 
 (memory leak, it looks like)

 Log4j 1 never required a listener to be configured to be shut down 
 properly when an application is undeployed.

What bearing does that have on a different logging mechanism?

 It should be possible to do this without a listener.

Not easily.

 Could a `finalize` method be used instead of a shutdown hook/listener?

Finalizers should be avoided like the plague.  The gyrations the JVM has to go 
through to handle them result in continual run time impacts, and require at 
least two GC passes to actually get rid of the objects.

 What I don't know is if it is guaranteed to be called in non-web applications 
 when the JVM just shuts down.

Finalizers are not called at JVM termination, since the process exit is 
expected to release resources automatically.  You cannot actually count on a 
finalizer ever being invoked; it's one of those seemed like a good idea at the 
time things that is now widely regretted by JVM implementers.

 - Chuck


THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY 
MATERIAL and is thus for use only by the intended recipient. If you received 
this in error, please contact the sender and delete the e-mail and its 
attachments from all computers.


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-19 Thread Nick Williams

On May 19, 2013, at 10:01 AM, Caldarale, Charles R wrote:

 From: Nick Williams [mailto:nicho...@nicholaswilliams.net] 
 Subject: Re: LOG4J2-223: IllegalStateException thrown during Tomcat shutdown 
 (memory leak, it looks like)
 
 Log4j 1 never required a listener to be configured to be shut down 
 properly when an application is undeployed.
 
 What bearing does that have on a different logging mechanism?

To be fair, Log4j 2 is not a different logging mechanism. It is a new version 
of Log4j 1. My point was mostly philosophical; it feels wrong to have to 
configure a listener just to support logging.

 
 It should be possible to do this without a listener.
 
 Not easily.
 
 Could a `finalize` method be used instead of a shutdown hook/listener?
 
 Finalizers should be avoided like the plague.  The gyrations the JVM has to 
 go through to handle them result in continual run time impacts, and require 
 at least two GC passes to actually get rid of the objects.

The extra performance impact is bad, yes, when you're talking about an object 
who has many short-lived instances in memory that could be garbage collected 
regularly. However, when you're talking about a lone singleton instance that is 
created when the application starts and garbage collected when the application 
shuts down, I would argue this is not a problem at all. Of course, I'm open to 
the idea that I could be proven wrong.

 
 What I don't know is if it is guaranteed to be called in non-web 
 applications 
 when the JVM just shuts down.
 
 Finalizers are not called at JVM termination, since the process exit is 
 expected to release resources automatically.  You cannot actually count on a 
 finalizer ever being invoked; it's one of those "seemed like a good idea at 
 the time" things that is now widely regretted by JVM implementers.

After some experimentation, it would appear that it's not so much that 
finalizers are not called at JVM termination as it is that finalizers are not 
called if the garbage collector never runs, and the garbage collector isn't 
guaranteed to run at JVM shutdown. However, if you call System.gc() right 
before the JVM shuts down, finalizers appear to run every time. Unfortunately, 
requiring users to call System.gc() before shutdown for logging to work 
properly is no better than requiring users to register a listener in a web 
application for logging to work properly. Surely there's a better way...
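
A small test along these lines (the class name is invented and the behavior is
JVM-dependent, not guaranteed by the spec):

public class FinalizerAtExit {

    protected void finalize() throws Throwable {
        System.out.println("finalize() ran");
        super.finalize();
    }

    public static void main(String[] args) {
        FinalizerAtExit f = new FinalizerAtExit();
        f = null;                 // drop the only strong reference
        System.gc();              // comment this out and finalize() typically never runs
        System.runFinalization();
    }
}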

While I do not disagree that finalizers are very often misused, they are not 
without their uses. I find it hard to believe that the Sun JRE 6 source code 
would contain 50 uses of finalizers if they should never be used. You'd think 
if they regretted creating the method so much they would deprecate it and/or 
document clearly that it's best to never implement a finalizer, but the 
documentation for finalizers makes no such assertion, even in the latest Java 8.

 
 - Chuck
 
 
 THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY 
 MATERIAL and is thus for use only by the intended recipient. If you received 
 this in error, please contact the sender and delete the e-mail and its 
 attachments from all computers.
 
 
 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org
 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



LOG4J2-223: IllegalStateException thrown during Tomcat shutdown (memory leak, it looks like)

2013-05-18 Thread Nick Williams
Can one of the very knowledgeable developers that have been discussing memory 
leaks in the last few days (re: Possible false-positive with 
JreMemoryLeakPreventionListener and Tomcat's JDBC Pool and 
OracleTimeoutPollingThread) chime in on this Log4j 2 bug [1]?

Log4j 2 appears to be registering a shutdown hook that, I believe, will result 
in a memory leak in Tomcat. The JreMemoryLeakPreventionListener does not detect 
it (which might be a separate Tomcat bug, assuming I'm right that it's a memory 
leak). I don't know nearly as much about class loaders and memory leaks in a 
web application as some of the guys I've read talking on here the last few 
days, and it would be helpful for us to get the knowledgeable opinion of one or 
more Tomcat developers about how to solve this.
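
For anyone unfamiliar with the pattern in question, here is a simplified sketch
(this is not Log4j 2's actual code) of a shutdown hook registered from webapp
code, and why it can pin the webapp's ClassLoader:

public class ShutdownHookDemo {

    public static void register() {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                // flush buffers, close appenders, release resources, etc.
            }
        });
        // The JVM keeps a reference to the registered Thread until it exits.
        // That anonymous subclass was loaded by the webapp's ClassLoader, so
        // undeploying the webapp cannot free the ClassLoader while the hook
        // remains registered.
    }
}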

Thanks,

Nick

[1] https://issues.apache.org/jira/browse/LOG4J2-223

(Note: I don't normally post to both lists, but since the memory leak topic was 
occurring on the user's list, and I also wanted to get the attention of as many 
developers as possible, I made an exception this time.)

Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-17 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Howard,

On 4/16/13 6:52 PM, Howard W. Smith, Jr. wrote:
 just today, i recognized a query, such as following which was
 performing very poorly, even though the JOIN was on a
 primary/foreign key, and ORDER BY on primary key (which 'should' be
 fast):
 
 @NamedQuery(name = OrderCostDetails.findByOrderId, query =
 SELECT ocd FROM OrderCostDetails ocd JOIN ocd.orders o WHERE
 o.orderId = :orderId ORDER BY ocd.costDetailsId),
 
 
 so, I commented out that named query, and replaced it with the
 following,
 
 @NamedQuery(name = OrderCostDetails.findByOrderId, query =
 SELECT o.orderCostDetails FROM Orders o WHERE o.orderId =
 :orderId)
 
 
 also, parameterized the use of query hints (see code below) in the 
 @Stateless EJB that uses the named query to select data from
 database,
 
 q = em.createNamedQuery(OrderCostDetails.findByOrderId) 
 .setParameter(orderId, id) 
 .setHint(eclipselink.query-results-cache, true); if (readOnly)
 { q.setHint(eclipselink.read-only, true); } list =
 q.getResultList(); if (list == null || list.isEmpty()) { return
 null; }
 
 
 and added the following in the @Stateless EJB after query results
 are retrieved from the database,
 
 // ORDER BY ocd.serviceAbbr, ocd.nbrOfPassengers 
 Collections.sort(list, new ComparatorOrderCostDetails() { 
 @Override public int compare(OrderCostDetails ocd1,
 OrderCostDetails ocd2) { String ocd1SortKey = ocd1.getServiceAbbr()
 + ocd1.getNbrOfPassengers(); String ocd2SortKey =
 ocd2.getServiceAbbr() + ocd2.getNbrOfPassengers(); return
 ((Comparable)ocd1SortKey).compareTo(ocd2SortKey); } });

That was all the Greek I have.

Let's take this to another thread if you want to continue.

- -chris


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-16 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Howard,

On 4/15/13 4:02 PM, Howard W. Smith, Jr. wrote:
 On Mon, Apr 15, 2013 at 1:08 PM, Christopher Schultz  
 ch...@christopherschultz.net wrote:
 
 Howard,
 
 On 4/14/13 9:53 PM, Howard W. Smith, Jr. wrote:
 I am definitely relying on  user HttpSessions, and I do
 JPA-level caching (statement cache and query results cache).
 pages are PrimeFaces and primefaces = xhtml, html, jquery,
 and MyFaces/OpenWebBeans to help with speed/performance.  And
 right now, the app handles on a 'few' simultaneous
 connections/users that do small and large fetches/inserts
 from/into relational database. :)
 
 You can tune the JPA caching, etc. to meet your environmental
 needs, etc., so you don't *need* a huge heap. If you find that
 you need to be able to improve your performance, you might be
 able to increase your cache size if it in fact improves things.
 
 doing this, and just made some code changes to tap a little more
 into JPA caching, but one of my endusers just did a user operation
 on one of the pages, and he sent me a screen capture of the nasty
 eclipselink error that he experienced. evidently, i need to tweak
 caching or do not use the cache at that point in the app. :)

Just remember that caching isn't always a worthwhile endeavor, and
that the cache itself has an associated cost (e.g. memory use,
management of the cache, etc.). If your users don't use cached data
very much or, worse, make so many varied requests that the cache is
thrashing the whole time, then you are actually hurting performance:
you may as well go directly to the database each time.

(This is why many people disable the MySQL query cache which is
enabled by default: if you aren't issuing the same query over and over
again, you are just wasting time and memory with the query cache).

 i explained to him that i did some major changes in the app,
 related to caching... and i told him that it was for 'performance
 improvement', and told him the same as Mark just told me, Google is
 your friend (and told him that 'wiki' keyword in the search is your
 friend, too).  :)

You should probably monitor your cache: what's the hit rate versus
miss rate, and the cache turnover. You might be surprised to find that
your read-through cache is actually just a churning bile of bytes that
nobody really uses.

It also sounds like you need a smoke test that you can run against
your webapp between hourly-deployments to production ;) I highly
recommend downloading JMeter and creating a single workflow that
exercises your most-often-accessed pages. You can a) use it for
smoke-testing and b) use it for load-testing (just run that workflow
50 times in a row in each of 50 threads and you've got quite a load,
there).

 i have some things in mind what I want to do with that large
 session scoped data. I am considering caching it at application
 level and all users have ability to update that huge List and
 extract data. I was thinking of using @Singleton Lock(READ) to
 control access. it takes no time at all to search the List for
 the information that it needs, and it takes no time at all to
 re-populate the List.

If you are searching your List, perhaps you don't have the right
data structure. What is the algorithmic complexity of your searches?
If it's not better than O(n), then you should reconsider your data
structure and/or searching algorithm.

Does the list need re-population? How often?

 Since we discuss GC a lot on this list, i wonder if you all
 recommend to set the 'list' to null, first, and then List ... =
 new ArrayList(newList), or new ArrayList(newList) is sufficient
 for good GC.

Setting the reference to null is a waste of time as far as the GC is
concerned. When I write code that re-uses the same identifier a lot in
a particular method (e.g. a single PreparedStatement identifier in a
long JDBC transaction executing many statements), I will close the
statement and then set explicitly it to null before continuing. I do
that to ensure that the statement is no longer re-usable due to bad
coding in my part. But it does not really help anything at runtime.

On the other hand, if you have a large data structure that you use
during a part of a method but not all of it, then explicitly setting
the reference to null can help collect the garbage sooner. A
completely contrived example:


List<Id> itemIds = ...;
Map<Id,Item> map = createEnormousMap();
List<Item> items = new ArrayList<Item>();
for(Id id : itemIds)
  items.add(map.get(id));

// Marker for discussion

for(some long time)
{
  // do some long operation
  // that does not require the enormous map
}

return items;

In the above method, if the second loop runs for a long time (relative
to the rest of the method), then explicitly setting map to null at the
marker in the middle of the method will cause the enormous map
to become unreachable and therefore collectable by the GC (assuming
that the map isn't referenced anywhere else, of course) before the
method 

Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-16 Thread Howard W. Smith, Jr.
On Tue, Apr 16, 2013 at 10:31 AM, Christopher Schultz 
ch...@christopherschultz.net wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 Howard,

 On 4/15/13 4:02 PM, Howard W. Smith, Jr. wrote:
  On Mon, Apr 15, 2013 at 1:08 PM, Christopher Schultz 
  ch...@christopherschultz.net wrote:
 
  Howard,
 
  On 4/14/13 9:53 PM, Howard W. Smith, Jr. wrote:
  I am definitely relying on  user HttpSessions, and I do
  JPA-level caching (statement cache and query results cache).
  pages are PrimeFaces and primefaces = xhtml, html, jquery,
  and MyFaces/OpenWebBeans to help with speed/performance.  And
  right now, the app handles on a 'few' simultaneous
  connections/users that do small and large fetches/inserts
  from/into relational database. :)
 
  You can tune the JPA caching, etc. to meet your environmental
  needs, etc., so you don't *need* a huge heap. If you find that
  you need to be able to improve your performance, you might be
  able to increase your cache size if it in fact improves things.
 
  doing this, and just made some code changes to tap a little more
  into JPA caching, but one of my endusers just did a user operation
  on one of the pages, and he sent me a screen capture of the nasty
  eclipselink error that he experienced. evidently, i need to tweak
  caching or do not use the cache at that point in the app. :)

 Just remember that caching isn't always a worthwhile endeavor, and
 that the cache itself has an associated cost (e.g. memory use,
 management of the cache, etc.).


Noted, and per my experience (already), I have definitely recognized that.
Thanks.


 If your users don't use cached data very much


Smiling... um, well, the endusers don't 'know' that they 'are' using the
cache, but I did enlighten the one enduser, yesterday, that reported that
eclipselink issue (that was most likely caused by my use of the 'readonly'
query hint). And for the record, they 'are' using the cache, since there
are common pages/data that they access and/or request, multiple times,
daily (and throughout the day), and even multiple times, most likely,
throughout each session.


 or, worse, make so many varied requests that the cache is thrashing the
 whole time, then you are actually hurting performance:
 you may as well go directly to the database each time.


They definitely make varied requests, 'too', throughout the day and during
each session, and since I like to monitor performance via jvisualvm, I am
recognizing a lot of 'eclipselink' code that is executed, since i commonly
use readonly and query-results-cache query hints, but performance seems
worse when readonly and/or query-results-cache are not used (when I look at
the times in jvisualvm).

just today, i recognized a query, such as following which was performing
very poorly, even though the JOIN was on a primary/foreign key, and ORDER
BY on primary key (which 'should' be fast):

@NamedQuery(name = "OrderCostDetails.findByOrderId", query = "SELECT ocd
FROM OrderCostDetails ocd JOIN ocd.orders o WHERE o.orderId = :orderId
ORDER BY ocd.costDetailsId"),


so, I commented out that named query, and replaced it with the following,

@NamedQuery(name = "OrderCostDetails.findByOrderId", query = "SELECT
o.orderCostDetails FROM Orders o WHERE o.orderId = :orderId")


also, parameterized the use of query hints (see code below) in the
@Stateless EJB that uses the named query to select data from database,

q = em.createNamedQuery("OrderCostDetails.findByOrderId")
        .setParameter("orderId", id)
        .setHint("eclipselink.query-results-cache", true);
if (readOnly) {
    q.setHint("eclipselink.read-only", true);
}
list = q.getResultList();
if (list == null || list.isEmpty()) {
    return null;
}


and added the following in the @Stateless EJB after query results are
retrieved from the database,

// ORDER BY ocd.serviceAbbr, ocd.nbrOfPassengers
Collections.sort(list, new Comparator<OrderCostDetails>() {
    @Override
    public int compare(OrderCostDetails ocd1, OrderCostDetails ocd2) {
        String ocd1SortKey = ocd1.getServiceAbbr() + ocd1.getNbrOfPassengers();
        String ocd2SortKey = ocd2.getServiceAbbr() + ocd2.getNbrOfPassengers();
        return ocd1SortKey.compareTo(ocd2SortKey);
    }
});


and now, this query, is 'no longer' a hotspot in jvisualvm; it doesn't even
show up in the 'calls' list/view of jvisualvm.

Why did I target this query? because this query seemed as though it should
be fast, but the eclipselink code was executing some 'twisted' method and a
'normalized' method, etc..., so I said to myself, I need to refactor this
query/code, so all that eclipselink code will not hinder performance.

I think the performance improved because of the following: Orders has
OrderCostDetails (1 to many), search Orders via primary key (OrderId) is
much easier than searching OrderCostDetails JOIN(ed) to Orders WHERE
Orders.OrderId = :orderId. So, I am 'sure' that eclipselink is NOT calling
some 'twist' (or normalize) 

Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-15 Thread David kerber

On 4/14/2013 11:10 PM, Howard W. Smith, Jr. wrote:

On Sun, Apr 14, 2013 at 10:52 PM, Mark Thomas ma...@apache.org  wrote:


On 14/04/2013 21:53, Howard W. Smith, Jr. wrote:


On Sun, Apr 14, 2013 at 6:51 PM, Christopher Schultz
ch...@christopherschultz.net  wrote:

  -BEGIN PGP SIGNED MESSAGE-

Hash: SHA256

Howard,

On 4/11/13 10:38 PM, Howard W. Smith, Jr. wrote:


On Thu, Apr 4, 2013 at 2:32 PM, Christopher Schultz
ch...@christopherschultz.net  wrote:

  Your heap settings should be tailored to your environment and

usage scenarios.



Interesting. I suppose 'your environment' means memory available,
operating system, hardware. Usage scenarios? hmmm... please clarify
with a brief example, thanks. :)



Here's an example: Let's say that your webapp doesn't use HttpSessions
and does no caching. You need to be able to handle 100 simultaneous
connections that do small fetches/inserts from/into a relational
database. Your pages are fairly simple and don't have any kind of
heavyweight app framework taking-up a whole bunch of memory to do
simple things.



Thanks Chris for the example. This is definitely not my app. I am
definitely relying on  user HttpSessions, and I do JPA-level caching
(statement cache and query results cache). pages are PrimeFaces and
primefaces = xhtml, html, jquery, and MyFaces/OpenWebBeans to help with
speed/performance.  And right now, the app handles on a 'few' simultaneous
connections/users that do small and large fetches/inserts from/into
relational database. :)

Hopefully one day, my app will support 100+ simultaneous
connections/users.



  For this situation, you can probably get away with a 64MiB heap. If

your webapp uses more than 64MiB, there is probably some kind of
problem. If you only need a 64MiB heap, then you can probably run on
fairly modest hardware: there's no need to lease that 128GiB server
your vendor is trying to talk you into.



Understood, thanks. I have Xms/Xmx = 1024m, and I rarely see used memory
get over 400 or 500m. the production server has 32GB RAM.



I'll summarize a number of JavaOne sessions I've been to on GC and
performance (caveat - this was a couple of years ago and GC design has
moved on since then).

- GC pause time
- throughput
- small memory footprint

You can optimise for any two of the above at the expense of the third.

Assuming you opt for min GC pause time and max throughput the question
then becomes how much heap do you need? If you look at your steady state
heap usage graph (it should be a saw-tooth) then take the heap usage at the
bottom of the saw-tooth and multiply it by 5 - that is the heap size you
should use for the GC to work optimally.

HTH,

Mark



Interesting, that does help, Mark, thanks. 250 x 5 = 1,250. I guess I was
pretty close on target when I set Xms/Xmx = 1024m.

Prior to seeing your email/response, I checked the server again, and it was
no saw-tooth at all, it was at 250 (bottom), and then saw-tooth graph came
into play...minutes later.


Make sure you give it enough time for the memory use to stabilize. 
Depending on your app and usage patterns, it can take up to days for the 
sawtooth to stabilize and start showing.  One of mine takes a couple of 
hours, and another a few days for that pattern to become visible.



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-15 Thread Howard W. Smith, Jr.
On Mon, Apr 15, 2013 at 7:40 AM, David kerber dcker...@verizon.net wrote:

 On 4/14/2013 11:10 PM, Howard W. Smith, Jr. wrote:

 On Sun, Apr 14, 2013 at 10:52 PM, Mark Thomas ma...@apache.org  wrote:

  On 14/04/2013 21:53, Howard W. Smith, Jr. wrote:

  On Sun, Apr 14, 2013 at 6:51 PM, Christopher Schultz
 ch...@christopherschultz.net  wrote:

   -BEGIN PGP SIGNED MESSAGE-

 Hash: SHA256

 Howard,

 On 4/11/13 10:38 PM, Howard W. Smith, Jr. wrote:

  On Thu, Apr 4, 2013 at 2:32 PM, Christopher Schultz
 ch...@christopherschultz.net  wrote:

   Your heap settings should be tailored to your environment and

 usage scenarios.


 Interesting. I suppose 'your environment' means memory available,
 operating system, hardware. Usage scenarios? hmmm... please clarify
 with a brief example, thanks. :)


 Here's an example: Let's say that your webapp doesn't use HttpSessions
 and does no caching. You need to be able to handle 100 simultaneous
 connections that do small fetches/inserts from/into a relational
 database. Your pages are fairly simple and don't have any kind of
 heavyweight app framework taking-up a whole bunch of memory to do
 simple things.


  Thanks Chris for the example. This is definitely not my app. I am
 definitely relying on  user HttpSessions, and I do JPA-level caching
 (statement cache and query results cache). pages are PrimeFaces and
 primefaces = xhtml, html, jquery, and MyFaces/OpenWebBeans to help with
 speed/performance.  And right now, the app handles on a 'few'
 simultaneous
 connections/users that do small and large fetches/inserts from/into
 relational database. :)

 Hopefully one day, my app will support 100+ simultaneous
 connections/users.



   For this situation, you can probably get away with a 64MiB heap. If

 your webapp uses more than 64MiB, there is probably some kind of
 problem. If you only need a 64MiB heap, then you can probably run on
 fairly modest hardware: there's no need to lease that 128GiB server
 your vendor is trying to talk you into.


  Understood, thanks. I have Xms/Xmx = 1024m, and I rarely see used
 memory
 get over 400 or 500m. the production server has 32GB RAM.


 I'll summarize a number of JavaOne sessions I've been to on GC and
 performance (caveat - this was a couple of years ago and GC design has
 moved on since then).

 - GC pause time
 - throughput
 - small memory footprint

 You can optimise for any two of the above at the expense of the third.

 Assuming you opt for min GC pause time and max throughput the question
 then becomes how much heap do you need? If you look at your steady state
 heap usage graph (it should be a saw-tooth) then take the heap usage at
 the
 bottom of the saw-tooth and multiply it by 5 - that is the heap size you
 should use for the GC to work optimally.

 HTH,

 Mark


  Interesting, that does help, Mark, thanks. 250 x 5 = 1,250. I guess I
 was
 pretty close on target when I set Xms/Xmx = 1024m.

 Prior to seeing your email/response, I checked the server again, and it
 was
 no saw-tooth at all, it was at 250 (bottom), and then saw-tooth graph came
 into play...minutes later.


 Make sure you give it enough time for the memory use to stabilize.


Will do (and doing that), thanks.  :)


 Depending on your app and usage patterns, it can take up to days for the
 sawtooth to stabilize and start showing.  One of mine takes a couple of
 hours, and another a few days for that pattern to become visible.


I see it stabilize 'in minutes' (after/during usage of the app).

Just now (prior to writing this email), I was looking at the app's usage
(via monitoring the app's own data/record audit trail page), and then
decided to check-on the app to see how it is doing/performing via
jvisualvm, and voila, again, I saw no saw-tooth[1].

I saw this, 5 to 15 minutes after a period of inactivity in the app, but
before I logged into the app, as I stated above, I checked the app's audit
trail (which can definitely be a 'heavy-lifting' database query, depending
on work done within the app on selected date, default = current date), and
it[1] still showed no activity (or saw-tooth); I assume activity within the
app can = definite/obvious saw-tooth graph (which also means, GC is working
while app is being used).

What I mentioned above is very normal behavior for my app.

[1] http://img805.imageshack.us/img805/8453/20130415jvisualvm01.png





 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org




Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-15 Thread David kerber

On 4/15/2013 10:10 AM, Howard W. Smith, Jr. wrote:

On Mon, Apr 15, 2013 at 7:40 AM, David kerber dcker...@verizon.net  wrote:


On 4/14/2013 11:10 PM, Howard W. Smith, Jr. wrote:


On Sun, Apr 14, 2013 at 10:52 PM, Mark Thomas ma...@apache.org   wrote:

  On 14/04/2013 21:53, Howard W. Smith, Jr. wrote:


  On Sun, Apr 14, 2013 at 6:51 PM, Christopher Schultz

ch...@christopherschultz.net   wrote:

   -BEGIN PGP SIGNED MESSAGE-


Hash: SHA256

Howard,

On 4/11/13 10:38 PM, Howard W. Smith, Jr. wrote:

  On Thu, Apr 4, 2013 at 2:32 PM, Christopher Schultz

ch...@christopherschultz.net   wrote:

   Your heap settings should be tailored to your environment and


usage scenarios.



Interesting. I suppose 'your environment' means memory available,
operating system, hardware. Usage scenarios? hmmm... please clarify
with a brief example, thanks. :)



Here's an example: Let's say that your webapp doesn't use HttpSessions
and does no caching. You need to be able to handle 100 simultaneous
connections that do small fetches/inserts from/into a relational
database. Your pages are fairly simple and don't have any kind of
heavyweight app framework taking-up a whole bunch of memory to do
simple things.


  Thanks Chris for the example. This is definitely not my app. I am

definitely relying on  user HttpSessions, and I do JPA-level caching
(statement cache and query results cache). pages are PrimeFaces and
primefaces = xhtml, html, jquery, and MyFaces/OpenWebBeans to help with
speed/performance.  And right now, the app handles on a 'few'
simultaneous
connections/users that do small and large fetches/inserts from/into
relational database. :)

Hopefully one day, my app will support 100+ simultaneous
connections/users.



   For this situation, you can probably get away with a 64MiB heap. If


your webapp uses more than 64MiB, there is probably some kind of
problem. If you only need a 64MiB heap, then you can probably run on
fairly modest hardware: there's no need to lease that 128GiB server
your vendor is trying to talk you into.


  Understood, thanks. I have Xms/Xmx = 1024m, and I rarely see used

memory
get over 400 or 500m. the production server has 32GB RAM.



I'll summarize a number of JavaOne sessions I've been to on GC and
performance (caveat - this was a couple of years ago and GC design has
moved on since then).

- GC pause time
- throughput
- small memory footprint

You can optimise for any two of the above at the expense of the third.

Assuming you opt for min GC pause time and max throughput the question
then becomes how much heap do you need? If you look at your steady state
heap usage graph (it should be a saw-tooth) then take the heap usage at
the
bottom of the saw-tooth and multiply it by 5 - that is the heap size you
should use for the GC to work optimally.

HTH,

Mark


  Interesting, that does help, Mark, thanks. 250 x 5 = 1,250. I guess I

was
pretty close on target when I set Xms/Xmx = 1024m.

Prior to seeing your email/response, I checked the server again, and it
was
no saw-tooth at all, it was at 250 (bottom), and then saw-tooth graph came
into play...minutes later.



Make sure you give it enough time for the memory use to stabilize.



Will do (and doing that), thanks.  :)



Depending on your app and usage patterns, it can take up to days for the
sawtooth to stabilize and start showing.  One of mine takes a couple of
hours, and another a few days for that pattern to become visible.



I see it stabilize 'in minutes' (after/during usage of the app).

Just now (prior to writing this email), I was looking at the app's usage
(via monitoring the app's own data/record audit trail page), and then
decided to check-on the app to see how it is doing/performing via
jvisualvm, and voila, again, I saw no saw-tooth[1].

I saw this, 5 to 15 minutes after a period of inactivity in the app, but
before I logged into the app, as I stated above, I checked the app's audit
trail (which can definitely be a 'heavy-lifting' database query, depending
on work done within the app on selected date, default = current date), and
it[1] still showed no activity (or saw-tooth); I assume activity within the
app can = definite/obvious saw-tooth graph (which also means, GC is working
while app is being used).

What I mentioned above is very normal behavior for my app.

[1] http://img805.imageshack.us/img805/8453/20130415jvisualvm01.png


These graphs are only showing ~40 seconds of data.  I'll bet if you let 
the app run for several minutes or hours, you'll see it.











-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org







-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-15 Thread Mark Eggers

On 4/15/2013 7:25 AM, David kerber wrote:

On 4/15/2013 10:10 AM, Howard W. Smith, Jr. wrote:

On Mon, Apr 15, 2013 at 7:40 AM, David kerber dcker...@verizon.net
wrote:


On 4/14/2013 11:10 PM, Howard W. Smith, Jr. wrote:


On Sun, Apr 14, 2013 at 10:52 PM, Mark Thomas ma...@apache.org
wrote:

  On 14/04/2013 21:53, Howard W. Smith, Jr. wrote:


  On Sun, Apr 14, 2013 at 6:51 PM, Christopher Schultz

ch...@christopherschultz.net   wrote:

   -BEGIN PGP SIGNED MESSAGE-


Hash: SHA256

Howard,

On 4/11/13 10:38 PM, Howard W. Smith, Jr. wrote:

  On Thu, Apr 4, 2013 at 2:32 PM, Christopher Schultz

ch...@christopherschultz.net   wrote:

   Your heap settings should be tailored to your environment and


usage scenarios.



Interesting. I suppose 'your environment' means memory available,
operating system, hardware. Usage scenarios? hmmm... please clarify
with a brief example, thanks. :)



Here's an example: Let's say that your webapp doesn't use
HttpSessions
and does no caching. You need to be able to handle 100 simultaneous
connections that do small fetches/inserts from/into a relational
database. Your pages are fairly simple and don't have any kind of
heavyweight app framework taking-up a whole bunch of memory to do
simple things.


  Thanks Chris for the example. This is definitely not my app. I am

definitely relying on  user HttpSessions, and I do JPA-level caching
(statement cache and query results cache). pages are PrimeFaces and
primefaces = xhtml, html, jquery, and MyFaces/OpenWebBeans to help
with
speed/performance.  And right now, the app handles on a 'few'
simultaneous
connections/users that do small and large fetches/inserts from/into
relational database. :)

Hopefully one day, my app will support 100+ simultaneous
connections/users.



   For this situation, you can probably get away with a 64MiB
heap. If


your webapp uses more than 64MiB, there is probably some kind of
problem. If you only need a 64MiB heap, then you can probably run on
fairly modest hardware: there's no need to lease that 128GiB server
your vendor is trying to talk you into.


  Understood, thanks. I have Xms/Xmx = 1024m, and I rarely see used

memory
get over 400 or 500m. the production server has 32GB RAM.



I'll summarize a number of JavaOne sessions I've been to on GC and
performance (caveat - this was a couple of years ago and GC design has
moved on since then).

- GC pause time
- throughput
- small memory footprint

You can optimise for any two of the above at the expense of the third.

Assuming you opt for min GC pause time and max throughput the question
then becomes how much heap do you need? If you look at your steady
state
heap usage graph (it should be a saw-tooth) then take the heap
usage at
the
bottom of the saw-tooth and multiply it by 5 - that is the heap
size you
should use for the GC to work optimally.

HTH,

Mark


  Interesting, that does help, Mark, thanks. 250 x 5 = 1,250. I
guess I

was
pretty close on target when I set Xms/Xmx = 1024m.

Prior to seeing your email/response, I checked the server again, and it
was
no saw-tooth at all, it was at 250 (bottom), and then saw-tooth
graph came
into play...minutes later.



Make sure you give it enough time for the memory use to stabilize.



Will do (and doing that), thanks.  :)



Depending on your app and usage patterns, it can take up to days for the
sawtooth to stabilize and start showing.  One of mine takes a couple of
hours, and another a few days for that pattern to become visible.



I see it stabilize 'in minutes' (after/during usage of the app).

Just now (prior to writing this email), I was looking at the app's usage
(via monitoring the app's own data/record audit trail page), and then
decided to check-on the app to see how it is doing/performing via
jvisualvm, and voila, again, I saw no saw-tooth[1].

I saw this, 5 to 15 minutes after a period of inactivity in the app, but
before I logged into the app, as I stated above, I checked the app's
audit
trail (which can definitely be a 'heavy-lifting' database query,
depending
on work done within the app on selected date, default = current date),
and
it[1] still showed no activity (or saw-tooth); I assume activity
within the
app can = definite/obvious saw-tooth graph (which also means, GC is
working
while app is being used).

What I mentioned above is very normal behavior for my app.

[1] http://img805.imageshack.us/img805/8453/20130415jvisualvm01.png


These graphs are only showing ~40 seconds of data.  I'll bet if you let
the app run for several minutes or hours, you'll see it.



Yep, there's no history in that data.

What you can do (probably in a test environment) is the following:

1. Set up monitoring (visualvm, psi-probe, jconsole)
2. Abuse your application with well-crafted JMeter (or other) tests
3. Watch memory

This becomes a little more challenging with AJAX-style applications 
(yours is a PrimeFaces / JSF / AJAX one, right?), but I've seen some 
notes on this. Google is your friend.
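
As a rough sketch of steps 1 and 3 above, heap usage can also be sampled from plain Java via the standard java.lang.management API; the one-minute interval and the output format are arbitrary choices, not anything prescribed in this thread. In practice the loop would run inside the Tomcat JVM (for example from a daemon thread started by the webapp) or the same data would be read remotely over JMX, rather than from a separate process:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapWatcher {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            // Logged over hours, used vs. committed shows the saw-tooth and its
            // baseline (the bottom of the tooth) that the sizing advice relies on.
            System.out.printf("heap used=%d MiB committed=%d MiB max=%d MiB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
            Thread.sleep(60000L);
        }
    }
}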

Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-15 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Howard,

On 4/14/13 9:53 PM, Howard W. Smith, Jr. wrote:
 I am definitely relying on  user HttpSessions, and I do JPA-level
 caching (statement cache and query results cache). pages are
 PrimeFaces and primefaces = xhtml, html, jquery, and
 MyFaces/OpenWebBeans to help with speed/performance.  And right
 now, the app handles on a 'few' simultaneous connections/users that
 do small and large fetches/inserts from/into relational database.
 :)

You can tune the JPA caching, etc. to meet your environmental needs,
etc., so you don't *need* a huge heap. If you find that you need to be
able to improve your performance, you might be able to increase your
cache size if it in fact improves things.
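
A minimal illustration of where that knob lives for an EclipseLink setup like the one described later in this thread (the property names are EclipseLink's documented persistence-unit properties; the unit name and the values are made up for the example):

<persistence-unit name="examplePU">
  <properties>
    <!-- shared (second-level) cache on/off and the default per-class cache size -->
    <property name="eclipselink.cache.shared.default" value="true"/>
    <property name="eclipselink.cache.size.default" value="500"/>
  </properties>
</persistence-unit>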

 sometimes, i do keep large amount of data in user HttpSession
 objects, but still being somewhat junior java/jsf developer and
 listening to you all on tomcat list and other senior java/jsf
 developers, I want to move some of my logic and caching of data
 from SessionScoped beans to RequestScoped beans.

You might be able to have your cake and eat it, too. There is an
interesting class called WeakReference that you can use to interact
with the memory manager and garbage-collector. If you have a bunch of
stuff cached in the session, as long as you could re-construct the
cache given some value (like user_id or whatever), you can make the
big, cached stuff in the session into so-called weak-references. If
the GC wants to re-claim memory, it can discard weak references and
the WeakReference object will then point to null. That allows you to
have a nice cache that auto-cleans if you start running low on memory.

I've written a Filter and HttpSession wrapper that can do that kind of
thing transparently to the application code. I don't actually use it
right now -- it was just a proof-of-concept -- but it's a quick and
dirty way to get caching while still keeping a safety valve.
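
A minimal sketch of the session-side idea (not the filter/wrapper mentioned above, which isn't shown in this thread); the attribute name and the rebuild step are placeholders:

import java.lang.ref.WeakReference;
import javax.servlet.http.HttpSession;

public class SessionReportCache {

    // Hypothetical attribute name, not taken from any real webapp.
    private static final String ATTR = "example.reportCache";

    // Returns the cached value, rebuilding it if the GC has already reclaimed it.
    public static Object get(HttpSession session, String userId) {
        @SuppressWarnings("unchecked")
        WeakReference<Object> ref = (WeakReference<Object>) session.getAttribute(ATTR);
        Object cached = (ref == null) ? null : ref.get();
        if (cached == null) {
            cached = rebuildFor(userId);  // e.g. re-run the query that built the cache
            session.setAttribute(ATTR, new WeakReference<Object>(cached));
        }
        return cached;
    }

    private static Object rebuildFor(String userId) {
        // Placeholder for reloading whatever was cached (query results, report data, ...).
        return new Object();
    }
}

One design note: a WeakReference can be cleared at any collection once nothing else strongly references the value, while a SoftReference is normally cleared only under memory pressure, so a SoftReference often behaves more like the "auto-cleaning cache" described above when the rebuild is expensive.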

- -chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-15 Thread Howard W. Smith, Jr.
On Mon, Apr 15, 2013 at 11:18 AM, Mark Eggers its_toas...@yahoo.com wrote:

 On 4/15/2013 7:25 AM, David kerber wrote:

 On 4/15/2013 10:10 AM, Howard W. Smith, Jr. wrote:

 On Mon, Apr 15, 2013 at 7:40 AM, David kerberdcker...@verizon.net
 wrote:

  On 4/14/2013 11:10 PM, Howard W. Smith, Jr. wrote:

  On Sun, Apr 14, 2013 at 10:52 PM, Mark Thomasma...@apache.org
 wrote:

   On 14/04/2013 21:53, Howard W. Smith, Jr. wrote:


   On Sun, Apr 14, 2013 at 6:51 PM, Christopher Schultz

 ch...@christopherschultz.net   wrote:

-BEGIN PGP SIGNED MESSAGE-

  Hash: SHA256

 Howard,

 On 4/11/13 10:38 PM, Howard W. Smith, Jr. wrote:

   On Thu, Apr 4, 2013 at 2:32 PM, Christopher Schultz

 ch...@christopherschultz.net   wrote:

Your heap settings should be tailored to your environment and

  usage scenarios.


  Interesting. I suppose 'your environment' means memory available,
 operating system, hardware. Usage scenarios? hmmm... please clarify
 with a brief example, thanks. :)


  Here's an example: Let's say that your webapp doesn't use
 HttpSessions
 and does no caching. You need to be able to handle 100 simultaneous
 connections that do small fetches/inserts from/into a relational
 database. Your pages are fairly simple and don't have any kind of
 heavyweight app framework taking-up a whole bunch of memory to do
 simple things.


   Thanks Chris for the example. This is definitely not my app. I am

 definitely relying on  user HttpSessions, and I do JPA-level caching
 (statement cache and query results cache). pages are PrimeFaces and
 primefaces = xhtml, html, jquery, and MyFaces/OpenWebBeans to help
 with
 speed/performance.  And right now, the app handles on a 'few'
 simultaneous
 connections/users that do small and large fetches/inserts from/into
 relational database. :)

 Hopefully one day, my app will support 100+ simultaneous
 connections/users.



For this situation, you can probably get away with a 64MiB
 heap. If

  your webapp uses more than 64MiB, there is probably some kind of
 problem. If you only need a 64MiB heap, then you can probably run on
 fairly modest hardware: there's no need to lease that 128GiB server
 your vendor is trying to talk you into.


   Understood, thanks. I have Xms/Xmx = 1024m, and I rarely see used

 memory
 get over 400 or 500m. the production server has 32GB RAM.


 I'll summarize a number of JavaOne sessions I've been to on GC and
 performance (caveat - this was a couple of years ago and GC design has
 moved on since then).

 - GC pause time
 - throughput
 - small memory footprint

 You can optimise for any two of the above at the expense of the third.

 Assuming you opt for min GC pause time and max throughput the question
 then becomes how much heap do you need? If you look at your steady
 state
 heap usage graph (it should be a saw-tooth) then take the heap
 usage at
 the
 bottom of the saw-tooth and multiply it by 5 - that is the heap
 size you
 should use for the GC to work optimally.

 HTH,

 Mark


   Interesting, that does help, Mark, thanks. 250 x 5 = 1,250. I
 guess I

 was
 pretty close on target when I set Xms/Xmx = 1024m.

 Prior to seeing your email/response, I checked the server again, and it
 was
 no saw-tooth at all, it was at 250 (bottom), and then saw-tooth
 graph came
 into play...minutes later.


 Make sure you give it enough time for the memory use to stabilize.



 Will do (and doing that), thanks.  :)


  Depending on your app and usage patterns, it can take up to days for the
 sawtooth to stabilize and start showing.  One of mine takes a couple of
 hours, and another a few days for that pattern to become visible.



 I see it stabilize 'in minutes' (after/during usage of the app).

 Just now (prior to writing this email), I was looking at the app's usage
 (via monitoring the app's own data/record audit trail page), and then
 decided to check-on the app to see how it is doing/performing via
 jvisualvm, and voila, again, I saw no saw-tooth[1].

 I saw this, 5 to 15 minutes after a period of inactivity in the app, but
 before I logged into the app, as I stated above, I checked the app's
 audit
 trail (which can definitely be a 'heavy-lifting' database query,
 depending
 on work done within the app on selected date, default = current date),
 and
 it[1] still showed no activity (or saw-tooth); I assume activity
 within the
 app can = definite/obvious saw-tooth graph (which also means, GC is
 working
 while app is being used).

 What I mentioned above is very normal behavior for my app.

 [1] http://img805.imageshack.us/img805/8453/20130415jvisualvm01.png


 These graphs are only showing ~40 seconds of data.  I'll bet if you let
 the app run for several minutes or hours, you'll see it.


 Yep, there's no history in that data.


Agreed!  :)



 What you can do (probably in a test environment) is the following:

 1. Set up monitoring (visualvm, psi-probe, jconsole)

Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-15 Thread Howard W. Smith, Jr.
On Mon, Apr 15, 2013 at 1:08 PM, Christopher Schultz 
ch...@christopherschultz.net wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 Howard,

 On 4/14/13 9:53 PM, Howard W. Smith, Jr. wrote:
  I am definitely relying on  user HttpSessions, and I do JPA-level
  caching (statement cache and query results cache). pages are
  PrimeFaces and primefaces = xhtml, html, jquery, and
  MyFaces/OpenWebBeans to help with speed/performance.  And right
  now, the app handles on a 'few' simultaneous connections/users that
  do small and large fetches/inserts from/into relational database.
  :)

 You can tune the JPA caching, etc. to meet your environmental needs,
 etc., so you don't *need* a huge heap. If you find that you need to be
 able to improve your performance, you might be able to increase your
 cache size if it in fact improves things.


doing this, and just made some code changes to tap a little more into JPA
caching, but one of my endusers just did a user operation on one of the
pages, and he sent me a screen capture of the nasty eclipselink error that
he experienced. evidently, i need to tweak caching or do not use the cache
at that point in the app. :)

i explained to him that i did some major changes in the app, related to
caching... and i told him that it was for 'performance improvement', and
told him the same as Mark just told me, Google is your friend (and told him
that 'wiki' keyword in the search is your friend, too).  :)



  sometimes, i do keep large amount of data in user HttpSession
  objects, but still being somewhat junior java/jsf developer and
  listening to you all on tomcat list and other senior java/jsf
  developers, I want to move some of my logic and caching of data
  from SessionScoped beans to RequestScoped beans.

 You might be able to have your cake and eat it, too. There is an
 interesting class called WeakReference that you can use to interact
 with the memory manager and garbage-collector. If you have a bunch of
 stuff cached in the session, as long as you could re-construct the
 cache given some value (like user_id or whatever), you can make the
 big, cached stuff in the session into so-called weak-references. If
 the GC wants to re-claim memory, it can discard weak references and
 the WeakReference object will then point to null. That allows you to
 have a nice cache that auto-cleans if you start running low on memory.


very interesting. since i'm using gson to accept some JSON-wrapped data
into my app from our public website (static pages and formmail, only, for
now, until i integrate it with the web app i developed for personnel, only,
for now), i didn't like the warning/msg when tomcat/tomee 'stops'...says
that weak reference could not be deleted or something like that (sorry, i
forgot exactly what it said).  Anyway, i followed some issue in gson's
issue tracker (on code.google.com), and someone offered some code to delete
gson from weak reference, so i decided to add that to my app, when i
shutdown app.

so, i do know that the weak reference class is available. really have not
'used' it yet, though. :)

i have some things in mind for what I want to do with that large session scoped
data. I am considering caching it at application level, where all users have the
ability to update that huge List and extract data. I was thinking of
using @Singleton Lock(READ) to control access. it takes no time at all to
search the List for the information that it needs, and it takes no time
at all to re-populate the List. Since we discuss GC a lot on this list, i
wonder if you all recommend setting the 'list' to null first and then
List ... = new ArrayList(newList), or whether new ArrayList(newList) is
sufficient for good GC.
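
For what it's worth, a minimal sketch of that idea using the EJB 3.1 annotations TomEE provides (the element type and the refresh source are placeholders, not code from this thread). On the GC side question: simply assigning the new list to the field is enough, because the old list becomes unreachable as soon as nothing else references it; setting the field to null first buys nothing.

import java.util.ArrayList;
import java.util.List;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

@Singleton
@Lock(LockType.READ)              // readers may run concurrently
public class SharedListCache {

    // Placeholder element type; the real app would hold its entity/DTO list here.
    private volatile List<String> items = new ArrayList<String>();

    public List<String> getItems() {
        return items;             // concurrent reads under the class-level READ lock
    }

    @Lock(LockType.WRITE)         // writers get exclusive access while swapping the list
    public void refresh(List<String> freshlyLoaded) {
        // Replacing the reference is sufficient for GC; no need to null the old list first.
        this.items = new ArrayList<String>(freshlyLoaded);
    }
}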



 I've written a Filter and HttpSession wrapper that can do that kind of
 thing transparently to the application code. I don't actually use it
 right now -- it was just a proof-of-concept -- but it's a quick and
 dirty way to get caching while still keeping a safety valve.


that's a nice proof of concept! I guess i've heard so much bad about people
not cleaning up threadlocals that I try to avoid usage of threadlocal, but
it's interesting: there is so much talk on this list about threadlocals, yet
threadlocals seem to be used by many implementations/software out there.
Not naming any names. :)




 - -chris

RE: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-15 Thread Jeffrey Janner
 -Original Message-
 From: Christopher Schultz [mailto:ch...@christopherschultz.net]
 Sent: Sunday, April 14, 2013 5:52 PM
 To: Tomcat Users List
 Subject: Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)
 
 I've had people tell me that I should run with the biggest heap I can
 afford meaning both financially - 'cause you have to buy a bunch of
 memory - and reasonably within the constraints of the OS (it's not
 reasonable to run a 9.9GiB heap with 10GiB of physical RAM, for
 instance). The reasoning is twofold:
 
 1. If you have leaks, they will take a lot more time to blow up.
 (Obviously, this is the opposite of my recommendation, but it's worth
 mentioning as it's a sound argument. I just disagree with the
 conclusion). If you watch the heap-usage profile over time, you can see
 it going up and up and instead of getting an OOME, you can predict when
 it will happen and bounce the server at your convenience.
 
Chris -
My back-argument to this reasoning is this:

It's fine for the production side in order to maximize uptime while you 
investigate the cause of the leaks.
Then I recommend your suggestion for the Dev/Test environment to isolate the 
cause(s).
Once fixed, bring the production side back to something resembling normality.

Jeff

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-14 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Howard,

On 4/11/13 10:38 PM, Howard W. Smith, Jr. wrote:
 On Thu, Apr 4, 2013 at 2:32 PM, Christopher Schultz  
 ch...@christopherschultz.net wrote:
 
 Your heap settings should be tailored to your environment and
 usage scenarios.
 
 Interesting. I suppose 'your environment' means memory available,
 operating system, hardware. Usage scenarios? hmmm... please clarify
 with a brief example, thanks. :)

Here's an example: Let's say that your webapp doesn't use HttpSessions
and does no caching. You need to be able to handle 100 simultaneous
connections that do small fetches/inserts from/into a relational
database. Your pages are fairly simple and don't have any kind of
heavyweight app framework taking-up a whole bunch of memory to do
simple things.

For this situation, you can probably get away with a 64MiB heap. If
your webapp uses more than 64MiB, there is probably some kind of
problem. If you only need a 64MiB heap, then you can probably run on
fairly modest hardware: there's no need to lease that 128GiB server
your vendor is trying to talk you into.

On the other hand, maybe you have aggressive caching of data that
benefits from having a large amount of heap space. Or maybe you need
to support 1000 simultaneous connections and need to do XSLT
processing of multi-megabyte XML documents and your XSLTs don't allow
stream-processing of the XML document (oops). Or maybe you have to
keep a large amount of data in users' HttpSession objects (maybe a few
dozen MiB) and you need to support a few thousand simultaneous users
(not connections). 10k users each with a 5MiB heap = 48GiB.

There is no such thing as a good recommendation for heap size unless
the person making the recommendation really understands your use case(s).

I generally have these two suggestions that I've found to be
universally reasonable:

1. Make -Xms = -Xmx to eliminate heap thrashing: the JVM is going to
eat-up that large heap space at some point if you have sized things
correctly, so you may as well not make the memory manager have to work
any harder than necessary.

2. Run with the lowest heap space that is reasonable for your
environment. I like doing this because it actually helps you diagnose
things more easily when they go wrong: a small heap yields a smaller
heap-dump file, is GC'd more frequently and therefore contains fewer
long-lived dead objects, and will cause an OOME sooner if you have
some kind of leak. Of course, nobody wants to experience an OOME but
you also don't want to watch a 50GiB heap fill-up 800 bytes at a time
due to a small leak.
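
One way to apply points 1 and 2 above on a Unix-style Tomcat install is a bin/setenv.sh along these lines; the 512m figure is only an example, and on Windows the same flags would go into the service's Java options (tomcat7w) instead, as shown elsewhere in this thread. -XX:+HeapDumpOnOutOfMemoryError is an optional extra that pairs well with the small-heap-dump point:

CATALINA_OPTS="-Xms512m -Xmx512m \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/var/log/tomcat"
export CATALINA_OPTS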

I've had people tell me that I should run with the biggest heap I can
afford meaning both financially - 'cause you have to buy a bunch of
memory - and reasonably within the constraints of the OS (it's not
reasonable to run a 9.9GiB heap with 10GiB of physical RAM, for
instance). The reasoning is twofold:

1. If you have leaks, they will take a lot more time to blow up.
(Obviously, this is the opposite of my recommendation, but it's worth
mentioning as it's a sound argument. I just disagree with the
conclusion). If you watch the heap-usage profile over time, you can
see it going up and up and instead of getting an OOME, you can predict
when it will happen and bounce the server at your convenience.

2. Since the cost of a GC is related to the number of live objects
during a collection and not the size of the heap (though obviously a
smaller heap can fit fewer live objects!), having a huge heap means
that GCs will occur less frequently and so your total GC throughput
will (at least theoretically) be higher.

A counter-argument to the second #2 above is that short-lived objects
will be collected quickly and long-lived objects will quickly be
promoted to older generations, so after a short period of runtime,
your GCs should get to the point where they are very cheap regardless
of heap size.

 heap settings tailored to 'my' environment and usage... hmmm, not
 many users hitting the app, app is not used 'all day long', app has
 @Schedule tasks that connects to an email acct, downloads  customer
 email requests, and inserts customer requests into database (Martin
 recommended to close resources; sometime ago, I had to refactor all
 of that code, and I am closing the connection to the email acct,
 and open the connection when @Schedule tasks are executed), i am
 using JMS via TomEE/activeMQ to perform some tasks, asynchronously
 (tomee committers told me that use of @Asynchronous would be
 better, and less overhead); honestly, I do open 2 or 3 JMS
 resources/queues in @ApplicationScoped @PostConstruct (if I'm not 
mistaken) and close those resources in @ApplicationScoped
 @PreDestroy; why? I think I read on ActiveMQ site/documentation,
 where they recommend that that is better on performance, than
 open/close-on-demand.

IMO, batch processes like the one you describe are better done by
specialty schedulers like cron on *NIX and the Task Scheduler 

Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-14 Thread Howard W. Smith, Jr.
On Sun, Apr 14, 2013 at 6:51 PM, Christopher Schultz 
ch...@christopherschultz.net wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 Howard,

 On 4/11/13 10:38 PM, Howard W. Smith, Jr. wrote:
  On Thu, Apr 4, 2013 at 2:32 PM, Christopher Schultz 
  ch...@christopherschultz.net wrote:
 
  Your heap settings should be tailored to your environment and
  usage scenarios.
 
  Interesting. I suppose 'your environment' means memory available,
  operating system, hardware. Usage scenarios? hmmm... please clarify
  with a brief example, thanks. :)

 Here's an example: Let's say that your webapp doesn't use HttpSessions
 and does no caching. You need to be able to handle 100 simultaneous
 connections that do small fetches/inserts from/into a relational
 database. Your pages are fairly simple and don't have any kind of
 heavyweight app framework taking-up a whole bunch of memory to do
 simple things.


Thanks Chris for the example. This is definitely not my app. I am
definitely relying on  user HttpSessions, and I do JPA-level caching
(statement cache and query results cache). pages are PrimeFaces and
primefaces = xhtml, html, jquery, and MyFaces/OpenWebBeans to help with
speed/performance.  And right now, the app handles on a 'few' simultaneous
connections/users that do small and large fetches/inserts from/into
relational database. :)

Hopefully one day, my app will support 100+ simultaneous
connections/users.



 For this situation, you can probably get away with a 64MiB heap. If
 your webapp uses more than 64MiB, there is probably some kind of
 problem. If you only need a 64MiB heap, then you can probably run on
 fairly modest hardware: there's no need to lease that 128GiB server
 your vendor is trying to talk you into.


Understood, thanks. I have Xms/Xmx = 1024m, and I rarely see used memory
get over 400 or 500m. the production server has 32GB RAM.




 On the other hand, maybe you have aggressive caching of data that
 benefits from having a large amount of heap space. Or maybe you need
 to support 1000 simultaneous connections and need to do XSLT
 processing of multi-megabyte XML documents and your XSLTs don't allow
 stream-processing of the XML document (oops).


Interesting.


Or maybe you have to keep a large amount of data in users' HttpSession
 objects (maybe a few
 dozen MiB) and you need to support a few thousand simultaneous users
 (not connections). 10k users each with a 5MiB heap = 48GiB.


sometimes, i do keep large amount of data in user HttpSession objects, but
still being somewhat junior java/jsf developer and listening to you all on
tomcat list and other senior java/jsf developers, I want to move some of my
logic and caching of data from SessionScoped beans to RequestScoped beans.

That's interesting that you say, '10k users each with 5MB heap = 48 GB'; i
never thought about calculating a size estimate per user; maybe, i should
do that when i am done with all of my optimizing of the app. i've been in
optimize mode for the last 5 to 8 months (slowly-but-surely, mojarra to
myfaces, JSF managed beans to CDI managed beans, in preparation for JSF 2.2
and/or Java EE 7, glassfish to tomcat/tomee, and other things after/while
listening to you all about JVM tuning, preventing/debugging/resolving
memory leaks, etc...


 There is no such thing as a good recommendation for heap size unless
 the person making the recommendation really understands your use case(s).


understood/agreed




 I generally have these two suggestions that I've found to be
 universally reasonable:

 1. Make -Xms = -Xmx to eliminate heap thrashing: the JVM is going to
 eat-up that large heap space at some point if you have sized things
 correctly, so you may as well not make the memory manager have to work
 any harder than necessary.


doing this, as I've seen this recommended quite often on this list and
others (tomee, openwebbeans, openejb).

if you have sized things correctly? size things correctly = set -Xms and
-Xmx appropriately to meet your system/software requirements?




 2. Run with the lowest heap space that is reasonable for your
 environment. I like doing this because it actually helps you diagnose
 things more easily when they go wrong: a small heap yields a smaller
 heap-dump file, is GC'd more frequently and therefore contains fewer
 long-lived dead objects, and will cause an OOME sooner if you have
 some kind of leak. Of course, nobody wants to experience an OOME but
 you also don't want to watch a 50GiB heap fill-up 800 bytes at a time
 due to a small leak.


Agreed and this is definitely/really nice to know. Listening to you all
here on tomcat list, that is why I lowered Xms/Xmx from 4096 to 1024MB.
Listening to you, now, and since I hardly ever see heap rise above 500 or
600m, I could lower Xms/Xmx from 1024 to maybe 800/900m, but remember, I
shutdown-deploy-start tomee/tomcat quite often, almost daily, so i'm really
not giving it a chance to see if OOME will occur, even when set to 1024m.

i have 

Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-14 Thread Mark Thomas

On 14/04/2013 21:53, Howard W. Smith, Jr. wrote:

On Sun, Apr 14, 2013 at 6:51 PM, Christopher Schultz 
ch...@christopherschultz.net wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Howard,

On 4/11/13 10:38 PM, Howard W. Smith, Jr. wrote:

On Thu, Apr 4, 2013 at 2:32 PM, Christopher Schultz 
ch...@christopherschultz.net wrote:


Your heap settings should be tailored to your environment and
usage scenarios.


Interesting. I suppose 'your environment' means memory available,
operating system, hardware. Usage scenarios? hmmm... please clarify
with a brief example, thanks. :)


Here's an example: Let's say that your webapp doesn't use HttpSessions
and does no caching. You need to be able to handle 100 simultaneous
connections that do small fetches/inserts from/into a relational
database. Your pages are fairly simple and don't have any kind of
heavyweight app framework taking-up a whole bunch of memory to do
simple things.



Thanks Chris for the example. This is definitely not my app. I am
definitely relying on  user HttpSessions, and I do JPA-level caching
(statement cache and query results cache). pages are PrimeFaces and
primefaces = xhtml, html, jquery, and MyFaces/OpenWebBeans to help with
speed/performance.  And right now, the app handles on a 'few' simultaneous
connections/users that do small and large fetches/inserts from/into
relational database. :)

Hopefully one day, my app will support 100+ simultaneous
connections/users.




For this situation, you can probably get away with a 64MiB heap. If
your webapp uses more than 64MiB, there is probably some kind of
problem. If you only need a 64MiB heap, then you can probably run on
fairly modest hardware: there's no need to lease that 128GiB server
your vendor is trying to talk you into.



Understood, thanks. I have Xms/Xmx = 1024m, and I rarely see used memory
get over 400 or 500m. the production server has 32GB RAM.


I'll summarize a number of JavaOne sessions I've been to on GC and
performance (caveat - this was a couple of years ago and GC design has 
moved on since then).


- GC pause time
- throughput
- small memory footprint

You can optimise for any two of the above at the expense of the third.

Assuming you opt for min GC pause time and max throughput the question 
then becomes how much heap do you need? If you look at your steady state 
heap usage graph (it should be a saw-tooth) then take the heap usage at 
the bottom of the saw-tooth and multiply it by 5 - that is the heap size 
you should use for the GC to work optimally.


HTH,

Mark
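
Spelled out with the baseline figure Howard reports elsewhere in this thread (a saw-tooth bottom of roughly 250 MB): 250 x 5 = 1250, which combined with the usual "make -Xms equal to -Xmx" advice would come out as something like

-Xms1250m -Xmx1250m

The 250 MB baseline is just that reported figure; the rule itself is the multiply-by-five guideline above.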

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-14 Thread Howard W. Smith, Jr.
On Sun, Apr 14, 2013 at 10:52 PM, Mark Thomas ma...@apache.org wrote:

 On 14/04/2013 21:53, Howard W. Smith, Jr. wrote:

 On Sun, Apr 14, 2013 at 6:51 PM, Christopher Schultz 
 ch...@christopherschultz.net wrote:

  -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 Howard,

 On 4/11/13 10:38 PM, Howard W. Smith, Jr. wrote:

 On Thu, Apr 4, 2013 at 2:32 PM, Christopher Schultz 
 ch...@christopherschultz.net wrote:

  Your heap settings should be tailored to your environment and
 usage scenarios.


 Interesting. I suppose 'your environment' means memory available,
 operating system, hardware. Usage scenarios? hmmm... please clarify
 with a brief example, thanks. :)


 Here's an example: Let's say that your webapp doesn't use HttpSessions
 and does no caching. You need to be able to handle 100 simultaneous
 connections that do small fetches/inserts from/into a relational
 database. Your pages are fairly simple and don't have any kind of
 heavyweight app framework taking-up a whole bunch of memory to do
 simple things.


 Thanks Chris for the example. This is definitely not my app. I am
 definitely relying on  user HttpSessions, and I do JPA-level caching
 (statement cache and query results cache). pages are PrimeFaces and
 primefaces = xhtml, html, jquery, and MyFaces/OpenWebBeans to help with
 speed/performance.  And right now, the app handles on a 'few' simultaneous
 connections/users that do small and large fetches/inserts from/into
 relational database. :)

 Hopefully one day, my app will support 100+ simultaneous
 connections/users.



  For this situation, you can probably get away with a 64MiB heap. If
 your webapp uses more than 64MiB, there is probably some kind of
 problem. If you only need a 64MiB heap, then you can probably run on
 fairly modest hardware: there's no need to lease that 128GiB server
 your vendor is trying to talk you into.


 Understood, thanks. I have Xms/Xmx = 1024m, and I rarely see used memory
 get over 400 or 500m. the production server has 32GB RAM.


 I'll summarize a number of JavaOne sessions I've been to on GC and
 performance (caveat - this was a couple of years ago and GC design has
 moved on since then).

 - GC pause time
 - throughput
 - small memory footprint

 You can optimise for any two of the above at the expense of the third.

 Assuming you opt for min GC pause time and max throughput the question
 then becomes how much heap do you need? If you look at your steady state
 heap usage graph (it should be a saw-tooth) then take the heap usage at the
 bottom of the saw-tooth and multiply it by 5 - that is the heap size you
 should use for the GC to work optimally.

 HTH,

 Mark


Interesting, that does help, Mark, thanks. 250 x 5 = 1,250. I guess I was
pretty close on target when I set Xms/Xmx = 1024m.

Prior to seeing your email/response, I checked the server again, and it was
no saw-tooth at all, it was at 250 (bottom), and then saw-tooth graph came
into play...minutes later.

Thanks again!







Re: Re : Memory leak in Tomcat 6.0.35 ( 64 bit)

2013-04-11 Thread Howard W. Smith, Jr.
Chris,

My apologies for late response; just realized earlier this afternoon that I
didn't respond.

On Thu, Apr 4, 2013 at 2:32 PM, Christopher Schultz 
ch...@christopherschultz.net wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 Howard,

 On 4/3/13 4:15 PM, Howard W. Smith, Jr. wrote:
  On Tue, Apr 2, 2013 at 5:12 PM, Christopher Schultz 
  ch...@christopherschultz.net wrote:
 
  If you don't re-deploy your webapp, then daily rolling Tomcat
  restarts are not necessary. I wonder why you are re-deploying
  your web application so many times?
 
  As a new tomcat user and still somewhat junior java/jsf developer,
  I restart tomcat whenever I have new software changes to
  deploy-and-want-to-run-on the production server. sometimes, I
  deploy-and-restart multiple times per day, but sometimes, I'm able
  to let tomcat/tomee run for days without restart.

 That's not really conducive to high-availability. Are you using
 Tomcat's parallel-deployment feature?


Agreed, and not using parallel-deployment feature at the moment.
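
For reference, Tomcat 7's parallel deployment is driven purely by the WAR file name: the same context is deployed twice with a ##version suffix, for example

webapps/myapp##0001.war
webapps/myapp##0002.war

and existing sessions keep hitting the older version while new sessions go to the newer one, so a redeploy does not have to kick logged-in users out. ("myapp" is a placeholder, not an application from this thread.)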



  We run several Tomcats in parallel with modest heaps (less than
  512MiB each) and they can run for months before we stop them for
  upgrades. It *is* possible to run JVMs without running out of
  memory...
 
 
  I too, have not experienced any OOME, and recently, per what I
  have seen-and-read of other (more senior java developers than
  myself), I have decreased memory settings in my java options on
  tomcat7w.exe (see below).
 
  -Xmx1024m -XX:MaxPermSize=384m -XX:+UseTLAB
  -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled

 Your heap settings should be tailored to your environment and usage
 scenarios.


Interesting. I suppose 'your environment' means memory available, operating
system, hardware. Usage scenarios? hmmm... please clarify with a brief
example, thanks. :)

heap settings tailored to 'my' environment and usage... hmmm, not many
users hitting the app, app is not used 'all day long', app has @Schedule
tasks that connect to an email acct, download customer email requests,
and insert customer requests into database (Martin recommended to close
resources; some time ago, I had to refactor all of that code, and I am
closing the connection to the email acct, and only open the connection when
@Schedule tasks are executed), i am using JMS via TomEE/activeMQ to perform
some tasks, asynchronously (tomee committers told me that use of
@Asynchronous would be better, and less overhead); honestly, I do open 2 or
3 JMS resources/queues in @ApplicationScoped @PostConstruct (if I'm not
mistaken) and close those resources in @ApplicationScoped @PreDestroy;
why? I think I read on the ActiveMQ site/documentation, where they recommend
that that is better for performance than open/close-on-demand.

Almost forgot...as I mentioned in another thread, as enduser changes data,
I have an implementation that keeps google calendar in sync with the
database, which involves JMS/ActiveMQ/MDB and many/multiple requests to
google calendar API.

hmmm, more about usage, I have the following:

<Resource id="jdbc/" type="javax.sql.DataSource">
  JdbcDriver org.apache.derby.jdbc.EmbeddedDriver
  JdbcUrl jdbc:derby:;create=true
  UserName 
  Password 
  JtaManaged true
  jmxEnabled true
  InitialSize 2
  MaxActive 80
  MaxIdle 20
  MaxWait 1
  minIdle 10
  suspectTimeout 60
  removeAbandoned true
  removeAbandonedTimeout 180
  timeBetweenEvictionRunsMillis 3
  jdbcInterceptors=StatementCache(max=128)
</Resource>
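
On the close-resources point above, a minimal sketch of borrowing a connection from a pool like this and returning it promptly, so removeAbandoned never needs to fire; the JNDI name and the table are placeholders, since the resource id is cut off above:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PendingRequestCount {

    public static int count() throws Exception {
        // Placeholder JNDI name; the real name depends on the resource id above.
        DataSource ds = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/exampleDS");
        // try-with-resources closes the ResultSet, Statement and Connection in order,
        // returning the connection to the pool even when an exception is thrown.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT COUNT(*) FROM customer_request");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}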



 You can find conventional wisdom that recommends pretty
 much any heap configuration you want. The only thing that I can
 consistently recommend to anyone is to set -Xms and -Xmx to the same
 value, since on a server you're pretty much guaranteed to get to -Xmx
 pretty quickly, anwyay. You may as well not thrash the heap space(s)
 getting there.


Interesting, I did set -Xms and -Xmx to the same value (as you and
others-on-this-list have recommended, thanks).


  My Windows 2008 R2 Server (64bit 32GB RAM) never seems to get
  higher than 1% CPU, and I think I do have memory leaks somewhere in
  the app, but FWIW (in heap dump in java visual vm), the memory
  leaks seem to be tomee leaks. In Java Visual VM, I do see the
  memory grow over time, as the app is used (without a tomcat restart
  or re-deploy of app and then restart tomcat), but I still have not
  seen OOME...'yet'.

 What does your heap usage graph look like? It should be a nice
 sawtooth-looking thing, like this:

 /|/|/|/|/|/|/|/|/|


I do occasionally see the sawtooth-looking graph,



 You'll see that the small sawtooth pattern grows in basis over time
 and then there is a major GC which will reset you back to some
 baseline, then the process starts over again.


and eventually, I see the graph even out (non-sawtooth-looking graph).



 If you never get OOMEs, why do you think you have memory leaks?


remember, I do restart tomee quite often, especially when I have software
updates to deploy to/on the 
