Load a heap dump into the Eclipse Memory Analyzer (MAT),
http://www.eclipse.org/mat/.  It will analyze the data and should list the
leak suspects for you.
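If you prefer a headless run, MAT also ships a ParseHeapDump script that can
generate the Leak Suspects report from the command line; roughly like this
(the dump path is just a placeholder):

  ./ParseHeapDump.sh /tmp/heap.hprof org.eclipse.mat.api:suspects

It writes a zipped HTML report next to the dump showing the largest retained
object sets.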

Steve


On 01/04/2010 08:58, "Eric Charles" <eric.char...@u-mangate.com> wrote:

> Thanks Stefano for the details.
> I'll keep these in mind; I already took some heap dumps last week via
> -XX:+HeapDumpOnOutOfMemoryError and jmap.
> I still have to analyze the dumps further and will run a full profiling
> session this weekend.
> Increasing the memory will surely still end in an OOM, but it gives me a
> bit more time before the process crashes.
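> (For reference, a manual dump of a running JVM can be taken with something
> like the following; the pid and output path are placeholders:
> 
>   jmap -dump:format=b,file=/tmp/james-heap.hprof <pid>
> 
> The resulting .hprof file can then be opened in a profiler or in MAT.)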
> 
> Tks a lot,
> 
> Eric
> 
> 
> On 04/01/2010 09:44 AM, Stefano Bagnara wrote:
>> 2010/4/1 Eric Charles <eric.char...@u-mangate.com>:
>>    
>>> Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: Java
>>> exception: 'Java heap space: java.lang.OutOfMemoryError'. {prepstmnt
>>> 1363215207 INSERT INTO Message (id, bodyStartOctet, content, contentOctets,
>>> [...]
>>> So still an OOM exception, shown by yet another component (in this
>>> case, the StoreMailbox).
>>>      
>> An OOM is thrown by whichever component happens to need memory once the
>> heap is exhausted, so there's almost no point in reading the exception
>> stack trace when an OOM happens in a complex system.
>> 
>> An OOM is the result of either (a) genuinely insufficient memory (the
>> workload simply needs more than is available) or (b) a memory leak.
>> So either some component is configured to use more memory than is
>> available, or some component does not free its resources. I guess we are
>> in (b).
>> So, either you go for a full profiler, or you at least take heap dumps.
>> 
>> We need to know whether memory usage grows steadily until the OOM, or
>> whether very frequent GCs free space but once in a while fail to free
>> enough and the OOM is thrown; and whether the heap is full of unused
>> objects of a single class or instead of a whole tree of different
>> objects.
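>> To watch that live, something like this (the pid is a placeholder) prints
>> heap occupancy and collection counts every 5 seconds:
>> 
>>   jstat -gcutil <pid> 5000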
>> 
>> If you don't go for a full profiler, jmap -histo, jmap -dump, jstat and
>> jconsole are your friends here.
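>> For example, a quick class histogram of the live heap (pid is a
>> placeholder):
>> 
>>   jmap -histo:live <pid> | head -30
>> 
>> Taking two histograms some time apart and comparing the instance counts
>> of the top classes usually points at the leaking type.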
>> 
>> Also, add the -XX:+HeapDumpOnOutOfMemoryError parameter to your JVM so
>> that you get an automatic heap dump on OOM (you can also set this "live"
>> with jinfo).
>> 
>> Some other "guessed" information can also help: is the memory usage
>> proportional to the number of processed messages? To their sizes? To the
>> uptime? To the number of failed messages? Etc.
>> 
>>    
>>> There were only 4 .m64 files in /tmp (the ValidRcptHandler is doing its
>>> job).
>>> All 4 files were 0 bytes.
>>> 
>>> I have now launched with EXTRA_JVM_ARGUMENTS="-Xms512m -Xmx4g" (so 4GB max
>>> memory).
>>> 
>>> With the previous parameters (-Xmx512m), the process was taking the whole
>>> 512MB.
>>>      
>> Increasing the memory rarely helps in this case: it only helps if we are
>> in scenario (a) (some component configured to use more memory than we
>> thought). You'll probably get the OOM anyway; it will just take longer.
>> If that happens we can then exclude (a) and go for the (b) analysis.
>> 
>> Stefano
>> 
> 
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org
