Noel,

At work I've found one of the best ways to detect memory problems is to
bring -Xmx down to a reasonably low level consistent with proper
operation, set -Xms to the same value (to avoid wasting time watching
it ramp up) and turn on garbage collection logging (add these switches:
-Xloggc:./logs/<noels>gc.log -verbose:gc -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps)
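
To be concrete, a bare-bones invocation looks roughly like this (the
heap size, log path and main class are placeholders only; normally
you'd just add the switches to whatever options your startup script
already passes):

    java -Xms128m -Xmx128m \
         -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -Xloggc:./logs/gc.log \
         YourMainClass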

You should look for a nice even sawtooth pattern in the young
generation, and a less frequent sawtooth in the tenured generation:
roughly ten (or more) young collections for every old (full or
compacting) one.
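
From memory the log lines look something like the below (figures
invented, and the region names change with the collector in use, e.g.
DefNew/Tenured vs PSYoungGen/PSOldGen):

    9.211: [GC 9.211: [DefNew: 65408K->2112K(65472K), 0.0213 secs]
            91200K->27904K(130944K), 0.0215 secs]
    311.482: [Full GC 311.482: [Tenured: 25792K->21344K(65472K), 0.4571 secs]
            27904K->21344K(130944K), [Perm : 14208K->14208K(16384K)], 0.4580 secs]

The figure to watch is the one after the arrow: young occupancy after
each [GC ...] line, tenured occupancy after each [Full GC ...] line.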

I'm not going into the whole world of memory tuning here (that would
be a half-day tutorial I could give!), but the test for a leak in the
Java code (as opposed to the native parts) is this: after several full
collections to let it bed down, the use of the tenured space should
vary around a finite value, not tend to infinity, however slowly.

If the young space in use after a young collection rises consistently,
the heap sizing is wrong enough to make the data we're interested in
invalid; if it is high but stable, the sizing is probably wrong for
operation, but the data we care about will still be valid. This check
is all about patterns, not absolute values.

You can graph the log output patterns and get some really good
insights (using Sun's GC portal, a spreadsheet macro, or a wee script),
but I find that you can do this check pretty easily using grep and
mental arithmetic ("Mental Math" I think you yankees call it ;-)
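
For the record, the check is no more sophisticated than (assuming the
log path above):

    grep "Full GC" ./logs/gc.log | tail -20

then running your eye down the tenured "after" figures. If they settle
around a level you're fine; if they only ever ratchet upwards, however
slowly, you've probably found your leak.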

It is a very useful and unobtrusive way to instrument any JVM; I think
of it as analogous to a doctor's stethoscope. At work we log GC all the
time on our production systems, and after a bit of practice you can
tail the log file and derive some good information about your JVM's
state of mind.

One last gotcha on memory: in long-running systems, particularly ones
which have a lot of classes and other statics, the permanent space can
be an issue.
There is a defect in Sun's handling of this. Their docs say that when
an allocation of permanent space is required which would exceed the
size of the largest free block of permanent space, the JVM will make
the allocation from tenured space. In practice what happens is that it
tries to perform a compacting collection of the permanent space and
then retries the allocation; if there is *still* not enough space it
tries the compacting collection again, and so a loop is born and we see
the process become unresponsive while consuming 100% CPU.
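
The pragmatic dodge is to give the permanent generation enough headroom
up front, e.g. (sizes are examples only, size them to suit):

    -XX:PermSize=64m -XX:MaxPermSize=128m

With -XX:+PrintGCDetails the full-collection lines also carry a
[Perm : ...] section, so you can watch it filling in the same log.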

d.




On 14/12/05, Noel J. Bergman <[EMAIL PROTECTED]> wrote:
> I have spent much of ApacheCon working on testing JAMES.  Ran into some
> little bits, but generally OK.  Ran with Derby for about 48 hours stably,
> but was not sure if there was any memory leak or not (TOP showed a slow,
> consistent, memory increase, but that's not a conclusive indicator for our
> Java code), so I am running another test with heapdump enabled.
>
> Generally, things look good.  I will add a derby.properties file to bin/
> with a statement in it to control the cache size, and to provide a
> placeholder for any users who want to control Derby properties.
>
> Would people take a look at the current code and see if they feel
> comfortable with a release candidate?  Unless I encounter a definitive
> memory leak or other problem, I'd like to call a vote on a release
> candidate.  And since there are both configuration and functional changes,
> I'd suggest that v3 is perhaps the more appropriate designation than v2.3.
>
>         --- Noel
>
>

