Well, based on previous advice we are inspecting both the profile and
heap dump. Both point to a problem with one of our business objects
(referenced previously: ShallowSongBO), so I implemented a simple instance
counter on that object.
There is a static variable. It is incremented in the ...
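A minimal sketch of such a counter, assuming it is incremented in the
constructor and decremented in finalize() (the implementation below is
illustrative, not the actual ShallowSongBO code):

import java.util.concurrent.atomic.AtomicInteger;

public class ShallowSongBO {
    // Instances constructed but not yet garbage collected.
    // AtomicInteger keeps the counter thread-safe.
    private static final AtomicInteger LIVE = new AtomicInteger();

    public ShallowSongBO() {
        LIVE.incrementAndGet();
    }

    // finalize() runs only after GC decides the object is unreachable,
    // so a count that never drops suggests instances are being retained
    // by live references, not a GC problem.
    protected void finalize() throws Throwable {
        try {
            LIVE.decrementAndGet();
        } finally {
            super.finalize();
        }
    }

    public static int liveCount() {
        return LIVE.get();
    }
}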
[I sent this yesterday morning, but I don't think it made it to the
list. Trying once more.]
Right off the bat, that's a big stack size to be using.
I'm assuming you're on a 32-bit machine? If so, then the max
addressable space of your process is 2G, which includes the java heap
plus ...
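To make that arithmetic concrete (the thread count and stack size
below are illustrative, not measured from your system):

    2048m  java heap (-Xmx alone already fills the 2G)
  +  600m  thread stacks, e.g. 300 threads x 2m (-Xss)
  +  permanent generation (-XX:MaxPermSize), often 64m-256m
  +  JVM code, JNI libraries, native buffers
  -------
  well past the 2G the process can address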
Oh, I'm not blaming the JVM. I'm sure it is working as designed. But,
it sounds like you are sure that GC is running. I'll look into the
hibernate session and Quercus Env objects and see what the heap dump
shows relative to them.
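One way to poke at those counts, assuming a Sun JDK 6 with jmap and
jhat available (the OQL class name is just the usual Hibernate 3
session implementation, a guess at what to query):

    # dump the heap of the running JVM, then browse it on port 7000
    jmap -dump:format=b,file=heap.bin <resin-pid>
    jhat heap.bin

Then on jhat's OQL page, something like:

    select count(heap.objects('org.hibernate.impl.SessionImpl'))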
Andrew
It's best practice to set -Xms and -Xmx (and to use -server, which is
passed to resin as -J-server as the first parameter) because it tells
java to grab the entire needed amount of heap right when it starts.
Just to clarify, -J-server is for Resin 3.0; for Resin 3.1 you add a
jvm-arg in ...
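For Resin 3.1 that would look something like this (the 2048m figure
just mirrors the heap size mentioned earlier in the thread):

<jvm-arg>-server</jvm-arg>
<jvm-arg>-Xms2048m</jvm-arg>
<jvm-arg>-Xmx2048m</jvm-arg>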
Well, if you tell your JVM to log GC activity... you will be able to know
what GC is doing:

<jvm-arg>-Xloggc:${server.root}/log/gc.log</jvm-arg>
<jvm-arg>-XX:+PrintGCTimeStamps</jvm-arg>
<jvm-arg>-XX:+PrintGCDetails</jvm-arg>
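The number to watch is heap occupancy after each Full GC; with those
flags the log lines are roughly of this shape (exact format varies by
collector and JVM version, and these figures are made up):

    1234.567: [Full GC ... 1990000K->1985000K(2097152K), 4.1234567 secs]

If the "after" number keeps climbing toward the 2048m ceiling across
Full GCs, memory is being retained by live references; the collector
simply has nothing left to free.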
We got heap dump working. Didn't have it enabled in the config...
I'm spending some time looking through the profiles and heap dump now
to see if I can see anything of interest.
Andrew
Hi Scott,
I work with Andrew and we are still fighting this problem. Thanks for the
advice; we are analyzing the heap dump as you suggest.
For added color to the problem, linked is an image of one of our servers'
load (courtesy of Munin). The other server behaves similarly, but the two
do not ...
So after doing some more research, and comparing the profiles of one
server (just restarted) to the other server in the loaded state, I've
got a lead, so to speak, but I'm not sure what it means...
On the loaded server about 34% of the time is spent in "readNative",
compared to 80%+ on the ...
On Apr 7, 2008, at 9:18 AM, Sandeep Ghael wrote:
<jvm-arg>-Xdebug</jvm-arg>
In my experience the debug switch sometimes causes the JVM to behave
erratically under heavy load. I would get rid of it and try again.
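If the config also carries the jdwp switch that usually accompanies
-Xdebug, I would drop that too (the line below is the conventional
default, a guess at your actual setup):

<jvm-arg>-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000</jvm-arg>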
-Knut
Our production servers have their maximum memory set to 2048m.
Everything is fine for a while. Eventually the java process ends up with
all 2048m allocated. At this point server load starts going up and
response time gets bad. Eventually requests start timing out.
Restarting the server fixes ...
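A cheap way to watch the heap fill up from inside the app, using
nothing but java.lang.Runtime (a sketch, not something we actually
run in production):

import java.util.Timer;
import java.util.TimerTask;

public class HeapWatcher {
    public static void start() {
        Timer timer = new Timer("heap-watcher", true); // daemon thread
        timer.scheduleAtFixedRate(new TimerTask() {
            public void run() {
                Runtime rt = Runtime.getRuntime();
                long usedMb = (rt.totalMemory() - rt.freeMemory()) >> 20;
                long maxMb = rt.maxMemory() >> 20;
                // If this climbs to maxMb and stays there, GC has
                // nothing left to reclaim.
                System.out.println("heap used " + usedMb + "m / " + maxMb + "m");
            }
        }, 0L, 60000L); // log once a minute
    }
}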