Nope - one page never holds on to another.  I never even pass a page into
another page or a link or anything else as a reference.
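
(Just so we're talking about the same thing: the pattern I understand you to
be asking about is something like the hypothetical sketch below, where one
page keeps a hard reference to another.  This is only an illustration - it is
not code from my app, and the class and component names are made up.)

    import org.apache.wicket.markup.html.WebPage;
    import org.apache.wicket.markup.html.link.Link;

    // Hypothetical illustration of an inter-page reference (the thing I
    // do NOT do): EditPage holds on to the page that opened it, so that
    // page stays reachable for as long as EditPage does.
    public class EditPage extends WebPage {
        private final WebPage backPage;   // hard reference to another page

        public EditPage(final WebPage backPage) {
            this.backPage = backPage;
            add(new Link("back") {
                public void onClick() {
                    // send the user back to the page instance we kept
                    setResponsePage(backPage);
                }
            });
        }
    }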

Interestingly, I DECREASED the memory the JVM could have from 1.5 GB to 1.0
GB today, and it has been stable all day (after also releasing a version
using Wicket 1.3.3).  That's not a definite sign - it was stable for several
days after upgrading from 1.2.6 to 1.3.2 before freaking out.  But I'll
watch it closely.  The memory crept slowly up to the max and has stayed
there, but without the site crashing and without any degradation in
performance.  Does that give anyone any ideas?  I'm so exhausted that I
think I'm starting to lose my ability to think freshly about it.
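
For reference, the JVM options on the service are now roughly along these
lines (values approximate from memory, and the dump path is just a made-up
example, not my real directory layout):

    -Xms512m -Xmx1024m
    -XX:MaxPermSize=128m
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=C:\tomcat\logs

If the heap-dump flag does what it is supposed to, the next crash should at
least leave an .hprof file behind that I can open in a profiler.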

Thank you,
Jeremy

On Thu, Apr 3, 2008 at 5:44 PM, Matej Knopp <[EMAIL PROTECTED]> wrote:

> This is really weird. Do you have any inter-page references in your
> application?
>
> -Matej
>
> On Thu, Apr 3, 2008 at 9:35 PM, Jeremy Thomerson
> <[EMAIL PROTECTED]> wrote:
> > The oddness is what baffles me: Tomcat has no output anywhere.  I have
> >  grepped and tailed the entire Tomcat logs directory, stdout*, stderr*,
> >  localhost*, etc.  Nothing in eventvwr.
> >
> >  It must be memory related, though.  There is a steadily increasing memory
> >  footprint - it was increasing so fast yesterday because we were getting
> >  pounded by tons of traffic and Google's crawler and Ask's crawler all
> >  simultaneously.  Of course, the traffic was still no higher than it has been
> >  in the past - this is definitely a new problem.
> >
> >  I redeployed today with the pending 1.3.3 release built by Frank to see if
> >  my leak could be the same as Martijn's below, but the memory continues to
> >  increase.  It will die soon.  I have added the parameter to tell it to dump
> >  on OOM - hopefully I got the right parameter and it will work.
> >
> >  Does anyone here know how to (or whether you even can) use jstat / jmap with
> >  tomcat5.exe running as a Windows service?  All my development is on Linux
> >  machines, and I can easily use those tools, but on the Windows prod
> >  environment (ughh), jps doesn't give me a VMID for Tomcat.
> >
> >  Thank you for your help!
> >  Jeremy
> >
> >
> >
> >  On Thu, Apr 3, 2008 at 2:27 PM, Al Maw <[EMAIL PROTECTED]> wrote:
> >
> >  > You can use as many anonymous inner classes as you like. I have them
> >  > coming out of my ears, personally.
> >  >
> >  > It's very odd for tomcat to die with no output. There will be output
> >  > somewhere. Check logs/catalina.out and also logs/localhost*. If the JVM
> >  > dies, it will hotspot or even segfault and log that, at least. If you have
> >  > a gradually increasing memory footprint then this should be pretty easy to
> >  > track down with a profiler.
> >  >
> >  > Make sure you run Tomcat with a sensible amount of permanent generation
> >  > space (128M+).
> >  >
> >  > Regards,
> >  >
> >  > Alastair
> >  >
> >  >
> >  >
> >  > On Thu, Apr 3, 2008 at 6:43 AM, Martijn Dashorst <[EMAIL PROTECTED]> wrote:
> >  >
> >  > > There are commandline options for the jvm to dump on OOM.
> >  > >
> >  > > Anyway, doesn't the log file give any insight into what is happening
> >  > > in your application? Did you (or your sysadmin) disable logging for
> >  > > Wicket?
> >  > >
> >  > > You can also run external tools to see what is happening inside your
> >  > > JVM without blocking the app. e.g. use jmap -histo to see how many
> >  > > objects are alive at a particular moment. The top 10 is always
> >  > > interesting. In my case I found a memory leak in the diskpagestore
> >  > > when exceptions occurred during writing to disk. This is solved in
> >  > > 1.3.3 (which is just days away from an official release, try it!)
> >  > >
> >  > > jstat -gc -h50 <pid> 1000 will log the garbage collector statistics
> >  > > every second.
> >  > >
> >  > > Martijn
> >  > >
> >  > > On 4/3/08, Jeremy Thomerson <[EMAIL PROTECTED]> wrote:
> >  > > > I upgraded my biggest production app from 1.2.6 to 1.3 last week.  I have
> >  > > >  had several apps running on 1.3 since it was in beta with no problems -
> >  > > >  running for months without restarting.
> >  > > >
> >  > > >  This app receives more traffic than any of the rest.  We have a decent
> >  > > >  server, and I had always allowed Tomcat 1.5GB of RAM to operate with.  It
> >  > > >  never had a problem doing so, and I didn't have OutOfMemory errors.  Now,
> >  > > >  after the upgrade to 1.3.2, I am having all sorts of trouble.  It ran for
> >  > > >  several days without a problem, but then started dying a couple times a
> >  > > >  day.  Today it has died four times.  Here are a couple odd things about
> >  > > >  this:
> >  > > >
> >  > > >    - On 1.2.6, I never had a problem with stability - the app would run
> >  > > >    weeks between restarts (I restart once per deployment, anywhere from once a
> >  > > >    week to at the longest about two months between deploy / restart).
> >  > > >    - Tomcat DIES instead of hanging when there is a problem.  Always
> >  > > >    before, if I had an issue, Tomcat would hang, and there would be OOM in the
> >  > > >    logs.  Now, when it crashes, and I sign in to the server, Tomcat is not
> >  > > >    running at all.  There is nothing in the Tomcat logs that says anything, or
> >  > > >    in eventvwr.
> >  > > >    - I do not get an OutOfMemory error in any logs, whereas I have always
> >  > > >    seen it in the logs before when I had an issue with other apps.  I am
> >  > > >    running Tomcat as a service on Windows, but it writes stdout / stderr to
> >  > > >    logs, and I write my logging out to logs, and none of these logs include ANY
> >  > > >    errors - they all just suddenly stop at the time of the crash.
> >  > > >
> >  > > >  My money is on an OOM error caused by something I'm doing somewhere
> >  > > >  that I shouldn't be with Wicket.  There are no logs that even say it is
> >  > > >  an OOM, but the memory continues to increase linearly over time as the app
> >  > > >  runs now (it didn't do that before).  My first guess is my previous
> >  > > >  prolific use of anonymous inner classes.  I have seen in the email
> >  > > >  threads that this shouldn't be done in 1.3.
> >  > > >
> >  > > >  Of course, the real answer is that I'm going to be digging through profilers
> >  > > >  and lines of code until I get this fixed.
> >  > > >
> >  > > >  My question, though, is for the Wicket devs / experienced users - where
> >  > > >  should I look first?  Is there something that changed between 1.2.6 and 1.3
> >  > > >  that might have caused me problems where 1.2.6 was more forgiving?
> >  > > >
> >  > > >  I'm running the app with JProbe right now so that I can get a snapshot of
> >  > > >  memory when it gets really high.
> >  > > >
> >  > > >  Thank you,
> >  > > >
> >  > > > Jeremy Thomerson
> >  > > >
> >  > >
> >  > >
> >  > > --
> >  > > Buy Wicket in Action: http://manning.com/dashorst
> >  > > Apache Wicket 1.3.2 is released
> >  > > Get it now: http://www.apache.org/dyn/closer.cgi/wicket/1.3.2
> >  > >
> >  >
> >
>
>
>
> --
> Resizable and reorderable grid components.
> http://www.inmethod.com
>
>
