Thanks for the replies so far; they have been helpful. I will be looking into both the application server's maintenance of the /tmp folder - where Jetty does appear to be unpacking the WAR - and the memory usage of the application, along with optimizations for it.
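For anyone following along, here is a rough sketch of how I might pin the work directory with embedded Jetty 6, so the exploded WAR does not live under /tmp where scheduled cleanup can delete it out from under a running server. The class names are from the org.mortbay Jetty 6 API; the WAR name and work path are made-up placeholders, and (if I remember right) standalone Jetty will similarly prefer a work/ directory under jetty.home if one exists.

    import java.io.File;

    import org.mortbay.jetty.Server;
    import org.mortbay.jetty.webapp.WebAppContext;

    public class StartServer {
        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);

            WebAppContext webapp = new WebAppContext();
            webapp.setContextPath("/");
            webapp.setWar("myapp.war");                           // placeholder WAR name
            // Unpack the WAR somewhere the OS tmp cleanup won't touch,
            // instead of the default java.io.tmpdir location.
            webapp.setTempDirectory(new File("/srv/myapp/work")); // placeholder path

            server.setHandler(webapp);
            server.start();
            server.join();
        }
    }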

The production version is frozen at 5.1.0.15, unfortunately, but I recently began development on the next release version. That, coinciding with the release of ChenilleKit for 5.2.4, has allowed me to successfully upgrade the next version to 5.2.4, which I'm very excited about.

Thanks,
Rich

On 03/21/2011 02:17 PM, Thiago H. de Paula Figueiredo wrote:
Using 5.2.4 will help, as memory consumption is lower now that pages are no longer pooled.
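For reference, a minimal sketch of the related knob, assuming the 5.2 symbol name is tapestry.page-pool-enabled - worth double-checking against SymbolConstants in your release. The 5.2 default already treats pages as singletons, so this only matters if you need to opt back into the old pooled behavior:

    import org.apache.tapestry5.ioc.MappedConfiguration;

    public class AppModule {
        public static void contributeApplicationDefaults(MappedConfiguration<String, String> configuration) {
            // Assumed 5.2 symbol name: "false" (the default) keeps the new
            // singleton pages and the lower memory footprint; "true" would
            // revert to the old pooled pages.
            configuration.add("tapestry.page-pool-enabled", "false");
        }
    }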

On Mon, 21 Mar 2011 14:51:01 -0300, Kalle Korhonen <kalle.o.korho...@gmail.com> wrote:

I can confirm I've seen similar behavior on Jetty. It's linked to running out of heap space, and in my case it was directly related to sending huge amounts of gzipped form data. See the related Jetty issue at http://jira.codehaus.org/browse/JETTY-1167 and my comments there. Apparently Jetty in some cases shuts down the web application in order to keep the container running, after which you'd see the behavior you described. Adjust your JVM settings and allocate more memory to the process - enough that the whole process would die instead (if you are on a Linux platform, especially a virtualized one with no swap space, the OOM killer will shut down the JVM well before your process consumes all of the available RAM). Then investigate whether any of your forms are reserving a large amount of memory, possibly experiment with other JVM settings, and *if* you are sending a lot of gzipped data, turn gzipping off and see if it makes a difference.
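To make the last two suggestions concrete: heap can be raised with standard JVM flags such as -Xmx (and -XX:+HeapDumpOnOutOfMemoryError is handy for a post-mortem), and below is a minimal sketch of what turning off Tapestry's own response compression might look like in an application module. This assumes the gzipped data goes through Tapestry's gzip support rather than, say, a separate servlet filter, so treat it as one experiment among several:

    import org.apache.tapestry5.SymbolConstants;
    import org.apache.tapestry5.ioc.MappedConfiguration;

    public class AppModule {
        public static void contributeApplicationDefaults(MappedConfiguration<String, String> configuration) {
            // Disable Tapestry's gzip compression of responses while testing
            // whether compressed payloads are involved in the heap exhaustion.
            configuration.add(SymbolConstants.GZIP_COMPRESSION_ENABLED, "false");
        }
    }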

Kalle


On Mon, Mar 21, 2011 at 8:21 AM, Rich M <rich...@moremagic.com> wrote:
Hi,

I've been running a production version of a Tapestry application for a couple of months now. Just the other day it was reported to me that the application was no longer resolving pages normally, but was instead displaying a directory view from the root context.

Restarting the application solved the problem, but I'm at a loss as to the cause. Looking through the application logs, the only unusual logging I noticed was that, sometime during the timeframe in which the problem is believed to have started, the TapestryModule.ComponentClassResolver fired off three lines of logging similar to what is logged at the start-up of a Tapestry application.

Namely, it displayed the lists of available pages, available components, and available mixins. The list of available pages was significantly smaller than the actual set of pages in the application - it looked like a random subset. There is nothing to indicate that someone had tried to start or restart the application, or anything along those lines.

Considering the minimal amount of information I have at hand, I was curious whether this ComponentClassResolver issue looks familiar to anyone, or whether anyone has an idea of what might cause the ComponentClassResolver to behave this way, so I can follow up with a code review of my application.
I run the application as a WAR deployment in standalone Jetty 6.1.26.

Thanks,
Rich


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tapestry.apache.org
For additional commands, e-mail: users-h...@tapestry.apache.org