RE: T5: NPE in Base64InputStream and locked/waitingCheckForUpdatesFilter

2007-07-15 Thread Ben Sommerville
I finally had a chance to test this & I'm confident it is fixed.

Under 6u1 I was consistently getting a deadlock; under 6u2 I cannot
reproduce it.

cheers
Ben

> -Original Message-
> From: Ben Sommerville [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, 11 July 2007 4:24 PM
> To: 'Tapestry users'
> Subject: RE: T5: NPE in Base64InputStream and 
> locked/waitingCheckForUpdatesFilter
> 
> One workaround that I found is to create the read/write lock in 
> ConcurrentBarrier in "fair" mode instead of the default mode.
> (Fair mode changes the algorithm used to decide which thread 
> gets the lock next.)
> 
> That worked for me, but subjectively things seemed a little slower 
> (no actual measurement, so take that for what it is worth).
> 
> If it is fixed in 6u2 then I'd be tempted just to document the 
> problem with 6u1 :)
> 
> cheers
> Ben
> 
> PS: I'll try to actually confirm that it is fixed in the next 
> few days... may not get to it till the weekend tho
> 
> 
> > -Original Message-
> > From: Howard Lewis Ship [mailto:[EMAIL PROTECTED] 
> > Sent: Wednesday, 11 July 2007 4:14 PM
> > To: Tapestry users
> > Subject: Re: T5: NPE in Base64InputStream and 
> > locked/waitingCheckForUpdatesFilter
> > 
> > Great find!  People may need to deploy on JDK 1.5 to see if that's
> > the underlying cause.
> > 
> > I wonder if we could create a workaround by setting a wait time to
> > acquire the read lock?  In a loop?
> > 
> > On 7/10/07, Ben Sommerville <[EMAIL PROTECTED]> wrote:
> > > If you are running under jdk 6u1 and tapestry 5.0.5 (or greater)
> > > then there is a jvm bug that can cause a deadlock.
> > >
> > > The bug report is at
> > > http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6571733.
> > > It is supposed to be fixed in jdk 6u2 (which was released recently)
> > > but I haven't had a chance to test it yet.
> > >
> > > cheers
> > > Ben
> > >
> > > > -Original Message-
> > > > From: Martin Grotzke [mailto:[EMAIL PROTECTED]
> > > > Sent: Wednesday, 11 July 2007 9:06 AM
> > > > To: Tapestry users
> > > > Subject: Re: T5: NPE in Base64InputStream and
> > > > locked/waitingCheckForUpdatesFilter
> > > >
> > > > The NPE seems to be caused by a missing t:formdata request
> > > > parameter.
> > > >
> > > > I just "reproduced" this (weird) situation by removing the hidden
> > > > form field "t:formdata" with firebug, so that I could produce the
> > > > NPE.
> > > >
> > > > However, the following request went through fine, and the number of
> > > > request-processing threads did not increase.
> > > >
> > > > So there seems to be no (direct) interrelationship between the NPE
> > > > and the locked threads.
> > > >
> > > > Cheers,
> > > > Martin
> > > >
> > > >
> > > >
> > > > On Tue, 2007-07-10 at 23:43 +0200, Martin Grotzke wrote:
> > > > > Hi,
> > > > >
> > > > > we had an issue with our deployed application that stopped
> > > > > responding. This happened two or three times in the last 4 days,
> > > > > but I was not able to reproduce it until now.
> > > > >
> > > > > The analysis of the logs showed that there was an NPE in
> > > > > Base64InputStream, and afterwards the application stopped
> > > > > responding.
> > > > >
> > > > > When I triggered a thread dump, all 200 tomcat threads were in
> > > > > status WAITING, like this one:
> > > > >
> > > > > "http-9090-1" daemon prio=10 tid=0x2aaaf7e1fc00 nid=0x3f05 waiting on condition [0x4459e000..0x4459fbc0]
> > > > >    java.lang.Thread.State: WAITING (parking)
> > > > > at sun.misc.Unsafe.park(Native Method)
> > > > > - parking to wait for  <0x2aaab8228360> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> > > > > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)

RE: T5: NPE in Base64InputStream and locked/waitingCheckForUpdatesFilter

2007-07-11 Thread Matt Ayres
I haven't had a chance to run a load test of j6u1 vs j6u2 with 5.0.5;
however, switching to j6u2 on our public test site for the last day seems
to have solved the problem. The previous day on j6u1 it stopped
responding 4 times. Thanks for pointing out the bug report.

-Matt

-Original Message-
From: Ben Sommerville [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 10, 2007 10:52 PM
To: 'Tapestry users'
Subject: RE: T5: NPE in Base64InputStream and
locked/waitingCheckForUpdatesFilter

If you are running under jdk 6u1 and tapestry 5.0.5 (or greater) then
there is
a jvm bug that can cause a deadlock.

The bug report is at
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6571733.  
It is supposed to be fixed in jdk 6u2 (which was released recently) but I
haven't had a chance to test it yet.

cheers
Ben

> -Original Message-
> From: Martin Grotzke [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, 11 July 2007 9:06 AM
> To: Tapestry users
> Subject: Re: T5: NPE in Base64InputStream and 
> locked/waitingCheckForUpdatesFilter
> 
> The NPE seems to be caused by a missing t:formdata request parameter.
> 
> I just "reproduced" this (weird) situation by removing the hidden form
> field "t:formdata" with firebug, so that I could produce the NPE.
> 
> However, the following request went through fine, and the number of
> request-processing threads did not increase.
> 
> So there seems to be no (direct) interrelationship between the NPE
> and the locked threads.
> 
> Cheers,
> Martin
> 
> 
> 
> On Tue, 2007-07-10 at 23:43 +0200, Martin Grotzke wrote:
> > Hi,
> > 
> > we had an issue with our deployed application that stopped responding.
> > This happened two or three times in the last 4 days, but I was not
> > able to reproduce it until now.
> > 
> > The analysis of the logs showed that there was an NPE in
> > Base64InputStream, and afterwards the application stopped responding.
> > 
> > When I triggered a thread dump, all 200 tomcat threads were in status
> > WAITING, like this one:
> > 
> > "http-9090-1" daemon prio=10 tid=0x2aaaf7e1fc00 nid=0x3f05 waiting on condition [0x4459e000..0x4459fbc0]
> >    java.lang.Thread.State: WAITING (parking)
> > at sun.misc.Unsafe.park(Native Method)
> > - parking to wait for  <0x2aaab8228360> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:712)
> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:842)
> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1162)
> > at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:594)
> > at org.apache.tapestry.ioc.internal.util.ConcurrentBarrier.withRead(ConcurrentBarrier.java:70)
> > at org.apache.tapestry.internal.services.CheckForUpdatesFilter.service(CheckForUpdatesFilter.java:110)
> > at $RequestHandler_1139c29ae4a.service($RequestHandler_1139c29ae4a.java)
> > at $RequestHandler_1139c29ae41.service($RequestHandler_1139c29ae41.java)
> > at org.apache.tapestry.services.TapestryModule$11.service(TapestryModule.java:1044)
> > at $HttpServletRequestHandler_1139c29ae40.service($HttpServletRequestHandler_1139c29ae40.java)
> > at org.apache.tapestry.TapestryFilter.doFilter(TapestryFilter.java:135)
> > at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> > at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> > at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
> > at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
> > at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
> > at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
> > at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> > at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:261)

RE: T5: NPE in Base64InputStream and locked/waitingCheckForUpdatesFilter

2007-07-11 Thread Martin Grotzke
Ben, thanx a lot for this info!

We were running our app under jdk 6u1 and tapestry 5.0.5, so that
matches perfectly.

I just installed jdk 6u2 on our test system and started our app
with it - so we'll see if it happens again.

I'll give feedback after, say, 5 days if our app stays fine, or of course
earlier if this problem occurs again.

Btw: it's amazing communicating on this list, as there's so much
useful and fast feedback - really great!!

Thanx for now,
cheers,
Martin



On Tue, 2007-07-10 at 19:52 -1000, Ben Sommerville wrote:
> If you are running under jdk 6u1 and tapestry 5.0.5 (or greater) then
> there is
> a jvm bug that can cause a deadlock.
> 
> The bug report is at
> http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6571733.  
> It is supposed to be fixed in jdk 6u2 (which was released recently) but I 
> haven't had a chance to test it yet.
> 
> cheers
> Ben
> 
> > -Original Message-
> > From: Martin Grotzke [mailto:[EMAIL PROTECTED] 
> > Sent: Wednesday, 11 July 2007 9:06 AM
> > To: Tapestry users
> > Subject: Re: T5: NPE in Base64InputStream and 
> > locked/waitingCheckForUpdatesFilter
> > 
> > The NPE seems to be caused by a missing t:formdata request parameter.
> > 
> > I just "reproduced" this (weird) situation by removing the hidden form
> > field "t:formdata" with firebug, so that I could produce the NPE.
> > 
> > However, the following request went through fine, and the number of
> > request-processing threads did not increase.
> > 
> > So there seems to be no (direct) interrelationship between the NPE
> > and the locked threads.
> > 
> > Cheers,
> > Martin
> > 
> > 
> > 
> > On Tue, 2007-07-10 at 23:43 +0200, Martin Grotzke wrote:
> > > Hi,
> > > 
> > > we had an issue with our deployed application that stopped
> > > responding. This happened two or three times in the last 4 days, but
> > > I was not able to reproduce it until now.
> > > 
> > > The analysis of the logs showed that there was an NPE in
> > > Base64InputStream, and afterwards the application stopped responding.
> > > 
> > > When I triggered a thread dump, all 200 tomcat threads were in status
> > > WAITING, like this one:
> > > 
> > > "http-9090-1" daemon prio=10 tid=0x2aaaf7e1fc00 nid=0x3f05 waiting on condition [0x4459e000..0x4459fbc0]
> > >    java.lang.Thread.State: WAITING (parking)
> > > at sun.misc.Unsafe.park(Native Method)
> > > - parking to wait for  <0x2aaab8228360> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> > > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
> > > at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:712)
> > > at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:842)
> > > at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1162)
> > > at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:594)
> > > at org.apache.tapestry.ioc.internal.util.ConcurrentBarrier.withRead(ConcurrentBarrier.java:70)
> > > at org.apache.tapestry.internal.services.CheckForUpdatesFilter.service(CheckForUpdatesFilter.java:110)
> > > at $RequestHandler_1139c29ae4a.service($RequestHandler_1139c29ae4a.java)
> > > at $RequestHandler_1139c29ae41.service($RequestHandler_1139c29ae41.java)
> > > at org.apache.tapestry.services.TapestryModule$11.service(TapestryModule.java:1044)
> > > at $HttpServletRequestHandler_1139c29ae40.service($HttpServletRequestHandler_1139c29ae40.java)
> > > at org.apache.tapestry.TapestryFilter.doFilter(TapestryFilter.java:135)
> > > at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> > > at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> > > at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)

RE: T5: NPE in Base64InputStream and locked/waitingCheckForUpdatesFilter

2007-07-10 Thread Ben Sommerville
One workaround that I found is to create the read/write lock in 
ConcurrentBarrier in "fair" mode instead of the default mode.
(Fair mode changes the algorithm used to decide which thread 
gets the lock next.)

That worked for me, but subjectively things seemed a little slower 
(no actual measurement, so take that for what it is worth).
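
For reference, a minimal sketch of the fair-mode idea in plain JDK terms.
This is not Tapestry's actual ConcurrentBarrier source; the class below is
purely illustrative of where such a lock would sit:

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Illustrative sketch only -- not Tapestry's ConcurrentBarrier code.
    public class FairBarrierSketch
    {
        // Passing "true" selects the fair (roughly FIFO) acquisition policy
        // instead of the default non-fair policy.
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);

        public void withRead(Runnable work)
        {
            lock.readLock().lock();
            try
            {
                work.run();
            }
            finally
            {
                lock.readLock().unlock();
            }
        }
    }

The only change from the default is the boolean constructor argument; the
trade-off, as noted above, is typically somewhat lower throughput under
contention.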

If it is fixed in 6u2 then I'd be tempted just to document the 
problem with 6u1 :)

cheers
Ben

PS: I'll try to actually confirm that it is fixed in the next 
few days... may not get to it till the weekend tho


> -Original Message-
> From: Howard Lewis Ship [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, 11 July 2007 4:14 PM
> To: Tapestry users
> Subject: Re: T5: NPE in Base64InputStream and 
> locked/waitingCheckForUpdatesFilter
> 
> Great find!  People may need to deploy on JDK 1.5 to see if that's the
> underlying cause.
> 
> I wonder if we could create a workaround by setting a wait time to
> acquire the read lock?  In a loop?
> 
> On 7/10/07, Ben Sommerville <[EMAIL PROTECTED]> wrote:
> > If you are running under jdk 6u1 and tapestry 5.0.5 (or greater)
> > then there is a jvm bug that can cause a deadlock.
> >
> > The bug report is at
> > http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6571733.
> > It is supposed to be fixed in jdk 6u2 (which was released recently)
> > but I haven't had a chance to test it yet.
> >
> > cheers
> > Ben
> >
> > > -Original Message-
> > > From: Martin Grotzke [mailto:[EMAIL PROTECTED]
> > > Sent: Wednesday, 11 July 2007 9:06 AM
> > > To: Tapestry users
> > > Subject: Re: T5: NPE in Base64InputStream and
> > > locked/waitingCheckForUpdatesFilter
> > >
> > > The NPE seems to be caused by a missing t:formdata request
> > > parameter.
> > >
> > > I just "reproduced" this (weird) situation by removing the hidden
> > > form field "t:formdata" with firebug, so that I could produce the NPE.
> > >
> > > However, the following request went through fine, and the number of
> > > request-processing threads did not increase.
> > >
> > > So there seems to be no (direct) interrelationship between the NPE
> > > and the locked threads.
> > >
> > > Cheers,
> > > Martin
> > >
> > >
> > >
> > > On Tue, 2007-07-10 at 23:43 +0200, Martin Grotzke wrote:
> > > > Hi,
> > > >
> > > > we had an issue with our deployed application that stopped
> > > > responding. This happened two or three times in the last 4 days,
> > > > but I was not able to reproduce it until now.
> > > >
> > > > The analysis of the logs showed that there was an NPE in
> > > > Base64InputStream, and afterwards the application stopped
> > > > responding.
> > > >
> > > > When I triggered a thread dump, all 200 tomcat threads were in
> > > > status WAITING, like this one:
> > > >
> > > > "http-9090-1" daemon prio=10 tid=0x2aaaf7e1fc00 nid=0x3f05 waiting on condition [0x4459e000..0x4459fbc0]
> > > >    java.lang.Thread.State: WAITING (parking)
> > > > at sun.misc.Unsafe.park(Native Method)
> > > > - parking to wait for  <0x2aaab8228360> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> > > > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
> > > > at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:712)
> > > > at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:842)
> > > > at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1162)
> > > > at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:594)
> > > > at org.apache.tapestry.ioc.internal.util.ConcurrentBarrier.withRead(ConcurrentBarrier.java:70)
> > > > at org.apache.tapestry.internal.services.CheckForUpdatesFilter.service(CheckForUpdatesFilter.java:110)

Re: T5: NPE in Base64InputStream and locked/waitingCheckForUpdatesFilter

2007-07-10 Thread Howard Lewis Ship

Great find!  People may need to deploy on JDK 1.5 to see if that's the
underlying cause.

I wonder if we could create a workaround by setting a wait time to
acquire the read lock?  In a loop?
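
A rough sketch of that idea, assuming the read path were changed to poll
with a bounded timeout (illustrative names only, not actual Tapestry code;
whether retrying like this actually sidesteps the JVM bug is untested):

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Illustrative sketch only: poll for the read lock with a timeout
    // instead of parking indefinitely.
    public class TimedReadLockSketch
    {
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        public void withRead(Runnable work) throws InterruptedException
        {
            // Retry until the read lock is obtained, waiting at most
            // 100 ms per attempt so a thread is never parked forever.
            while (!lock.readLock().tryLock(100, TimeUnit.MILLISECONDS))
            {
                // Lock not acquired within the timeout; loop and try again.
            }
            try
            {
                work.run();
            }
            finally
            {
                lock.readLock().unlock();
            }
        }
    }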

On 7/10/07, Ben Sommerville <[EMAIL PROTECTED]> wrote:

If you are running under jdk 6u1 and tapestry 5.0.5 (or greater) then
there is
a jvm bug that can cause a deadlock.

The bug report is at
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6571733.
It is supposed to be fixed in jdk 6u2 (which was released recently) but I
haven't had a chance to test it yet.

cheers
Ben

> -Original Message-
> From: Martin Grotzke [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, 11 July 2007 9:06 AM
> To: Tapestry users
> Subject: Re: T5: NPE in Base64InputStream and
> locked/waitingCheckForUpdatesFilter
>
> The NPE seems to be caused by a missing t:formdata request parameter.
>
> I just "reproduced" this (weird) situation by removing the hidden form
> field "t:formdata" with firebug, so that I could produce the NPE.
>
> However, the following request went through fine, and the number of
> request-processing threads did not increase.
>
> So there seems to be no (direct) interrelationship between the NPE
> and the locked threads.
>
> Cheers,
> Martin
>
>
>
> On Tue, 2007-07-10 at 23:43 +0200, Martin Grotzke wrote:
> > Hi,
> >
> > we had an issue with our deployed application that stopped responding.
> > This happened two or three times in the last 4 days, but I was not
> > able to reproduce it until now.
> >
> > The analysis of the logs showed that there was an NPE in
> > Base64InputStream, and afterwards the application stopped responding.
> >
> > When I triggered a thread dump, all 200 tomcat threads were in status
> > WAITING, like this one:
> >
> > "http-9090-1" daemon prio=10 tid=0x2aaaf7e1fc00 nid=0x3f05 waiting on condition [0x4459e000..0x4459fbc0]
> >    java.lang.Thread.State: WAITING (parking)
> > at sun.misc.Unsafe.park(Native Method)
> > - parking to wait for  <0x2aaab8228360> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:712)
> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:842)
> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1162)
> > at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:594)
> > at org.apache.tapestry.ioc.internal.util.ConcurrentBarrier.withRead(ConcurrentBarrier.java:70)
> > at org.apache.tapestry.internal.services.CheckForUpdatesFilter.service(CheckForUpdatesFilter.java:110)
> > at $RequestHandler_1139c29ae4a.service($RequestHandler_1139c29ae4a.java)
> > at $RequestHandler_1139c29ae41.service($RequestHandler_1139c29ae41.java)
> > at org.apache.tapestry.services.TapestryModule$11.service(TapestryModule.java:1044)
> > at $HttpServletRequestHandler_1139c29ae40.service($HttpServletRequestHandler_1139c29ae40.java)
> > at org.apache.tapestry.TapestryFilter.doFilter(TapestryFilter.java:135)
> > at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> > at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> > at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
> > at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
> > at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
> > at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
> > at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> > at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:261)
> > at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
> > at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:581)
> > at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)

RE: T5: NPE in Base64InputStream and locked/waitingCheckForUpdatesFilter

2007-07-10 Thread Ben Sommerville
If you are running under jdk 6u1 and tapestry 5.0.5 (or greater) then
there is a jvm bug that can cause a deadlock.

The bug report is at
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6571733.
It is supposed to be fixed in jdk 6u2 (which was released recently) but I
haven't had a chance to test it yet.
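
As a stop-gap until upgrading, a purely hypothetical startup check (not
something Tapestry provides) could warn operators when the affected VM is
in use:

    // Hypothetical helper: log a warning when running on JDK 6u1, whose
    // java.version string is "1.6.0_01".
    public class JdkBugCheck
    {
        public static void warnIfAffected()
        {
            if ("1.6.0_01".equals(System.getProperty("java.version")))
            {
                System.err.println("WARNING: JDK 6u1 detected; Sun bug 6571733 "
                        + "can deadlock request-processing threads. "
                        + "Upgrade to 6u2 or later.");
            }
        }

        public static void main(String[] args)
        {
            warnIfAffected();
        }
    }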

cheers
Ben

> -Original Message-
> From: Martin Grotzke [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, 11 July 2007 9:06 AM
> To: Tapestry users
> Subject: Re: T5: NPE in Base64InputStream and 
> locked/waitingCheckForUpdatesFilter
> 
> The NPE seems to be caused by a missing t:formdata request parameter.
> 
> I just "reproduced" this (weird) situation by removing the hidden form
> field "t:formdata" with firebug, so that I could produce the NPE.
> 
> However, the following request went through fine, and the number of
> request-processing threads did not increase.
> 
> So there seems to be no (direct) interrelationship between the NPE
> and the locked threads.
> 
> Cheers,
> Martin
> 
> 
> 
> On Tue, 2007-07-10 at 23:43 +0200, Martin Grotzke wrote:
> > Hi,
> > 
> > we had an issue with our deployed application that stopped responding.
> > This happened two or three times in the last 4 days, but I was not
> > able to reproduce it until now.
> > 
> > The analysis of the logs showed that there was an NPE in
> > Base64InputStream, and afterwards the application stopped responding.
> > 
> > When I triggered a thread dump, all 200 tomcat threads were in status
> > WAITING, like this one:
> > 
> > "http-9090-1" daemon prio=10 tid=0x2aaaf7e1fc00 nid=0x3f05 waiting on condition [0x4459e000..0x4459fbc0]
> >    java.lang.Thread.State: WAITING (parking)
> > at sun.misc.Unsafe.park(Native Method)
> > - parking to wait for  <0x2aaab8228360> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:712)
> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:842)
> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1162)
> > at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:594)
> > at org.apache.tapestry.ioc.internal.util.ConcurrentBarrier.withRead(ConcurrentBarrier.java:70)
> > at org.apache.tapestry.internal.services.CheckForUpdatesFilter.service(CheckForUpdatesFilter.java:110)
> > at $RequestHandler_1139c29ae4a.service($RequestHandler_1139c29ae4a.java)
> > at $RequestHandler_1139c29ae41.service($RequestHandler_1139c29ae41.java)
> > at org.apache.tapestry.services.TapestryModule$11.service(TapestryModule.java:1044)
> > at $HttpServletRequestHandler_1139c29ae40.service($HttpServletRequestHandler_1139c29ae40.java)
> > at org.apache.tapestry.TapestryFilter.doFilter(TapestryFilter.java:135)
> > at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> > at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> > at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
> > at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
> > at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
> > at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
> > at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> > at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:261)
> > at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
> > at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:581)
> > at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
> > at java.lang.Thread.run(Thread.java:619)
> > 
> > I'm not sure if the NPE that happened before is the reason for this, as I don't