Re: DEAD LOCK in VirtualAssetStreamer
Hi,

You can replace the default VirtualAssetStreamer with your own implementation by contributing it to ServiceOverride (contributeServiceOverride). Your version of VirtualAssetStreamer can be based on the existing org.apache.tapestry5.internal.services.VirtualAssetStreamerImpl; just replace all references to ByteArrayOutputStream with your own class that extends ByteArrayOutputStream but does not have a synchronized writeTo(). Another option, one that does not require an application change, is to deploy a caching proxy that caches just the assets.

Best regards,
Cezary

On Wed, Sep 28, 2011 at 1:00 PM, Jens Breitenstein wrote:
> Thanks Howard!
>
> Is there any workaround or quickfix we can go for immediately to buy us
> time to allow proper migration from 5.1.0.5?

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
For additional commands, e-mail: users-h...@tapestry.apache.org
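Cezary's first suggestion can be sketched as follows. The class below is a hypothetical illustration (the name and the exact locking strategy are mine, not Tapestry code): it takes a private snapshot of the buffer while briefly holding the lock, then performs the actual write outside the monitor, so one stalled client socket can no longer block every other thread streaming the same asset.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical subclass for use inside a custom VirtualAssetStreamer:
// unlike ByteArrayOutputStream.writeTo(), this does not hold the stream's
// monitor for the duration of the (possibly slow) socket write.
public class UnsynchronizedByteArrayOutputStream extends ByteArrayOutputStream {

    @Override
    public void writeTo(OutputStream out) throws IOException {
        byte[] snapshot;
        synchronized (this) {
            // Copy the buffered content while holding the lock only briefly.
            snapshot = toByteArray();
        }
        // Write outside the monitor; a hung client blocks only its own thread.
        out.write(snapshot);
    }
}
```

A custom streamer built on this class would then be contributed to ServiceOverride, as Cezary describes, so Tapestry uses it instead of the built-in implementation.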
Re: DEAD LOCK in VirtualAssetStreamer
Thanks Howard!

Unfortunately we need some time to migrate to 5.2.x, and we face this issue each day, thus losing our cluster and shop customers. Downtime each day because of this issue is, well, ugly to horrible depending on whom you ask...

Unfortunately we depend on "core" library mappings in our current version, like "configuration.add(new LibraryMapping("core", "x.y.z.core"));", which failed in 5.2; therefore migration means more than switching a version or a jar for us.

Is there any workaround or quick fix we can go for immediately to buy us time to allow a proper migration from 5.1.0.5?

Any hint is highly appreciated.

Jens

On 22.09.11 17:52, Howard Lewis Ship wrote:
> This is known and fixed in 5.2.
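For reference, the kind of mapping Jens quotes lives in an IoC module class. A minimal sketch, assuming a module class named AppModule and the placeholder package x.y.z.core from his message; this is the Tapestry 5.1 form of the contribution, not a fix for its rejection under 5.2:

```java
package x.y.z.services; // hypothetical module package

import org.apache.tapestry5.ioc.Configuration;
import org.apache.tapestry5.services.LibraryMapping;

public class AppModule
{
    // In Tapestry 5.1 an additional root package can be mapped onto the
    // "core" library this way; per Jens, this contribution fails in 5.2.
    public static void contributeComponentClassResolver(Configuration<LibraryMapping> configuration)
    {
        configuration.add(new LibraryMapping("core", "x.y.z.core"));
    }
}
```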
Re: DEAD LOCK in VirtualAssetStreamer
This is known and fixed in 5.2.

On Thu, Sep 22, 2011 at 3:36 AM, Jens Breitenstein wrote:
> Hi All!
>
> It seems we encountered a serious concurrency bug in Tapestry 5.1 under
> high load.

--
Howard M. Lewis Ship
Creator of Apache Tapestry

The source for Tapestry training, mentoring and support. Contact me to learn how I can get you up and productive in Tapestry fast!
(971) 678-5210
http://howardlewisship.com
DEAD LOCK in VirtualAssetStreamer
Hi All!

It seems we encountered a serious concurrency bug in Tapestry 5.1 under high load. In our special case one thread was blocked and unable to respond and write an asset output stream.

As virtual assets are shared and the same ByteArrayOutputStream is reused for the same asset across multiple threads, the one thread hanging causes all other threads which use the same asset to be blocked too. This happens because ByteArrayOutputStream.writeTo uses synchronized internally. In our personal opinion we should only cache the data, but not the ByteArrayOutputStream instances.

Any idea how to solve this, or am I wrong?

Jens

Dump of the locking monitor:

"TP-Processor241" daemon prio=10 tid=0x2aab15451000 nid=0x7a87 runnable [0x4fd7]
   java.lang.Thread.State: RUNNABLE
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
        at org.apache.jk.common.ChannelSocket.send(ChannelSocket.java:532)
        at org.apache.jk.common.JkInputStream.doWrite(JkInputStream.java:162)
        at org.apache.coyote.Response.doWrite(Response.java:560)
        at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353)
        at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:354)
        at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:381)
        at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:370)
        at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:89)
        at java.io.ByteArrayOutputStream.writeTo(ByteArrayOutputStream.java:109)
        - locked <0x2aaac0ab5e08> (a java.io.ByteArrayOutputStream)
        at org.apache.tapestry5.internal.services.VirtualAssetStreamerImpl.streamVirtualAsset(VirtualAssetStreamerImpl.java:96)
        at $VirtualAssetStreamer_132873471df.streamVirtualAsset($VirtualAssetStreamer_132873471df.java)
        at org.apache.tapestry5.internal.services.VirtualAssetDispatcher.dispatch(VirtualAssetDispatcher.java:49)
        at $Dispatcher_132873471e5.dispatch($Dispatcher_132873471e5.java)
        at $Dispatcher_132873471d7.dispatch($Dispatcher_132873471d7.java)
        at org.apache.tapestry5.services.TapestryModule$RequestHandlerTerminator.service(TapestryModule.java:245)
        at ae.sukar.client.http.services.ShopModule$3.service(ShopModule.java:175)

Dump of one of the locked monitors:

"TP-Processor223" daemon prio=10 tid=0x2aab157f nid=0x7a11 waiting for monitor entry [0x4eb5e000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at java.io.ByteArrayOutputStream.size(ByteArrayOutputStream.java:144)
        - waiting to lock <0x2aaac0ab5e08> (a java.io.ByteArrayOutputStream)
        at org.apache.tapestry5.internal.services.VirtualAssetStreamerImpl.streamVirtualAsset(VirtualAssetStreamerImpl.java:84)
        at $VirtualAssetStreamer_132873471df.streamVirtualAsset($VirtualAssetStreamer_132873471df.java)
        at org.apache.tapestry5.internal.services.VirtualAssetDispatcher.dispatch(VirtualAssetDispatcher.java:49)
        at $Dispatcher_132873471e5.dispatch($Dispatcher_132873471e5.java)
        at $Dispatcher_132873471d7.dispatch($Dispatcher_132873471d7.java)
        at org.apache.tapestry5.services.TapestryModule$RequestHandlerTerminator.service(TapestryModule.java:245)
        at ae.sukar.client.http.services.ShopModule$3.service(ShopModule.java:175)
        at $RequestHandler_132873471d8.service($RequestHandler_132873471d8.java)
        at nu.localhost.tapestry5.springsecurity.services.internal.RequestFilterWrapper$1.doFilter(RequestFilterWrapper.java:60)
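The fix Jens argues for (cache the data, not the ByteArrayOutputStream instances) can be sketched like this. AssetCache and its method names are hypothetical illustrations, not the actual Tapestry 5.2 implementation: once the asset content is stored as an immutable byte[], each request writes its own reference to that array and no shared monitor is held during the socket write.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: cache immutable byte arrays per asset instead of
// sharing a ByteArrayOutputStream, so streaming requires no shared lock.
public class AssetCache {

    private final Map<String, byte[]> cache = new ConcurrentHashMap<String, byte[]>();

    public void store(String key, byte[] content) {
        // Defensive copy: the cached array must never be mutated afterwards.
        cache.put(key, content.clone());
    }

    public void streamTo(String key, OutputStream out) throws IOException {
        byte[] content = cache.get(key);
        if (content != null) {
            // No monitor held here: a slow or hung client stalls only
            // its own thread, never other readers of the same asset.
            out.write(content);
        }
    }
}
```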