[jira] Updated: (COCOON-2259) Memory leak in PoolableProxyHandler

2009-07-07 Thread Alexander Daniel (JIRA)

 [ 
https://issues.apache.org/jira/browse/COCOON-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Daniel updated COCOON-2259:
-

Other Info: [Patch available]

> Memory leak in PoolableProxyHandler
> ---
>
> Key: COCOON-2259
> URL: https://issues.apache.org/jira/browse/COCOON-2259
> Project: Cocoon
>  Issue Type: Bug
>  Components: * Cocoon Core
>Affects Versions: 2.2, 2.2-dev (Current SVN)
>Reporter: Alexander Daniel
> Attachments: patchForIssue2259.txt
>
>
> I reproduced the problem with the following pipeline and by adding log output to 
> PoolableProxyHandler [1]
> Changing the line 
>  this.attributeName = PoolableProxyHandler.class.getName() + '/' + 
> this.handler.hashCode();
> to
>  this.attributeName = PoolableProxyHandler.class.getName() + '/' + 
> this.hashCode();
> fixes the memory leak.
> Why? The PoolableFactoryBean [2] handler is a singleton for every pipeline 
> component, i.e. one instance for the noncaching pipeline, one instance for the 
> xalan transformer, and so on. Therefore the attributeName is the same for every 
> component of the same type, but Spring requires a unique value for the 
> destruction callback handler.
> In the example sitemap above, two noncaching pipeline instances are needed to 
> process the request. Both call registerDestructionCallback with the same 
> attributeName. Because the attributeName is the same, the callback is only 
> called once and the other component remains in the ThreadLocal.
> [1] 
> http://svn.apache.org/repos/asf/cocoon/trunk/core/cocoon-sitemap/cocoon-sitemap-impl/src/main/java/org/apache/cocoon/core/container/spring/avalon/PoolableProxyHandler.java
> [2] 
> http://svn.apache.org/repos/asf/cocoon/trunk/core/cocoon-sitemap/cocoon-sitemap-impl/src/main/java/org/apache/cocoon/core/container/spring/avalon/PoolableFactoryBean.java
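
A stand-alone sketch of the collision described above (hypothetical names; a plain 
LinkedHashMap stands in for Spring's per-request destruction-callback registry, 
which keys callbacks by attribute name). Because every proxy of a component type 
shares the singleton handler, both proxies compute the same attributeName and only 
one callback survives:

import java.util.LinkedHashMap;
import java.util.Map;

public class AttributeNameCollisionSketch {
    public static void main(String[] args) {
        // the shared PoolableFactoryBean handler for one component type
        Object sharedHandler = new Object();
        // stand-in for Spring's per-request destruction-callback map
        Map<String, Runnable> callbacks = new LinkedHashMap<>();

        for (int proxy = 1; proxy <= 2; proxy++) {
            // mirrors: PoolableProxyHandler.class.getName() + '/' + this.handler.hashCode()
            String attributeName = "PoolableProxyHandler/" + sharedHandler.hashCode();
            final int p = proxy;
            callbacks.put(attributeName, () -> System.out.println("putting back component " + p));
        }

        // prints 1: the second registration replaced the first, so only one
        // pooled component is ever put back; the other stays in the ThreadLocal
        System.out.println("registered callbacks: " + callbacks.size());
    }
}

Keying the name on this.hashCode() (the proxy handler instance) instead of 
this.handler.hashCode() gives every proxy its own entry, which is the one-line fix 
quoted above.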

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (COCOON-2260) wrong parent version in pom of cocoon-flowscript-impl

2009-07-07 Thread Alexander Daniel (JIRA)

 [ 
https://issues.apache.org/jira/browse/COCOON-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Daniel updated COCOON-2260:
-

Other Info: [Patch available]

> wrong parent version in pom of cocoon-flowscript-impl
> -
>
> Key: COCOON-2260
> URL: https://issues.apache.org/jira/browse/COCOON-2260
> Project: Cocoon
>  Issue Type: Bug
>  Components: - Build System: Maven
>Affects Versions: 2.2-dev (Current SVN)
>Reporter: Alexander Daniel
>Priority: Trivial
> Attachments: patchForIssue2260.txt
>
>
> /project/parent/version is 6 but it should be 6-SNAPSHOT in trunk

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (COCOON-2260) wrong parent version in pom of cocoon-flowscript-impl

2009-07-07 Thread Alexander Daniel (JIRA)

 [ 
https://issues.apache.org/jira/browse/COCOON-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Daniel updated COCOON-2260:
-

Attachment: patchForIssue2260.txt

Apply patch with 
   patch -p0 <${downloadFolder}/patchForIssue2260.txt
to trunk.

> wrong parent version in pom of cocoon-flowscript-impl
> -
>
> Key: COCOON-2260
> URL: https://issues.apache.org/jira/browse/COCOON-2260
> Project: Cocoon
>  Issue Type: Bug
>  Components: - Build System: Maven
>Affects Versions: 2.2-dev (Current SVN)
>Reporter: Alexander Daniel
>Priority: Trivial
> Attachments: patchForIssue2260.txt
>
>
> /project/parent/version is 6 but it should be 6-SNAPSHOT in trunk

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (COCOON-2260) wrong parent version in pom of cocoon-flowscript-impl

2009-07-07 Thread Alexander Daniel (JIRA)
wrong parent version in pom of cocoon-flowscript-impl
-

 Key: COCOON-2260
 URL: https://issues.apache.org/jira/browse/COCOON-2260
 Project: Cocoon
  Issue Type: Bug
  Components: - Build System: Maven
Affects Versions: 2.2-dev (Current SVN)
Reporter: Alexander Daniel
Priority: Trivial


/project/parent/version is 6 but it should be 6-SNAPSHOT in trunk


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (COCOON-2259) Memory leak in PoolableProxyHandler

2009-07-03 Thread Alexander Daniel (JIRA)

 [ 
https://issues.apache.org/jira/browse/COCOON-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Daniel updated COCOON-2259:
-

Attachment: patchForIssue2259.txt

The patch includes the fix described in the description and additional logging 
for PoolableProxyHandler.

Patch was created against Cocoon 2.2 trunk. Use patch -p0 
<${downloadFolder}/patchForIssue2259.txt to apply it to trunk.

> Memory leak in PoolableProxyHandler
> ---
>
> Key: COCOON-2259
> URL: https://issues.apache.org/jira/browse/COCOON-2259
> Project: Cocoon
>  Issue Type: Bug
>  Components: * Cocoon Core
>Affects Versions: 2.2, 2.2-dev (Current SVN)
>Reporter: Alexander Daniel
> Attachments: patchForIssue2259.txt
>
>
> I reproduced the problem with the following pipeline and by adding log output to 
> PoolableProxyHandler [1]
> Changing the line 
>  this.attributeName = PoolableProxyHandler.class.getName() + '/' + 
> this.handler.hashCode();
> to
>  this.attributeName = PoolableProxyHandler.class.getName() + '/' + 
> this.hashCode();
> fixes the memory leak.
> Why? The PoolableFactoryBean [2] handler is a singleton for every pipeline 
> component, i.e. one instance for the noncaching pipeline, one instance for the 
> xalan transformer, and so on. Therefore the attributeName is the same for every 
> component of the same type, but Spring requires a unique value for the 
> destruction callback handler.
> In the example sitemap above, two noncaching pipeline instances are needed to 
> process the request. Both call registerDestructionCallback with the same 
> attributeName. Because the attributeName is the same, the callback is only 
> called once and the other component remains in the ThreadLocal.
> [1] 
> http://svn.apache.org/repos/asf/cocoon/trunk/core/cocoon-sitemap/cocoon-sitemap-impl/src/main/java/org/apache/cocoon/core/container/spring/avalon/PoolableProxyHandler.java
> [2] 
> http://svn.apache.org/repos/asf/cocoon/trunk/core/cocoon-sitemap/cocoon-sitemap-impl/src/main/java/org/apache/cocoon/core/container/spring/avalon/PoolableFactoryBean.java

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (COCOON-2259) Memory leak in PoolableProxyHandler

2009-07-03 Thread Alexander Daniel (JIRA)
Memory leak in PoolableProxyHandler
---

 Key: COCOON-2259
 URL: https://issues.apache.org/jira/browse/COCOON-2259
 Project: Cocoon
  Issue Type: Bug
  Components: * Cocoon Core
Affects Versions: 2.2, 2.2-dev (Current SVN)
Reporter: Alexander Daniel


I reproduced the problem with the following pipeline and by adding log output to 
PoolableProxyHandler [1]
Changing the line 
 this.attributeName = PoolableProxyHandler.class.getName() + '/' + 
this.handler.hashCode();
to
 this.attributeName = PoolableProxyHandler.class.getName() + '/' + 
this.hashCode();
fixes the memory leak.

Why? The PoolableFactoryBean [2] handler is a singleton for every pipeline 
component, i.e. one instance for the noncaching pipeline, one instance for the 
xalan transformer, and so on. Therefore the attributeName is the same for every 
component of the same type, but Spring requires a unique value for the 
destruction callback handler.

In the example sitemap above, two noncaching pipeline instances are needed to 
process the request. Both call registerDestructionCallback with the same 
attributeName. Because the attributeName is the same, the callback is only 
called once and the other component remains in the ThreadLocal.

[1] 
http://svn.apache.org/repos/asf/cocoon/trunk/core/cocoon-sitemap/cocoon-sitemap-impl/src/main/java/org/apache/cocoon/core/container/spring/avalon/PoolableProxyHandler.java
[2] 
http://svn.apache.org/repos/asf/cocoon/trunk/core/cocoon-sitemap/cocoon-sitemap-impl/src/main/java/org/apache/cocoon/core/container/spring/avalon/PoolableFactoryBean.java

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (COCOON3-23) StackOverflowError on CachingPipeline.setup(OutputStream)

2009-01-30 Thread Alexander Daniel (JIRA)
StackOverflowError on CachingPipeline.setup(OutputStream)
-

 Key: COCOON3-23
 URL: https://issues.apache.org/jira/browse/COCOON3-23
 Project: Cocoon 3
  Issue Type: Bug
  Components: cocoon-pipeline
Affects Versions: 3.0.0-alpha-1
Reporter: Alexander Daniel
Assignee: Cocoon Developers Team


CachingPipeline.setup(OutputStream) calls itself, resulting in a 
StackOverflowError [1]. I suppose that it should create a CachingOutputStream 
first and then call super.setup, like setup(OutputStream outputStream, 
Map<String, Object> parameters) does [2]. [3] is a short code fragment to 
reproduce the problem.

[1] 
@Override
public void setup(OutputStream outputStream) {
this.setup(outputStream);
}

[2]
@Override
public void setup(OutputStream outputStream, Map<String, Object> parameters) {
 // create a caching output stream to intercept the result
 this.cachingOutputStream = new CachingOutputStream(outputStream);

 super.setup(this.cachingOutputStream, parameters);
}

[3]
Pipeline pipeline = new CachingPipeline();
pipeline.addComponent(new StringGenerator(""));
pipeline.addComponent(new XMLSerializer());

ByteArrayOutputStream baos = new ByteArrayOutputStream();
pipeline.setup(baos);
pipeline.execute();
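
A minimal sketch of the fix suggested in the description, assuming the parent class 
offers a matching single-argument setup (its two-argument counterpart is shown in 
[2]); this is only an illustration, not the committed change:

@Override
public void setup(OutputStream outputStream) {
    // create a caching output stream to intercept the result, exactly as
    // setup(OutputStream, Map) does, then delegate to the parent class
    // instead of recursing into this method
    this.cachingOutputStream = new CachingOutputStream(outputStream);
    super.setup(this.cachingOutputStream);
}

With that change the fragment in [3] runs through setup and execute without 
overflowing the stack.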


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COCOON-2173) AbstractCachingProcessingPipeline: Two requests can deadlock each other

2008-03-26 Thread Alexander Daniel (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12582312#action_12582312
 ] 

Alexander Daniel commented on COCOON-2173:
--

Thanks! It is a good move that locking can now be switched off.

I am still seeing the deadlock in the reproduceMultipleThreads test with the 
default settings. The reason is that in the patched branch waitForLock returns 
false when a timeout occurs, whereas in the patch I provided it returns true on 
timeout.


> AbstractCachingProcessingPipeline: Two requests can deadlock each other
> ---
>
> Key: COCOON-2173
> URL: https://issues.apache.org/jira/browse/COCOON-2173
> Project: Cocoon
>  Issue Type: Bug
>  Components: - Components: Sitemap
>Affects Versions: 2.1.9, 2.1.10, 2.1.11, 2.2-dev (Current SVN)
>Reporter: Alexander Daniel
> Attachments: patchFor2.1.11.txt, reproduceMultipleThreads.tar.gz, 
> reproduceMultipleThreads2.2RC3-SNAPSHOT.tar.gz
>
>
> Two requests can deadlock each other when they depend on the same resources 
> which they acquire in a different order. I can reproduce that in Cocoon 
> 2.1.11 and Cocoon 2.2-RC3-SNAPSHOT:
> * request A: generating lock for 55933 
> * request B: generating lock for 58840 
> * request B: waiting for lock 55933 which is held by request A 
> * request A: waiting for lock 58840 which is held by request B 
> I can reproduce this behaviour with Apache Bench and the following pipeline: 
> * terminal 1: Apache Bench request A (ab -k -n 1 -c 25 
> http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/55933/)
>  
> * terminal 2: Apache Bench request B (ab -k -n 1 -c 25 
> http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/58840/)
>  
> * terminal 3: touching the two data files every second to invalidate the 
> cache (while true; do echo -n "."; touch 55933.xml 58840.xml; sleep 1; done) 
> * pipeline: 
> After some seconds the deadlock occurs ==> 
> * Apache Bench requests run into a timeout 
> * I can see following pipe locks in the default transient store: 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> I added some logging to AbstractCachingProcessingPipeline.java which 
> reconfirms the explanations above: 
> INFO (2008-03-13) 13:50.16:072 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
> PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  
> INFO (2008-03-13) 13:50.16:074 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
> PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
>  
> INFO (2008-03-13) 13:50.16:075 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
> PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  
> INFO (2008-03-13) 13:50.16:075 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
> PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK

[jira] Updated: (COCOON-2173) AbstractCachingProcessingPipeline: Two requests can deadlock each other

2008-03-21 Thread Alexander Daniel (JIRA)

 [ 
https://issues.apache.org/jira/browse/COCOON-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Daniel updated COCOON-2173:
-

Other Info: [Patch available]

> AbstractCachingProcessingPipeline: Two requests can deadlock each other
> ---
>
> Key: COCOON-2173
> URL: https://issues.apache.org/jira/browse/COCOON-2173
> Project: Cocoon
>  Issue Type: Bug
>  Components: - Components: Sitemap
>Affects Versions: 2.1.9, 2.1.10, 2.1.11, 2.2-dev (Current SVN)
>Reporter: Alexander Daniel
> Attachments: patchFor2.1.11.txt, reproduceMultipleThreads.tar.gz, 
> reproduceMultipleThreads2.2RC3-SNAPSHOT.tar.gz
>
>
> Two requests can deadlock each other when they depend on the same resources 
> which they acquire in a different order. I can reproduce that in Cocoon 
> 2.1.11 and Cocoon 2.2-RC3-SNAPSHOT:
> * request A: generating lock for 55933 
> * request B: generating lock for 58840 
> * request B: waiting for lock 55933 which is held by request A 
> * request A: waiting for lock 58840 which is held by request B 
> I can reproduce this behaviour with Apache Bench and the following pipeline: 
> * terminal 1: Apache Bench request A (ab -k -n 1 -c 25 
> http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/55933/)
>  
> * terminal 2: Apache Bench request B (ab -k -n 1 -c 25 
> http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/58840/)
>  
> * terminal 3: touching the two data files every second to invalidate the 
> cache (while true; do echo -n "."; touch 55933.xml 58840.xml; sleep 1; done) 
> * pipeline: 
> After some seconds the deadlock occurs ==> 
> * Apache Bench requests run into a timeout 
> * I can see following pipe locks in the default transient store: 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> I added some logging to AbstractCachingProcessingPipeline.java which 
> reconfirms the explanations above: 
> INFO (2008-03-13) 13:50.16:072 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
> PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  
> INFO (2008-03-13) 13:50.16:074 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
> PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
>  
> INFO (2008-03-13) 13:50.16:075 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
> PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  
> INFO (2008-03-13) 13:50.16:075 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
> PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
>  
> INFO (2008-03-13) 13:50.16:281 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
> PoolThread-6/AbstractCachingProcessingPipeline: waiting for lock 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/sa

[jira] Updated: (COCOON-2173) AbstractCachingProcessingPipeline: Two requests can deadlock each other

2008-03-21 Thread Alexander Daniel (JIRA)

 [ 
https://issues.apache.org/jira/browse/COCOON-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Daniel updated COCOON-2173:
-

Attachment: patchFor2.1.11.txt

For applying the patch to Cocoon 2.1.11 use patch -p6 
<$DownloadFolder/patchFor2.1.11.txt

I considered three possibilities to fix this issue:
(1) using a timed wait
(2) removing the synchronization of requests with pipe locks completely
(3) redesign of the synchronization of requests

I chose (1) for simplicity. The current waitForLock implementation returns true 
when the lock is the current thread (request for 2.1-dev and 2.2-dev). For the 
calling method validatePipeline this means that no cached response was found. 
When the timed wait runs into a timeout, it behaves the same way. The default 
timeout is 250 ms, but it can be configured with the 
timeoutForSynchronizingRequests parameter.

Unit tests for the new method timedWait are included in patchFor2.1.11.txt
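
For illustration, a rough sketch of what such a timed wait can look like (this is 
an assumed shape, not the code from patchFor2.1.11.txt): wait on the pipe lock for 
at most the configured timeout and report whether the timeout elapsed, so that 
waitForLock can fall back to the "no cached response found" behaviour described 
above.

// Waits on the given pipe lock for at most 'timeout' milliseconds (assumes a
// positive timeout; Object.wait(0) would block indefinitely). Returns true if
// the timeout elapsed without the lock being released.
protected boolean timedWait(Object lock, long timeout) throws InterruptedException {
    long start = System.currentTimeMillis();
    synchronized (lock) {
        lock.wait(timeout);
    }
    // an early or spurious wakeup only leads to another cache lookup,
    // so measuring elapsed time is good enough for this purpose
    return System.currentTimeMillis() - start >= timeout;
}

waitForLock then returns true both when the lock is the current thread and when 
timedWait reports a timeout, and validatePipeline treats both cases as "no cached 
response found".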

> AbstractCachingProcessingPipeline: Two requests can deadlock each other
> ---
>
> Key: COCOON-2173
> URL: https://issues.apache.org/jira/browse/COCOON-2173
> Project: Cocoon
>  Issue Type: Bug
>  Components: - Components: Sitemap
>Affects Versions: 2.1.9, 2.1.10, 2.1.11, 2.2-dev (Current SVN)
>Reporter: Alexander Daniel
> Attachments: patchFor2.1.11.txt, reproduceMultipleThreads.tar.gz, 
> reproduceMultipleThreads2.2RC3-SNAPSHOT.tar.gz
>
>
> Two requests can deadlock each other when they depend on the same resources 
> which they acquire in a different order. I can reproduce that in Cocoon 
> 2.1.11 and Cocoon 2.2-RC3-SNAPSHOT:
> * request A: generating lock for 55933 
> * request B: generating lock for 58840 
> * request B: waiting for lock 55933 which is held by request A 
> * request A: waiting for lock 58840 which is held by request B 
> I can reproduce this behaviour with Apache Bench and the following pipeline: 
> * terminal 1: Apache Bench request A (ab -k -n 1 -c 25 
> http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/55933/)
>  
> * terminal 2: Apache Bench request B (ab -k -n 1 -c 25 
> http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/58840/)
>  
> * terminal 3: touching the two data files every second to invalidate the 
> cache (while true; do echo -n "."; touch 55933.xml 58840.xml; sleep 1; done) 
> * pipeline: 
> After some seconds the deadlock occurs ==> 
> * Apache Bench requests run into a timeout 
> * I can see following pipe locks in the default transient store: 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> I added some logging to AbstractCachingProcessingPipeline.java which 
> reconfirms the explanations above: 
> INFO (2008-03-13) 13:50.16:072 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
> PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  
> INFO (2008-03-13) 13:50.16:074 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
> PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
>  
> INFO (2008-03-13) 13:50.16:075 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
> PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMult

[jira] Commented: (COCOON-1985) AbstractCachingProcessingPipeline locking with IncludeTransformer may hang pipeline

2008-03-14 Thread Alexander Daniel (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12578760#action_12578760
 ] 

Alexander Daniel commented on COCOON-1985:
--

I filed a separate Jira issue: https://issues.apache.org/jira/browse/COCOON-2173

> AbstractCachingProcessingPipeline locking with IncludeTransformer may hang 
> pipeline
> ---
>
> Key: COCOON-1985
> URL: https://issues.apache.org/jira/browse/COCOON-1985
> Project: Cocoon
>  Issue Type: Bug
>  Components: * Cocoon Core
>Affects Versions: 2.1.9, 2.1.10, 2.1.11, 2.2-dev (Current SVN)
>Reporter: Ellis Pritchard
>Priority: Critical
> Fix For: 2.2-dev (Current SVN)
>
> Attachments: caching-trials.patch, includer.xsl, patch.txt, 
> reproduceMultipleThreads.tar.gz, sitemap.xmap
>
>
> Cocoon 2.1.9 introduced the concept of a lock in 
> AbstractCachingProcessingPipeline, an optimization to prevent two concurrent 
> requests from generating the same cached content. The first request adds the 
> pipeline key to the transient cache to 'lock' the cache entry for that 
> pipeline, subsequent concurrent requests wait for the first request to cache 
> the content (by Object.lock()ing the pipeline key entry) before proceeding, 
> and can then use the newly cached content.
> However, this has introduced an incompatibility with the IncludeTransformer: 
> if the inclusions access the same yet-to-be-cached content as the root 
> pipeline, the whole assembly hangs, since a lock will be made on a lock 
> already held by the same thread, and which cannot be satisfied.
> e.g.
> i) Root pipeline generates using sub-pipeline cocoon:/foo.xml
> ii) the cocoon:/foo.xml sub-pipeline adds its pipeline key to the transient 
> store as a lock.
> iii) subsequently in the root pipeline, the IncludeTransformer is run.
> iv) one of the inclusions also generates with cocoon:/foo.xml, this 
> sub-pipeline locks in AbstractProcessingPipeline.waitForLock() because the 
> sub-pipeline key is already present.
> v) deadlock.
> I've found a (partial, see below) solution for this: instead of a plain 
> Object being added to the transient store as the lock object, the 
> Thread.currentThread() is added; when waitForLock() is called, if the lock 
> object exists, it checks that it is not the same thread before attempting to 
> lock it; if it is the same thread, then waitForLock() returns success, which 
> allows generation to proceed. You lose the efficiency of generating the 
> cache only once in this case, but at least it doesn't hang! With JDK1.5 this 
> can be made neater by using Thread#holdsLock() instead of adding the thread 
> object itself to the transient store.
> See patch file.
> However, even with this fix, parallel includes (when enabled) may still hang, 
> because they pass the not-the-same-thread test, but fail because the root 
> pipeline, which holds the initial lock, cannot complete (and therefore 
> satisfy the lock condition for the parallel threads), before the threads 
> themselves have completed, which then results in a deadlock again.
> The complete solution is probably to avoid locking if the lock is held by the 
> same top-level Request, but that requires more knowledge of Cocoon's 
> processing than I (currently) have!
> IMHO unless a complete solution is found to this, then this optimization 
> should be removed completely, or else made optional by configuration, since 
> it renders the IncludeTransformer dangerous.
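
A condensed sketch of the partial solution described in the quoted report (assumed 
names, with a plain Map standing in for the transient store; the real change is in 
the attached patch.txt): the current thread is stored as the pipe lock, so a 
re-entrant request can recognise its own lock and skip waiting.

// stand-in for the transient store that holds the pipe locks
private final java.util.Map<Object, Object> pipeLocks =
        new java.util.concurrent.ConcurrentHashMap<>();

protected void generateLock(Object pipelineKey) {
    // store the current thread instead of a plain Object as the lock;
    // the first request removes the entry and notifies waiters once the
    // content has been cached (not shown here)
    pipeLocks.putIfAbsent(pipelineKey, Thread.currentThread());
}

protected boolean waitForLock(Object pipelineKey) {
    Object lock = pipeLocks.get(pipelineKey);
    if (lock == null || lock == Thread.currentThread()) {
        // no lock, or this thread already owns it (re-entrant include):
        // report success so generation proceeds instead of self-deadlocking
        return true;
    }
    synchronized (lock) {
        try {
            lock.wait();       // woken when the first request has cached the content
        } catch (InterruptedException e) {
            return false;
        }
    }
    return false;              // retry to pick up the now-cached response
}

As the report notes, this only removes the same-thread deadlock; parallel includes 
can still hang, because the lock is tied to a thread rather than to the top-level 
request.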

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (COCOON-2173) AbstractCachingProcessingPipeline: Two requests can deadlock each other

2008-03-14 Thread Alexander Daniel (JIRA)

 [ 
https://issues.apache.org/jira/browse/COCOON-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Daniel updated COCOON-2173:
-

Attachment: reproduceMultipleThreads2.2RC3-SNAPSHOT.tar.gz

For reproduction in Cocoon 2.2RC3-SNAPSHOT

> AbstractCachingProcessingPipeline: Two requests can deadlock each other
> ---
>
> Key: COCOON-2173
> URL: https://issues.apache.org/jira/browse/COCOON-2173
> Project: Cocoon
>  Issue Type: Bug
>  Components: - Components: Sitemap
>Affects Versions: 2.1.9, 2.1.10, 2.1.11, 2.2-dev (Current SVN)
>Reporter: Alexander Daniel
> Attachments: reproduceMultipleThreads.tar.gz, 
> reproduceMultipleThreads2.2RC3-SNAPSHOT.tar.gz
>
>
> Two requests can deadlock each other when they depend on the same resources 
> which they acquire in a different order. I can reproduce that in Cocoon 
> 2.1.11 and Cocoon 2.2-RC3-SNAPSHOT:
> * request A: generating lock for 55933 
> * request B: generating lock for 58840 
> * request B: waiting for lock 55933 which is held by request A 
> * request A: waiting for lock 58840 which is held by request B 
> I can reproduce this behaviour with Apache Bench and the following pipeline: 
> * terminal 1: Apache Bench request A (ab -k -n 1 -c 25 
> http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/55933/)
>  
> * terminal 2: Apache Bench request B (ab -k -n 1 -c 25 
> http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/58840/)
>  
> * terminal 3: touching the two data files every second to invalidate the 
> cache (while true; do echo -n "."; touch 55933.xml 58840.xml; sleep 1; done) 
> * pipeline: 
> After some seconds the deadlock occurs ==> 
> * Apache Bench requests run into a timeout 
> * I can see following pipe locks in the default transient store: 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> I added some logging to AbstractCachingProcessingPipeline.java which 
> reconfirms the explanations above: 
> INFO (2008-03-13) 13:50.16:072 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
> PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  
> INFO (2008-03-13) 13:50.16:074 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
> PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
>  
> INFO (2008-03-13) 13:50.16:075 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
> PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  
> INFO (2008-03-13) 13:50.16:075 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
> PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
>  
> INFO (2008-03-13) 13:50.16:281 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
> PoolThread-6/AbstractCachingProcessingPipeline: waiting for lock 
> PIPELOCK:PK_G-file-file:/

[jira] Updated: (COCOON-2173) AbstractCachingProcessingPipeline: Two requests can deadlock each other

2008-03-14 Thread Alexander Daniel (JIRA)

 [ 
https://issues.apache.org/jira/browse/COCOON-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Daniel updated COCOON-2173:
-

Attachment: reproduceMultipleThreads.tar.gz

for reproduction in Cocoon 2.1.11

> AbstractCachingProcessingPipeline: Two requests can deadlock each other
> ---
>
> Key: COCOON-2173
> URL: https://issues.apache.org/jira/browse/COCOON-2173
> Project: Cocoon
>  Issue Type: Bug
>  Components: - Components: Sitemap
>Affects Versions: 2.1.9, 2.1.10, 2.1.11, 2.2-dev (Current SVN)
>Reporter: Alexander Daniel
> Attachments: reproduceMultipleThreads.tar.gz
>
>
> Two requests can deadlock each other when they depend on the same resources 
> which they acquire in a different order. I can reproduce that in Cocoon 
> 2.1.11 and Cocoon 2.2-RC3-SNAPSHOT:
> * request A: generating lock for 55933 
> * request B: generating lock for 58840 
> * request B: waiting for lock 55933 which is held by request A 
> * request A: waiting for lock 58840 which is held by request B 
> I can reproduce this behaviour with Apache Bench and the following pipeline: 
> * terminal 1: Apache Bench request A (ab -k -n 1 -c 25 
> http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/55933/)
>  
> * terminal 2: Apache Bench request B (ab -k -n 1 -c 25 
> http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/58840/)
>  
> * terminal 3: touching the two data files every second to invalidate the 
> cache (while true; do echo -n "."; touch 55933.xml 58840.xml; sleep 1; done) 
> * pipeline: 
> After some seconds the deadlock occurs ==> 
> * Apache Bench requests run into a timeout 
> * I can see following pipe locks in the default transient store: 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
>  (class: org.mortbay.util.ThreadPool$PoolThread) 
> I added some logging to AbstractCachingProcessingPipeline.java which 
> reconfirms the explanations above: 
> INFO (2008-03-13) 13:50.16:072 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
> PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  
> INFO (2008-03-13) 13:50.16:074 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
> PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
>  
> INFO (2008-03-13) 13:50.16:075 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
> PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
>  
> INFO (2008-03-13) 13:50.16:075 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
> PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
>  
> INFO (2008-03-13) 13:50.16:281 [sitemap] 
> (/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
> PoolThread-6/AbstractCachingProcessingPipeline: waiting for lock 
> PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipl

[jira] Created: (COCOON-2173) AbstractCachingProcessingPipeline: Two requests can deadlock each other

2008-03-14 Thread Alexander Daniel (JIRA)
AbstractCachingProcessingPipeline: Two requests can deadlock each other
---

 Key: COCOON-2173
 URL: https://issues.apache.org/jira/browse/COCOON-2173
 Project: Cocoon
  Issue Type: Bug
  Components: - Components: Sitemap
Affects Versions: 2.1.11, 2.1.10, 2.1.9, 2.2-dev (Current SVN)
Reporter: Alexander Daniel
 Attachments: reproduceMultipleThreads.tar.gz

Two requests can deadlock each other when they depend on the same resources 
which they acquire in a different order. I can reproduce that in Cocoon 2.1.11 
and Cocoon 2.2-RC3-SNAPSHOT:
* request A: generating lock for 55933 
* request B: generating lock for 58840 
* request B: waiting for lock 55933 which is held by request A 
* request A: waiting for lock 58840 which is held by request B 


I can reproduce this behaviour with Apache Bench and the following pipeline: 
* terminal 1: Apache Bench request A (ab -k -n 1 -c 25 
http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/55933/)
 
* terminal 2: Apache Bench request B (ab -k -n 1 -c 25 
http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/58840/)
 
* terminal 3: touching the two data files every second to invalidate the cache 
(while true; do echo -n "."; touch 55933.xml 58840.xml; sleep 1; done) 

* pipeline: 


After some seconds the deadlock occurs ==> 
* Apache Bench requests run into a timeout 

* I can see following pipe locks in the default transient store: 
PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
 (class: org.mortbay.util.ThreadPool$PoolThread) 
PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
 (class: org.mortbay.util.ThreadPool$PoolThread) 
PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
 (class: org.mortbay.util.ThreadPool$PoolThread) 
PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
 (class: org.mortbay.util.ThreadPool$PoolThread) 


I added some logging to AbstractCachingProcessingPipeline.java which reconfirms 
the explanations above: 
INFO (2008-03-13) 13:50.16:072 [sitemap] 
(/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
 
INFO (2008-03-13) 13:50.16:074 [sitemap] 
(/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
 
INFO (2008-03-13) 13:50.16:075 [sitemap] 
(/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
 
INFO (2008-03-13) 13:50.16:075 [sitemap] 
(/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
 
INFO (2008-03-13) 13:50.16:281 [sitemap] 
(/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
PoolThread-6/AbstractCachingProcessingPipeline: waiting for lock 
PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
 
INFO (2008-03-13) 13:50.16:304 [sitemap] 
(/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
PoolThread-47/AbstractCachingProcessingPipeline: waiting for lock 
PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
 


Reproduce yourself with Cocoon 2.1.11:
* download and extract Cocoon 2.1.11 
* cd $CocoonHome 
* ./build.sh 
* cd build/webapp/samples 
*

[jira] Commented: (COCOON-1985) AbstractCachingProcessingPipeline locking with IncludeTransformer may hang pipeline

2008-03-13 Thread Alexander Daniel (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12578407#action_12578407
 ] 

Alexander Daniel commented on COCOON-1985:
--

Thanks for the hint, Vadim. I will have a look into the trunk.

In Cocoon 2.1.9 a deadlock could be reproduced with a single request running in 
one thread. This has been fixed in Cocoon 2.1.11. The issue I described uses two 
top-level HTTP requests which deadlock each other because they depend on the 
same resources, which they acquire in a different order. From my understanding, 
if Cocoon 2.2 trunk synchronizes on top-level HTTP requests, the same issue 
could occur there as well.

> AbstractCachingProcessingPipeline locking with IncludeTransformer may hang 
> pipeline
> ---
>
> Key: COCOON-1985
> URL: https://issues.apache.org/jira/browse/COCOON-1985
> Project: Cocoon
>  Issue Type: Bug
>  Components: * Cocoon Core
>Affects Versions: 2.1.9, 2.1.10, 2.1.11, 2.2-dev (Current SVN)
>Reporter: Ellis Pritchard
>Priority: Critical
> Fix For: 2.2-dev (Current SVN)
>
> Attachments: caching-trials.patch, includer.xsl, patch.txt, 
> reproduceMultipleThreads.tar.gz, sitemap.xmap
>
>
> Cocoon 2.1.9 introduced the concept of a lock in 
> AbstractCachingProcessingPipeline, an optimization to prevent two concurrent 
> requests from generating the same cached content. The first request adds the 
> pipeline key to the transient cache to 'lock' the cache entry for that 
> pipeline, subsequent concurrent requests wait for the first request to cache 
> the content (by Object.lock()ing the pipeline key entry) before proceeding, 
> and can then use the newly cached content.
> However, this has introduced an incompatibility with the IncludeTransformer: 
> if the inclusions access the same yet-to-be-cached content as the root 
> pipeline, the whole assembly hangs, since a lock will be made on a lock 
> already held by the same thread, and which cannot be satisfied.
> e.g.
> i) Root pipeline generates using sub-pipeline cocoon:/foo.xml
> ii) the cocoon:/foo.xml sub-pipeline adds its pipeline key to the transient 
> store as a lock.
> iii) subsequently in the root pipeline, the IncludeTransformer is run.
> iv) one of the inclusions also generates with cocoon:/foo.xml, this 
> sub-pipeline locks in AbstractProcessingPipeline.waitForLock() because the 
> sub-pipeline key is already present.
> v) deadlock.
> I've found a (partial, see below) solution for this: instead of a plain 
> Object being added to the transient store as the lock object, the 
> Thread.currentThread() is added; when waitForLock() is called, if the lock 
> object exists, it checks that it is not the same thread before attempting to 
> lock it; if it is the same thread, then waitForLock() returns success, which 
> allows generation to proceed. You lose the efficiency of generating the 
> cache only once in this case, but at least it doesn't hang! With JDK1.5 this 
> can be made neater by using Thread#holdsLock() instead of adding the thread 
> object itself to the transient store.
> See patch file.
> However, even with this fix, parallel includes (when enabled) may still hang, 
> because they pass the not-the-same-thread test, but fail because the root 
> pipeline, which holds the initial lock, cannot complete (and therefore 
> satisfy the lock condition for the parallel threads), before the threads 
> themselves have completed, which then results in a deadlock again.
> The complete solution is probably to avoid locking if the lock is held by the 
> same top-level Request, but that requires more knowledge of Cocoon's 
> processing than I (currently) have!
> IMHO unless a complete solution is found to this, then this optimization 
> should be removed completely, or else made optional by configuration, since 
> it renders the IncludeTransformer dangerous.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (COCOON-1985) AbstractCachingProcessingPipeline locking with IncludeTransformer may hang pipeline

2008-03-13 Thread Alexander Daniel (JIRA)

 [ 
https://issues.apache.org/jira/browse/COCOON-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Daniel updated COCOON-1985:
-

Attachment: reproduceMultipleThreads.tar.gz

Sample code to reproduce deadlock in Cocoon 2.1.11 with two requests

> AbstractCachingProcessingPipeline locking with IncludeTransformer may hang 
> pipeline
> ---
>
> Key: COCOON-1985
> URL: https://issues.apache.org/jira/browse/COCOON-1985
> Project: Cocoon
>  Issue Type: Bug
>  Components: * Cocoon Core
>Affects Versions: 2.1.9, 2.1.10, 2.1.11, 2.2-dev (Current SVN)
>Reporter: Ellis Pritchard
>Priority: Critical
> Fix For: 2.2-dev (Current SVN)
>
> Attachments: caching-trials.patch, includer.xsl, patch.txt, 
> reproduceMultipleThreads.tar.gz, sitemap.xmap
>
>
> Cocoon 2.1.9 introduced the concept of a lock in 
> AbstractCachingProcessingPipeline, an optimization to prevent two concurrent 
> requests from generating the same cached content. The first request adds the 
> pipeline key to the transient cache to 'lock' the cache entry for that 
> pipeline, subsequent concurrent requests wait for the first request to cache 
> the content (by Object.lock()ing the pipeline key entry) before proceeding, 
> and can then use the newly cached content.
> However, this has introduced an incompatibility with the IncludeTransformer: 
> if the inclusions access the same yet-to-be-cached content as the root 
> pipeline, the whole assembly hangs, since a lock will be made on a lock 
> already held by the same thread, and which cannot be satisfied.
> e.g.
> i) Root pipeline generates using sub-pipeline cocoon:/foo.xml
> ii) the cocoon:/foo.xml sub-pipeline adds its pipeline key to the transient 
> store as a lock.
> iii) subsequently in the root pipeline, the IncludeTransformer is run.
> iv) one of the inclusions also generates with cocoon:/foo.xml, this 
> sub-pipeline locks in AbstractProcessingPipeline.waitForLock() because the 
> sub-pipeline key is already present.
> v) deadlock.
> I've found a (partial, see below) solution for this: instead of a plain 
> Object being added to the transient store as the lock object, the 
> Thread.currentThread() is added; when waitForLock() is called, if the lock 
> object exists, it checks that it is not the same thread before attempting to 
> lock it; if it is the same thread, then waitForLock() returns success, which 
> allows generation to proceed. You lose the efficiency of generating the 
> cache only once in this case, but at least it doesn't hang! With JDK1.5 this 
> can be made neater by using Thread#holdsLock() instead of adding the thread 
> object itself to the transient store.
> See patch file.
> However, even with this fix, parallel includes (when enabled) may still hang, 
> because they pass the not-the-same-thread test, but fail because the root 
> pipeline, which holds the initial lock, cannot complete (and therefore 
> satisfy the lock condition for the parallel threads), before the threads 
> themselves have completed, which then results in a deadlock again.
> The complete solution is probably to avoid locking if the lock is held by the 
> same top-level Request, but that requires more knowledge of Cocoon's 
> processing than I (currently) have!
> IMHO unless a complete solution is found to this, then this optimization 
> should be removed completely, or else made optional by configuration, since 
> it renders the IncludeTransformer dangerous.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COCOON-1985) AbstractCachingProcessingPipeline locking with IncludeTransformer may hang pipeline

2008-03-13 Thread Alexander Daniel (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12578347#action_12578347
 ] 

Alexander Daniel commented on COCOON-1985:
--

Two requests can deadlock each other in Cocoon 2.1.11 (without using parallel 
includes with the include transformer):
* request A: generating lock for 55933
* request B: generating lock for 58840
* request B: waiting for lock 55933 which is held by request A
* request A: waiting for lock 58840 which is held by request B


I can reproduce this behaviour with Apache Bench and the following pipeline:
* terminal 1: Apache Bench request A (ab -k -n 1 -c 25 
http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/55933/)
* terminal 2: Apache Bench request B (ab -k -n 1 -c 25 
http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/58840/)
* terminal 3: touching the two data files every second to invalidate the cache 
(while true; do echo -n "."; touch 55933.xml 58840.xml; sleep 1; done)

* pipeline:

After some seconds the deadlock occurs ==>
* Apache Bench requests run into a timeout

* I can see following pipe locks in the default transient store:
PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
 (class: org.mortbay.util.ThreadPool$PoolThread)
PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
 (class: org.mortbay.util.ThreadPool$PoolThread)
PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
 (class: org.mortbay.util.ThreadPool$PoolThread)
PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
 (class: org.mortbay.util.ThreadPool$PoolThread)


I added some logging to AbstractCachingProcessingPipeline.java which reconfirms 
the explanations above:
INFO  (2008-03-13) 13:50.16:072 [sitemap] 
(/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
INFO  (2008-03-13) 13:50.16:074 [sitemap] 
(/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
INFO  (2008-03-13) 13:50.16:075 [sitemap] 
(/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
INFO  (2008-03-13) 13:50.16:075 [sitemap] 
(/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
PoolThread-6/AbstractCachingProcessingPipeline: generating lock 
PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
INFO  (2008-03-13) 13:50.16:281 [sitemap] 
(/samples/reproduceMultipleThreads/productOfferForDevice/58840/) 
PoolThread-6/AbstractCachingProcessingPipeline: waiting for lock 
PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
INFO  (2008-03-13) 13:50.16:304 [sitemap] 
(/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
PoolThread-47/AbstractCachingProcessingPipeline: waiting for lock 
PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml


With the attached reproduceMultipleThreads.tar.gz you can reproduce the 
behaviour yourself:
* download and extract Cocoon 2.1.11
* cd $CocoonHome
* ./build.sh
* cd build/webapp/samples
* tar -xzf $DownloadFolder/reproduceMultipleThreads.tar.gz
* cd ../../..
* ./cocoon.sh
* open 3 terminals and cd into 
$CocoonHome/build/webapp/samples/reproduceMultipleThreads in each

* dry run without invalidating the cache to see that everything is working:
  - terminal 1: ./terminal1.sh 
  - terminal 2: ./terminal2.sh

[jira] Commented: (COCOON-1985) AbstractCachingProcessingPipeline locking with IncludeTransformer may hang pipeline

2008-02-15 Thread Alexander Daniel (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12569213#action_12569213
 ] 

Alexander Daniel commented on COCOON-1985:
--

From the source it looks like it is fixed in 2.1.11.

 54:     * @version $Id: AbstractCachingProcessingPipeline.java 498470 2007-01-21 22:34:30Z anathaniel $
...
183:        // Avoid deadlock with self (see JIRA COCOON-1985).
184:        if(lock != null && lock != Thread.currentThread()) {
185:            try {
186:                // become owner of monitor
187:                synchronized(lock) {
188:                    lock.wait();
189:                }
190:            } catch (InterruptedException e) {
191:                if(getLogger().isDebugEnabled()) {
192:                    getLogger().debug("Got interrupted waiting for other pipeline to finish processing, retrying...", e);
193:                }
194:                return false;
195:            }
196:            if(getLogger().isDebugEnabled()) {
197:                getLogger().debug("Other pipeline finished processing, retrying to get cached response.");
198:            }
199:            return false;
200:        }


> AbstractCachingProcessingPipeline locking with IncludeTransformer may hang 
> pipeline
> ---
>
> Key: COCOON-1985
> URL: https://issues.apache.org/jira/browse/COCOON-1985
> Project: Cocoon
>  Issue Type: Bug
>  Components: * Cocoon Core
>Affects Versions: 2.1.9, 2.1.10, 2.1.11, 2.2-dev (Current SVN)
>Reporter: Ellis Pritchard
>Priority: Critical
> Fix For: 2.2-dev (Current SVN)
>
> Attachments: caching-trials.patch, includer.xsl, patch.txt, 
> sitemap.xmap
>
>
> Cocoon 2.1.9 introduced the concept of a lock in 
> AbstractCachingProcessingPipeline, an optimization to prevent two concurrent 
> requests from generating the same cached content. The first request adds the 
> pipeline key to the transient cache to 'lock' the cache entry for that 
> pipeline, subsequent concurrent requests wait for the first request to cache 
> the content (by Object.lock()ing the pipeline key entry) before proceeding, 
> and can then use the newly cached content.
> However, this has introduced an incompatibility with the IncludeTransformer: 
> if the inclusions access the same yet-to-be-cached content as the root 
> pipeline, the whole assembly hangs, since a lock will be made on a lock 
> already held by the same thread, and which cannot be satisfied.
> e.g.
> i) Root pipeline generates using sub-pipeline cocoon:/foo.xml
> ii) the cocoon:/foo.xml sub-pipeline adds its pipeline key to the transient 
> store as a lock.
> iii) subsequently in the root pipeline, the IncludeTransformer is run.
> iv) one of the inclusions also generates with cocoon:/foo.xml, this 
> sub-pipeline locks in AbstractProcessingPipeline.waitForLock() because the 
> sub-pipeline key is already present.
> v) deadlock.
> I've found a (partial, see below) solution for this: instead of a plain 
> Object being added to the transient store as the lock object, the 
> Thread.currentThread() is added; when waitForLock() is called, if the lock 
> object exists, it checks that it is not the same thread before attempting to 
> lock it; if it is the same thread, then waitForLock() returns success, which 
> allows generation to proceed. You lose the efficiency of generating the 
> cache only once in this case, but at least it doesn't hang! With JDK1.5 this 
> can be made neater by using Thread#holdsLock() instead of adding the thread 
> object itself to the transient store.
> See patch file.
> However, even with this fix, parallel includes (when enabled) may still hang, 
> because they pass the not-the-same-thread test, but fail because the root 
> pipeline, which holds the initial lock, cannot complete (and therefore 
> satisfy the lock condition for the parallel threads), before the threads 
> themselves have completed, which then results in a deadlock again.
> The complete solution is probably to avoid locking if the lock is held by the 
> same top-level Request, but that requires more knowledge of Cocoon's 
> processing than I (currently) have!
> IMHO unless a complete solution is found to this, then this optimization 
> should be removed completely, or else made optional by configuration, since 
> it renders the IncludeTransformer dangerous.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.