When I fired 100 WebDAV requests from 100 threads concurrently, I saw a
deadlock like the one below:
Found one Java-level deadlock:
=============================
"http-8080-46":
  waiting to lock monitor 0x68a52694 (object 0x7365e870, a
  org.apache.jackrabbit.core.state.NodeState),
  which is held by
Currently, the CachingHierarchyManager is per session. Is it possible to save
the id-path map in SharedISM?
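If such a shared map were introduced, it would have to tolerate concurrent access from many sessions. A minimal sketch of what a workspace-wide id-to-path cache could look like (a hypothetical class, not part of the Jackrabbit API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a workspace-wide id -> path cache shared by all
// sessions. This is NOT Jackrabbit code, just an illustration of the idea.
public class SharedPathCache {
    private final Map<String, String> idToPath =
            new ConcurrentHashMap<String, String>();

    public void put(String itemId, String path) {
        idToPath.put(itemId, path);
    }

    public String getPath(String itemId) {
        return idToPath.get(itemId); // null when not cached
    }

    // Any hierarchy change must invalidate the entry for every session,
    // because all sessions share this one map.
    public void invalidate(String itemId) {
        idToPath.remove(itemId);
    }
}
```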
--
View this message in context:
http://www.nabble.com/Add-id-map-and-path-map-in-SharedISM--tp22578913p22578913.html
Sent from the Jackrabbit - Dev mailing list archive at Nabble.com.
at 2:14 PM, defeng defeng...@gmail.com wrote:
Currently, the CachingHierarchyManager is per session. Is it possible to save
the id-path map in SharedISM?
No. The CachingHierarchyManager reflects session-scoped hierarchy information
(i.e. including transient state), whereas the SharedISM has global
Dominique
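Dominique's point can be illustrated with a toy model (illustrative only, not Jackrabbit code): each session overlays its own unsaved, transient moves on the shared persisted mapping, so two sessions can legitimately see different paths for the same id at the same time, which a single shared id-path map could not represent:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (hypothetical, not Jackrabbit code): a session's view
// of item paths = shared persisted paths + that session's transient moves.
public class SessionPathView {
    private final Map<String, String> shared;           // persisted, all sessions
    private final Map<String, String> transientMoves =  // this session only
            new HashMap<String, String>();

    public SessionPathView(Map<String, String> shared) {
        this.shared = shared;
    }

    // An unsaved move: visible to this session, invisible to every other.
    public void moveTransient(String itemId, String newPath) {
        transientMoves.put(itemId, newPath);
    }

    public String getPath(String itemId) {
        String p = transientMoves.get(itemId);
        return p != null ? p : shared.get(itemId);
    }
}
```

Until the session saves, its transient view and the shared view disagree by design; that is why the cache is session-scoped.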
On Wed, Feb 18, 2009 at 6:04 PM, defeng defeng...@gmail.com wrote:
Currently, when I update an item state, I need to acquire a cluster lock
(Journal.doLock()). This lock will block updates on any other item state. I
want to lock only *one* item state in the cluster. So I want to modify
Dominique Pfister wrote:
Hi,
On Thu, Feb 19, 2009 at 4:25 PM, defeng defeng...@gmail.com wrote:
Dominique,
Thanks for your reply. It seems to me there is no inconsistency in your sample.
1. CN1 (/a)
2. CN1 (/a/b)
3. CN2 (/a/b)
4. CN1 (/a/b/c)
For step 3: before CN2 updates /a/b, it has to wait
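The ordering in steps 1-4 can be sketched with a toy journal (hypothetical classes; the real ClusterNode/Journal are far more involved): before a node appends its own change, it replays every record it has not yet seen, so CN2's update in step 3 is applied only after CN1's creation of /a/b is visible to it.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of journal-based ordering (hypothetical, not the real API).
class Journal {
    private final List<String> records = new ArrayList<String>();

    synchronized long append(String record) {
        records.add(record);
        return records.size(); // revision number of the new record
    }

    synchronized List<String> recordsAfter(long revision) {
        return new ArrayList<String>(
                records.subList((int) revision, records.size()));
    }

    synchronized long headRevision() {
        return records.size();
    }
}

class ClusterNode {
    private final Journal journal;
    private long localRevision; // last revision applied locally

    ClusterNode(Journal journal) {
        this.journal = journal;
    }

    void update(String record) {
        // Replay records appended by other nodes since our last sync.
        for (String r : journal.recordsAfter(localRevision)) {
            applyLocally(r);
        }
        // A real implementation holds the journal lock across sync + append.
        localRevision = journal.append(record);
    }

    private void applyLocally(String record) {
        // update the local item state cache ...
    }
}
```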
Currently, when I update an item state, I need to acquire a cluster lock
(Journal.doLock()). This lock will block updates on any other item state. I
want to lock only *one* item state in the cluster. So I want to modify the
SharedISM.update(). (I do not use XA.) Are there any side effects?
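One way to sketch the per-item locking idea within a single JVM is lock striping by item id (a hypothetical helper, not the existing Journal.doLock(); in a real cluster the lock would still have to live in the shared store, e.g. the journal table, for other nodes to see it):

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: stripe locks by item id so updates to different
// items do not block each other. Single-JVM only; a cluster-wide variant
// would need the lock in the shared persistence layer.
public class ItemLocks {
    private final ReentrantLock[] stripes;

    public ItemLocks(int stripeCount) {
        stripes = new ReentrantLock[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    private ReentrantLock lockFor(String itemId) {
        // Mask keeps the hash non-negative before taking the modulus.
        return stripes[(itemId.hashCode() & 0x7fffffff) % stripes.length];
    }

    public void withItemLock(String itemId, Runnable update) {
        ReentrantLock l = lockFor(itemId);
        l.lock();
        try {
            update.run();
        } finally {
            l.unlock();
        }
    }
}
```

Two items that happen to hash to the same stripe still serialize, so the stripe count trades memory for contention.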
Thomas
On Mon, Feb 16, 2009 at 9:02 PM, defeng defeng...@gmail.com wrote:
I am using Jackrabbit 1.4.4 clustering, with a DB persistence manager and NFS
for the data store. Everything works well, but since Jackrabbit uses a
cluster-wide lock, other JCR clients often need to wait a long time to
acquire the global lock.
To solve this issue, I want to use a