Hi guys,
I'm evaluating JackRabbit (2.9) for developing a Web Content Management System.
One common scenario would be for end users to create a project (a workspace in 
JCR), make some changes there, preview, etc., and eventually merge the changes 
back into the default JCR workspace a few days later, with full versioning.
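
For reference, here is the flow I have in mind as a minimal sketch against the 
JCR 2.0 API (the workspace name "project-x" and the credentials handling are 
made up for illustration; the conflict-resolution part is not settled):

import javax.jcr.*;
import javax.jcr.version.VersionManager;

class ProjectFlowSketch {
    // Branch into a project workspace, then merge it back later.
    static void run(Repository repo, Credentials creds) throws RepositoryException {
        Session s = repo.login(creds, "default");
        try {
            // Create the project workspace pre-populated with a clone of "default"
            s.getWorkspace().createWorkspace("project-x", "default");
        } finally {
            s.logout();
        }

        // ... users edit versionable nodes in "project-x" for a few days ...

        Session d = repo.login(creds, "default");
        try {
            VersionManager vm = d.getWorkspace().getVersionManager();
            // bestEffort=true returns the nodes that could not be merged
            // instead of failing the whole call
            NodeIterator failed = vm.merge("/", "project-x", true);
            while (failed.hasNext()) {
                Node conflict = failed.nextNode();
                // resolve each conflict via vm.doneMerge(...) / vm.cancelMerge(...)
            }
        } finally {
            d.logout();
        }
    }
}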
I'm seeing OutOfMemory errors like the ones below when trying to clone/copy the 
default workspace; after several minutes it fails without completing the clone. 
Plenty of memory is allocated to the JVM (4GB), and nothing else is going on 
there besides this test. The repository itself is not very big, about 100K 
nodes, with an Oracle DB as the back end. The tree is not very flat (fewer than 
100 immediate children per node) and 4 levels deep from the root; each node is 
1KB-2KB in total size, with about 15 properties.
If I change the code to do the workspace clone not by copying the whole tree in 
one pass but by copying each of the root's immediate children one by one, it 
succeeds easily in 5-6 minutes (see the sketch below).
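
For concreteness, the child-by-child variant looks roughly like this (a sketch, 
assuming the target workspace "project-x" already exists with an empty root; 
error handling omitted):

import javax.jcr.*;

class ChildByChildClone {
    // Clone the default workspace one root child at a time
    // instead of in a single clone of the whole tree.
    static void run(Repository repo, Credentials creds) throws RepositoryException {
        Session target = repo.login(creds, "project-x");
        Session source = repo.login(creds, "default");
        try {
            Workspace ws = target.getWorkspace();
            NodeIterator children = source.getRootNode().getNodes();
            while (children.hasNext()) {
                Node child = children.nextNode();
                if (child.getName().equals("jcr:system")) {
                    continue; // skip the protected system subtree
                }
                // Workspace.clone() keeps the same node identifiers in the new
                // workspace; it is a workspace-write, so no save() is needed
                ws.clone("default", child.getPath(), child.getPath(), true);
            }
        } finally {
            source.logout();
            target.logout();
        }
    }
}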
Has anyone had good experience with "workspace.copy()" or "workspace.clone()"? 
How do people usually create parallel workspaces for concurrent changes in 
long-lived projects? JCR's workspaces seem like the obvious choice, but issues 
like these shake my confidence in JackRabbit... And since there is no 
implementation for deleting workspaces through the APIs, it doesn't seem to me 
that people feel much need for workspaces at all. Is there a better way to 
model concurrent projects using JCR, maybe copying a part of the tree inside 
the default workspace and not using multiple workspaces at all (sketch below)?
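
In other words, something along these lines (the paths "/content" and 
"/projects/project-x" are made up; only a sketch of the idea, not tested code):

import javax.jcr.*;

class SubtreeBranchSketch {
    // Model a project as a copied subtree inside the default workspace.
    static void run(Repository repo, Credentials creds) throws RepositoryException {
        Session session = repo.login(creds, "default");
        try {
            Workspace ws = session.getWorkspace();
            // Branch: copy the live tree into a project area; assumes the
            // /projects parent already exists (workspace-write, no save() needed)
            ws.copy("/content", "/projects/project-x");

            // ... users edit /projects/project-x for the life of the project ...

            // Merging back would then be an application-level copy/replace,
            // since VersionManager.merge() only works across workspaces.
        } finally {
            session.logout();
        }
    }
}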
As for the OOM errors: I can play with the JVM's GC options during my 
evaluation (like -XX:-UseGCOverheadLimit) to avoid them, but I won't have that 
luxury in a production environment.
We will have repositories with one to ten million nodes if this project reaches 
production. Will JackRabbit with an Oracle backend be up to the task, or would 
Oak+MongoDB be a better choice?
Thx!

-------------
Exceptions:

Exception in thread "jackrabbit-pool-4" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1857)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2073)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "jackrabbit-pool-4"

Exception in thread "jackrabbit-pool-5" Exception in thread "jackrabbit-pool-3" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.addWaiter(AbstractQueuedSynchronizer.java:606)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireInterruptibly(AbstractQueuedSynchronizer.java:883)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1221)
        at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1074)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)