Hi Domenic

I apologize for the late reply; I finally had time
to look at your test.

The reason why Oak on MongoDB is so slow with your test is the
write concern that your test specifies when it constructs
the DocumentNodeStore. The test sets it to FSYNCED. This is
an appropriate write concern when you only have a single MongoDB
node, but it comes with very high latency. In general, MongoDB is
designed to run in production as a replica set, and the recommended
write concern for that deployment is MAJORITY.

More details on why Oak on MongoDB performs badly with your test
are available in OAK-3554 [0].

So, you should either reduce the journalCommitInterval in MongoDB
or test with a replica set and MAJORITY write concern. Both
should give you a significant speedup compared to your current
test setup.
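
For example, a setup along these lines should do. This is only a minimal
sketch assuming the Oak 1.2 DocumentMK.Builder API and the MongoDB Java
driver 2.x; the hostnames and database name are placeholders for your
replica set:

import com.mongodb.DB;
import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;
import com.mongodb.WriteConcern;
import org.apache.jackrabbit.oak.plugins.document.DocumentMK;
import org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore;

// connect to the replica set instead of a single mongod
MongoClientURI uri = new MongoClientURI(
        "mongodb://host1:27017,host2:27017,host3:27017/oak");
MongoClient client = new MongoClient(uri);
DB db = client.getDB("oak");
// use MAJORITY instead of FSYNCED
db.setWriteConcern(WriteConcern.MAJORITY);

DocumentNodeStore store = new DocumentMK.Builder()
        .setMongoDB(db)
        .getNodeStore();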

Regards
 Marcel

[0] 
https://issues.apache.org/jira/browse/OAK-3554?focusedCommentId=14991306&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14991306


On 06/04/16 16:20, "Domenic DiTano" wrote:

Hi Marcel,

I uploaded all the source to GitHub along with a summary spreadsheet.  I
would appreciate any time you could spare to review it.

https://github.com/Domenic-Ansys/Jackrabbit2-Oak-Tests

As you stated, the move is a non-goal, but in comparison to Jackrabbit 2 I
am also finding in my tests that create, update, and copy are all faster
in Jackrabbit 2 (10k nodes).  Any input would be appreciated...

Also, will MySQL no longer be listed as "Experimental" at some point?

Thanks,
Domenic


-----Original Message-----
From: Marcel Reutegger [mailto:mreut...@adobe.com]
Sent: Thursday, March 31, 2016 6:14 AM
To: oak-dev@jackrabbit.apache.org
Subject: Re: Jackrabbit 2.10 vs Oak 1.2.7

Hi Domenic,

On 30/03/16 14:34, "Domenic DiTano" wrote:
"In contrast to Jackrabbit 2, a move of a large subtree is an expensive
operation in Oak"
So should I avoid moving a large number of items using Oak?
If we are using Oak, should we avoid operations with a large number
of items in general?

In general it is fine to have a large change set with Oak. With Oak you
can even have change sets that do not fit into the heap.
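
For example, a single commit along these lines is perfectly fine (a rough
sketch, the node names are made up):

// create a large number of nodes and persist them in one commit
Node parent = session.getRootNode().addNode("bigImport");
for (int i = 0; i < 100000; i++) {
    Node child = parent.addNode("node" + i);
    child.setProperty("index", i);
}
// all nodes above are persisted with a single save
session.save();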

  As an FYI - there are other benefits for us to move to Oak, but our
application executes JCR operations with a large number of items
quite often.  I am worried about the performance.

The move method is pretty simple - should I be doing it differently?

public static long moveNodes(Session session, Node node, String newNodeName)
        throws Exception {
    long start = System.currentTimeMillis();
    // move the subtree to the new name under the root and persist the change
    session.move(node.getPath(), "/" + newNodeName);
    session.save();
    long end = System.currentTimeMillis();
    return end - start;
}

No, this is fine. As mentioned earlier, with Oak a move operation is not
cheap: it is basically implemented as a copy to the new location followed
by a delete at the old location.

A cheap move operation was considered a non-goal when Oak was designed:
https://wiki.apache.org/jackrabbit/Goals%20and%20non%20goals%20for%20Jackrabbit%203


Regards
Marcel
