[ https://issues.apache.org/jira/browse/OAK-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898394#comment-13898394 ]
Jukka Zitting commented on OAK-1417:
------------------------------------

Are we sure that this should really scale linearly? For example, the more content you have, the deeper the content tree gets, which adds a small amount of extra cost to the access of each leaf node. As in OAK-1416, a performance goal of {{O(n log n)}} would seem more realistic.

> Processing pending observation events does not scale linearly with the number of events
> ---------------------------------------------------------------------------------------
>
>                 Key: OAK-1417
>                 URL: https://issues.apache.org/jira/browse/OAK-1417
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: mongomk, segmentmk
>            Reporter: Michael Dürig
>
> {{org.apache.jackrabbit.oak.jcr.LargeOperationIT#largeNumberOfPendingEvents}} does not scale linearly, neither on a segment nor on a document node store.
> This test asserts that processing pending observation events (e.g. due to a large commit or cluster sync) takes linear time, i.e. that the time it takes to process one such event is independent of the total number of events.
> {code}
> seg quotients: 0.24215928530697586, 0.4065934065934066, 1.7548262548262548, 0.6892189218921893, 1.2083000798084598
> doc quotients: 0.2990824434780629, 0.14113885505481122, 0.587378640776699, 1.3122130394857667, 142.65850244926523
> {code}
> While in the case of the segment node store the numbers do not seem too worrisome, the document node store starts lagging behind badly as soon as there are more than 32768 pending events.
> See OAK-1413 for how to read those numbers.

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
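As a rough illustration of how such quotient figures can be read (OAK-1413 has the actual definition used by the test; this is only a sketch under that assumption): if each quotient is the ratio of per-event processing times at successive doublings of the event count, then quotients hovering around 1 indicate linear overall scaling, while a blow-up like the 142.66 above signals superlinear behaviour. The class `ScalingCheck`, its method names, and the tolerance of 2.0 are hypothetical, not part of Oak or of `LargeOperationIT`:

```java
// Hypothetical sketch, not Oak code: reading scaling quotients.
// Assumes each quotient compares per-event time between successive
// doublings of the total event count (see OAK-1413 for the real scheme).
public class ScalingCheck {

    /** Quotients of successive per-event times: q[i] = t[i+1] / t[i]. */
    static double[] quotients(double[] perEventTimes) {
        double[] q = new double[perEventTimes.length - 1];
        for (int i = 0; i < q.length; i++) {
            q[i] = perEventTimes[i + 1] / perEventTimes[i];
        }
        return q;
    }

    /** Linear scaling: no quotient exceeds the tolerance, i.e. per-event
        cost stays roughly constant as the event count doubles. */
    static boolean looksLinear(double[] quotients, double tolerance) {
        for (double q : quotients) {
            if (q > tolerance) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Rounded versions of the figures from the issue description.
        double[] segment  = {0.24, 0.41, 1.75, 0.69, 1.21};
        double[] document = {0.30, 0.14, 0.59, 1.31, 142.66};
        System.out.println(looksLinear(segment, 2.0));  // prints "true"
        System.out.println(looksLinear(document, 2.0)); // prints "false"
    }
}
```

Under this reading, the segment store quotients stay within a small band around 1, while the document store's final quotient jumps past any reasonable tolerance once more than 32768 events are pending, matching the description above.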