[jira] [Commented] (OAK-2829) Comparing node states for external changes is too slow
[ https://issues.apache.org/jira/browse/OAK-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585733#comment-14585733 ] Chetan Mehrotra commented on OAK-2829: -- [~egli] Saw the following exception while running trunk. Looks like getRevisionTimestamp should use '-' instead of '_' for splitting {noformat} 15.06.2015 05:19:22.810 *ERROR* [pool-6-thread-5] org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job execution of org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService$4@2c1293b9 : 1 java.lang.ArrayIndexOutOfBoundsException: 1 at org.apache.jackrabbit.oak.plugins.document.JournalEntry.getRevisionTimestamp(JournalEntry.java:235) at org.apache.jackrabbit.oak.plugins.document.JournalGarbageCollector.gc(JournalGarbageCollector.java:126) at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService$4.run(DocumentNodeStoreService.java:630) at org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:105) at org.quartz.core.JobRunShell.run(JobRunShell.java:202) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) {noformat} Further looking at {{JournalGarbageCollector}} ... it would be simpler if you record the journal entry timestamp as an attribute in the JournalEntry document; then you can delete all entries older than a given time with a simple query. 
This would avoid fetching all the entries to be deleted on the Oak side Comparing node states for external changes is too slow -- Key: OAK-2829 URL: https://issues.apache.org/jira/browse/OAK-2829 Project: Jackrabbit Oak Issue Type: Bug Components: core, mongomk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Blocker Labels: scalability Fix For: 1.3.1, 1.2.3 Attachments: CompareAgainstBaseStateTest.java, graph-1.png, graph.png Comparing node states for local changes has been improved already with OAK-2669. But in a clustered setup generating events for external changes cannot make use of the introduced cache and is therefore slower. This can result in a growing observation queue, eventually reaching the configured limit. See also OAK-2683. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-2991) group.removeMember() on result of user.memberOf() does not work
Alexander Klimetschek created OAK-2991: -- Summary: group.removeMember() on result of user.memberOf() does not work Key: OAK-2991 URL: https://issues.apache.org/jira/browse/OAK-2991 Project: Jackrabbit Oak Issue Type: Bug Components: security Affects Versions: 1.2.2 Reporter: Alexander Klimetschek When using the group from the memberOf() iterator to remove a user from that group, the change is not persisted. One has to fetch the group separately from the UserManager. {code}
final Iterator<Group> groups = user.memberOf();
while (groups.hasNext()) {
    Group group = groups.next();

    group.removeMember(user); // does not work
    session.save();

    group = userManager.getGroup(group.getID());
    group.removeMember(user); // does work
    session.save();
}
{code} Note that {{removeMember()}} always returns true, indicating that the change worked. Debugging through the code, especially MembershipWriter, shows that the rep:members property is correctly updated, so probably some later modification restores the original value before the save is executed. (No JCR events are triggered for the group.) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2962) SegmentNodeStoreService fails to handle empty strings in the OSGi configuration
[ https://issues.apache.org/jira/browse/OAK-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585753#comment-14585753 ] Francesco Mari commented on OAK-2962: - [~amitjain], sure, I will add some tests and submit a new patch. SegmentNodeStoreService fails to handle empty strings in the OSGi configuration --- Key: OAK-2962 URL: https://issues.apache.org/jira/browse/OAK-2962 Project: Jackrabbit Oak Issue Type: Bug Components: segmentmk Reporter: Francesco Mari Fix For: 1.3.1 Attachments: OAK-2962-01.patch When an OSGi configuration property is removed from the dictionary associated to a component, the default value assigned to it is an empty string. When such an empty string is processed by {{SegmentNodeStoreService#lookup}}, it is returned to its caller as a valid configuration value. The callers of {{SegmentNodeStoreService#lookup}}, instead, expect {{null}} when such an empty value is found. The method {{SegmentNodeStoreService#lookup}} should check for empty strings in the OSGi configuration, and treat them as {{null}} values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
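The proposed behaviour can be sketched in a few lines. {{ConfigLookup}} and its {{lookup}} method here are illustrative stand-ins for the actual {{SegmentNodeStoreService#lookup}}, which is not reproduced here:

```java
import java.util.Map;

// Minimal sketch of the normalization described above: an OSGi-style lookup
// that treats an empty string value as an absent property and returns null.
// Class and method names are illustrative, not the actual Oak code.
public class ConfigLookup {

    static String lookup(Map<String, Object> config, String name) {
        Object value = config.get(name);
        if (value == null) {
            return null;
        }
        String s = value.toString();
        // An empty string means the property was removed from the dictionary
        return s.isEmpty() ? null : s;
    }

    public static void main(String[] args) {
        Map<String, Object> cfg = Map.of("repository.home", "", "name", "oak");
        System.out.println(lookup(cfg, "repository.home")); // prints "null"
        System.out.println(lookup(cfg, "name"));            // prints "oak"
    }
}
```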
[jira] [Updated] (OAK-2829) Comparing node states for external changes is too slow
[ https://issues.apache.org/jira/browse/OAK-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Egli updated OAK-2829: - Attachment: OAK-2829-gc-bug.patch [~chetanm], thx for spotting! That {{getRevisionTimestamp()}} was quite buggy though, it also requires parseLong with base 16. I've attached [^OAK-2829-gc-bug.patch] which contains the fix plus a dedicated junit method that would have caught both issues. Can you pls review and apply accordingly? Thx! Comparing node states for external changes is too slow -- Key: OAK-2829 URL: https://issues.apache.org/jira/browse/OAK-2829 Project: Jackrabbit Oak Issue Type: Bug Components: core, mongomk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Blocker Labels: scalability Fix For: 1.3.1, 1.2.3 Attachments: CompareAgainstBaseStateTest.java, OAK-2829-gc-bug.patch, graph-1.png, graph.png Comparing node states for local changes has been improved already with OAK-2669. But in a clustered setup generating events for external changes cannot make use of the introduced cache and is therefore slower. This can result in a growing observation queue, eventually reaching the configured limit. See also OAK-2683. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
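Putting the two comments together, the fixed method would look roughly like the sketch below: split on '-' rather than '_', and parse the timestamp as base 16. The id layout {{<clusterId>-<zero-padded hex timestamp>}} is an assumption made for illustration, not taken from the actual JournalEntry code:

```java
// Hedged sketch of the fix discussed above. Assumes a journal entry id of the
// hypothetical form "<clusterId>-<zero-padded hex timestamp>".
public class JournalEntryTimestamp {

    static long getRevisionTimestamp(String journalEntryId) {
        // Split on '-' (not '_'); the second field is the timestamp in hex
        String[] parts = journalEntryId.split("-");
        return Long.parseLong(parts[1], 16);
    }

    public static void main(String[] args) {
        long ts = getRevisionTimestamp("1-0000014dcee2b180");
        System.out.println(ts == 0x14dcee2b180L); // prints "true"
    }
}
```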
[jira] [Created] (OAK-2992) TokenProvider: Make reset of token expiration configurable
angela created OAK-2992: --- Summary: TokenProvider: Make reset of token expiration configurable Key: OAK-2992 URL: https://issues.apache.org/jira/browse/OAK-2992 Project: Jackrabbit Oak Issue Type: Improvement Components: core Reporter: angela Assignee: angela Fix For: 1.3.1 Currently the expiration of login tokens is automatically reset. This should be made configurable, allowing API consumers to let login tokens expire. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2962) SegmentNodeStoreService fails to handle empty strings in the OSGi configuration
[ https://issues.apache.org/jira/browse/OAK-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amit Jain updated OAK-2962: --- Assignee: Francesco Mari SegmentNodeStoreService fails to handle empty strings in the OSGi configuration --- Key: OAK-2962 URL: https://issues.apache.org/jira/browse/OAK-2962 Project: Jackrabbit Oak Issue Type: Bug Components: segmentmk Reporter: Francesco Mari Assignee: Francesco Mari Fix For: 1.3.1 Attachments: OAK-2962-01.patch When an OSGi configuration property is removed from the dictionary associated to a component, the default value assigned to it is an empty string. When such an empty string is processed by {{SegmentNodeStoreService#lookup}}, it is returned to its caller as a valid configuration value. The callers of {{SegmentNodeStoreService#lookup}}, instead, expect {{null}} when such an empty value is found. The method {{SegmentNodeStoreService#lookup}} should check for empty strings in the OSGi configuration, and treat them as {{null}} values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2962) SegmentNodeStoreService fails to handle empty strings in the OSGi configuration
[ https://issues.apache.org/jira/browse/OAK-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585755#comment-14585755 ] Francesco Mari commented on OAK-2962: - [~amitjain], can you assign the issue to me, so I don't lose track of it? I'm not able to do it. SegmentNodeStoreService fails to handle empty strings in the OSGi configuration --- Key: OAK-2962 URL: https://issues.apache.org/jira/browse/OAK-2962 Project: Jackrabbit Oak Issue Type: Bug Components: segmentmk Reporter: Francesco Mari Fix For: 1.3.1 Attachments: OAK-2962-01.patch When an OSGi configuration property is removed from the dictionary associated to a component, the default value assigned to it is an empty string. When such an empty string is processed by {{SegmentNodeStoreService#lookup}}, it is returned to its caller as a valid configuration value. The callers of {{SegmentNodeStoreService#lookup}}, instead, expect {{null}} when such an empty value is found. The method {{SegmentNodeStoreService#lookup}} should check for empty strings in the OSGi configuration, and treat them as {{null}} values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2176) Support for using query engine for search suggestions
[ https://issues.apache.org/jira/browse/OAK-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tommaso Teofili updated OAK-2176: - Fix Version/s: 1.0.15 Support for using query engine for search suggestions - Key: OAK-2176 URL: https://issues.apache.org/jira/browse/OAK-2176 Project: Jackrabbit Oak Issue Type: Improvement Components: lucene, query, solr Affects Versions: 1.1.0 Reporter: Tommaso Teofili Assignee: Tommaso Teofili Fix For: 1.1.7, 1.0.15 Related to OAK-2175: search engines are often used for term suggestions (e.g. for autocompletion, search as you type, etc.), which I think would be good to support in Oak as well, especially since both Lucene (https://lucene.apache.org/core/4_10_0/suggest/org/apache/lucene/search/suggest/Lookup.html) and Solr (https://wiki.apache.org/solr/Suggester https://wiki.apache.org/solr/TermsComponent) already implement this functionality. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (OAK-2982) BasicDocumentStoreTest: separate actual unit tests from performance tests
[ https://issues.apache.org/jira/browse/OAK-2982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke reassigned OAK-2982: --- Assignee: Julian Reschke BasicDocumentStoreTest: separate actual unit tests from performance tests - Key: OAK-2982 URL: https://issues.apache.org/jira/browse/OAK-2982 Project: Jackrabbit Oak Issue Type: Sub-task Components: rdbmk Affects Versions: 1.2.2, 1.0.15, 1.3 Reporter: Julian Reschke Assignee: Julian Reschke Fix For: 1.3.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2987) RDBDocumentStore: try PreparedStatement batching
[ https://issues.apache.org/jira/browse/OAK-2987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585814#comment-14585814 ] Julian Reschke commented on OAK-2987: - For dbBatchedAppendingUpdate it's unclear why changing from a single statement that updates many rows to a batch of statements that each affect a single row would help. Do you have any benchmarks that show a difference? RDBDocumentStore: try PreparedStatement batching Key: OAK-2987 URL: https://issues.apache.org/jira/browse/OAK-2987 Project: Jackrabbit Oak Issue Type: Sub-task Components: rdbmk Affects Versions: 1.2.2, 1.0.15, 1.3 Reporter: Julian Reschke Assignee: Julian Reschke Fix For: 1.3.1 There's at least one place in the code (dbBatchedAppendingUpdate, maybe also dbInsert) where we could use batching on a single PreparedStatement. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
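For reference, the batching shape under discussion looks roughly as follows; the table name, column names, and chunk size are hypothetical, and whether this beats a single multi-row UPDATE is exactly the open question raised above:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Sketch of PreparedStatement batching: instead of one UPDATE with a large
// IN clause, reuse a single statement and addBatch() once per row. The SQL
// and the chunk size of 64 are illustrative assumptions, not Oak's code.
public class BatchedUpdate {

    // Split a list into fixed-size chunks (the last one may be shorter)
    static <T> List<List<T>> chunk(List<T> items, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            chunks.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return chunks;
    }

    static void appendToAll(Connection con, List<String> ids, String data)
            throws SQLException {
        String sql = "update NODES set DATA = concat(DATA, ?) where ID = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            for (List<String> batch : chunk(ids, 64)) {
                for (String id : batch) {
                    ps.setString(1, data);
                    ps.setString(2, id);
                    ps.addBatch();
                }
                ps.executeBatch(); // one round trip per chunk, not per row
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(chunk(List.of("a", "b", "c"), 2)); // prints "[[a, b], [c]]"
    }
}
```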
[jira] [Commented] (OAK-2962) SegmentNodeStoreService fails to handle empty strings in the OSGi configuration
[ https://issues.apache.org/jira/browse/OAK-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585751#comment-14585751 ] Amit Jain commented on OAK-2962: [~frm] Is it possible to add a test for this using the test you created for OAK-2883? SegmentNodeStoreService fails to handle empty strings in the OSGi configuration --- Key: OAK-2962 URL: https://issues.apache.org/jira/browse/OAK-2962 Project: Jackrabbit Oak Issue Type: Bug Components: segmentmk Reporter: Francesco Mari Fix For: 1.3.1 Attachments: OAK-2962-01.patch When an OSGi configuration property is removed from the dictionary associated to a component, the default value assigned to it is an empty string. When such an empty string is processed by {{SegmentNodeStoreService#lookup}}, it is returned to its caller as a valid configuration value. The callers of {{SegmentNodeStoreService#lookup}}, instead, expect {{null}} when such an empty value is found. The method {{SegmentNodeStoreService#lookup}} should check for empty strings in the OSGi configuration, and treat them as {{null}} values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-2992) TokenProvider: Make reset of token expiration configurable
[ https://issues.apache.org/jira/browse/OAK-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela resolved OAK-2992. - Resolution: Fixed Committed revision 1685541 (including updating the documentation in {{tokenmanagement.md}}) TokenProvider: Make reset of token expiration configurable -- Key: OAK-2992 URL: https://issues.apache.org/jira/browse/OAK-2992 Project: Jackrabbit Oak Issue Type: Improvement Components: core Reporter: angela Assignee: angela Fix For: 1.3.1 Currently the expiration of login tokens is automatically reset. This should be made configurable, allowing API consumers to let login tokens expire. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (OAK-2987) RDBDocumentStore: try PreparedStatement batching
[ https://issues.apache.org/jira/browse/OAK-2987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585814#comment-14585814 ] Julian Reschke edited comment on OAK-2987 at 6/15/15 11:45 AM: --- For dbBatchedAppendingUpdate it's unclear why changing from a single statement that updates many rows to a batch of statements that each affect a single row would help. Do you have any benchmarks that show a difference? For insertDocuments, I just tried to find any use in the DocumentStore where it's called for more than one document at once, and couldn't find any... was (Author: reschke): For dbBatchedAppendingUpdate it's unclear why changing from a single statement that updates many rows to a batch of statements that each affect a single row would help. Do you have any benchmarks that show a difference? RDBDocumentStore: try PreparedStatement batching Key: OAK-2987 URL: https://issues.apache.org/jira/browse/OAK-2987 Project: Jackrabbit Oak Issue Type: Sub-task Components: rdbmk Affects Versions: 1.2.2, 1.0.15, 1.3 Reporter: Julian Reschke Assignee: Julian Reschke There's at least one place in the code (dbBatchedAppendingUpdate, maybe also dbInsert) where we could use batching on a single PreparedStatement. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2987) RDBDocumentStore: try PreparedStatement batching
[ https://issues.apache.org/jira/browse/OAK-2987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-2987: Fix Version/s: (was: 1.3.1) RDBDocumentStore: try PreparedStatement batching Key: OAK-2987 URL: https://issues.apache.org/jira/browse/OAK-2987 Project: Jackrabbit Oak Issue Type: Sub-task Components: rdbmk Affects Versions: 1.2.2, 1.0.15, 1.3 Reporter: Julian Reschke There's at least one place in the code (dbBatchedAppendingUpdate, maybe also dbInsert) where we could use batching on a single PreparedStatement. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2987) RDBDocumentStore: try PreparedStatement batching
[ https://issues.apache.org/jira/browse/OAK-2987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-2987: Assignee: (was: Julian Reschke) RDBDocumentStore: try PreparedStatement batching Key: OAK-2987 URL: https://issues.apache.org/jira/browse/OAK-2987 Project: Jackrabbit Oak Issue Type: Sub-task Components: rdbmk Affects Versions: 1.2.2, 1.0.15, 1.3 Reporter: Julian Reschke There's at least one place in the code (dbBatchedAppendingUpdate, maybe also dbInsert) where we could use batching on a single PreparedStatement. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2834) LIRS cache: allow to disable it when using the persistent cache
[ https://issues.apache.org/jira/browse/OAK-2834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2834. - Bulk Close for 1.3.0 LIRS cache: allow to disable it when using the persistent cache --- Key: OAK-2834 URL: https://issues.apache.org/jira/browse/OAK-2834 Project: Jackrabbit Oak Issue Type: Improvement Reporter: Thomas Mueller Assignee: Thomas Mueller Labels: considerFor1.2, doc-impacting Fix For: 1.3.0 Currently, the LIRS cache is always enabled when using the persistent cache. It should be possible to explicitly disable it in this case. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2711) Troublesome AbstractTree.toString
[ https://issues.apache.org/jira/browse/OAK-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2711. - Bulk Close for 1.3.0 Troublesome AbstractTree.toString - Key: OAK-2711 URL: https://issues.apache.org/jira/browse/OAK-2711 Project: Jackrabbit Oak Issue Type: Improvement Components: core Reporter: angela Assignee: Chetan Mehrotra Labels: technical_debt Fix For: 1.3.0 Attachments: OAK-2711.patch the default {{toString}} for all tree implementations calculates a string containing the path, the toString of all properties as well as the names of all child trees... this is prone to cause trouble for trees that have plenty of properties and children. i would strongly recommend to review this and make the toString of trees both meaningful and cheap. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2630) Cleanup Oak jobs on buildbot
[ https://issues.apache.org/jira/browse/OAK-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2630. - Bulk Close for 1.3.0 Cleanup Oak jobs on buildbot Key: OAK-2630 URL: https://issues.apache.org/jira/browse/OAK-2630 Project: Jackrabbit Oak Issue Type: Sub-task Reporter: Tommaso Teofili Assignee: Michael Dürig Labels: CI Fix For: 1.3.0 Since we're moving towards Jenkins, let's remove the buildbot jobs for Oak. The buildbot configuration is here: https://svn.apache.org/repos/infra/infrastructure/buildbot/aegis/buildmaster/master1/projects -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2948) Expose DefaultSyncHandler
[ https://issues.apache.org/jira/browse/OAK-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2948. - Bulk Close for 1.3.0 Expose DefaultSyncHandler - Key: OAK-2948 URL: https://issues.apache.org/jira/browse/OAK-2948 Project: Jackrabbit Oak Issue Type: Improvement Components: auth-external Reporter: Konrad Windszus Fix For: 1.3.0 We do have the use case of extending the user sync. Unfortunately {{DefaultSyncHandler}} is not exposed, so if you want to change one single aspect of the user synchronisation you have to copy over the code from the {{DefaultSyncHandler}}. Would it be possible to make that class part of the exposed classes, so that deriving your own class from DefaultSyncHandler is possible? Very often company LDAPs are not very standardized. In our case we face the issue that membership is listed in a user attribute, rather than in a group attribute. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2763) Remove ChangeDispatcher in DocumentNodeStoreBranch
[ https://issues.apache.org/jira/browse/OAK-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2763. - Bulk Close for 1.3.0 Remove ChangeDispatcher in DocumentNodeStoreBranch -- Key: OAK-2763 URL: https://issues.apache.org/jira/browse/OAK-2763 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Minor Fix For: 1.3.0, 1.2.3 This is a remnant of the AbstractNodeStoreBranch, when there was a MicroKernel based implementation. The DocumentNodeStoreBranch does not need the ChangeDispatcher here. Changes are dispatched by the DocumentNodeStore, which is an Observable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2742) Add UserImport tests that run with a non-admin session
[ https://issues.apache.org/jira/browse/OAK-2742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2742. - Bulk Close for 1.3.0 Add UserImport tests that run with a non-admin session --- Key: OAK-2742 URL: https://issues.apache.org/jira/browse/OAK-2742 Project: Jackrabbit Oak Issue Type: Test Components: jcr Reporter: angela Assignee: angela Fix For: 1.3.0 Attachments: OAK-2740_importTests.patch the user-import related tests are currently all running with the admin session. they should be refactored such that we can easily run some/all tests also with non-admin sessions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2757) Failed to read from tar file
[ https://issues.apache.org/jira/browse/OAK-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2757. - Bulk Close for 1.3.0 Failed to read from tar file - Key: OAK-2757 URL: https://issues.apache.org/jira/browse/OAK-2757 Project: Jackrabbit Oak Issue Type: Improvement Components: segmentmk Reporter: Michael Dürig Assignee: Michael Dürig Priority: Minor Labels: gc, resilience Fix For: 1.3.0 Under some rare circumstances there is a warning in the logs: {noformat} 11:57:47.375 WARN [pool-1-thread-24] FileStore.java:865Failed to read from tar file target/SegmentCompactionIT1331315031754226278dir/data01460a.tar java.io.IOException: Stream Closed at java.io.RandomAccessFile.seek(Native Method) ~[na:1.7.0_75] at org.apache.jackrabbit.oak.plugins.segment.file.FileAccess$Random.read(FileAccess.java:105) ~[classes/:na] at org.apache.jackrabbit.oak.plugins.segment.file.TarReader.readEntry(TarReader.java:502) ~[classes/:na] at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:860) ~[classes/:na] at org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:128) [classes/:na] at org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:108) [classes/:na] at org.apache.jackrabbit.oak.plugins.segment.Segment.readString(Segment.java:348) [classes/:na] at org.apache.jackrabbit.oak.plugins.segment.Segment.readPropsV11(Segment.java:476) [classes/:na] at org.apache.jackrabbit.oak.plugins.segment.Segment.loadTemplate(Segment.java:449) [classes/:na] at org.apache.jackrabbit.oak.plugins.segment.Segment.readTemplate(Segment.java:402) [classes/:na] at org.apache.jackrabbit.oak.plugins.segment.Segment.readTemplate(Segment.java:396) [classes/:na] at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplate(SegmentNodeState.java:79) [classes/:na] at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getChildNodeCount(SegmentNodeState.java:357) [classes/:na] at org.apache.jackrabbit.oak.plugins.segment.SegmentCompactionIT$RandomReader.readRandomTree(SegmentCompactionIT.java:410) [test-classes/:na] at org.apache.jackrabbit.oak.plugins.segment.SegmentCompactionIT$RandomPropertyReader.call(SegmentCompactionIT.java:446) [test-classes/:na] at org.apache.jackrabbit.oak.plugins.segment.SegmentCompactionIT$RandomPropertyReader.call(SegmentCompactionIT.java:439) [test-classes/:na] at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_75] {noformat} This happens due to a race: {{FileStore#readSegment}} may read from a tar file that has already been removed by {{FileStore#flush}}. This isn't a problem as the tar file in question is still present at a newer generation and the {{FileStore}} will eventually read from that one. However the warning looks rather scary and somewhat implies a defect. We should either lower the log level or remove the race. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2931) RDBDocumentStore: mitigate effects of large query result sets
[ https://issues.apache.org/jira/browse/OAK-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2931. - Bulk Close for 1.3.0 RDBDocumentStore: mitigate effects of large query result sets - Key: OAK-2931 URL: https://issues.apache.org/jira/browse/OAK-2931 Project: Jackrabbit Oak Issue Type: Sub-task Components: rdbmk Affects Versions: 1.2.2, 1.0.14, 1.3 Reporter: Julian Reschke Assignee: Julian Reschke Labels: resilience Fix For: 1.3.0, 1.2.3, 1.0.15 With the DocumentStore query API, large result sets can happen; and these are returned as List<Document>, potentially causing large amounts of memory to be allocated. In the current implementation, the result list is generated based on a list of internal row representations (RDBRow). These are currently freed when the method finishes. They should be freed as early as possible. Furthermore, when the result set gets big, RDBDocumentStore should log an error containing the call chain, so that the component doing the excessive query can be identified (it should use paging instead). (For completeness: we could also change the code to lazily populate the list; but that would be a bigger change) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
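The paging suggested in parentheses above can be illustrated with keyset pagination: callers fetch bounded pages and resume after the last id seen, instead of materialising one huge list. The in-memory TreeMap stands in for the actual store; all names are hypothetical, not the DocumentStore API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Keyset-paging sketch: each call returns at most `limit` ids strictly
// greater than `fromKeyExclusive`, so memory use stays bounded per page.
public class KeysetPaging {

    static List<String> queryPage(SortedMap<String, String> store,
                                  String fromKeyExclusive, int limit) {
        List<String> page = new ArrayList<>();
        // "\0" appended to the key makes the tailMap bound exclusive
        for (String id : store.tailMap(fromKeyExclusive + "\0").keySet()) {
            if (page.size() == limit) {
                break;
            }
            page.add(id);
        }
        return page;
    }

    public static void main(String[] args) {
        SortedMap<String, String> store = new TreeMap<>();
        for (int i = 0; i < 10; i++) {
            store.put("1:/node-" + i, "{}");
        }
        String from = "";
        List<String> page;
        int pages = 0;
        while (!(page = queryPage(store, from, 4)).isEmpty()) {
            from = page.get(page.size() - 1); // resume after the last id seen
            pages++;
        }
        System.out.println(pages); // prints "3" (pages of 4, 4, and 2 ids)
    }
}
```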
[jira] [Closed] (OAK-2893) RepositoryUpgrade.copy() should optionally continue on errors.
[ https://issues.apache.org/jira/browse/OAK-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2893. - Bulk Close for 1.3.0 RepositoryUpgrade.copy() should optionally continue on errors. -- Key: OAK-2893 URL: https://issues.apache.org/jira/browse/OAK-2893 Project: Jackrabbit Oak Issue Type: Improvement Components: upgrade Affects Versions: 1.2.2, 1.0.14 Reporter: Manfred Baedke Assignee: Manfred Baedke Labels: resilience Fix For: 1.3.0, 1.2.3, 1.0.15 Currently RepositoryUpgrade.copy() fails on the first error. In practice this is very inconvenient, because any minor inconsistency in the source repository may cause the upgrade to fail. An option to make best-effort copies is needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2247) CopyOnWriteDirectory implementation for Lucene for use in indexing
[ https://issues.apache.org/jira/browse/OAK-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2247. - Bulk Close for 1.3.0 CopyOnWriteDirectory implementation for Lucene for use in indexing -- Key: OAK-2247 URL: https://issues.apache.org/jira/browse/OAK-2247 Project: Jackrabbit Oak Issue Type: New Feature Components: lucene Reporter: Chetan Mehrotra Assignee: Chetan Mehrotra Labels: docs-impacting, performance Fix For: 1.3.0, 1.2.3, 1.0.15 Attachments: OAK-2247-v1.patch Currently a Lucene index is written directly to OakDirectory. For the reindex case it might happen that the Lucene merge policy reads the written index files again and then performs a segment merge. This might have lower performance when OakDirectory is writing to remote storage. Instead we can implement a CopyOnWriteDirectory along similar lines to OAK-1724, where the CopyOnReadDirectory support copies the index locally for faster access. At a high level the flow would be # While writing the index, the index file is first written to a local directory # Any write is done locally, and once a file is written it is copied asynchronously to the OakDirectory # When the IndexWriter is closed it would wait until all writes have completed This needs to be benchmarked against existing reindex timings to see if it is actually beneficial -- This message was sent by Atlassian JIRA (v6.3.4#6332)
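The three steps above can be sketched as follows; maps stand in for the local and remote Lucene directories, and all names here are illustrative assumptions rather than the proposed Oak implementation:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the copy-on-write flow: writes land in a fast local map first,
// each file is pushed asynchronously to the (slow) remote store, and close()
// waits until every pending copy has completed.
public class CopyOnWriteSketch implements AutoCloseable {

    private final Map<String, byte[]> local = new ConcurrentHashMap<>();
    private final Map<String, byte[]> remote;
    private final ExecutorService executor = Executors.newFixedThreadPool(2);
    private final List<Future<?>> pending = new CopyOnWriteArrayList<>();

    CopyOnWriteSketch(Map<String, byte[]> remote) {
        this.remote = remote;
    }

    void writeFile(String name, byte[] content) {
        local.put(name, content);                       // step 1: local write
        pending.add(executor.submit(
                () -> remote.put(name, content)));      // step 2: async copy
    }

    @Override
    public void close() {
        try {
            for (Future<?> f : pending) {
                f.get();                                // step 3: wait on close
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        executor.shutdown();
    }

    public static void main(String[] args) {
        Map<String, byte[]> remote = new ConcurrentHashMap<>();
        try (CopyOnWriteSketch dir = new CopyOnWriteSketch(remote)) {
            dir.writeFile("_0.cfs", new byte[] {1, 2, 3});
        }
        System.out.println(remote.containsKey("_0.cfs")); // prints "true"
    }
}
```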
[jira] [Closed] (OAK-1956) Set correct OSGi package export version
[ https://issues.apache.org/jira/browse/OAK-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-1956. - Bulk Close for 1.3.0 Set correct OSGi package export version --- Key: OAK-1956 URL: https://issues.apache.org/jira/browse/OAK-1956 Project: Jackrabbit Oak Issue Type: Task Reporter: Michael Dürig Assignee: Michael Dürig Priority: Critical Labels: modularization, osgi, technical_debt Fix For: 1.3.0 This issue serves as a reminder to set the correct OSGi package export versions before we release 1.2. OAK-1536 added support for the BND baseline feature: the baseline.xml files in the target directories should help us figuring out the correct versions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-1763) OrderedIndex does not comply with JCR's compareTo semantics
[ https://issues.apache.org/jira/browse/OAK-1763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-1763. - Bulk Close for 1.3.0 OrderedIndex does not comply with JCR's compareTo semantics --- Key: OAK-1763 URL: https://issues.apache.org/jira/browse/OAK-1763 Project: Jackrabbit Oak Issue Type: Bug Components: core Reporter: Michael Dürig Fix For: 1.3.0 The ordered index currently uses the lexicographical order of the string representation of the values. This does not comply with [JCR's compareTo semantics | http://www.day.com/specs/jcr/2.0/3_Repository_Model.html#3.6.5.1%20CompareTo%20Semantics] for e.g. double values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
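A two-line illustration of the mismatch described above: lexicographic order of the string representation disagrees with the numeric order required for doubles.

```java
// "9.0" sorts AFTER "10.0" as a string, but 9.0 < 10.0 as a number.
public class OrderSemantics {
    public static void main(String[] args) {
        String a = "9.0", b = "10.0";
        System.out.println(a.compareTo(b) < 0);                 // prints "false"
        System.out.println(Double.compare(Double.parseDouble(a),
                Double.parseDouble(b)) < 0);                    // prints "true"
    }
}
```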
[jira] [Closed] (OAK-2887) Add support for generating mongo export command to oak-mongo
[ https://issues.apache.org/jira/browse/OAK-2887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2887. - Bulk Close for 1.3.0 Add support for generating mongo export command to oak-mongo Key: OAK-2887 URL: https://issues.apache.org/jira/browse/OAK-2887 Project: Jackrabbit Oak Issue Type: Improvement Components: run Reporter: Chetan Mehrotra Assignee: Chetan Mehrotra Priority: Minor Labels: tooling Fix For: 1.3.0 Attachments: ExportDetailedDoc-take2.patch, ExportDetailedDoc.patch, createExportCommand.groovy At times, to analyse an issue with {{DocumentNodeStore}} running on Mongo, we need a dump of various documents so as to recreate the scenario locally. In most cases, if an issue is observed for a specific path like /a/b, it is sufficient to get the Mongo documents for /, /a, /a/b and all the split documents for those paths. It would be useful to have a function in oak-mongo which generates the required export command. For e.g. for a path like /a/b the following export command would dump all required info {noformat} mongoexport -h <mongo server> --port 27017 --db <db name> --collection nodes --out all-required-nodes.json --query '{$or:[{_id : /^4:p\/a\/b\//},{_id : /^3:p\/a\//},{_id : /^2:p\//},{_id:{$in:[2:/a/b,1:/a,0:/]}}]}' {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
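The query generation can be sketched as below (the real helper is the attached createExportCommand.groovy, not reproduced here). It assumes, based only on the example above, that a document's id is {{<depth>:<path>}} and that its split documents live under {{<depth + 2>:p<path>/}}; the quoting of the $in ids is also an assumption:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical generator for the mongoexport --query shown above: for /a/b it
// emits a split-document regex per ancestor plus an $in clause for the
// documents themselves.
public class MongoExportQuery {

    static String queryFor(String path) {
        String[] names = path.equals("/")
                ? new String[0] : path.substring(1).split("/");
        List<String> clauses = new ArrayList<>();
        List<String> ids = new ArrayList<>();
        for (int depth = names.length; depth >= 0; depth--) {
            StringBuilder p = new StringBuilder("/");
            for (int i = 0; i < depth; i++) {
                p.append(names[i]);
                if (i < depth - 1) p.append('/');
            }
            // Split documents: assumed at depth + 2, path prefixed with "p"
            String escaped =
                    (p + (depth == 0 ? "" : "/")).replace("/", "\\/");
            clauses.add("{_id : /^" + (depth + 2) + ":p" + escaped + "/}");
            ids.add("\"" + depth + ":" + p + "\"");
        }
        return "{$or:[" + String.join(",", clauses)
                + ",{_id:{$in:[" + String.join(",", ids) + "]}}]}";
    }

    public static void main(String[] args) {
        System.out.println(queryFor("/a/b"));
    }
}
```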
[jira] [Closed] (OAK-2912) Clear the modified and deleted map in PermissionHook after processing is complete
[ https://issues.apache.org/jira/browse/OAK-2912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2912. - Bulk Close for 1.3.0 Clear the modified and deleted map in PermissionHook after processing is complete - Key: OAK-2912 URL: https://issues.apache.org/jira/browse/OAK-2912 Project: Jackrabbit Oak Issue Type: Improvement Components: security Reporter: Chetan Mehrotra Assignee: angela Priority: Minor Fix For: 1.3.0, 1.2.3, 1.0.15 Attachments: OAK-2912.patch {{PermissionHook}} has in-memory state in its {{modified}} and {{deleted}} maps. In the case of a repository migration, which is implemented as one large commit, this can consume quite a bit of memory; in one migration it was taking ~1 GB. In a commit involving multiple commit hooks, once {{PermissionHook}} has done its work it can clear that state so that memory is not held up until all the hooks are applied. Especially as the IndexingHook takes a long time and also has some memory requirements -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (OAK-2829) Comparing node states for external changes is too slow
[ https://issues.apache.org/jira/browse/OAK-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14585832#comment-14585832 ] Stefan Egli edited comment on OAK-2829 at 6/15/15 12:09 PM: [~chetanm], thx for spotting! That {{getRevisionTimestamp()}} was quite buggy though; it also requires parseLong with a base of 16. I've attached [^OAK-2829-gc-bug.patch] which contains the fix plus a dedicated junit method that would have caught both issues. Can you pls review and apply accordingly? Thx! Comparing node states for external changes is too slow -- Key: OAK-2829 URL: https://issues.apache.org/jira/browse/OAK-2829 Project: Jackrabbit Oak Issue Type: Bug Components: core, mongomk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Blocker Labels: scalability Fix For: 1.3.1, 1.2.3 Attachments: CompareAgainstBaseStateTest.java, OAK-2829-gc-bug.patch, graph-1.png, graph.png Comparing node states for local changes has been improved already with OAK-2669. But in a clustered setup generating events for external changes cannot make use of the introduced cache and is therefore slower. This can result in a growing observation queue, eventually reaching the configured limit. See also OAK-2683. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
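The two bugs discussed above (splitting on '_' instead of '-', and parsing the timestamp without the hex radix) can be illustrated with a small sketch. The revision string format assumed here is Oak's {{r<timestamp-hex>-<counter-hex>-<clusterId>}}; the helper itself is illustrative, not the actual JournalEntry code:

```java
// Illustrative sketch of the corrected parsing. An Oak revision string
// looks like "r<timestamp-hex>-<counter-hex>-<clusterId>", e.g. "r14df98a2b1c-0-1".
public class RevisionTimestamp {

    public static long getRevisionTimestamp(String revision) {
        // Bug 1: splitting on '_' yields a single-element array for such
        // strings, causing the ArrayIndexOutOfBoundsException seen above;
        // the correct separator is '-'.
        String[] parts = revision.split("-");
        // Bug 2: the timestamp portion is hexadecimal, so Long.parseLong
        // needs radix 16 (the default radix 10 would fail or mis-parse).
        return Long.parseLong(parts[0].substring(1), 16);
    }
}
```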
[jira] [Closed] (OAK-2748) Oak Implementation for JCR-3836 and JCR-3837 (getting authorizable by type)
[ https://issues.apache.org/jira/browse/OAK-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2748. - Bulk Close for 1.3.0 Oak Implementation for JCR-3836 and JCR-3837 (getting authorizable by type) --- Key: OAK-2748 URL: https://issues.apache.org/jira/browse/OAK-2748 Project: Jackrabbit Oak Issue Type: Improvement Components: core, jcr Reporter: angela Assignee: angela Fix For: 1.3.0 marker issue for the implementation of JCR-3837 in Oak. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2830) LIRS cache: avoid concurrent loading of the same entry if loading is slow
[ https://issues.apache.org/jira/browse/OAK-2830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2830. - Bulk Close for 1.3.0 LIRS cache: avoid concurrent loading of the same entry if loading is slow - Key: OAK-2830 URL: https://issues.apache.org/jira/browse/OAK-2830 Project: Jackrabbit Oak Issue Type: Improvement Reporter: Thomas Mueller Assignee: Thomas Mueller Fix For: 1.3.0, 1.2.3, 1.0.15 Currently, the LIRS cache waits for at most 100 ms for another thread to load a cache entry. After that, the entry is loaded as well (concurrently). This needs to be avoided. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
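A standard way to make later threads wait for the first loader instead of loading concurrently after a timeout is the memoizing-future idiom. A generic sketch of that pattern (not the actual LIRS cache code, which has its own segment and locking structure):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;
import java.util.function.Function;

// Memoizing-future idiom: the first thread for a key installs a FutureTask
// and runs it; later threads find the task and block on get() until the
// value is ready, so the (slow) loader runs exactly once per key.
public class SingleLoadCache<K, V> {
    private final ConcurrentMap<K, FutureTask<V>> inFlight = new ConcurrentHashMap<>();

    public V get(K key, Function<K, V> loader) {
        FutureTask<V> task = inFlight.get(key);
        if (task == null) {
            FutureTask<V> created = new FutureTask<>(() -> loader.apply(key));
            task = inFlight.putIfAbsent(key, created);
            if (task == null) {
                task = created;
                task.run(); // this thread performs the load
            }
        }
        try {
            return task.get(); // other threads wait here, however slow the load
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}
```

A real cache would additionally evict completed entries from {{inFlight}} (or move the value into the main cache map) to bound memory.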
[jira] [Closed] (OAK-2952) RDBConnectionHandler: log failures on setReadOnly() only once
[ https://issues.apache.org/jira/browse/OAK-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2952. - Bulk Close for 1.3.0 RDBConnectionHandler: log failures on setReadOnly() only once - Key: OAK-2952 URL: https://issues.apache.org/jira/browse/OAK-2952 Project: Jackrabbit Oak Issue Type: Sub-task Components: rdbmk Affects Versions: 1.2.2, 1.3.0, 1.0.14 Reporter: Julian Reschke Assignee: Julian Reschke Fix For: 1.3.0, 1.2.3, 1.0.15 Attachments: simulatebadconnection.diff It appears that WAS wraps Oracle JDBC connection objects and throws upon setReadOnly(): {noformat} java.sql.SQLException: DSRA9010E: 'setReadOnly' is not supported on the WebSphere java.sql.Connection implementation. at com.ibm.ws.rsadapter.spi.InternalOracleDataStoreHelper.setReadOnly(InternalOracleDataStoreHelper.java:369) at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.setReadOnly(WSJdbcConnection.java:3626) at org.apache.jackrabbit.oak.plugins.document.rdb.RDBConnectionHandler.getROConnection(RDBConnectionHandler.java:61) {noformat} ...which of course is a bug in WAS (setReadOnly() is documented as a hint, the implementation is not supposed to throw an exception here); see also http://www-01.ibm.com/support/docview.wss?uid=swg1PM58588 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
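The "log only once" fix described in the summary can be sketched with a simple atomic guard (illustrative, not the actual RDBConnectionHandler code): the first setReadOnly() failure is logged, subsequent ones on the same handler are suppressed.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: log the setReadOnly() failure once per handler instance,
// even when many connections are handed out concurrently.
public class ReadOnlyGuard {
    private final AtomicBoolean logged = new AtomicBoolean(false);
    private int logCount = 0;

    public void onSetReadOnlyFailure(Exception ex) {
        // compareAndSet flips the flag exactly once, even under concurrency
        if (logged.compareAndSet(false, true)) {
            logCount++; // stand-in for LOG.error("setReadOnly failed", ex)
        }
    }

    public int getLogCount() {
        return logCount;
    }
}
```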
[jira] [Closed] (OAK-2923) RDB/DB2: change minimal supported version from 10.5 to 10.1, also log decimal version numbers as well
[ https://issues.apache.org/jira/browse/OAK-2923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2923. - Bulk Close for 1.3.0 RDB/DB2: change minimal supported version from 10.5 to 10.1, also log decimal version numbers as well - Key: OAK-2923 URL: https://issues.apache.org/jira/browse/OAK-2923 Project: Jackrabbit Oak Issue Type: Sub-task Components: rdbmk Affects Versions: 1.2.2, 1.3.0, 1.0.14 Reporter: Julian Reschke Assignee: Julian Reschke Priority: Trivial Fix For: 1.3.0, 1.2.3, 1.0.15 The DB2 support has been tested with 10.5, but feedback says it's working just fine with 10.1, which appears to be a common version to have. Reduce the requirement so people do not get concerned by the INFO level logging about the unexpected DB version. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2867) CommitQueue.done() may fail to remove commit
[ https://issues.apache.org/jira/browse/OAK-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2867. - Bulk Close for 1.3.0 CommitQueue.done() may fail to remove commit Key: OAK-2867 URL: https://issues.apache.org/jira/browse/OAK-2867 Project: Jackrabbit Oak Issue Type: Bug Components: core, mongomk Affects Versions: 1.0, 1.2 Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Minor Fix For: 1.3.0, 1.2.3, 1.0.15 A call to {{CommitQueue.done()}} with the {{isBranch}} flag set to true may fail to remove the commit if {{commit.applyToCache()}} throws a RuntimeException. This issue was originally reported by Stefan Egli. Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2873) Performance problems with many or conditions
[ https://issues.apache.org/jira/browse/OAK-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2873. - Bulk Close for 1.3.0 Performance problems with many or conditions -- Key: OAK-2873 URL: https://issues.apache.org/jira/browse/OAK-2873 Project: Jackrabbit Oak Issue Type: Bug Components: query Reporter: Thomas Mueller Assignee: Thomas Mueller Labels: performance Fix For: 1.3.0, 1.2.3, 1.0.15 XPath queries with many or condition (around 3000) of the following form can result in a performance problem: {noformat} contains(...) or x=1 or x=2 or x=3 ... {noformat} This is somewhat similar to OAK-2738, but not quite the same. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2636) Issues with Maximum node name size and path size
[ https://issues.apache.org/jira/browse/OAK-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2636. - Bulk Close for 1.3.0 Issues with Maximum node name size and path size Key: OAK-2636 URL: https://issues.apache.org/jira/browse/OAK-2636 Project: Jackrabbit Oak Issue Type: Bug Components: doc, mongomk Affects Versions: 1.1.7 Reporter: Will McGauley Fix For: 1.3.0 I ran across the maximum allowed node name and path lengths in Utils.java by getting an exception when attempting to create nodes which violated the policy. I believe this is a backwards compatibility issue and the following should occur: 1) there should be documentation about this. Why are there maximums? Can the default maximums be increased? Is this only applicable to some MKs like Mongo or are they enforced globally? 2) the values themselves should be OSGI properties, not System Properties -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2880) NPE in SegmentWriter.writeMap
[ https://issues.apache.org/jira/browse/OAK-2880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2880. - Bulk Close for 1.3.0 NPE in SegmentWriter.writeMap - Key: OAK-2880 URL: https://issues.apache.org/jira/browse/OAK-2880 Project: Jackrabbit Oak Issue Type: Bug Components: segmentmk Reporter: Michael Dürig Assignee: Michael Dürig Labels: resilience Fix For: 1.3.0 Under some rare conditions, which are not entirely clear yet, {{SegmentWriter.writeMap}} results in an {{NPE}}: {noformat} java.lang.NullPointerException at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:192) at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeRecordId(SegmentWriter.java:366) at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeMapLeaf(SegmentWriter.java:417) at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeMapBucket(SegmentWriter.java:475) at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeMapBucket(SegmentWriter.java:511) at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeMap(SegmentWriter.java:711) {noformat} This happens when the {{base}} passed to {{writeMap(MapRecord base, Map<String, RecordId> changes)}} is not null but doesn't contain some of the keys *removed* through the updates provided in the passed {{changes}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2870) Introduce a SegmentNodeStoreBuilder to help wire a SegmentNodeStore
[ https://issues.apache.org/jira/browse/OAK-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2870. - Bulk Close for 1.3.0 Introduce a SegmentNodeStoreBuilder to help wire a SegmentNodeStore --- Key: OAK-2870 URL: https://issues.apache.org/jira/browse/OAK-2870 Project: Jackrabbit Oak Issue Type: Improvement Components: core, segmentmk Reporter: Alex Parvulescu Assignee: Alex Parvulescu Fix For: 1.3.0 Attachments: OAK-2870.patch Exposing the SegmentNodeStore for tests outside oak-core is quite tricky as you need to access some compaction related methods which are basically private and it doesn't make much sense in making them public. So I'm proposing introducing a builder to help wire in a SegmentNodeStore if needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2627) Optimize equals in AbstractBlob
[ https://issues.apache.org/jira/browse/OAK-2627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2627. - Bulk Close for 1.3.0 Optimize equals in AbstractBlob --- Key: OAK-2627 URL: https://issues.apache.org/jira/browse/OAK-2627 Project: Jackrabbit Oak Issue Type: Improvement Components: core Affects Versions: 1.1.7 Reporter: Julian Sedding Assignee: Chetan Mehrotra Priority: Minor Fix For: 1.3.0, 1.2.3, 1.0.15 Attachments: OAK-2627-chetanm.patch, OAK-2627.patch During some work on the upgrade tool, I discovered that AbstractBlob's {{equals}} method does not leverage reference comparison. While I have not investigated whether this really helps, I suspect it does. This issue is for the consideration of people with a deeper understanding of the system's dynamics. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
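The optimization being suggested is the classic reference-identity short-circuit at the top of {{equals}}: if both operands are the same instance, the (potentially expensive) content comparison can be skipped entirely. A simplified sketch, not the actual AbstractBlob code:

```java
import java.util.Arrays;

// Simplified sketch: short-circuit equals() on reference identity before
// falling back to an expensive content comparison. The real AbstractBlob
// compares streamed binary content; a byte array stands in for it here.
public abstract class Blob {
    @Override
    public boolean equals(Object other) {
        if (this == other) {
            return true; // same instance: no need to read any content
        }
        if (!(other instanceof Blob)) {
            return false;
        }
        // expensive path: compare the actual content
        return Arrays.equals(content(), ((Blob) other).content());
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(content());
    }

    protected abstract byte[] content();
}
```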
[jira] [Closed] (OAK-2955) Extend ACL-level principal validation for configured administrative principals
[ https://issues.apache.org/jira/browse/OAK-2955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2955. - Bulk Close for 1.3.0 Extend ACL-level principal validation for configured administrative principals -- Key: OAK-2955 URL: https://issues.apache.org/jira/browse/OAK-2955 Project: Jackrabbit Oak Issue Type: Improvement Components: core Reporter: angela Assignee: angela Fix For: 1.3.0 In order to provide consistent behavior between permission evaluation and access control management, we should extend the test for 'admin' principals to also include any configured administrative principal names as well as the internally used system principal, as an extension to OAK-2158. As in OAK-2158, the result would depend on the configured ImportBehavior and either ignore those entries, add them with a warning, or fail the operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2832) Test failure: DefaultAnalyzersConfigurationTest
[ https://issues.apache.org/jira/browse/OAK-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2832. - Bulk Close for 1.3.0 Test failure: DefaultAnalyzersConfigurationTest --- Key: OAK-2832 URL: https://issues.apache.org/jira/browse/OAK-2832 Project: Jackrabbit Oak Issue Type: Bug Components: solr Environment: Jenkins, Ubuntu: https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/ Reporter: Michael Dürig Assignee: Marcel Reutegger Labels: CI, jenkins Fix For: 1.3.0, 1.2.3 Attachments: OAK-2832.patch {{org.apache.jackrabbit.oak.plugins.index.solr.configuration.DefaultAnalyzersConfigurationTest.org.apache.jackrabbit.oak.plugins.index.solr.configuration.DefaultAnalyzersConfigurationTest}} fails on Jenkins. See e.g. https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/123/jdk=latest1.7,label=Ubuntu,nsfixtures=SEGMENT_MK,profile=unittesting/console Seen on {{DOCUMENT_RDB}} and {{SEGMENT_MK}} with Java 1.7. and 1.8. {noformat} Tests run: 13, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 23.572 sec FAILURE! org.apache.jackrabbit.oak.plugins.index.solr.configuration.DefaultAnalyzersConfigurationTest Time elapsed: 23.255 sec ERROR! 
com.carrotsearch.randomizedtesting.ThreadLeakError: 21 threads leaked from SUITE scope at org.apache.jackrabbit.oak.plugins.index.solr.configuration.DefaultAnalyzersConfigurationTest: 1) Thread[id=32, name=oak-scheduled-executor-13, state=TIMED_WAITING, group=main] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.poll(ScheduledThreadPoolExecutor.java:1125) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.poll(ScheduledThreadPoolExecutor.java:807) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 2) Thread[id=25, name=oak-scheduled-executor-6, state=TIMED_WAITING, group=main] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.poll(ScheduledThreadPoolExecutor.java:1125) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.poll(ScheduledThreadPoolExecutor.java:807) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) ... {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2917) Allow skipping of the baseline check when tests are skipped
[ https://issues.apache.org/jira/browse/OAK-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2917. - Bulk Close for 1.3.0 Allow skipping of the baseline check when tests are skipped --- Key: OAK-2917 URL: https://issues.apache.org/jira/browse/OAK-2917 Project: Jackrabbit Oak Issue Type: Improvement Reporter: Alex Parvulescu Assignee: Davide Giannella Priority: Minor Fix For: 1.3.0 The baseline check has a _skip_ flag [0], I suggest to add support for it in Oak. So in the situation where one decides to skip tests, the baseline check should be also skipped. Helps with: - faster turnaround of the build when I specifically choose to skip the tests. - when offline, maven doesn't need to fetch any dependencies. [0] http://svn.apache.org/repos/asf/felix/trunk/bundleplugin/doc/site/baseline-mojo.html#skip -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2890) SegmentBlob does not return blobId for contentIdentity
[ https://issues.apache.org/jira/browse/OAK-2890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2890. - Bulk Close for 1.3.0 SegmentBlob does not return blobId for contentIdentity -- Key: OAK-2890 URL: https://issues.apache.org/jira/browse/OAK-2890 Project: Jackrabbit Oak Issue Type: Bug Components: segmentmk Reporter: Chetan Mehrotra Assignee: Chetan Mehrotra Priority: Minor Labels: resilience Fix For: 1.3.0, 1.2.3, 1.0.15 Attachments: OAK-2890.patch {{SegmentBlob}} currently returns recordId for {{contentIdentity}} even when an external DataStore is configured. Given that recordId is not stable it would be better to return the blobId as part of {{contentIdentity}} if external DataStore is configured -- This message was sent by Atlassian JIRA (v6.3.4#6332)
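The proposed behaviour can be reduced to a one-line decision: prefer the stable blobId over the unstable recordId whenever an external DataStore holds the binary. A hypothetical sketch (not the actual SegmentBlob code; names are illustrative):

```java
// Illustrative sketch of the proposed contentIdentity logic. recordId can
// change across segment rewrites/compaction, so it is only a safe identity
// when the binary is inlined in the segment store itself.
public class BlobIdentity {
    public static String contentIdentity(String recordId, String blobId,
                                         boolean externalDataStore) {
        if (externalDataStore && blobId != null) {
            return blobId; // stable across segment rewrites
        }
        return recordId; // inlined binary: recordId is all we have
    }
}
```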
[jira] [Closed] (OAK-2949) RDBDocumentStore: no custom SQL needed for GREATEST
[ https://issues.apache.org/jira/browse/OAK-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2949. - Bulk Close for 1.3.0 RDBDocumentStore: no custom SQL needed for GREATEST --- Key: OAK-2949 URL: https://issues.apache.org/jira/browse/OAK-2949 Project: Jackrabbit Oak Issue Type: Sub-task Components: rdbmk Affects Versions: 1.2.2, 1.0.14, 1.3 Reporter: Julian Reschke Assignee: Julian Reschke Priority: Minor Fix For: 1.3.0, 1.2.3, 1.0.15 We currently use GREATEST to set MODIFIED to max(old, new). As this isn't supported by SQLServer, the code supports custom query strings. Turns out we don't need this, because we can replace it with a CASE statement (see http://stackoverflow.com/questions/30530970/equivalent-of-sql-greatest-function-for-apache-derby) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
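The replacement described above swaps {{SET MODIFIED = GREATEST(MODIFIED, ?)}} for an equivalent CASE expression that every supported database understands. A sketch of what the generated fragment might look like (illustrative; the actual RDBDocumentStore SQL may differ in detail):

```java
// Sketch: portable alternative to GREATEST for "set MODIFIED to
// max(old, new)". The CASE form keeps the larger of the stored value
// and the bind parameter, matching GREATEST's semantics for non-null inputs.
public class SqlMax {
    public static String setModifiedClause() {
        return "MODIFIED = CASE WHEN MODIFIED >= ? THEN MODIFIED ELSE ? END";
    }

    // The semantics the clause implements, for reference:
    public static long greatest(long current, long candidate) {
        return current >= candidate ? current : candidate;
    }
}
```

Note the bind parameter appears twice in the CASE form, so the same value must be set for both placeholders.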
[jira] [Closed] (OAK-104) HTTP bindings for Oak
[ https://issues.apache.org/jira/browse/OAK-104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-104. Bulk Close for 1.3.0 HTTP bindings for Oak - Key: OAK-104 URL: https://issues.apache.org/jira/browse/OAK-104 Project: Jackrabbit Oak Issue Type: New Feature Components: remoting Reporter: Jukka Zitting Fix For: 1.3.0 For easy integration with client-side JavaScript (see OAK-103) and other remote or non-Java clients Oak should come with a simple HTTP binding that avoids the extra complexity and overhead (and thus lacks the related extra functionality) of our existing JCR and WebDAV bindings. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2822) Release merge lock in retry loop
[ https://issues.apache.org/jira/browse/OAK-2822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2822. - Bulk Close for 1.3.0 Release merge lock in retry loop Key: OAK-2822 URL: https://issues.apache.org/jira/browse/OAK-2822 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Labels: concurrency Fix For: 1.3.0, 1.2.3, 1.0.15 The DocumentNodeStoreBranch retries merges in two phases. First it retries merges while holding the merge lock non-exclusive and performing sleeps between attempts. If those retries fail the next phase will acquire the merge lock exclusively and perform retries. In the first phase the merge lock is released when the commit goes to sleep, while in the second it is not and may block other commits while sleeping. DocumentNodeStoreBranch should be changed to release the exclusive lock when the commit goes to sleep. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
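The shape of the fix is a retry loop that always drops the exclusive lock before backing off, so other commits can proceed during the sleep. A generic sketch (hypothetical names, not the actual DocumentNodeStoreBranch code):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the proposed exclusive-retry phase: the write lock is released
// in a finally block before the backoff sleep, instead of being held while
// sleeping (which blocked every other commit).
public class MergeRetry {
    private final ReentrantReadWriteLock mergeLock = new ReentrantReadWriteLock();

    public boolean mergeExclusive(int maxRetries, long backoffMillis) {
        for (int i = 0; i < maxRetries; i++) {
            mergeLock.writeLock().lock();
            try {
                if (tryCommit()) {
                    return true;
                }
            } finally {
                // release BEFORE sleeping so other commits are not blocked
                mergeLock.writeLock().unlock();
            }
            try {
                Thread.sleep(backoffMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    protected boolean tryCommit() {
        return true; // stand-in for the real commit attempt
    }
}
```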
[jira] [Closed] (OAK-2690) Add optional UserConfiguration#getUserPrincipalProvider()
[ https://issues.apache.org/jira/browse/OAK-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2690. - Bulk Close for 1.3.0 Add optional UserConfiguration#getUserPrincipalProvider() - Key: OAK-2690 URL: https://issues.apache.org/jira/browse/OAK-2690 Project: Jackrabbit Oak Issue Type: Improvement Components: core Reporter: angela Assignee: angela Fix For: 1.3.0 Attachments: OAK-2690.patch, getgroupmembership.txt, loginmembership_compare_userprincipalprovider.txt While playing around with overall group principal resolution during the repository login, I thought that having a principal provider that knows about the details of the user management implementation might be a slight improvement compared to the generic default implementation as present in {{org.apache.jackrabbit.oak.security.principal.PrincipalProviderImpl}}, which just acts on the {{UserManager}} interface and thus always creates intermediate {{Authorizable}} objects. In order to be able to get there (without having the default principal mgt implementation rely on implementation details of the user mgt module), we would need an addition to the {{UserConfiguration}} that allows optionally obtaining a {{PrincipalProvider}}; the fallback in the default {{PrincipalConfiguration}}, in case the user configuration does not expose a specific principal provider, would be the current (generic) solution. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2701) Move oak-mk-api to attic
[ https://issues.apache.org/jira/browse/OAK-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2701. - Bulk Close for 1.3.0 Move oak-mk-api to attic Key: OAK-2701 URL: https://issues.apache.org/jira/browse/OAK-2701 Project: Jackrabbit Oak Issue Type: Sub-task Components: mk Reporter: angela Fix For: 1.3.0 Moved oak-mk-api module to the attic folder at revision 1673133. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2765) Fix high priority FindBugs reports for oak-solr
[ https://issues.apache.org/jira/browse/OAK-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2765. - Bulk Close for 1.3.0 Fix high priority FindBugs reports for oak-solr --- Key: OAK-2765 URL: https://issues.apache.org/jira/browse/OAK-2765 Project: Jackrabbit Oak Issue Type: Bug Components: solr Reporter: Tommaso Teofili Assignee: Tommaso Teofili Fix For: 1.3.0 As reported by [Jenkins|https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/101/findbugsResult/module.2079395158/] FindBugs issues of high priority should be fixed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2802) avoid NodeTypeDefDiff code duplication
[ https://issues.apache.org/jira/browse/OAK-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2802. - Bulk Close for 1.3.0 avoid NodeTypeDefDiff code duplication -- Key: OAK-2802 URL: https://issues.apache.org/jira/browse/OAK-2802 Project: Jackrabbit Oak Issue Type: Task Components: core Affects Versions: 1.4 Reporter: Julian Reschke Assignee: Julian Reschke Priority: Minor Labels: technical_debt Fix For: 1.3.0 org.apache.jackrabbit.oak.plugins.nodetype.NodeTypeDefDiff copies a huge amount of code from org.apache.jackrabbit.spi.commons.nodetype.NodeTypeDefDiff (the latter working on QNodeTypeDefinitions, not NodeTypeDefinitions) Figure out how to avoid the code duplication. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2930) RDBBlob/DocumentStore throws NPE when used after being closed
[ https://issues.apache.org/jira/browse/OAK-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2930. - Bulk Close for 1.3.0 RDBBlob/DocumentStore throws NPE when used after being closed - Key: OAK-2930 URL: https://issues.apache.org/jira/browse/OAK-2930 Project: Jackrabbit Oak Issue Type: Sub-task Components: rdbmk Affects Versions: 1.2.2, 1.0.14, 1.3 Reporter: Julian Reschke Assignee: Julian Reschke Priority: Minor Fix For: 1.3.0, 1.2.3, 1.0.15 Avoid the NPE, throw a more meaningful exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2908) infrastructure for running longevity tests
[ https://issues.apache.org/jira/browse/OAK-2908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2908. - Bulk Close for 1.3.0 infrastructure for running longevity tests -- Key: OAK-2908 URL: https://issues.apache.org/jira/browse/OAK-2908 Project: Jackrabbit Oak Issue Type: Improvement Reporter: Davide Giannella Assignee: Davide Giannella Fix For: 1.3.0 Attachments: OAK-2908-annotation.diff Longevity tests could last days and should not run by default. Set up an infrastructure for keeping them in place but not running. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2925) Reorganise scalability classes
[ https://issues.apache.org/jira/browse/OAK-2925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2925. - Bulk Close for 1.3.0 Reorganise scalability classes -- Key: OAK-2925 URL: https://issues.apache.org/jira/browse/OAK-2925 Project: Jackrabbit Oak Issue Type: Task Reporter: Davide Giannella Assignee: Davide Giannella Priority: Minor Fix For: 1.3.0 In {{oak-run}} the o.a.j.o.scalability package is becoming big. Split the NodeSuites and Benchmarks into two different sub-packages: {{o.a.j.o.scalability.suites}} and {{o.a.j.o.scalability.benchmarks}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2641) FilterImpl violates nullability contract
[ https://issues.apache.org/jira/browse/OAK-2641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2641. - Bulk Close for 1.3.0 FilterImpl violates nullability contract - Key: OAK-2641 URL: https://issues.apache.org/jira/browse/OAK-2641 Project: Jackrabbit Oak Issue Type: Bug Components: core Reporter: Michael Dürig Assignee: angela Labels: technical_debt Fix For: 1.3.0 Attachments: OAK-2641.patch, OAK-2641_2.patch {{FilterImpl#getSupertypes}}, {{FilterImpl#getPrimaryTypes}} and {{FilterImpl#getMixinTypes}} might all return {{null}} although {{Filter}}'s contract mandates \@Nonull. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2862) CompactionMap#compress() inefficient for large compaction maps
[ https://issues.apache.org/jira/browse/OAK-2862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2862. - Bulk Close for 1.3.0 CompactionMap#compress() inefficient for large compaction maps -- Key: OAK-2862 URL: https://issues.apache.org/jira/browse/OAK-2862 Project: Jackrabbit Oak Issue Type: Sub-task Components: segmentmk Reporter: Michael Dürig Assignee: Michael Dürig Labels: compaction, doc-impacting, gc Fix For: 1.3.0 Attachments: OAK-2862-memory.png, OAK-2862.png, benchLargeMap.xlsx I've seen {{CompactionMap#compress()}} take up most of the time spent in compaction. With 40M record ids in the compaction map compressing runs for hours. I will back this with numbers as soon as I have a better grip on the issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2946) Sampling rate feature CompactionGainEstimate is not efficient
[ https://issues.apache.org/jira/browse/OAK-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2946. - Bulk Close for 1.3.0 Sampling rate feature CompactionGainEstimate is not efficient - Key: OAK-2946 URL: https://issues.apache.org/jira/browse/OAK-2946 Project: Jackrabbit Oak Issue Type: Sub-task Components: segmentmk Reporter: Michael Dürig Assignee: Michael Dürig Labels: compaction, gc Fix For: 1.3.0, 1.0.15 The sampling rate feature introduced with OAK-2595 is not efficient. It only prevents uuids from being stored in the bloom filter while the visited set is not affected and thus keeps growing. I will remove the feature again for now. We should look for a better solution once this becomes a problem. Will follow up on OAK-2939 re. this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2685) Track root state revision when reading the tree
[ https://issues.apache.org/jira/browse/OAK-2685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2685. - Bulk Close for 1.3.0 Track root state revision when reading the tree --- Key: OAK-2685 URL: https://issues.apache.org/jira/browse/OAK-2685 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Labels: performance Fix For: 1.3.0, 1.2.3 Attachments: OAK-2685.patch Currently the DocumentNodeState has two revisions: - {{getRevision()}} returns the read revision of this node state. This revision was used to read the node state from the underlying {{NodeDocument}}. - {{getLastRevision()}} returns the revision when this node state was last modified. This revision also reflects changes done further below the tree when the node state was not directly affected by a change. The lastRevision of a state is then used as the read revision of the child node states. This avoids reading the entire tree again with a different revision after the head revision changed because of a commit. This approach has at least two problems related to comparing node states: - It does not work well with the current DiffCache implementation and affects the hit rate of this cache. The DiffCache is pro-actively populated after a commit. The key for a diff is a combination of previous and current commit revision and the path. The value then tells what child nodes were added/removed/changed. As the comparison of node states proceeds and traverses the tree, the revision of a state may go back in time because the lastRevision is used as the read revision of the child nodes. This will cause misses in the diff cache, because the revisions do not match the previous and current commit revisions as used to create the cache entries. 
OAK-2562 tried to address this by keeping the read revision for child nodes at the read revision of the parent in calls of compareAgainstBaseState() when there is a diff cache hit. However, it turns out node state comparison does not always start at the root state. The {{EventQueue}} implementation in oak-jcr will start at the paths as indicated by the filter of the listener. This means, OAK-2562 is not effective in this case and the diff needs to be calculated again based on a set of revisions, which is different from the original commit. - When a diff is calculated for a parent with many child nodes, the {{DocumentNodeStore}} will perform a query on the underlying {{DocumentStore}} to get child nodes modified after a given timestamp. This timestamp is derived from the lower revision of the two lastRevisions of the parent node states to compare. The query gets problematic for the {{DocumentStore}} if the timestamp is too far in the past. This will happen when the parent node (and sub-tree) was not modified for some time. E.g. the {{MongoDocumentStore}} has an index on the _id and the _modified field. But if there are many child nodes the _id index will not be that helpful and if the timestamp is too far in the past, the _modified index is not selective either. This problem was already reported in OAK-1970 and linked issues. Both of the above problems could be addressed by keeping track of the read revision of the root node state in each of the node states as the tree is traversed. The revision of the root state would then be used e.g. to derive the timestamp for the _modified constraint in the query. Because the revision of the root state is rather recent, the _modified constraint is very selective and the index on it would be the preferred choice. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2790) Remove ignored tests related to MicroKernel
[ https://issues.apache.org/jira/browse/OAK-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2790. - Bulk Close for 1.3.0 Remove ignored tests related to MicroKernel --- Key: OAK-2790 URL: https://issues.apache.org/jira/browse/OAK-2790 Project: Jackrabbit Oak Issue Type: Test Components: core, mongomk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Minor Fix For: 1.3.0 There are a number of ignored tests, which are related to the MicroKernel API. These tests can be deleted, since the MicroKernel has been removed from trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2899) Update to Jackrabbit 2.10.1
[ https://issues.apache.org/jira/browse/OAK-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2899. - Bulk Close for 1.3.0 Update to Jackrabbit 2.10.1 --- Key: OAK-2899 URL: https://issues.apache.org/jira/browse/OAK-2899 Project: Jackrabbit Oak Issue Type: Improvement Components: parent Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Minor Labels: technical_debt Fix For: 1.3.0 OAK-2748 introduced a snapshot dependency to Jackrabbit 2.10.1-SNAPSHOT. Now that 2.10.1 is released, the snapshot dependency can be removed again. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2850) Flag states from revision of an external change
[ https://issues.apache.org/jira/browse/OAK-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2850. - Bulk Close for 1.3.0 Flag states from revision of an external change --- Key: OAK-2850 URL: https://issues.apache.org/jira/browse/OAK-2850 Project: Jackrabbit Oak Issue Type: Sub-task Components: core, mongomk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Fix For: 1.3.0 OAK-2685 introduced a root revision on the DocumentNodeState. This is the revision of the root node state from where the tree traversal started. For OAK-2829 we also need the information about whether the root revision was created for an external change or a local commit. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2938) Estimation of required memory for compaction is off
[ https://issues.apache.org/jira/browse/OAK-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2938. - Bulk Close for 1.3.0 Estimation of required memory for compaction is off --- Key: OAK-2938 URL: https://issues.apache.org/jira/browse/OAK-2938 Project: Jackrabbit Oak Issue Type: Sub-task Components: segmentmk Reporter: Michael Dürig Assignee: Michael Dürig Labels: compaction, gc Fix For: 1.3.0, 1.2.3, 1.0.15 Currently compaction will be skipped if some rough estimation determines that there is not enough memory to run. That estimation however assumes that each compaction cycle requires as much space as the compaction map already takes up. This is too conservative. Instead the amount of memory taken up by the last compaction cycle should be a better estimate. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2749) Provide a different lane for slow indexers in async indexing
[ https://issues.apache.org/jira/browse/OAK-2749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2749. - Bulk Close for 1.3.0 Provide a different lane for slow indexers in async indexing -- Key: OAK-2749 URL: https://issues.apache.org/jira/browse/OAK-2749 Project: Jackrabbit Oak Issue Type: Improvement Components: core Reporter: Davide Giannella Assignee: Alex Parvulescu Labels: docs-impacting Fix For: 1.3.0 Attachments: OAK-2749-rc1.diff, OAK-2749-rc2.diff, OAK-2749-v3.diff, OAK-2749-v4.diff, OAK-2749-v5.diff In the case of big repositories, asynchronous indexes like the Lucene property index can lag behind, as slow indexes, for example full text, are taken care of in the same thread pool. Provide a separate thread pool in which such indexes can be registered. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2900) Trying to remove a non existing element from a map might cause NPE
[ https://issues.apache.org/jira/browse/OAK-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2900. - Bulk Close for 1.3.0 Trying to remove a non existing element from a map might cause NPE -- Key: OAK-2900 URL: https://issues.apache.org/jira/browse/OAK-2900 Project: Jackrabbit Oak Issue Type: Bug Components: segmentmk Reporter: Michael Dürig Fix For: 1.3.0 Calling {{SegmentWriter.writeMap(base, changes)}} with {{changes}} containing mappings to {{null}} (meaning to remove the respective key) can result in an {{NPE}} if {{base}} doesn't contain that key. I came across this while working on the {{PersistedCompactionMap}} in OAK-2862. I had to add an [extra check | https://github.com/mduerig/jackrabbit-oak/blob/OAK-2862/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/segment/PersistedCompactionMap.java#L188] for the above case, as otherwise I'd occasionally hit said {{NPE}}. I still need to extract a proper test case. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
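The guard described in OAK-2900 can be illustrated with a toy merge over plain maps (names are hypothetical; the real fix lives in the segment-level map code): when a change map uses {{null}} to mean "remove this key", a removal of a key absent from the base must be skipped rather than resolved.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of applying a change map to a base map where a mapping to null
// means "remove this key". The guard mirrors the extra check mentioned
// above: removing a key that base never contained must be skipped, since
// looking up its backing record would yield null and could NPE.
class MapMergeSketch {

    static Map<String, String> apply(Map<String, String> base, Map<String, String> changes) {
        Map<String, String> result = new HashMap<>(base);
        for (Map.Entry<String, String> e : changes.entrySet()) {
            if (e.getValue() == null) {
                String baseValue = base.get(e.getKey());
                if (baseValue != null) {       // guard against the NPE case
                    result.remove(e.getKey());
                }
            } else {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> base = new HashMap<>();
        base.put("a", "1");
        Map<String, String> changes = new HashMap<>();
        changes.put("a", null);        // legitimate removal
        changes.put("ghost", null);    // removal of a key base never had
        System.out.println(apply(base, changes)); // {}
    }
}
```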
[jira] [Closed] (OAK-2840) Login Benchmark Test broken due to OAK-2128
[ https://issues.apache.org/jira/browse/OAK-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2840. - Bulk Close for 1.3.0 Login Benchmark Test broken due to OAK-2128 --- Key: OAK-2840 URL: https://issues.apache.org/jira/browse/OAK-2840 Project: Jackrabbit Oak Issue Type: Bug Components: run Reporter: angela Assignee: Amit Jain Fix For: 1.3.0 Attachments: OAK-2840.patch As mentioned ages ago in an Oak call, the login related benchmark tests are broken when specifying a value other than -1 for the iterations. This is easily reproducible when setting {code} HASH_ITERATIONS=1 {code} in the concurrent-login-test script. It seems to me that this was broken when addressing OAK-2128. It took me some time finding a fix (see attached patch). [~amitjain], since you committed the changes for OAK-2128, may I kindly ask you to verify the proposed patch (will follow) and commit the fix if that doesn't break the intention of OAK-2128? thanks in advance. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2950) RDBDocumentStore: conditional fetch logic is reversed
[ https://issues.apache.org/jira/browse/OAK-2950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2950. - Bulk Close for 1.3.0 RDBDocumentStore: conditional fetch logic is reversed - Key: OAK-2950 URL: https://issues.apache.org/jira/browse/OAK-2950 Project: Jackrabbit Oak Issue Type: Sub-task Components: rdbmk Affects Versions: 1.2.2, 1.0.14, 1.3 Reporter: Julian Reschke Assignee: Julian Reschke Fix For: 1.3.0, 1.2.3, 1.0.15 In RDBDocumentStore.dbRead(), the logic that tries to decide whether to do a conditional fetch is reversed. This does not affect correctness, but defeats the attempted optimization. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2866) Switch Thread context classloader for default config parsing also
[ https://issues.apache.org/jira/browse/OAK-2866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2866. - Bulk Close for 1.3.0 Switch Thread context classloader for default config parsing also - Key: OAK-2866 URL: https://issues.apache.org/jira/browse/OAK-2866 Project: Jackrabbit Oak Issue Type: Improvement Components: lucene Reporter: Chetan Mehrotra Assignee: Chetan Mehrotra Priority: Minor Fix For: 1.3.0, 1.0.15 As part of OAK-2782, the thread's context classloader was switched to get it working in an OSGi env. The same needs to be done for the default config as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2677) Set ProviderType and ConsumerType annotation on exported items
[ https://issues.apache.org/jira/browse/OAK-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2677. - Bulk Close for 1.3.0 Set ProviderType and ConsumerType annotation on exported items -- Key: OAK-2677 URL: https://issues.apache.org/jira/browse/OAK-2677 Project: Jackrabbit Oak Issue Type: Task Reporter: Michael Dürig Priority: Critical Labels: modularization Fix For: 1.3.0 For Oak 1.0 and Oak 1.2 we just bumped the major version of the exported packages before the release. If we want to be more precise in the future we need to add the consumer and producer type annotations. See https://github.com/osgi/design/raw/master/rfcs/rfc0197/rfc-0197-OSGiPackageTypeAnnotations.pdf -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2889) Ignore order by jcr:score desc in the query engine (for union queries)
[ https://issues.apache.org/jira/browse/OAK-2889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2889. - Bulk Close for 1.3.0 Ignore order by jcr:score desc in the query engine (for union queries) -- Key: OAK-2889 URL: https://issues.apache.org/jira/browse/OAK-2889 Project: Jackrabbit Oak Issue Type: Improvement Components: lucene Reporter: Thomas Mueller Assignee: Thomas Mueller Labels: performance Fix For: 1.3.0, 1.2.3, 1.0.15 Currently, order by jcr:score desc is ignored in the Lucene index, however for union queries, this sort order is enforced in the query engine. This will cause queries to be slow if one of the sub-queries is slow. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2713) High memory usage of CompactionMap
[ https://issues.apache.org/jira/browse/OAK-2713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2713. - Bulk Close for 1.3.0 High memory usage of CompactionMap -- Key: OAK-2713 URL: https://issues.apache.org/jira/browse/OAK-2713 Project: Jackrabbit Oak Issue Type: Sub-task Components: segmentmk Reporter: Michael Dürig Assignee: Michael Dürig Labels: compaction, gc, resilience Fix For: 1.3.0 In environments with a lot of volatile content the {{CompactionMap}} can end up eating a lot of memory. From {{CompactionStrategyMBean#getCompactionMapStats}}: {noformat} [Estimated Weight: 317,5 MB, Records: 39500094, Segments: 36698], [Estimated Weight: 316,4 MB, Records: 39374593, Segments: 36660], [Estimated Weight: 315,4 MB, Records: 39253205, Segments: 36620], [Estimated Weight: 315,1 MB, Records: 39221882, Segments: 36614], [Estimated Weight: 314,9 MB, Records: 39195490, Segments: 36604], [Estimated Weight: 315,0 MB, Records: 39182753, Segments: 36602], [Estimated Weight: 360 B, Records: 0, Segments: 0], {noformat} This causes compaction to be skipped: {noformat} 2015-03-30:30.03.2015 02:00:00.038 *INFO* [] [TarMK compaction thread [/foo/bar/crx-quickstart/repository/segmentstore], active since Mon Mar 30 02:00:00 CEST 2015, previous max duration 3854982ms] org.apache.jackrabbit.oak.plugins.segment.file.FileStore Not enough available memory 5,5 GB, needed 6,3 GB, last merge delta 1,3 GB, so skipping compaction for now {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2886) Exclude image/tiff from text extraction
[ https://issues.apache.org/jira/browse/OAK-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2886. - Bulk Close for 1.3.0 Exclude image/tiff from text extraction Key: OAK-2886 URL: https://issues.apache.org/jira/browse/OAK-2886 Project: Jackrabbit Oak Issue Type: Improvement Components: lucene Reporter: Chetan Mehrotra Assignee: Chetan Mehrotra Priority: Minor Fix For: 1.3.0, 1.2.3, 1.0.15 Default tika-config [1] has entries for various images as part of exclude list. We should add {{image/tiff}} to that list [1] https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/resources/org/apache/jackrabbit/oak/plugins/index/lucene/tika-config.xml -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2784) Remove Nullable annotation in Predicates of BackgroundObserver
[ https://issues.apache.org/jira/browse/OAK-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2784. - Bulk Close for 1.3.0 Remove Nullable annotation in Predicates of BackgroundObserver -- Key: OAK-2784 URL: https://issues.apache.org/jira/browse/OAK-2784 Project: Jackrabbit Oak Issue Type: Bug Components: core Reporter: angela Assignee: Chetan Mehrotra Labels: technical_debt Fix For: 1.3.0 Attachments: OAK-2784.patch
{code}
@Override
public int getLocalEventCount() {
    return size(filter(queue, new Predicate<ContentChange>() {
        @Override
        public boolean apply(@Nullable ContentChange input) {
            return input.info != null;
        }
    }));
}

@Override
public int getExternalEventCount() {
    return size(filter(queue, new Predicate<ContentChange>() {
        @Override
        public boolean apply(@Nullable ContentChange input) {
            return input.info == null;
        }
    }));
}
{code}
Both methods should probably check for {{input}} being null before accessing {{input.info}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
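A null-safe version of the counters above can be sketched as follows. This uses {{java.util.function.Predicate}} and streams instead of the Guava {{Predicate}}/{{filter}} in the original snippet, and {{ContentChange}} here is a minimal stand-in for the BackgroundObserver type, so treat it as an illustration of the guard rather than the actual patch.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

// Sketch: counting local vs. external changes with null-safe predicates.
// A null element in the queue must not cause an NPE in either counter.
class EventCountSketch {

    static final class ContentChange {
        final Object info; // non-null for local commits, null for external ones
        ContentChange(Object info) { this.info = info; }
    }

    static long count(List<ContentChange> queue, Predicate<ContentChange> p) {
        return queue.stream().filter(p).count();
    }

    // null-safe predicates: check the element itself before touching info
    static final Predicate<ContentChange> LOCAL = c -> c != null && c.info != null;
    static final Predicate<ContentChange> EXTERNAL = c -> c != null && c.info == null;

    public static void main(String[] args) {
        List<ContentChange> queue =
                Arrays.asList(new ContentChange("commit"), new ContentChange(null), null);
        System.out.println(count(queue, LOCAL));    // 1
        System.out.println(count(queue, EXTERNAL)); // 1
    }
}
```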
[jira] [Closed] (OAK-2851) Missing test dependency to jackrabbit-data tests artifact
[ https://issues.apache.org/jira/browse/OAK-2851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2851. - Bulk Close for 1.3.0 Missing test dependency to jackrabbit-data tests artifact - Key: OAK-2851 URL: https://issues.apache.org/jira/browse/OAK-2851 Project: Jackrabbit Oak Issue Type: Test Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Minor Fix For: 1.3.0 RepositoryTest in oak-jcr uses classes from the jackrabbit-data tests artifact, but does not declare the dependency in the pom. Current trunk is now broken because of the snapshot dependency to Jackrabbit and the recent fix done for JCR-3876. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2827) [oak-blob-cloud] Test Failures: Add joda-time dependency explicitly with definite version range
[ https://issues.apache.org/jira/browse/OAK-2827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2827. - Bulk Close for 1.3.0 [oak-blob-cloud] Test Failures: Add joda-time dependency explicitly with definite version range --- Key: OAK-2827 URL: https://issues.apache.org/jira/browse/OAK-2827 Project: Jackrabbit Oak Issue Type: Bug Components: blob Reporter: Amit Jain Assignee: Amit Jain Labels: CI, Jenkins Fix For: 1.3.0 AWS sdk jar - com.amazonaws:aws-java-sdk-core has an open range dependency on joda-time [2.2,) which causes the build to fail. {noformat} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-remote-resources-plugin:1.4:process (default) on project oak-blob-cloud: Failed to resolve dependencies for one or more projects in the reactor. Reason: No versions are present in the repository for the artifact with a range [2.2,) [ERROR] joda-time:joda-time:jar:null [ERROR] [ERROR] from the specified remote repositories: [ERROR] Nexus (http://repository.apache.org/snapshots, releases=false, snapshots=true), [ERROR] central (http://repo.maven.apache.org/maven2, releases=true, snapshots=false) [ERROR] Path to dependency: [ERROR] 1) org.apache.jackrabbit:oak-blob-cloud:bundle:1.4-SNAPSHOT [ERROR] 2) com.amazonaws:aws-java-sdk:jar:1.9.11 [ERROR] 3) com.amazonaws:aws-java-sdk-support:jar:1.9.11 [ERROR] 4) com.amazonaws:aws-java-sdk-core:jar:1.9.11 [ERROR] - [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. 
[ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException [ERROR] [ERROR] After correcting the problems, you can resume the build with the command [ERROR] mvn goals -rf :oak-blob-cloud Build step 'Invoke top-level Maven targets' marked build as failure [FINDBUGS] Skipping publisher since build result is FAILURE Recording test results {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2700) Cleanup usages of mk-api
[ https://issues.apache.org/jira/browse/OAK-2700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2700. - Bulk Close for 1.3.0 Cleanup usages of mk-api Key: OAK-2700 URL: https://issues.apache.org/jira/browse/OAK-2700 Project: Jackrabbit Oak Issue Type: Sub-task Components: core, it Reporter: angela Fix For: 1.3.0 The following usages of the MicroKernel API need a cleanup:
- MicroKernelInputStream in oak-commons - move to attic
- static method MicroKernelInputStream#readFully in DocumentMK tests - replace by a test-local static method
- Wrappers for Log and Timer purposes - drop, as there exist implementations for NodeStore as well and the MK-implementations are not used
- DocumentMK: This class is only used for {{DocumentStore}} testing. Based on discussions with [~mreutegg] and [~mduerig] we decided to let the tests run directly against the DocumentMK and rewrite them later on to use the {{DocumentStore}} API.
- Javadoc: There is some outdated javadoc (e.g. in ContentSessionImpl and ContentRepositoryImpl) that refers to the MK although the implementation is in fact based on a {{NodeStore}}.
- Documentation: There are some references to the {{MicroKernel}} in the documentation which should be slightly adjusted to reflect the fact that it is no longer used.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2901) RDBBlobStoreTest should be able to run against multiple DB types
[ https://issues.apache.org/jira/browse/OAK-2901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2901. - Bulk Close for 1.3.0 RDBBlobStoreTest should be able to run against multiple DB types Key: OAK-2901 URL: https://issues.apache.org/jira/browse/OAK-2901 Project: Jackrabbit Oak Issue Type: Sub-task Components: blob, rdbmk Affects Versions: 1.2.2, 1.0.14, 1.4 Reporter: Julian Reschke Assignee: Julian Reschke Priority: Minor Fix For: 1.3.0, 1.2.3, 1.0.15 Attachments: OAK-2901.diff -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2663) Unique property index can trigger OOM during upgrade of large repository
[ https://issues.apache.org/jira/browse/OAK-2663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2663. - Bulk Close for 1.3.0 Unique property index can trigger OOM during upgrade of large repository Key: OAK-2663 URL: https://issues.apache.org/jira/browse/OAK-2663 Project: Jackrabbit Oak Issue Type: Bug Components: upgrade Reporter: Chetan Mehrotra Assignee: Thomas Mueller Labels: performance, resilience Fix For: 1.3.0, 1.2.3, 1.0.15 Attachments: OAK-2663.patch {{PropertyIndexEditor}}, when configured for a unique index, maintains an in-memory state of the indexed property in {{keysToCheckForUniqueness}}. This set would accumulate all the unique values being indexed. In the case of an upgrade, where the complete upgrade is performed in a single commit, this state can become very large. Furthermore, when the editor exits, it validates that all such values are actually unique by iterating over them. We should look into other possible ways to enforce the uniqueness constraint. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2933) AccessDenied when modifying transiently moved item with too many ACEs
[ https://issues.apache.org/jira/browse/OAK-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2933. - Bulk Close for 1.3.0 AccessDenied when modifying transiently moved item with too many ACEs - Key: OAK-2933 URL: https://issues.apache.org/jira/browse/OAK-2933 Project: Jackrabbit Oak Issue Type: Bug Components: core Affects Versions: 1.0.13 Reporter: Tobias Bocanegra Assignee: angela Fix For: 1.3.0, 1.2.3, 1.0.15 Attachments: OAK-2933.patch, OAK-2933_test.patch If at least the following preconditions are fulfilled, saving a moved item fails with access denied:
1. there are more PermissionEntries in the PermissionEntryCache than the configured EagerCacheSize
2. a node is moved to a location where the user has write access through a group membership
3. a property is added to the transiently moved item
For example:
1. set the *eagerCacheSize* to '0'
2. create a new group *testgroup* and user *testuser*
3. make *testuser* a member of *testgroup*
4. create nodes {{/testroot/a}}, {{/testroot/a/b}} and {{/testroot/a/c}}
5. allow *testgroup* {{rep:write}} on {{/testroot/a}}
6. as *testuser* create {{/testroot/a/b/item}} (to verify that the user has write access)
7. as *testuser* move {{/testroot/a/b/item}} to {{/testroot/a/c/item}}
8. {{save()}} - works
9. as *testuser* move {{/testroot/a/c/item}} back to {{/testroot/a/b/item}} AND add a new property to the transient {{/testroot/a/b/item}}
10. {{save()}} - access denied
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2678) Update base version for checking proper package export versions
[ https://issues.apache.org/jira/browse/OAK-2678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2678. - Bulk Close for 1.3.0 Update base version for checking proper package export versions --- Key: OAK-2678 URL: https://issues.apache.org/jira/browse/OAK-2678 Project: Jackrabbit Oak Issue Type: Task Reporter: Michael Dürig Priority: Critical Labels: modularization Fix For: 1.3.0 Once we released 1.2 we need to update the base version for checking proper package export versions. We also need to start including the modules that are released the first time in 1.2. Those are currently skipped. At the same time we should decide on the granularity for updating package export versions and adapt our release process and numbering scheme accordingly. See also [my comment on OAK-2006 | https://issues.apache.org/jira/browse/OAK-2006?focusedCommentId=14376702page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376702] for why this is necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2918) RDBConnectionHandler: handle failure on setReadOnly() gracefully
[ https://issues.apache.org/jira/browse/OAK-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2918. - Bulk Close for 1.3.0 RDBConnectionHandler: handle failure on setReadOnly() gracefully Key: OAK-2918 URL: https://issues.apache.org/jira/browse/OAK-2918 Project: Jackrabbit Oak Issue Type: Sub-task Components: rdbmk Affects Versions: 1.2.2, 1.3.0, 1.0.14 Reporter: Julian Reschke Assignee: Julian Reschke Fix For: 1.3.0, 1.2.3, 1.0.15 It appears that WAS wraps Oracle JDBC connection objects and throws upon setReadOnly(): {noformat} java.sql.SQLException: DSRA9010E: 'setReadOnly' is not supported on the WebSphere java.sql.Connection implementation. at com.ibm.ws.rsadapter.spi.InternalOracleDataStoreHelper.setReadOnly(InternalOracleDataStoreHelper.java:369) at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.setReadOnly(WSJdbcConnection.java:3626) at org.apache.jackrabbit.oak.plugins.document.rdb.RDBConnectionHandler.getROConnection(RDBConnectionHandler.java:61) {noformat} ...which of course is a bug in WAS (setReadOnly() is documented as a hint, the implementation is not supposed to throw an exception here); see also http://www-01.ibm.com/support/docview.wss?uid=swg1PM58588 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
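The graceful handling asked for in OAK-2918 can be sketched as follows. The helper name is hypothetical (not the actual RDBConnectionHandler change): since the JDBC javadoc describes read-only mode as a hint, a wrapper that refuses {{setReadOnly()}} should be logged and tolerated, falling back to a read-write connection. The demo stubs the misbehaving WAS wrapper with a dynamic proxy.

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;

// Sketch: swallow a failing setReadOnly() instead of propagating it.
// The JDBC spec treats read-only mode as a hint, so a driver or wrapper
// that refuses it should not break obtaining a connection.
class ReadOnlySketch {

    /** Returns true if read-only mode was applied, false if it was refused. */
    static boolean trySetReadOnly(Connection c) {
        try {
            c.setReadOnly(true);
            return true;
        } catch (SQLException ex) {
            // e.g. WebSphere's DSRA9010E; log once and fall back to read-write
            return false;
        }
    }

    public static void main(String[] args) {
        // Stub connection mimicking the WAS behaviour via a dynamic proxy.
        Connection refusing = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                (proxy, method, methodArgs) -> {
                    if (method.getName().equals("setReadOnly")) {
                        throw new SQLException("DSRA9010E: 'setReadOnly' is not supported");
                    }
                    return null;
                });
        System.out.println(trySetReadOnly(refusing)); // false
    }
}
```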
[jira] [Commented] (OAK-2829) Comparing node states for external changes is too slow
[ https://issues.apache.org/jira/browse/OAK-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585870#comment-14585870 ] Stefan Egli commented on OAK-2829: -- Re querying by timestamp: that would indeed be simpler. With the current DocumentStore API, however, I believe this is not possible. But [DocumentStore.query|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentStore.java#L127] comes quite close: it would probably just require the opposite of that method too:
{code}
public <T extends Document> List<T> query(Collection<T> collection,
                                          String fromKey,
                                          String toKey,
                                          String indexedProperty,
                                          long endValue,
                                          int limit) {
{code}
.. or what about generalizing this method to have both a {{startValue}} and an {{endValue}}, with {{-1}} indicating that one of them is not used? Comparing node states for external changes is too slow -- Key: OAK-2829 URL: https://issues.apache.org/jira/browse/OAK-2829 Project: Jackrabbit Oak Issue Type: Bug Components: core, mongomk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Blocker Labels: scalability Fix For: 1.3.1, 1.2.3 Attachments: CompareAgainstBaseStateTest.java, OAK-2829-gc-bug.patch, graph-1.png, graph.png Comparing node states for local changes has been improved already with OAK-2669. But in a clustered setup generating events for external changes cannot make use of the introduced cache and is therefore slower. This can result in a growing observation queue, eventually reaching the configured limit. See also OAK-2683. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
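The generalized query suggested in that comment could behave roughly as below. This is an in-memory sketch of the proposed semantics (not the actual DocumentStore API): both bounds apply to an indexed long property such as {{_modified}}, and {{-1}} disables the corresponding bound.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the proposed query with both a startValue and an endValue,
// where -1 means "bound not used". Documents are just id -> _modified here.
class RangeQuerySketch {

    static List<String> query(Map<String, Long> docs, long startValue, long endValue, int limit) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Long> e : docs.entrySet()) {
            long v = e.getValue();
            if (startValue != -1 && v < startValue) continue; // below lower bound
            if (endValue != -1 && v >= endValue) continue;    // at/above upper bound
            if (result.size() == limit) break;
            result.add(e.getKey());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Long> journal = new LinkedHashMap<>();
        journal.put("entry-1", 100L);
        journal.put("entry-2", 200L);
        journal.put("entry-3", 300L);
        // everything older than 250: candidates for journal GC
        System.out.println(query(journal, -1, 250L, 10)); // [entry-1, entry-2]
        // everything from 200 on
        System.out.println(query(journal, 200L, -1, 10)); // [entry-2, entry-3]
    }
}
```

With such a method, the journal garbage collector could delete old entries by an upper bound on the timestamp alone, without fetching them first.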
[jira] [Closed] (OAK-2957) LIRS cache: config options for segment count and stack move distance
[ https://issues.apache.org/jira/browse/OAK-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2957. - Bulk Close for 1.3.0 LIRS cache: config options for segment count and stack move distance Key: OAK-2957 URL: https://issues.apache.org/jira/browse/OAK-2957 Project: Jackrabbit Oak Issue Type: Improvement Components: cache, core Reporter: Thomas Mueller Assignee: Thomas Mueller Fix For: 1.3.0, 1.2.3, 1.0.15 Currently, both the number of segments and the stack move distance are hardcoded at 16 each. These settings should be configurable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2799) OakIndexInput cloned instances are not closed
[ https://issues.apache.org/jira/browse/OAK-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2799. - Bulk Close for 1.3.0 OakIndexInput cloned instances are not closed - Key: OAK-2799 URL: https://issues.apache.org/jira/browse/OAK-2799 Project: Jackrabbit Oak Issue Type: Bug Components: lucene Affects Versions: 1.2.1 Reporter: Tommaso Teofili Assignee: Tommaso Teofili Labels: resilience Fix For: 1.3.0, 1.2.3, 1.0.15 Attachments: OAK-2799.0.patch, OAK-2799.1.patch Related to the inspections I was doing for OAK-2798, I also noticed that we don't fully comply with the {{IndexInput}} javadoc [1], as the cloned instances should throw the given exception if the original is closed; I also think that the original instance should close the cloned instances, see also [ByteBufferIndexInput#close|https://github.com/apache/lucene-solr/blob/lucene_solr_4_7_1/lucene/core/src/java/org/apache/lucene/store/ByteBufferIndexInput.java#L271].
[1] :
{code}
/** Abstract base class for input from a file in a {@link Directory}. A
 * random-access input stream. Used for all Lucene index input operations.
 *
 * <p>{@code IndexInput} may only be used from one thread, because it is not
 * thread safe (it keeps internal state like file position). To allow
 * multithreaded use, every {@code IndexInput} instance must be cloned before
 * used in another thread. Subclasses must therefore implement {@link #clone()},
 * returning a new {@code IndexInput} which operates on the same underlying
 * resource, but positioned independently. Lucene never closes cloned
 * {@code IndexInput}s, it will only do this on the original one.
 * The original instance must take care that cloned instances throw
 * {@link AlreadyClosedException} when the original one is closed. */
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
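The contract quoted above can be modelled with a toy class (hypothetical names; the actual fix is in OakIndexInput): the original tracks its clones, closing it closes them too, and any access after close fails fast, as Lucene does with AlreadyClosedException.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the IndexInput clone/close contract: the original tracks
// its clones, closes them when it is closed itself, and any access after
// close fails fast (Lucene throws AlreadyClosedException; a plain
// IllegalStateException stands in for it here).
class TrackedInput {

    private final TrackedInput original;           // null for the original instance
    private final List<TrackedInput> clones = new ArrayList<>();
    private boolean closed;

    TrackedInput() { this.original = null; }

    private TrackedInput(TrackedInput original) { this.original = original; }

    TrackedInput cloneInput() {
        checkOpen();
        TrackedInput root = (original == null) ? this : original;
        TrackedInput clone = new TrackedInput(root);
        root.clones.add(clone);                    // remember, so close() reaches it
        return clone;
    }

    int readByte() {
        checkOpen();
        return 0;                                  // stand-in for actual file access
    }

    void close() {
        closed = true;
        for (TrackedInput c : clones) {
            c.closed = true;                       // the original closes its clones
        }
    }

    boolean isClosed() { return closed; }

    private void checkOpen() {
        if (closed || (original != null && original.closed)) {
            throw new IllegalStateException("already closed");
        }
    }
}
```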
[jira] [Closed] (OAK-2882) Support migration without access to DataStore
[ https://issues.apache.org/jira/browse/OAK-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2882. - Bulk Close for 1.3.0 Support migration without access to DataStore - Key: OAK-2882 URL: https://issues.apache.org/jira/browse/OAK-2882 Project: Jackrabbit Oak Issue Type: New Feature Components: upgrade Reporter: Chetan Mehrotra Assignee: Chetan Mehrotra Labels: docs-impacting, performance Fix For: 1.3.0, 1.2.3, 1.0.15 Attachments: OAK-2882-v2.patch, OAK-2882.patch, build_datastore_list.sh Migration currently involves access to the DataStore as it is configured as part of repository.xml. However, in a complete migration the actual binary content in the DataStore is not accessed, and the migration logic only makes use of:
* DataIdentifier = id of the files
* Length = as it gets encoded as part of the blobId (OAK-1667)
It would be faster and beneficial to allow migration without actual access to the DataStore. It would serve two benefits:
# Allows one to test out migration on a local setup by just copying the TarPM files. For example, one can zip only the following files to get going with repository startup if we can somehow avoid having direct access to the DataStore:
{noformat}
crx-quickstart# tar -zcvf repo-2.tar.gz repository --exclude=repository/repository/datastore --exclude=repository/repository/index --exclude=repository/workspaces/crx.default/index --exclude=repository/tarJournal
{noformat}
# Provides faster (repeatable) migration, as access to the DataStore can be avoided, which in cases like S3 might be slow, given we solve how to get the length.
*Proposal* Have a DataStore implementation which can be provided a mapping file having entries for blobId and length. This file would be used to answer queries regarding the length and existence of a blob and thus would avoid actual access to the DataStore. Going further, this DataStore can be configured with a delegate which can be used as a fallback in case the required details are not present in the precomputed data set (maybe due to a change in content after that data was computed). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
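The proposal above could be sketched as a simple length lookup backed by the precomputed mapping, with an optional delegate fallback. All names here are hypothetical; a real implementation would plug into the Jackrabbit DataStore API and parse the mapping from a file.

```java
import java.util.Map;
import java.util.function.Function;

// Sketch of the proposed migration helper: answer blob lengths from a
// precomputed blobId -> length mapping and only fall back to a delegate
// (e.g. the real S3-backed store) when an id is missing from the mapping.
class MappedLengthLookup {

    private final Map<String, Long> lengths;
    private final Function<String, Long> delegate; // may consult the real DataStore

    MappedLengthLookup(Map<String, Long> lengths, Function<String, Long> delegate) {
        this.lengths = lengths;
        this.delegate = delegate;
    }

    boolean exists(String blobId) {
        return lengths.containsKey(blobId) || delegate.apply(blobId) != null;
    }

    long getLength(String blobId) {
        Long known = lengths.get(blobId);
        if (known != null) {
            return known;                  // no DataStore access needed
        }
        // content changed after the mapping was computed: ask the delegate
        return delegate.apply(blobId);
    }
}
```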
[jira] [Closed] (OAK-2165) Observation tests sporadically failing
[ https://issues.apache.org/jira/browse/OAK-2165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2165. - Bulk Close for 1.3.0 Observation tests sporadically failing -- Key: OAK-2165 URL: https://issues.apache.org/jira/browse/OAK-2165 Project: Jackrabbit Oak Issue Type: Bug Components: jcr Environment: http://ci.apache.org/builders/oak-trunk-win7/ Reporter: Michael Dürig Assignee: Michael Dürig Labels: CI, buildbot, observation, test Fix For: 1.3.0 {{JackrabbitNodeTest#testRenameEventHandling}} fails sporadically on the Apache buildbot with missing events (e.g. http://ci.apache.org/builders/oak-trunk-win7/builds/642). Same holds for other tests in the {{ObservationIT}} suite. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2657) Repository Upgrade could shut down the source repository early
[ https://issues.apache.org/jira/browse/OAK-2657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2657. - Bulk Close for 1.3.0 Repository Upgrade could shut down the source repository early -- Key: OAK-2657 URL: https://issues.apache.org/jira/browse/OAK-2657 Project: Jackrabbit Oak Issue Type: Improvement Components: upgrade Reporter: Alex Parvulescu Assignee: Chetan Mehrotra Labels: resilience Fix For: 1.3.0, 1.2.3, 1.0.15 Attachments: OAK-2657-v2.patch, OAK-2657.patch I noticed that during the upgrade we can distinguish 2 phases: first copying the data from the source, then applying all the Editors (indexes and co.). After phase 1 is done the repository upgrader could shut down the old repo to allow clearing some memory resources which might be used for the second phase. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (OAK-2869) RepositorySidegrade: AsyncIndexUpdate throws an IllegalArgumentException after migrating from segment to document store
[ https://issues.apache.org/jira/browse/OAK-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2869. - Bulk Close for 1.3.0 RepositorySidegrade: AsyncIndexUpdate throws an IllegalArgumentException after migrating from segment to document store -- Key: OAK-2869 URL: https://issues.apache.org/jira/browse/OAK-2869 Project: Jackrabbit Oak Issue Type: Bug Components: core Reporter: Thomas Mueller Assignee: Thomas Mueller Fix For: 1.3.0 After migrating a repository from segment store to the document store, the AsyncIndexUpdate can throw an IllegalArgumentException because it doesn't understand the segment store's checkpoint format:
{noformat}
java.lang.IllegalArgumentException: 5f18ca57-a72b-4c4d-8105-03a3486094cc
	at org.apache.jackrabbit.oak.plugins.document.Revision.fromString(Revision.java:236)
	at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.retrieve(DocumentNodeStore.java:1558)
	at org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate.run(AsyncIndexUpdate.java:279)
{noformat}
The checkpoint references are stored in the node /:async. To solve the problem, multiple solutions are possible. One (more complex) solution is to preserve checkpoints (copy the repository state of the very first checkpoint, then apply the diff for each later checkpoint, until all checkpoints are copied). This requires a new API to change the checkpoint id, and is slow if there are many checkpoints. Let's not do this for now. The easier solution is to remove or clear the checkpoint references, that is, the /:async node. I think this can be done in all cases (even when migrating from segment store to segment store and from document store to document store), because the new repository doesn't know the checkpoints of the old repository (even though no exception should be thrown in this case).
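To illustrate why the parse fails: segment store checkpoints are UUIDs, while DocumentNodeStore revisions look roughly like {{r&lt;timestamp&gt;-&lt;counter&gt;-&lt;clusterId&gt;}} in hex. The two can be told apart with a simple pattern check; the patterns below are an approximation for illustration, not Oak code.

```java
import java.util.regex.Pattern;

// Hypothetical illustration of why the migrated checkpoint breaks: the
// segment store hands over a UUID, which Revision.fromString cannot parse.
// Both patterns are approximations of the real formats.
public class CheckpointFormat {

    private static final Pattern UUID_PATTERN = Pattern.compile(
            "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}");

    // Approximate shape of a DocumentNodeStore revision string.
    private static final Pattern REVISION_PATTERN = Pattern.compile(
            "r[0-9a-f]+-[0-9a-f]+-[0-9a-f]+");

    public static boolean looksLikeSegmentCheckpoint(String s) {
        return UUID_PATTERN.matcher(s).matches();
    }

    public static boolean looksLikeDocumentRevision(String s) {
        return REVISION_PATTERN.matcher(s).matches();
    }
}
```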
[jira] [Updated] (OAK-2941) RDBDocumentStore: avoid use of GREATEST
[ https://issues.apache.org/jira/browse/OAK-2941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-2941: Description: In the RDBDocumentStore we currently use GREATEST for conditional updates of the MODIFIED column (implementing the max operation). This isn't supported by SQLServer, thus requiring DB-specific code. It appears we can use something portable instead: set MODIFIED = CASE WHEN ? > MODIFIED THEN ? ELSE MODIFIED END was: In the RDBDocumenStore we currently use GREATEST for conditional updates of the MODIFIED column (implementing the max operation). This isn't supported by SQLServer, thus requiring DB-specific code. It appears we can use something portable instead: set MODIFIED = CASE WHEN ? > MODIFIED THEN ? ELSE MODIFIED END RDBDocumentStore: avoid use of GREATEST - Key: OAK-2941 URL: https://issues.apache.org/jira/browse/OAK-2941 Project: Jackrabbit Oak Issue Type: Improvement Components: rdbmk Affects Versions: 1.2.2, 1.0.14, 1.3 Reporter: Julian Reschke Assignee: Julian Reschke Priority: Minor In the RDBDocumentStore we currently use GREATEST for conditional updates of the MODIFIED column (implementing the max operation). This isn't supported by SQLServer, thus requiring DB-specific code. It appears we can use something portable instead: set MODIFIED = CASE WHEN ? > MODIFIED THEN ? ELSE MODIFIED END
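The semantics of the proposed CASE expression (assuming the intended comparison is {{? > MODIFIED}}) reduce to a plain max of the bound parameter and the current column value. A minimal sketch of that semantics:

```java
// Sketch of the portable "max" update semantics: the CASE expression keeps
// MODIFIED at its current value unless the bound parameter is greater,
// mirroring: SET MODIFIED = CASE WHEN ? > MODIFIED THEN ? ELSE MODIFIED END
public class ConditionalModified {

    // current = present MODIFIED value, param = value bound to both '?' slots.
    public static long applyUpdate(long current, long param) {
        return param > current ? param : current;
    }
}
```

Since CASE is part of standard SQL, this works across databases without the GREATEST function.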
[jira] [Closed] (OAK-2702) Move oak-mk to attic
[ https://issues.apache.org/jira/browse/OAK-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2702. - Bulk Close for 1.3.0 Move oak-mk to attic Key: OAK-2702 URL: https://issues.apache.org/jira/browse/OAK-2702 Project: Jackrabbit Oak Issue Type: Sub-task Components: mk Reporter: angela Fix For: 1.3.0 Subtask to move the original mk implementation to the attic folder.
[jira] [Closed] (OAK-2868) Bypass CommitQueue for branch commits
[ https://issues.apache.org/jira/browse/OAK-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2868. - Bulk Close for 1.3.0 Bypass CommitQueue for branch commits - Key: OAK-2868 URL: https://issues.apache.org/jira/browse/OAK-2868 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Labels: performance Fix For: 1.3.0, 1.2.3, 1.0.15 Currently all commits go through the CommitQueue. This applies to commits that fit into memory, branch commits, merge commits and even reset commits. The guarantee provided by the CommitQueue is only necessary for commits that affect the head revision of the store: commits that fit into memory and merge commits. Branch and reset commits should bypass the CommitQueue to avoid unnecessary delays of commits.
[jira] [Closed] (OAK-2831) Test classes extending AbstractImportTest do not always shut down repository instances properly
[ https://issues.apache.org/jira/browse/OAK-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2831. - Bulk Close for 1.3.0 Test classes extending AbstractImportTest do not always shut down repository instances properly --- Key: OAK-2831 URL: https://issues.apache.org/jira/browse/OAK-2831 Project: Jackrabbit Oak Issue Type: Bug Components: jcr Reporter: Robert Munteanu Assignee: Alex Parvulescu Fix For: 1.3.0 Attachments: OAK-2831-1.patch In {{AbstractImportTest}} a content repository instance is unconditionally created, see https://github.com/apache/jackrabbit-oak/blob/38670e11ed5682b49b7a4b37203aadcd89e1de44/oak-jcr/src/test/java/org/apache/jackrabbit/oak/jcr/security/user/AbstractImportTest.java#L80-L83 . However, the repository is shut down only if the import behaviour != null: https://github.com/apache/jackrabbit-oak/blob/38670e11ed5682b49b7a4b37203aadcd89e1de44/oak-jcr/src/test/java/org/apache/jackrabbit/oak/jcr/security/user/AbstractImportTest.java#L133-L136 . This leads to executor instances not being closed and a large number of threads being leaked. I actually get consistent build failures due to this - see http://oak-dev.markmail.org/thread/k65wycf7ryxioob7
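The fix amounts to moving the shutdown into an unconditional path, so the resource is released regardless of the import-behaviour flag. A hedged sketch with illustrative names (not the actual Oak test classes):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the teardown problem described above: shutting a resource down
// only under a condition leaks it; an unconditional finally block avoids
// the leak. Names are illustrative, not Oak's API.
public class RepositoryLifecycle {

    public static boolean runWithCleanShutdown(Runnable test) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            test.run();
            return true;
        } finally {
            // Always shut down, regardless of any import-behaviour flag,
            // so the executor's threads are not leaked.
            executor.shutdown();
        }
    }
}
```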
[jira] [Closed] (OAK-2016) Make blob gc max age configurable in SegmentNodeStoreService
[ https://issues.apache.org/jira/browse/OAK-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2016. - Bulk Close for 1.3.0 Make blob gc max age configurable in SegmentNodeStoreService Key: OAK-2016 URL: https://issues.apache.org/jira/browse/OAK-2016 Project: Jackrabbit Oak Issue Type: Improvement Components: core Reporter: Amit Jain Assignee: Amit Jain Priority: Minor Labels: datastore Fix For: 1.3.0, 1.2.3, 1.0.15 Attachments: OAK-2016.patch The blob gc max age setting is not configurable when using {{SegmentNodeStoreService}}. This can be made configurable and will be useful for testing.
[jira] [Closed] (OAK-2754) Use non unique PathCursor in LucenePropertyIndex
[ https://issues.apache.org/jira/browse/OAK-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2754. - Bulk Close for 1.3.0 Use non unique PathCursor in LucenePropertyIndex Key: OAK-2754 URL: https://issues.apache.org/jira/browse/OAK-2754 Project: Jackrabbit Oak Issue Type: Improvement Components: lucene Reporter: Chetan Mehrotra Assignee: Chetan Mehrotra Priority: Minor Labels: resilience Fix For: 1.3.0 {{LucenePropertyIndex}} currently uses a unique PathCursor [1], due to which the cursor maintains an in-memory set of visited paths. This set might grow large if the result size is big and the cursor is traversed completely. Since with the current implementation paths are not duplicated, we can avoid using the unique cursor. [1] https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/LucenePropertyIndex.java#L1153-1154
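A sketch of what a "unique" cursor costs, using plain iterators rather than Oak's Cursor API: the deduplication set grows with the number of distinct paths, which buys nothing when the underlying results are already duplicate-free.

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Set;

// Illustrative stand-in for a "unique" path cursor: every distinct path it
// emits is also retained in an in-memory set, so memory grows with the
// result size. A pass-through cursor would hold nothing.
public class UniquePathIterator implements Iterator<String> {

    private final Iterator<String> paths;
    private final Set<String> seen = new HashSet<>(); // grows with distinct results
    private String next;

    public UniquePathIterator(Iterator<String> paths) {
        this.paths = paths;
    }

    @Override
    public boolean hasNext() {
        while (next == null && paths.hasNext()) {
            String candidate = paths.next();
            if (seen.add(candidate)) { // skip already-visited paths
                next = candidate;
            }
        }
        return next != null;
    }

    @Override
    public String next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        String result = next;
        next = null;
        return result;
    }

    public int memoryFootprint() {
        return seen.size();
    }
}
```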
[jira] [Closed] (OAK-2006) Verify the maven baseline output and fix the warnings
[ https://issues.apache.org/jira/browse/OAK-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2006. - Bulk Close for 1.3.0 Verify the maven baseline output and fix the warnings - Key: OAK-2006 URL: https://issues.apache.org/jira/browse/OAK-2006 Project: Jackrabbit Oak Issue Type: Improvement Components: core Reporter: Alex Parvulescu Assignee: Michael Dürig Labels: build, modularization, osgi, technical_debt Fix For: 1.3.0 Attachments: OAK-2006.patch, baseline-oak-core.patch Currently the maven baseline plugin only logs the package version mismatches; it doesn't fail the build. It would be beneficial to start looking at the output and possibly fix some of the warnings (increase the OSGi package versions).
[jira] [Closed] (OAK-2823) Change default for oak.maxLockTryTimeMultiplier
[ https://issues.apache.org/jira/browse/OAK-2823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2823. - Bulk Close for 1.3.0 Change default for oak.maxLockTryTimeMultiplier --- Key: OAK-2823 URL: https://issues.apache.org/jira/browse/OAK-2823 Project: Jackrabbit Oak Issue Type: Improvement Components: core, mongomk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Minor Labels: doc-impacting, resilience Fix For: 1.3.0, 1.2.3, 1.0.15 The default multiplier is currently 3, which translates into a lock try timeout of 6 seconds. This is rather low and may result in merge failures even when a commit acquired the merge lock exclusively. I would like to increase it to 30.
[jira] [Closed] (OAK-2945) Sampling rate feature CompactionGainEstimate is not efficient
[ https://issues.apache.org/jira/browse/OAK-2945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2945. - Bulk Close for 1.3.0 Sampling rate feature CompactionGainEstimate is not efficient - Key: OAK-2945 URL: https://issues.apache.org/jira/browse/OAK-2945 Project: Jackrabbit Oak Issue Type: Sub-task Components: segmentmk Reporter: Michael Dürig Assignee: Michael Dürig Labels: compaction, gc Fix For: 1.3.0, 1.0.15 The sampling rate feature introduced with OAK-2595 is not efficient. It only prevents uuids from being stored in the bloom filter, while the visited set is not affected and thus keeps growing. I will remove the feature again for now. We should look for a better solution once this becomes a problem. Will follow up on OAK-2939 re. this.
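Why the sampling rate didn't help can be sketched as follows (illustrative only, not the actual CompactionGainEstimate code): sampling throttles only what reaches the approximate structure, while the exact visited set still records every id.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the inefficiency described above: sampling gates what enters
// the (fixed-size) bloom filter, but every uuid still lands in the exact
// "visited" set, so memory grows with the number of nodes regardless.
public class SampledEstimate {

    private final Set<String> visited = new HashSet<>(); // unaffected by sampling
    private int sampledCount = 0;                        // stands in for the bloom filter
    private final int samplingRate;

    public SampledEstimate(int samplingRate) {
        this.samplingRate = samplingRate;
    }

    public void visit(String uuid) {
        visited.add(uuid); // always recorded, sampling or not
        if (visited.size() % samplingRate == 0) {
            sampledCount++; // only a fraction is sampled
        }
    }

    public int visitedSize() { return visited.size(); }
    public int sampledSize() { return sampledCount; }
}
```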
[jira] [Closed] (OAK-2845) Memory leak in ObserverTracker#removedService
[ https://issues.apache.org/jira/browse/OAK-2845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2845. - Bulk Close for 1.3.0 Memory leak in ObserverTracker#removedService - Key: OAK-2845 URL: https://issues.apache.org/jira/browse/OAK-2845 Project: Jackrabbit Oak Issue Type: Bug Components: core Reporter: Michael Dürig Assignee: Michael Dürig Fix For: 1.3.0 {{ObserverTracker#removedService}} does not remove the unregistered service from the {{subscriptions}} it keeps internally. This is troublesome as the {{ChangeProcessor}} instances are tracked by {{ObserverTracker}}. When unregistering an observation listener the associated {{ChangeProcessor}} is disabled but not removed and thus not made available for gc. This in turn makes {{ChangeProcessor}} keep a reference to an old node state ({{previousRoot}}), which will render revision garbage collection ineffective.
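A minimal sketch of the fix, with illustrative names rather than the real ObserverTracker API: the entry must be removed from the internal map, not merely disabled, or it (and everything it references) stays reachable and cannot be garbage collected.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the leak: disabling a tracked observer without removing
// it from the internal map keeps a strong reference alive. Removing the
// entry on unregistration makes the observer eligible for GC.
public class ObserverRegistry {

    private final Map<String, Object> subscriptions = new HashMap<>();

    public void addingService(String id, Object observer) {
        subscriptions.put(id, observer);
    }

    public void removedService(String id) {
        // Remove (don't just disable) so no strong reference remains.
        subscriptions.remove(id);
    }

    public int size() {
        return subscriptions.size();
    }
}
```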
[jira] [Closed] (OAK-2883) Tests for SegmentNodeStoreService
[ https://issues.apache.org/jira/browse/OAK-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2883. - Bulk Close for 1.3.0 Tests for SegmentNodeStoreService - Key: OAK-2883 URL: https://issues.apache.org/jira/browse/OAK-2883 Project: Jackrabbit Oak Issue Type: Improvement Components: segmentmk Reporter: Michael Dürig Assignee: Francesco Mari Labels: technical_debt Fix For: 1.3.0 Attachments: OAK-2883-01.patch {{SegmentNodeStoreService}} currently has no test coverage whatsoever. We should change that.
[jira] [Closed] (OAK-2379) Regular CI failures for DOCUMENT_RDB on buildbot
[ https://issues.apache.org/jira/browse/OAK-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2379. - Bulk Close for 1.3.0 Regular CI failures for DOCUMENT_RDB on buildbot - Key: OAK-2379 URL: https://issues.apache.org/jira/browse/OAK-2379 Project: Jackrabbit Oak Issue Type: Bug Components: jcr Environment: http://ci.apache.org/builders/oak-trunk Reporter: Michael Dürig Assignee: Julian Reschke Labels: CI, buildbot Fix For: 1.3.0 There are many tests failing on http://ci.apache.org/builders/oak-trunk for the {{DOCUMENT_RDB}} fixture:
{noformat}
addNodes[3](org.apache.jackrabbit.oak.jcr.ConcurrentAddIT): expected:<100> but was:<99>
testMVNameProperty[3](org.apache.jackrabbit.oak.jcr.NameAndPathPropertyTest): org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0001: OakMerge0001: Failed to merge changes to the underlying store (retries 5, 5432 ms)
testMVNameProperty[3](org.apache.jackrabbit.oak.jcr.NameAndPathPropertyTest): org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: java.sql.SQLException: Data source is closed
testMVPathProperty[3](org.apache.jackrabbit.oak.jcr.NameAndPathPropertyTest): Branch with failed reset
testMVPathProperty[3](org.apache.jackrabbit.oak.jcr.NameAndPathPropertyTest): java.sql.SQLException: Data source is closed
testInvalidPathProperty[3](org.apache.jackrabbit.oak.jcr.NameAndPathPropertyTest): initializing RDB blob store
orderableFolder[3](org.apache.jackrabbit.oak.jcr.OrderableNodesTest): java.sql.SQLException: Data source is closed
orderableFolder[3](org.apache.jackrabbit.oak.jcr.OrderableNodesTest): java.sql.SQLException: Data source is closed
{noformat}
And many more. See e.g. http://ci.apache.org/builders/oak-trunk/builds/890/steps/compile/logs/stdio
[jira] [Closed] (OAK-2913) TokenLoginModule should clear state in case of a login exception
[ https://issues.apache.org/jira/browse/OAK-2913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella closed OAK-2913. - Bulk Close for 1.3.0 TokenLoginModule should clear state in case of a login exception Key: OAK-2913 URL: https://issues.apache.org/jira/browse/OAK-2913 Project: Jackrabbit Oak Issue Type: Bug Components: core, security Reporter: Alex Parvulescu Assignee: Alex Parvulescu Fix For: 1.3.0