[jira] [Commented] (OAK-3493) Deadlock when closing a concurrently used FileStore 2.0
[ https://issues.apache.org/jira/browse/OAK-3493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14952736#comment-14952736 ]

Francesco Mari commented on OAK-3493:
-------------------------------------

[~mduerig], it should work.

> Deadlock when closing a concurrently used FileStore 2.0
> --------------------------------------------------------
>
> Key: OAK-3493
> URL: https://issues.apache.org/jira/browse/OAK-3493
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: segmentmk
> Reporter: Michael Dürig
> Assignee: Michael Dürig
> Labels: resilience
> Fix For: 1.3.8
>
> Attachments: OAK-3493.patch
>
>
> A deadlock was detected while stopping the {{SegmentCompactionIT}} using the
> exposed MBean.
> {noformat}
> "main@1" prio=5 tid=0x1 nid=NA waiting for monitor entry
>   waiting for pool-1-thread-10@2111 to release lock on <0xae8> (a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.dropCache(SegmentWriter.java:871)
>     at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.close(FileStore.java:1031)
>     - locked <0xae7> (a org.apache.jackrabbit.oak.plugins.segment.file.FileStore)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentCompactionIT.tearDown(SegmentCompactionIT.java:282)
>
> "pool-1-thread-10@2111" prio=5 tid=0x1d nid=NA waiting for monitor entry
>   java.lang.Thread.State: BLOCKED
>   blocks main@1
>   waiting for main@1 to release lock on <0xae7> (a org.apache.jackrabbit.oak.plugins.segment.file.FileStore)
>     at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.writeSegment(FileStore.java:1155)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:253)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.prepare(SegmentWriter.java:350)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeListBucket(SegmentWriter.java:468)
>     - locked <0xae8> (a org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeList(SegmentWriter.java:719)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1211)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1156)
>     at org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:399)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1147)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1175)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:100)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.prepare(SegmentNodeStore.java:451)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.optimisticMerge(SegmentNodeStore.java:474)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.execute(SegmentNodeStore.java:530)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:208)
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
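The two threads above acquire the FileStore and SegmentWriter monitors in opposite orders. A minimal, self-contained sketch of that inversion and one way out of it (do the writer work before taking the store monitor, so the two monitors are never held in opposite orders). Class and method names are illustrative stand-ins, not the actual Oak code:

```java
// Hypothetical sketch of the OAK-3493 lock inversion and a fix.
public class LockOrderSketch {
    static final Object storeLock = new Object();   // stands in for the FileStore monitor
    static final Object writerLock = new Object();  // stands in for the SegmentWriter monitor

    // Deadlock-prone shape would be: hold storeLock, then request writerLock,
    // while a concurrent flush holds writerLock and requests storeLock.
    // Fixed shape: release writerLock before taking storeLock.
    static void closeFixed() {
        synchronized (writerLock) {
            // drop the writer's caches (dropCache() in the trace)
        }
        synchronized (storeLock) {
            // release tar files etc.
        }
    }

    static void flush() {
        synchronized (writerLock) {
            synchronized (storeLock) {
                // writeSegment() in the trace
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread flusher = new Thread(LockOrderSketch::flush);
        flusher.start();
        closeFixed();           // never holds storeLock while waiting for writerLock
        flusher.join();         // so there is no cycle and both threads finish
        System.out.println("no deadlock");
    }
}
```

Because `closeFixed()` never holds both monitors at once, no circular wait can form regardless of interleaving.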
[jira] [Updated] (OAK-2714) Test failures on Jenkins
[ https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Dürig updated OAK-2714:
-------------------------------
    Description:

This issue is for tracking test failures seen at our Jenkins instance that might yet be transient. Once a failure happens too often we should remove it here and create a dedicated issue for it.

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181, 399 | SEGMENT_MK, DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157, 396 | DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110, 382 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 142, 186, 191 | DOCUMENT_RDB, SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 151 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 163, 164 | SEGMENT_MK, DOCUMENT_RDB | 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.query.SuggestTest | 171 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.updateNodeType | 243, 400 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.nodeType | 272 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testMove | 308 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.version.VersionablePathNodeStoreTest.testVersionablePaths | 361 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.plugins.document.DocumentDiscoveryLiteServiceTest | 361 | DOCUMENT_NS, SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentAddIT.addNodesSameParent | 427, 428 | DOCUMENT_NS, SEGMENT_MK | 1.7 |
| Build times out after 90mins | 477 | DOCUMENT_RDB | 1.6, 1.7 |
| Build crashes: malloc(): memory corruption | 477 | DOCUMENT_NS | 1.6 |

    was:

This issue is for tracking test failures seen at our Jenkins instance that might yet be transient. Once a failure happens too often we should remove it here and create a dedicated issue for it.

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181, 399 | SEGMENT_MK, DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
[jira] [Resolved] (OAK-3493) Deadlock when closing a concurrently used FileStore 2.0
[ https://issues.apache.org/jira/browse/OAK-3493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Dürig resolved OAK-3493.
--------------------------------
    Resolution: Fixed

Fixed at http://svn.apache.org/viewvc?rev=1708049&view=rev

> Deadlock when closing a concurrently used FileStore 2.0
> --------------------------------------------------------
>
> Key: OAK-3493
> URL: https://issues.apache.org/jira/browse/OAK-3493
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: segmentmk
> Reporter: Michael Dürig
> Assignee: Michael Dürig
> Labels: resilience
> Fix For: 1.3.8
>
> Attachments: OAK-3493.patch

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (OAK-3156) Lucene suggestions index definition can't be restricted to a specific type of node
[ https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14952922#comment-14952922 ]

Tommaso Teofili commented on OAK-3156:
--------------------------------------

the second patch (for option 1) looks good to me.

> Lucene suggestions index definition can't be restricted to a specific type of node
> ----------------------------------------------------------------------------------
>
> Key: OAK-3156
> URL: https://issues.apache.org/jira/browse/OAK-3156
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: lucene
> Reporter: Vikas Saurabh
> Assignee: Tommaso Teofili
> Priority: Blocker
> Fix For: 1.3.9
>
> Attachments: LuceneIndexSuggestionTest.java, OAK-3156-take2.patch, OAK-3156.patch
>
>
> While performing a suggester query like
> {code}
> SELECT [rep:suggest()] as suggestion FROM [nt:unstructured] WHERE suggest('foo')
> {code}
> the suggester does not provide any result. In the current implementation,
> [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
> in Oak work only for index definitions for the {{nt:base}} nodetype.
> So, an index definition like:
> {code:xml}
> jcr:primaryType="oak:QueryIndexDefinition"
>     async="async"
>     compatVersion="{Long}2"
>     type="lucene">
>   jcr:primaryType="nt:unstructured"
>       analyzed="{Boolean}true"
>       name="description"
>       propertyIndex="{Boolean}true"
>       useInSuggest="{Boolean}true"/>
> {code}
> works, but if we change the nodetype to {{nt:unstructured}} like:
> {code:xml}
> jcr:primaryType="oak:QueryIndexDefinition"
>     async="async"
>     compatVersion="{Long}2"
>     type="lucene">
>   jcr:primaryType="nt:unstructured"
>       analyzed="{Boolean}true"
>       name="description"
>       propertyIndex="{Boolean}true"
>       useInSuggest="{Boolean}true"/>
> {code}
> it won't work.
> The issue is that the suggester implementation essentially passes a pseudo row with path=/:
> {code:title=LucenePropertyIndex.java}
> private boolean loadDocs() {
>     ...
>     queue.add(new LuceneResultRow(suggestedWords));
>     ...
> {code}
> and
> {code:title=LucenePropertyIndex.java}
> LuceneResultRow(Iterable<String> suggestWords) {
>     this.path = "/";
>     this.score = 1.0d;
>     this.suggestWords = suggestWords;
> }
> {code}
> Due to the path being set to "/", {{SelectorImpl}} later filters out the result
> as {{rep:root}} (the primary type of "/") isn't an {{nt:unstructured}}.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
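The filtering described in the issue can be illustrated with a small sketch. `Row` and the predicate are hypothetical stand-ins for Oak's `LuceneResultRow` and the nodetype check in `SelectorImpl`, not the real API:

```java
import java.util.function.Predicate;

// Hypothetical sketch of why the suggestion row is dropped: the selector
// re-checks the primary type at the row's path, and "/" is rep:root,
// not nt:unstructured. All names here are illustrative.
public class SuggestionFilterSketch {
    record Row(String path, String primaryType) {}

    public static void main(String[] args) {
        // the selector only accepts rows whose node is an nt:unstructured
        Predicate<Row> selector = r -> r.primaryType().equals("nt:unstructured");

        // the suggester's pseudo row is always anchored at the root path
        Row suggestionRow = new Row("/", "rep:root");

        System.out.println(selector.test(suggestionRow)); // prints "false": row filtered out
    }
}
```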
[jira] [Updated] (OAK-3270) Improve DocumentMK resilience
[ https://issues.apache.org/jira/browse/OAK-3270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3270:
-------------------------------
    Priority: Critical  (was: Major)

> Improve DocumentMK resilience
> -----------------------------
>
> Key: OAK-3270
> URL: https://issues.apache.org/jira/browse/OAK-3270
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: mongomk, rdbmk
> Reporter: Michael Marth
> Priority: Critical
> Labels: resilience
>
> Collection of DocMK resilience improvements

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3285) Datastore performance improvements
[ https://issues.apache.org/jira/browse/OAK-3285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3285:
-------------------------------
    Priority: Major  (was: Minor)

> Datastore performance improvements
> ----------------------------------
>
> Key: OAK-3285
> URL: https://issues.apache.org/jira/browse/OAK-3285
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: blob
> Reporter: Michael Marth
> Labels: performance
>
> Collector issue for DS performance improvements

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3268) Improve datastore resilience
[ https://issues.apache.org/jira/browse/OAK-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3268:
-------------------------------
    Priority: Critical  (was: Major)

> Improve datastore resilience
> ----------------------------
>
> Key: OAK-3268
> URL: https://issues.apache.org/jira/browse/OAK-3268
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: blob
> Reporter: Michael Marth
> Priority: Critical
> Labels: resilience
>
> As discussed bilaterally grouping the improvements for datastore resilience
> in this issue for easier tracking

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3269) Improve Lucene indexer resilience
[ https://issues.apache.org/jira/browse/OAK-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3269:
-------------------------------
    Priority: Critical  (was: Major)

> Improve Lucene indexer resilience
> ---------------------------------
>
> Key: OAK-3269
> URL: https://issues.apache.org/jira/browse/OAK-3269
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: lucene
> Reporter: Michael Marth
> Priority: Critical
> Labels: resilience
>
> As discussed bilaterally grouping the improvements for Lucene indexer
> resilience in this issue for easier tracking

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3340) DocumentMK technical debt
[ https://issues.apache.org/jira/browse/OAK-3340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3340:
-------------------------------
    Fix Version/s:     (was: 1.3.11)

> DocumentMK technical debt
> -------------------------
>
> Key: OAK-3340
> URL: https://issues.apache.org/jira/browse/OAK-3340
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: mongomk
> Reporter: Daniel Hasler
> Priority: Minor
>
> As discussed bilaterally grouping the technical debt for MongoMK in this
> issue for easier tracking

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3501) PersistedCompactionMap could release reference to records early
[ https://issues.apache.org/jira/browse/OAK-3501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Parvulescu updated OAK-3501:
---------------------------------
    Attachment: OAK-3501-v2.patch

v2, no close flag

> PersistedCompactionMap could release reference to records early
> ---------------------------------------------------------------
>
> Key: OAK-3501
> URL: https://issues.apache.org/jira/browse/OAK-3501
> Project: Jackrabbit Oak
> Issue Type: Technical task
> Components: segmentmk
> Reporter: Alex Parvulescu
> Priority: Minor
> Fix For: 1.4
>
> Attachments: OAK-3501-v2.patch, OAK-3501.patch
>
>
> Whenever a PersistedCompactionMap becomes empty it will eventually be dropped
> from the compaction maps chain. This happens on the next compaction cycle,
> which runs post-cleanup, so we're potentially keeping a reference to some
> collectable garbage for up to 2 cycles.
> I'd like to propose a patch that allows for eagerly nullifying the reference
> to the records, making this interval shorter.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
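The eager release the patch proposes can be sketched in a few lines; the names here (`recordRef`, `remove()`, `released()`) are invented for illustration, not the actual `PersistedCompactionMap` API:

```java
// Hypothetical sketch of OAK-3501: once the map becomes empty, drop the
// strong reference to its backing records immediately, instead of waiting
// up to two compaction cycles for the map to be unlinked from the chain.
public class CompactionMapSketch {
    private Object recordRef = new Object(); // stands in for the persisted record ids
    private int entryCount = 1;

    // Called when the last entry is removed: eagerly nullify the reference
    // so the segment garbage becomes collectable right away.
    void remove() {
        entryCount--;
        if (entryCount == 0) {
            recordRef = null; // records are collectable from this point on
        }
    }

    boolean released() {
        return recordRef == null;
    }

    public static void main(String[] args) {
        CompactionMapSketch map = new CompactionMapSketch();
        map.remove();
        System.out.println(map.released()); // prints "true"
    }
}
```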
[jira] [Commented] (OAK-3501) PersistedCompactionMap could release reference to records early
[ https://issues.apache.org/jira/browse/OAK-3501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14952826#comment-14952826 ]

Michael Dürig commented on OAK-3501:
------------------------------------

LGTM

> PersistedCompactionMap could release reference to records early
> ---------------------------------------------------------------
>
> Key: OAK-3501
> URL: https://issues.apache.org/jira/browse/OAK-3501
> Project: Jackrabbit Oak
> Issue Type: Technical task
> Components: segmentmk
> Reporter: Alex Parvulescu
> Priority: Minor
> Fix For: 1.4
>
> Attachments: OAK-3501-v2.patch, OAK-3501.patch
>
>
> Whenever a PersistedCompactionMap becomes empty it will eventually be dropped
> from the compaction maps chain. This happens on the next compaction cycle,
> which runs post-cleanup, so we're potentially keeping a reference to some
> collectable garbage for up to 2 cycles.
> I'd like to propose a patch that allows for eagerly nullifying the reference
> to the records, making this interval shorter.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3284) Query performance improvements
[ https://issues.apache.org/jira/browse/OAK-3284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3284:
-------------------------------
    Fix Version/s:     (was: 1.3.9)

> Query performance improvements
> ------------------------------
>
> Key: OAK-3284
> URL: https://issues.apache.org/jira/browse/OAK-3284
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: lucene, query
> Reporter: Michael Marth
> Labels: performance
>
> Collector issue for various performance improvements on query engine and
> Lucene indexer

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3284) Query performance improvements
[ https://issues.apache.org/jira/browse/OAK-3284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3284:
-------------------------------
    Epic Name: Query performance improvements

> Query performance improvements
> ------------------------------
>
> Key: OAK-3284
> URL: https://issues.apache.org/jira/browse/OAK-3284
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: lucene, query
> Reporter: Michael Marth
> Labels: performance
>
> Collector issue for various performance improvements on query engine and
> Lucene indexer

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3341) lucene technical debt
[ https://issues.apache.org/jira/browse/OAK-3341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3341:
-------------------------------
    Epic Name: lucene technical debt
     Priority: Minor  (was: Major)

> lucene technical debt
> ---------------------
>
> Key: OAK-3341
> URL: https://issues.apache.org/jira/browse/OAK-3341
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: lucene
> Reporter: Daniel Hasler
> Priority: Minor
>
> As discussed bilaterally grouping the technical debt for Lucene in this issue
> for easier tracking

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3341) lucene technical debt
[ https://issues.apache.org/jira/browse/OAK-3341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3341:
-------------------------------
    Issue Type: Epic  (was: Improvement)

> lucene technical debt
> ---------------------
>
> Key: OAK-3341
> URL: https://issues.apache.org/jira/browse/OAK-3341
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: lucene
> Reporter: Daniel Hasler
>
> As discussed bilaterally grouping the technical debt for Lucene in this issue
> for easier tracking

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3341) lucene technical debt
[ https://issues.apache.org/jira/browse/OAK-3341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3341:
-------------------------------
    Fix Version/s:     (was: 1.3.9)

> lucene technical debt
> ---------------------
>
> Key: OAK-3341
> URL: https://issues.apache.org/jira/browse/OAK-3341
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: lucene
> Reporter: Daniel Hasler
> Priority: Minor
>
> As discussed bilaterally grouping the technical debt for Lucene in this issue
> for easier tracking

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Resolved] (OAK-3501) PersistedCompactionMap could release reference to records early
[ https://issues.apache.org/jira/browse/OAK-3501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Parvulescu resolved OAK-3501.
----------------------------------
       Resolution: Fixed
         Assignee: Alex Parvulescu
    Fix Version/s:     (was: 1.4)
                   1.3.8

fixed with http://svn.apache.org/viewvc?rev=1708051&view=rev thanks for the review!

> PersistedCompactionMap could release reference to records early
> ---------------------------------------------------------------
>
> Key: OAK-3501
> URL: https://issues.apache.org/jira/browse/OAK-3501
> Project: Jackrabbit Oak
> Issue Type: Technical task
> Components: segmentmk
> Reporter: Alex Parvulescu
> Assignee: Alex Parvulescu
> Priority: Minor
> Fix For: 1.3.8
>
> Attachments: OAK-3501-v2.patch, OAK-3501.patch
>
>
> Whenever a PersistedCompactionMap becomes empty it will eventually be dropped
> from the compaction maps chain. This happens on the next compaction cycle,
> which runs post-cleanup, so we're potentially keeping a reference to some
> collectable garbage for up to 2 cycles.
> I'd like to propose a patch that allows for eagerly nullifying the reference
> to the records, making this interval shorter.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (OAK-3156) Lucene suggestions index definition can't be restricted to a specific type of node
[ https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14952870#comment-14952870 ]

Tommaso Teofili commented on OAK-3156:
--------------------------------------

+1 from my side

> Lucene suggestions index definition can't be restricted to a specific type of node
> ----------------------------------------------------------------------------------
>
> Key: OAK-3156
> URL: https://issues.apache.org/jira/browse/OAK-3156
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: lucene
> Reporter: Vikas Saurabh
> Assignee: Tommaso Teofili
> Priority: Blocker
> Fix For: 1.3.9
>
> Attachments: LuceneIndexSuggestionTest.java, OAK-3156-take2.patch, OAK-3156.patch

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3270) Improve DocumentMK resilience
[ https://issues.apache.org/jira/browse/OAK-3270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3270:
-------------------------------
    Epic Name: DocumentMK resilience

> Improve DocumentMK resilience
> -----------------------------
>
> Key: OAK-3270
> URL: https://issues.apache.org/jira/browse/OAK-3270
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: mongomk, rdbmk
> Reporter: Michael Marth
> Labels: resilience
> Fix For: 1.3.9
>
> Collection of DocMK resilience improvements

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3269) Improve Lucene indexer resilience
[ https://issues.apache.org/jira/browse/OAK-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3269:
-------------------------------
    Fix Version/s:     (was: 1.3.9)

> Improve Lucene indexer resilience
> ---------------------------------
>
> Key: OAK-3269
> URL: https://issues.apache.org/jira/browse/OAK-3269
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: lucene
> Reporter: Michael Marth
> Labels: resilience
>
> As discussed bilaterally grouping the improvements for Lucene indexer
> resilience in this issue for easier tracking

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3270) Improve DocumentMK resilience
[ https://issues.apache.org/jira/browse/OAK-3270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3270:
-------------------------------
    Fix Version/s:     (was: 1.3.9)

> Improve DocumentMK resilience
> -----------------------------
>
> Key: OAK-3270
> URL: https://issues.apache.org/jira/browse/OAK-3270
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: mongomk, rdbmk
> Reporter: Michael Marth
> Labels: resilience
>
> Collection of DocMK resilience improvements

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3269) Improve Lucene indexer resilience
[ https://issues.apache.org/jira/browse/OAK-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3269:
-------------------------------
    Issue Type: Epic  (was: Improvement)

> Improve Lucene indexer resilience
> ---------------------------------
>
> Key: OAK-3269
> URL: https://issues.apache.org/jira/browse/OAK-3269
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: lucene
> Reporter: Michael Marth
> Labels: resilience
> Fix For: 1.3.9
>
> As discussed bilaterally grouping the improvements for Lucene indexer
> resilience in this issue for easier tracking

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3268) Improve datastore resilience
[ https://issues.apache.org/jira/browse/OAK-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3268:
-------------------------------
    Fix Version/s:     (was: 1.3.9)

> Improve datastore resilience
> ----------------------------
>
> Key: OAK-3268
> URL: https://issues.apache.org/jira/browse/OAK-3268
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: blob
> Reporter: Michael Marth
> Labels: resilience
>
> As discussed bilaterally grouping the improvements for datastore resilience
> in this issue for easier tracking

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3285) Datastore performance improvements
[ https://issues.apache.org/jira/browse/OAK-3285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3285:
-------------------------------
    Fix Version/s:     (was: 1.3.9)

> Datastore performance improvements
> ----------------------------------
>
> Key: OAK-3285
> URL: https://issues.apache.org/jira/browse/OAK-3285
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: blob
> Reporter: Michael Marth
> Priority: Minor
> Labels: performance
>
> Collector issue for DS performance improvements

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3285) Datastore performance improvements
[ https://issues.apache.org/jira/browse/OAK-3285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3285:
-------------------------------
    Priority: Minor  (was: Major)

> Datastore performance improvements
> ----------------------------------
>
> Key: OAK-3285
> URL: https://issues.apache.org/jira/browse/OAK-3285
> Project: Jackrabbit Oak
> Issue Type: Improvement
> Components: blob
> Reporter: Michael Marth
> Priority: Minor
> Labels: performance
>
> Collector issue for DS performance improvements

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3271) Improve DocumentMK performance
[ https://issues.apache.org/jira/browse/OAK-3271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3271:
-------------------------------
    Fix Version/s:     (was: 1.3.10)

> Improve DocumentMK performance
> ------------------------------
>
> Key: OAK-3271
> URL: https://issues.apache.org/jira/browse/OAK-3271
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: mongomk, rdbmk
> Reporter: Michael Marth
> Labels: performance
>
> Collector issue for DocMK performance improvements

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3271) Improve DocumentMK performance
[ https://issues.apache.org/jira/browse/OAK-3271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3271:
-------------------------------
    Epic Name: Improve DocumentMK performance

> Improve DocumentMK performance
> ------------------------------
>
> Key: OAK-3271
> URL: https://issues.apache.org/jira/browse/OAK-3271
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: mongomk, rdbmk
> Reporter: Michael Marth
> Labels: performance
>
> Collector issue for DocMK performance improvements

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3340) DocumentMK technical debt
[ https://issues.apache.org/jira/browse/OAK-3340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3340:
-------------------------------
    Epic Name: DocumentMK technical debt

> DocumentMK technical debt
> -------------------------
>
> Key: OAK-3340
> URL: https://issues.apache.org/jira/browse/OAK-3340
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: mongomk
> Reporter: Daniel Hasler
> Priority: Minor
> Fix For: 1.3.11
>
> As discussed bilaterally grouping the technical debt for MongoMK in this
> issue for easier tracking

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3340) DocumentMK technical debt
[ https://issues.apache.org/jira/browse/OAK-3340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Marth updated OAK-3340:
-------------------------------
    Priority: Minor  (was: Major)

> DocumentMK technical debt
> -------------------------
>
> Key: OAK-3340
> URL: https://issues.apache.org/jira/browse/OAK-3340
> Project: Jackrabbit Oak
> Issue Type: Epic
> Components: mongomk
> Reporter: Daniel Hasler
> Priority: Minor
> Fix For: 1.3.11
>
> As discussed bilaterally grouping the technical debt for MongoMK in this
> issue for easier tracking

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (OAK-3285) Datastore performance improvements
[ https://issues.apache.org/jira/browse/OAK-3285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3285: --- Epic Name: Datastore performance improvements > Datastore performance improvements > -- > > Key: OAK-3285 > URL: https://issues.apache.org/jira/browse/OAK-3285 > Project: Jackrabbit Oak > Issue Type: Epic > Components: blob >Reporter: Michael Marth >Priority: Minor > Labels: performance > > Collector issue for DS performance improvements -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3285) Datastore performance improvements
[ https://issues.apache.org/jira/browse/OAK-3285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3285: --- Issue Type: Epic (was: Improvement) > Datastore performance improvements > -- > > Key: OAK-3285 > URL: https://issues.apache.org/jira/browse/OAK-3285 > Project: Jackrabbit Oak > Issue Type: Epic > Components: blob >Reporter: Michael Marth >Priority: Minor > Labels: performance > > Collector issue for DS performance improvements -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3286) Persistent Cache improvements
[ https://issues.apache.org/jira/browse/OAK-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3286: --- Epic Name: Persistent Cache improvements > Persistent Cache improvements > - > > Key: OAK-3286 > URL: https://issues.apache.org/jira/browse/OAK-3286 > Project: Jackrabbit Oak > Issue Type: Epic > Components: cache >Reporter: Michael Marth >Priority: Minor > Fix For: 1.3.9 > > > Issue for collecting various improvements to the persistent cache -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3436) Prevent missing checkpoint due to unstable topology from causing complete reindexing
[ https://issues.apache.org/jira/browse/OAK-3436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-3436: -- Fix Version/s: (was: 1.3.8) 1.3.9 > Prevent missing checkpoint due to unstable topology from causing complete > reindexing > > > Key: OAK-3436 > URL: https://issues.apache.org/jira/browse/OAK-3436 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: query >Reporter: Chetan Mehrotra >Assignee: Chetan Mehrotra > Fix For: 1.3.9, 1.0.23, 1.2.8 > > Attachments: AsyncIndexUpdateClusterTest.java, OAK-3436-0.patch > > > Async indexing logic relies on the embedding application to ensure that the > async indexing job is run as a singleton in a cluster. For Sling based apps it > depends on Sling Discovery support. At times it is seen that if the > topology is not stable, different cluster nodes can each consider themselves > leader and execute the async indexing job concurrently. > This can cause problems, as the cluster nodes might not see the same repository > state (due to write skew and eventual consistency) and might remove the > checkpoint which the other cluster node is still relying upon. For example, consider > a 2 node cluster N1 and N2 where both are performing async indexing. > # Base state - CP1 is the checkpoint for the "async" job > # N2 starts indexing and moves the checkpoint from CP1 to CP2, removing CP1. For Mongo the > checkpoints are saved in the {{settings}} collection > # N1 also decides to execute indexing but has not yet seen the latest > repository state, so it still thinks that CP1 is the base checkpoint and tries to > read it. However, CP1 is already removed from {{settings}}, and this makes N1 > think that the checkpoint is missing, so it decides to reindex everything! > To avoid this the topology must be stable, but at the Oak level we should still handle > such a case and avoid doing a full reindexing. 
So we would need to have a > {{MissingCheckpointStrategy}} similar to the {{MissingIndexEditorStrategy}} > done in OAK-2203 > Possible approaches: > # A1 - Fail the indexing run if the checkpoint is missing - A missing > checkpoint can have valid and invalid causes. Need to see what the valid > scenarios are in which a checkpoint can go missing > # A2 - When a checkpoint is created, also store its creation time. When a > checkpoint is found to be missing and it is a *recent* checkpoint, then fail the > run. For example, we would fail the run while the checkpoint found to be missing is > less than an hour old (for a just-started system, take the startup time into account) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
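Approach A2 above can be sketched as a simple age check; all class and method names here are hypothetical illustrations, not Oak's actual API:

```java
// Hypothetical sketch of approach A2 (illustrative names, not Oak's API):
// store a creation timestamp with each checkpoint; when the expected
// checkpoint is missing, reindex only if it was old enough that its
// removal is plausibly legitimate.
class MissingCheckpointCheck {

    static final long MAX_AGE_MILLIS = 60L * 60 * 1000; // one hour

    // true  -> checkpoint was old, a full reindex is acceptable
    // false -> checkpoint was recent, fail this indexing run instead
    static boolean allowReindex(long checkpointCreatedMillis, long nowMillis) {
        return nowMillis - checkpointCreatedMillis >= MAX_AGE_MILLIS;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // A checkpoint created five minutes ago should not be missing:
        // treat the miss as transient (unstable topology) and fail the run.
        System.out.println(allowReindex(now - 5 * 60 * 1000, now));      // false
        // A checkpoint older than an hour may have been cleaned up
        // legitimately, so reindexing is allowed.
        System.out.println(allowReindex(now - 2 * 60 * 60 * 1000, now)); // true
    }
}
```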
[jira] [Updated] (OAK-3486) Wrong evaluation of NOT NOT clause (see OAK-3371)
[ https://issues.apache.org/jira/browse/OAK-3486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-3486: -- Fix Version/s: (was: 1.3.8) 1.3.9 > Wrong evaluation of NOT NOT clause (see OAK-3371) > - > > Key: OAK-3486 > URL: https://issues.apache.org/jira/browse/OAK-3486 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: query >Affects Versions: 1.2.7 >Reporter: Thomas Mueller >Assignee: Thomas Mueller > Fix For: 1.3.9, 1.2.8 > > > For OAK-3371, I made an improved patch, but that was not used. My patch fixes > some problems with the code, for example: > OAK-3371 uses "instanceof" where it is not needed (AstElementFactory for > example). The current code is problematic. > One regression that OAK-3371 introduced is that "not not contains" is > converted to "not contains" instead of "contains" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
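The regression described in OAK-3486 comes down to a double-negation rule; a minimal stand-in AST (not Oak's actual AstElement classes) shows the intended simplification:

```java
// Minimal stand-in AST (not Oak's actual AstElement classes) showing the
// double-negation rule at the heart of the regression: NOT NOT x must
// simplify to x, while NOT x stays negated.
class NotNotDemo {

    interface Constraint {
        Constraint simplify();
    }

    static final class Contains implements Constraint {
        public Constraint simplify() { return this; }
        public String toString() { return "contains"; }
    }

    static final class Not implements Constraint {
        final Constraint inner;
        Not(Constraint inner) { this.inner = inner; }

        public Constraint simplify() {
            Constraint s = inner.simplify();
            if (s instanceof Not) {
                // NOT NOT x => x; the regression effectively returned NOT x here
                return ((Not) s).inner;
            }
            return new Not(s);
        }

        public String toString() { return "not " + inner; }
    }

    public static void main(String[] args) {
        Constraint c = new Not(new Not(new Contains()));
        System.out.println(c.simplify()); // contains
    }
}
```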
[jira] [Updated] (OAK-3054) IndexStatsMBean should provide some details if the async indexing is failing
[ https://issues.apache.org/jira/browse/OAK-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-3054: -- Fix Version/s: (was: 1.3.8) 1.3.9 > IndexStatsMBean should provide some details if the async indexing is failing > > > Key: OAK-3054 > URL: https://issues.apache.org/jira/browse/OAK-3054 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: query >Reporter: Chetan Mehrotra >Priority: Minor > Labels: resilience, tooling > Fix For: 1.3.9, 1.2.8 > > > If the background indexing fails for some reason it logs the exception for > the first time then it logs the exception like _The index update failed ..._. > After that if indexing continues to fail then no further logging is done so > as to avoid creating noise. > This poses a problem on long running system where original exception might > not be noticed and index does not show updated result. For such cases we > should expose the indexing health as part of {{IndexStatsMBean}}. Also we can > provide the last recorded exception. > Administrator can then check for MBean and enable debug logs for further > troubleshooting -- This message was sent by Atlassian JIRA (v6.3.4#6332)
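The improvement OAK-3054 asks for can be sketched as a small failure tracker behind the MBean; the class and method names below are illustrative, not the real {{IndexStatsMBean}} API:

```java
// Illustrative sketch (not the real IndexStatsMBean API): remember the
// latest indexing failure so an administrator can read it from the MBean
// even when repeated failures are no longer logged.
class IndexFailureTracker {

    private volatile String lastError;       // message of the last failure
    private volatile long failingSince;      // first failure of current streak
    private volatile int consecutiveFailures;

    synchronized void failed(Exception e, long now) {
        if (consecutiveFailures == 0) {
            failingSince = now;              // streak starts here
        }
        consecutiveFailures++;
        lastError = e.toString();
    }

    synchronized void succeeded() {
        consecutiveFailures = 0;
        lastError = null;
    }

    // What an MBean attribute could expose.
    String getStatus() {
        return consecutiveFailures == 0
                ? "healthy"
                : "failing since " + failingSince
                    + " (" + consecutiveFailures + " runs): " + lastError;
    }

    public static void main(String[] args) {
        IndexFailureTracker t = new IndexFailureTracker();
        t.failed(new RuntimeException("commit failed"), 1000L);
        t.failed(new RuntimeException("commit failed"), 2000L);
        System.out.println(t.getStatus());
        t.succeeded();
        System.out.println(t.getStatus()); // healthy
    }
}
```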
[jira] [Updated] (OAK-3268) Improve datastore resilience
[ https://issues.apache.org/jira/browse/OAK-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3268: --- Epic Name: datastore resilience > Improve datastore resilience > > > Key: OAK-3268 > URL: https://issues.apache.org/jira/browse/OAK-3268 > Project: Jackrabbit Oak > Issue Type: Epic > Components: blob >Reporter: Michael Marth > Labels: resilience > Fix For: 1.3.9 > > > As discussed bilaterally grouping the improvements for datastore resilience > in this issue for easier tracking -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3286) Persistent Cache improvements
[ https://issues.apache.org/jira/browse/OAK-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3286: --- Issue Type: Epic (was: Improvement) > Persistent Cache improvements > - > > Key: OAK-3286 > URL: https://issues.apache.org/jira/browse/OAK-3286 > Project: Jackrabbit Oak > Issue Type: Epic > Components: cache >Reporter: Michael Marth > Fix For: 1.3.9 > > > Issue for collecting various improvements to the persistent cache -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3286) Persistent Cache improvements
[ https://issues.apache.org/jira/browse/OAK-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3286: --- Priority: Minor (was: Major) > Persistent Cache improvements > - > > Key: OAK-3286 > URL: https://issues.apache.org/jira/browse/OAK-3286 > Project: Jackrabbit Oak > Issue Type: Epic > Components: cache >Reporter: Michael Marth >Priority: Minor > Fix For: 1.3.9 > > > Issue for collecting various improvements to the persistent cache -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3176) Provide an option to include a configured boost query while searching
[ https://issues.apache.org/jira/browse/OAK-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-3176: -- Fix Version/s: (was: 1.3.8) 1.3.9 > Provide an option to include a configured boost query while searching > - > > Key: OAK-3176 > URL: https://issues.apache.org/jira/browse/OAK-3176 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: solr >Reporter: Tommaso Teofili >Assignee: Tommaso Teofili > Fix For: 1.3.9, 1.2.8 > > > For tweaking relevancy it's sometimes useful to include a boost query that > gets applied at query time and modifies the ranking accordingly. > This can be done also by setting it by hand as a default parameter to the > /select request handler, but for convenience it'd be good if the Solr > instance configuration files wouldn't be touched. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3268) Improve datastore resilience
[ https://issues.apache.org/jira/browse/OAK-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3268: --- Issue Type: Epic (was: Improvement) > Improve datastore resilience > > > Key: OAK-3268 > URL: https://issues.apache.org/jira/browse/OAK-3268 > Project: Jackrabbit Oak > Issue Type: Epic > Components: blob >Reporter: Michael Marth > Labels: resilience > Fix For: 1.3.9 > > > As discussed bilaterally grouping the improvements for datastore resilience > in this issue for easier tracking -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3269) Improve Lucene indexer resilience
[ https://issues.apache.org/jira/browse/OAK-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3269: --- Epic Name: Lucene indexer resilience > Improve Lucene indexer resilience > - > > Key: OAK-3269 > URL: https://issues.apache.org/jira/browse/OAK-3269 > Project: Jackrabbit Oak > Issue Type: Epic > Components: lucene >Reporter: Michael Marth > Labels: resilience > Fix For: 1.3.9 > > > As discussed bilaterally grouping the improvements for Lucene indexer > resilience in this issue for easier tracking -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3287) DocumentMK revision GC
[ https://issues.apache.org/jira/browse/OAK-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3287: --- Fix Version/s: (was: 1.3.9) > DocumentMK revision GC > -- > > Key: OAK-3287 > URL: https://issues.apache.org/jira/browse/OAK-3287 > Project: Jackrabbit Oak > Issue Type: Epic > Components: mongomk, rdbmk >Reporter: Michael Marth > > Collector for various tasks on implementing DocMK revision GC -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3287) DocumentMK revision GC
[ https://issues.apache.org/jira/browse/OAK-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3287: --- Epic Name: DocumentMK revision GC > DocumentMK revision GC > -- > > Key: OAK-3287 > URL: https://issues.apache.org/jira/browse/OAK-3287 > Project: Jackrabbit Oak > Issue Type: Epic > Components: mongomk, rdbmk >Reporter: Michael Marth > > Collector for various tasks on implementing DocMK revision GC -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3284) Query performance improvements
[ https://issues.apache.org/jira/browse/OAK-3284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3284: --- Issue Type: Epic (was: Improvement) > Query performance improvements > -- > > Key: OAK-3284 > URL: https://issues.apache.org/jira/browse/OAK-3284 > Project: Jackrabbit Oak > Issue Type: Epic > Components: lucene, query >Reporter: Michael Marth > Labels: performance > > Collector issue for various performance improvements on query engine and > Lucene indexer -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3272) DocumentMK scalability improvements
[ https://issues.apache.org/jira/browse/OAK-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3272: --- Epic Name: DocumentMK scalability improvements > DocumentMK scalability improvements > --- > > Key: OAK-3272 > URL: https://issues.apache.org/jira/browse/OAK-3272 > Project: Jackrabbit Oak > Issue Type: Epic > Components: mongomk, rdbmk >Reporter: Michael Marth > Labels: scalability > Fix For: 1.3.10 > > > Collector issue for tracking DocMK issues concerning scalability -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3272) DocumentMK scalability improvements
[ https://issues.apache.org/jira/browse/OAK-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-3272: --- Fix Version/s: (was: 1.3.10) > DocumentMK scalability improvements > --- > > Key: OAK-3272 > URL: https://issues.apache.org/jira/browse/OAK-3272 > Project: Jackrabbit Oak > Issue Type: Epic > Components: mongomk, rdbmk >Reporter: Michael Marth > Labels: scalability > > Collector issue for tracking DocMK issues concerning scalability -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-2507) Truncate journal.log after off line compaction
[ https://issues.apache.org/jira/browse/OAK-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig resolved OAK-2507. Resolution: Fixed Fix Version/s: (was: 1.3.9) 1.3.8 Fixed at http://svn.apache.org/viewvc?rev=1708044&view=rev > Truncate journal.log after off line compaction > -- > > Key: OAK-2507 > URL: https://issues.apache.org/jira/browse/OAK-2507 > Project: Jackrabbit Oak > Issue Type: Sub-task > Components: segmentmk >Reporter: Michael Dürig >Assignee: Michael Dürig >Priority: Minor > Labels: compaction, gc > Fix For: 1.3.8 > > > After off line compaction the repository contains a single revision. However > the journal.log file will still contain the trail of all revisions that have > been removed during the compaction process. I suggest we truncate the > journal.log to only contain the latest revision created during compaction. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
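The truncation suggested in OAK-2507 amounts to keeping only the newest journal entry; a minimal sketch with plain list handling (not the actual FileStore code):

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the idea (not the actual FileStore implementation):
// after offline compaction the journal's revision trail collapses to the
// single revision that compaction produced.
class JournalTruncate {

    // Keep only the last (newest) line of the journal.
    static List<String> truncate(List<String> journalLines) {
        int n = journalLines.size();
        return n <= 1 ? journalLines : journalLines.subList(n - 1, n);
    }

    public static void main(String[] args) {
        List<String> journal = Arrays.asList("rev-a", "rev-b", "rev-c");
        System.out.println(truncate(journal)); // [rev-c]
    }
}
```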
[jira] [Updated] (OAK-2881) ConsistencyChecker#checkConsistency can't cope with inconsistent journal
[ https://issues.apache.org/jira/browse/OAK-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-2881: --- Fix Version/s: (was: 1.3.9) 1.3.8 > ConsistencyChecker#checkConsistency can't cope with inconsistent journal > > > Key: OAK-2881 > URL: https://issues.apache.org/jira/browse/OAK-2881 > Project: Jackrabbit Oak > Issue Type: Bug > Components: run >Reporter: Michael Dürig >Assignee: Michael Dürig >Priority: Minor > Labels: resilience, tooling > Fix For: 1.3.8 > > > When running the consistency checker against a repository with a corrupt > journal, it fails with an {{IllegalArgumentException}} instead of trying to skip over invalid > revision identifiers: > {noformat} > Exception in thread "main" java.lang.IllegalArgumentException: Bad record > identifier: foobar > at > org.apache.jackrabbit.oak.plugins.segment.RecordId.fromString(RecordId.java:57) > at > org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:227) > at > org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:178) > at > org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:156) > at > org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:166) > at > org.apache.jackrabbit.oak.plugins.segment.file.FileStore$ReadOnlyStore.<init>(FileStore.java:805) > at > org.apache.jackrabbit.oak.plugins.segment.file.tooling.ConsistencyChecker.<init>(ConsistencyChecker.java:108) > at > org.apache.jackrabbit.oak.plugins.segment.file.tooling.ConsistencyChecker.checkConsistency(ConsistencyChecker.java:70) > at org.apache.jackrabbit.oak.run.Main.check(Main.java:701) > at org.apache.jackrabbit.oak.run.Main.main(Main.java:158) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
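The resilience OAK-2881 asks for can be sketched as scanning the journal from newest to oldest and skipping entries that fail to parse, instead of aborting on the first bad identifier. The parsing below is a stand-in, not Oak's actual RecordId format:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch (not Oak's actual RecordId parsing): walk the journal
// from newest to oldest and skip entries that do not parse, rather than
// failing the whole consistency check with an IllegalArgumentException.
class JournalScan {

    // Stand-in validity check for a revision identifier.
    static boolean isValidRecordId(String s) {
        return s != null && s.matches("[0-9a-f-]+:[0-9]+");
    }

    // Returns the newest parsable revision, or null if none exists.
    static String newestValidRevision(List<String> journal) {
        for (int i = journal.size() - 1; i >= 0; i--) {
            String entry = journal.get(i);
            if (isValidRecordId(entry)) {
                return entry;
            }
            // skip over the invalid identifier instead of throwing
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> journal = Arrays.asList("aa-bb:8", "foobar", "cc-dd:16", "garbage");
        System.out.println(newestValidRevision(journal)); // cc-dd:16
    }
}
```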
[jira] [Resolved] (OAK-2881) ConsistencyChecker#checkConsistency can't cope with inconsistent journal
[ https://issues.apache.org/jira/browse/OAK-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig resolved OAK-2881. Resolution: Fixed Fixed at http://svn.apache.org/viewvc?rev=1708045&view=rev > ConsistencyChecker#checkConsistency can't cope with inconsistent journal > > > Key: OAK-2881 > URL: https://issues.apache.org/jira/browse/OAK-2881 > Project: Jackrabbit Oak > Issue Type: Bug > Components: run >Reporter: Michael Dürig >Assignee: Michael Dürig >Priority: Minor > Labels: resilience, tooling > Fix For: 1.3.9 > > > When running the consistency checker against a repository with a corrupt > journal, it fails with an {{IllegalArgumentException}} instead of trying to skip over invalid > revision identifiers: > {noformat} > Exception in thread "main" java.lang.IllegalArgumentException: Bad record > identifier: foobar > at > org.apache.jackrabbit.oak.plugins.segment.RecordId.fromString(RecordId.java:57) > at > org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:227) > at > org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:178) > at > org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:156) > at > org.apache.jackrabbit.oak.plugins.segment.file.FileStore.<init>(FileStore.java:166) > at > org.apache.jackrabbit.oak.plugins.segment.file.FileStore$ReadOnlyStore.<init>(FileStore.java:805) > at > org.apache.jackrabbit.oak.plugins.segment.file.tooling.ConsistencyChecker.<init>(ConsistencyChecker.java:108) > at > org.apache.jackrabbit.oak.plugins.segment.file.tooling.ConsistencyChecker.checkConsistency(ConsistencyChecker.java:70) > at org.apache.jackrabbit.oak.run.Main.check(Main.java:701) > at org.apache.jackrabbit.oak.run.Main.main(Main.java:158) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-3507) Virtual index rows should have paths as 'null'
Vikas Saurabh created OAK-3507: -- Summary: Virtual index rows should have paths as 'null' Key: OAK-3507 URL: https://issues.apache.org/jira/browse/OAK-3507 Project: Jackrabbit Oak Issue Type: Improvement Components: query Reporter: Vikas Saurabh Priority: Minor While discussing OAK-3156, it was decided that virtual rows would return their path as {{null}}. Opening this issue so that OAK-3156 can be resolved with the status quo for the value returned by virtual rows. Note: Having {{null}} paths for virtual rows is semantically correct, as there is no defined meaning of a path for those rows. But it brings in some backward incompatibility issues: * existing queries which contain something like {{... AND jcr:path='/'}} would not work anymore * XPath has paths built in implicitly, so XPath would need to be unsupported for such queries PS: Do look at the comments on OAK-3156 for more details on how/why we arrived here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3506) Uniformization of compaction log messages
[ https://issues.apache.org/jira/browse/OAK-3506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14953214#comment-14953214 ] Francesco Mari commented on OAK-3506: - I think it's a good idea. > Uniformization of compaction log messages > - > > Key: OAK-3506 > URL: https://issues.apache.org/jira/browse/OAK-3506 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Valentin Olteanu > Attachments: compaction_logs.git.patch > > > The logs generated during different phases of tar garbage collection > (compaction) are currently quite heterogeneous and difficult to grep/parse. > I propose with the attached patch to uniformize these logs, changing the > following: > # all logs start with the prefix {{TarMK GarbageCollection #\{\}:}} > # different phases of garbage collection are easier to identify by the first > word after the prefix, e.g. estimation, compaction, cleanup > # all values are also printed in a standard unit, with the following format: > {{<human readable> (<exact value>)}}. This makes extraction of > information much easier. > # messages corresponding to the same cycle (run) can be grouped by including > the runId in the prefix. > Note1: I don't have enough visibility, but the changes might impact any > system relying on the old format. Yet, I've seen they have changed before so > this might not be a real concern. > Note2: the runId is implemented as a static variable, which is reset every > time the class is reloaded (e.g. at restart), so it is unique only during one > run. 
> Below you can find an excerpt of old logs and new logs to compare: > NEW: > {code} > 12.10.2015 16:11:56.705 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: started > 12.10.2015 16:11:56.707 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: estimation started > 12.10.2015 16:11:59.275 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: estimation completed in 2.569 s (2567 ms). 
Gain is 16% > or 1.1 GB/1.3 GB (1062364160/1269737472 bytes), so running compaction > 12.10.2015 16:11:59.275 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: compaction started, > strategy=CompactionStrategy{paused=false, cloneBinaries=false, > cleanupType=CLEAN_OLD, olderThan=3600, memoryThreshold=5, > persistedCompactionMap=true, retryCount=5, forceAfterFail=true, > compactionStart=1444659116706, offlineCompaction=false} > 12.10.2015 16:12:05.839 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.Compactor Finished compaction: > 420022 nodes, 772259 properties, 20544 binaries. > 12.10.2015 16:12:07.459 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: compaction completed in 8.184 s (8183 ms), after 0 > cycles > 12.10.2015 16:12:11.912 *INFO* [TarMK flush thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:12:11 CEST 2015, previous max duration 10ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: cleanup started. 
Current repository size is 1.4 GB > (1368899584 bytes) > 12.10.2015 16:12:12.368 *INFO* [TarMK flush thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:12:11 CEST 2015, previous max duration 10ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: cleanup marking
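The standard value format proposed above, a human-readable value followed by the exact raw value in parentheses as in "1.1 GB (1062364160 bytes)", can be sketched with a small helper. This is illustrative only, not the formatting code in the attached patch:

```java
import java.util.Locale;

// Illustrative helper (not the code in the attached patch) for the proposed
// "<human readable> (<exact value>)" format, e.g. "1.1 GB (1062364160 bytes)".
class GcLogFormat {

    // SI (1000-based) human-readable size, one decimal place.
    static String humanReadable(long bytes) {
        if (bytes < 1000) {
            return bytes + " B";
        }
        int exp = (int) (Math.log10(bytes) / 3);   // 1=K, 2=M, 3=G, ...
        char unit = "KMGTPE".charAt(exp - 1);
        return String.format(Locale.ROOT, "%.1f %sB",
                bytes / Math.pow(1000, exp), unit);
    }

    // Human-readable value plus the exact raw value, easy to grep and parse.
    static String size(long bytes) {
        return humanReadable(bytes) + " (" + bytes + " bytes)";
    }

    public static void main(String[] args) {
        System.out.println(size(1062364160L)); // 1.1 GB (1062364160 bytes)
    }
}
```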
[jira] [Commented] (OAK-3156) Lucene suggestions index definition can't be restricted to a specific type of node
[ https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14953234#comment-14953234 ] Vikas Saurabh commented on OAK-3156: Opened OAK-3507 to track 'virtual rows need to have path set as {{null}}'. [~tmueller], [~teofili], can we please apply this [patch|^OAK-3156-take2.patch] (option 1, second patch) to resolve this issue? > Lucene suggestions index definition can't be restricted to a specific type of > node > -- > > Key: OAK-3156 > URL: https://issues.apache.org/jira/browse/OAK-3156 > Project: Jackrabbit Oak > Issue Type: Bug > Components: lucene >Reporter: Vikas Saurabh >Assignee: Tommaso Teofili >Priority: Blocker > Fix For: 1.3.9 > > Attachments: LuceneIndexSuggestionTest.java, OAK-3156-take2.patch, > OAK-3156.patch > > > While performing a suggestor query like > {code} > SELECT [rep:suggest()] as suggestion FROM [nt:unstructured] WHERE > suggest('foo') > {code} > Suggestor does not provide any result. In current implementation, > [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions] > in Oak work only for index definitions for {{nt:base}} nodetype. > So, an index definition like: > {code:xml} > jcr:primaryType="oak:QueryIndexDefinition" > async="async" > compatVersion="{Long}2" > type="lucene"> > > > > jcr:primaryType="nt:unstructured" > analyzed="{Boolean}true" > name="description" > propertyIndex="{Boolean}true" > useInSuggest="{Boolean}true"/> > > > > > {code} > works, but if we change nodetype to {{nt:unstructured}} like: > {code:xml} > jcr:primaryType="oak:QueryIndexDefinition" > async="async" > compatVersion="{Long}2" > type="lucene"> > > > > jcr:primaryType="nt:unstructured" > analyzed="{Boolean}true" > name="description" > propertyIndex="{Boolean}true" > useInSuggest="{Boolean}true"/> > > > > > {code} > , it won't work. 
> The issue is that the suggestor implementation essentially is passing a pseudo > row with path={{/}}: > {code:title=LucenePropertyIndex.java} > private boolean loadDocs() { > ... > queue.add(new LuceneResultRow(suggestedWords)); > ... > {code} > and > {code:title=LucenePropertyIndex.java} > LuceneResultRow(Iterable<String> suggestWords) { > this.path = "/"; > this.score = 1.0d; > this.suggestWords = suggestWords; > } > {code} > Due to path being set to "/", {{SelectorImpl}} later filters out the result > as {{rep:root}} (primary type of "/") isn't a {{nt:unstructured}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
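The null-path behaviour now tracked in OAK-3507 can be sketched as follows; the classes mirror the quoted LuceneResultRow excerpt in simplified form and are not the actual Oak sources:

```java
import java.util.Arrays;

// Simplified stand-ins for the quoted classes (not the actual Oak sources):
// a virtual suggestion row carries path == null, and the selector accepts
// it without applying the node type check that filtered out path "/".
class VirtualRowDemo {

    static final class ResultRow {
        final String path;                 // null marks a virtual row
        final Iterable<String> suggestWords;

        ResultRow(Iterable<String> suggestWords) {
            this.path = null;              // previously "/", which made
                                           // the selector drop the row
            this.suggestWords = suggestWords;
        }

        boolean isVirtual() {
            return path == null;
        }
    }

    // Selector-side acceptance: node type restrictions apply only to rows
    // backed by a real node.
    static boolean accept(ResultRow row, boolean nodeTypeMatches) {
        return row.isVirtual() || nodeTypeMatches;
    }

    public static void main(String[] args) {
        ResultRow row = new ResultRow(Arrays.asList("foo"));
        System.out.println(accept(row, false)); // true: virtual rows survive
    }
}
```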
[jira] [Resolved] (OAK-3504) CopyOnRead directory should not schedule a copy task for non existent file
[ https://issues.apache.org/jira/browse/OAK-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chetan Mehrotra resolved OAK-3504. -- Resolution: Fixed Fixed * trunk - 1708105 * 1.0 - 1708108 * 1.2 - 1708110 > CopyOnRead directory should not schedule a copy task for non existent file > -- > > Key: OAK-3504 > URL: https://issues.apache.org/jira/browse/OAK-3504 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene >Reporter: Chetan Mehrotra >Assignee: Chetan Mehrotra >Priority: Minor > Fix For: 1.3.9, 1.0.23, 1.2.8 > > > CopyOnWriteDirectory currently schedules a copy task for a file which does > not exist. This eventually results in a FileNotFoundException, and that gets logged > as a warning. In normal usage this should not happen in the first place, i.e. a > lookup of a non-existent file. However in case of corruption it is seen that > Lucene tries to look up segment files with increasing generation numbers and > they flood the log with such exceptions. > In such a case no file copy should be scheduled; instead, the lookup > call should be delegated directly to the remote directory > {noformat} > 28.09.2015 12:11:08.745 *WARN* [oak-lucene-0] > org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier Error occurred > while copying file [segments_25y] from OakDirectory@5970597 > lockFactory=org.apache.lucene.store.NoLockFactory@51b451b4 to > NIOFSDirectory@/usr/cq/repository/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/2 > > lockFactory=NativeFSLockFactory@/usr/cq/repository/_cqa/repository/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/2 > java.io.FileNotFoundException: segments_25y > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory.openInput(OakDirectory.java:133) > at org.apache.lucene.store.Directory.copy(Directory.java:185) > at > org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnReadDirectory$1.run(IndexCopier.java:312) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) > at java.lang.Thread.run(Thread.java:767) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
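The fix in OAK-3504 can be sketched with a minimal stand-in for the remote/local directory pair; this models the decision logic only and is not the real Lucene Directory API:

```java
import java.util.HashSet;
import java.util.Set;

// Stand-in for the remote/local directory pair (not the real Lucene
// Directory API): only schedule a local copy when the file actually exists
// remotely; otherwise delegate the read to the remote directory unchanged.
class CopyOnReadSketch {

    final Set<String> remoteFiles = new HashSet<>();
    final Set<String> scheduledCopies = new HashSet<>();

    // Returns where the read is served from; never schedules a copy for a
    // file the remote directory does not contain.
    String openInput(String name) {
        if (!remoteFiles.contains(name)) {
            return "remote";            // delegate; any error surfaces there
        }
        scheduledCopies.add(name);      // schedule background copy-on-read
        return "local-after-copy";
    }

    public static void main(String[] args) {
        CopyOnReadSketch dir = new CopyOnReadSketch();
        dir.remoteFiles.add("segments_1");
        System.out.println(dir.openInput("segments_1"));   // local-after-copy
        System.out.println(dir.openInput("segments_25y")); // remote
        System.out.println(dir.scheduledCopies);           // [segments_1]
    }
}
```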
[jira] [Commented] (OAK-3325) MissingIndexProviderStrategy should warn when setting the reindex flag
[ https://issues.apache.org/jira/browse/OAK-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953067#comment-14953067 ] Alex Parvulescu commented on OAK-3325: -- * merged to 1.2 with r1708101 * merged to 1.0 with r1708102 > MissingIndexProviderStrategy should warn when setting the reindex flag > -- > > Key: OAK-3325 > URL: https://issues.apache.org/jira/browse/OAK-3325 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core >Reporter: Alex Parvulescu >Assignee: Alex Parvulescu > Fix For: 1.3.7, 1.0.23, 1.2.8 > > Attachments: OAK-3325.patch > > > Currently the _MissingIndexProviderStrategy_ would silently flip the reindex > flag on missing provider types. We need to add some warning logs to know when > this happens for the synchronous indexes too. > [0] > https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/IndexUpdate.java#L323 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
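The warning OAK-3325 adds can be sketched as below; method names and the message text are illustrative (Oak uses slf4j, java.util.logging stands in here):

```java
import java.util.logging.Logger;

// Illustrative sketch of the fix: when the strategy flips the reindex flag
// because an index provider type is missing, emit a warning instead of
// doing it silently. Names and message text are hypothetical.
class MissingProviderWarn {

    private static final Logger LOG = Logger.getLogger("IndexUpdate");

    // Returns the message it warned with, so the behaviour is observable.
    static String setReindexFlag(String indexPath, String missingType) {
        String msg = "Missing index provider of type [" + missingType
                + "], requesting reindex of [" + indexPath + "]";
        LOG.warning(msg);   // previously: flag flipped with no log at all
        return msg;
    }

    public static void main(String[] args) {
        System.out.println(setReindexFlag("/oak:index/lucene", "lucene"));
    }
}
```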
[jira] [Updated] (OAK-3325) MissingIndexProviderStrategy should warn when setting the reindex flag
[ https://issues.apache.org/jira/browse/OAK-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Parvulescu updated OAK-3325: - Fix Version/s: 1.2.8 1.0.23 > MissingIndexProviderStrategy should warn when setting the reindex flag > -- > > Key: OAK-3325 > URL: https://issues.apache.org/jira/browse/OAK-3325 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core >Reporter: Alex Parvulescu >Assignee: Alex Parvulescu > Fix For: 1.3.7, 1.0.23, 1.2.8 > > Attachments: OAK-3325.patch > > > Currently the _MissingIndexProviderStrategy_ would silently flip the reindex > flag on missing provider types. We need to add some warning logs to know when > this happens for the synchronous indexes too. > [0] > https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/IndexUpdate.java#L323 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3156) Lucene suggestions index definition can't be restricted to a specific type of node
[ https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953090#comment-14953090 ] Tommaso Teofili commented on OAK-3156: -- also I think a discussion around result a suggestion per row would fit better into a dedicated issue, as here I'm more concerned about the bug at hand regarding node type restriction not being taken into account properly. > Lucene suggestions index definition can't be restricted to a specific type of > node > -- > > Key: OAK-3156 > URL: https://issues.apache.org/jira/browse/OAK-3156 > Project: Jackrabbit Oak > Issue Type: Bug > Components: lucene >Reporter: Vikas Saurabh >Assignee: Tommaso Teofili >Priority: Blocker > Fix For: 1.3.9 > > Attachments: LuceneIndexSuggestionTest.java, OAK-3156-take2.patch, > OAK-3156.patch > > > While performing a suggestor query like > {code} > SELECT [rep:suggest()] as suggestion FROM [nt:unstructured] WHERE > suggest('foo') > {code} > Suggestor does not provide any result. In current implementation, > [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions] > in Oak work only for index definitions for {{nt:base}} nodetype. > So, an index definition like: > {code:xml} > jcr:primaryType="oak:QueryIndexDefinition" > async="async" > compatVersion="{Long}2" > type="lucene"> > > > > jcr:primaryType="nt:unstructured" > analyzed="{Boolean}true" > name="description" > propertyIndex="{Boolean}true" > useInSuggest="{Boolean}true"/> > > > > > {code} > works, but if we change nodetype to {{nt:unstructured}} like: > {code:xml} > jcr:primaryType="oak:QueryIndexDefinition" > async="async" > compatVersion="{Long}2" > type="lucene"> > > > > jcr:primaryType="nt:unstructured" > analyzed="{Boolean}true" > name="description" > propertyIndex="{Boolean}true" > useInSuggest="{Boolean}true"/> > > > > > {code} > , it won't work. 
> The issue is that suggestor implementation essentially is passing a pseudo > row with path=/.: > {code:title=LucenePropertyIndex.java} > private boolean loadDocs() { > ... > queue.add(new LuceneResultRow(suggestedWords)); > ... > {code} > and > {code:title=LucenePropertyIndex.java} > LuceneResultRow(Iterable suggestWords) { > this.path = "/"; > this.score = 1.0d; > this.suggestWords = suggestWords; > } > {code} > Due to path being set to "/", {{SelectorImpl}} later filters out the result > as {{rep:root}} (primary type of "/") isn't a {{nt:unstructured}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
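A minimal sketch of the fix direction described above (a hypothetical stand-in for Oak's LuceneResultRow, not the actual OAK-3156 patch): carry the path of the matching document instead of hard-coding "/", so that SelectorImpl's node type check runs against a node that can satisfy the {{nt:unstructured}} restriction.

```java
import java.util.Arrays;

// Hypothetical stand-in for Oak's LuceneResultRow illustrating the fix
// direction: keep the real document path instead of hard-coding "/".
public class SuggestionRowSketch {
    final String path;
    final double score;
    final Iterable<String> suggestWords;

    SuggestionRowSketch(String path, Iterable<String> suggestWords) {
        this.path = path;          // path of the matching document, not "/"
        this.score = 1.0d;
        this.suggestWords = suggestWords;
    }

    public static void main(String[] args) {
        SuggestionRowSketch row =
                new SuggestionRowSketch("/content/foo", Arrays.asList("foo", "foobar"));
        // SelectorImpl's node type check would now run against /content/foo
        System.out.println(row.path);
    }
}
```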
[jira] [Comment Edited] (OAK-3156) Lucene suggestions index definition can't be restricted to a specific type of node
[ https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953090#comment-14953090 ] Tommaso Teofili edited comment on OAK-3156 at 10/12/15 1:05 PM: also I think a discussion around return a suggestion per row would fit better into a dedicated issue, as here I'm more concerned about the bug at hand regarding node type restriction not being taken into account properly. was (Author: teofili): also I think a discussion around result a suggestion per row would fit better into a dedicated issue, as here I'm more concerned about the bug at hand regarding node type restriction not being taken into account properly. > Lucene suggestions index definition can't be restricted to a specific type of > node > -- > > Key: OAK-3156 > URL: https://issues.apache.org/jira/browse/OAK-3156 > Project: Jackrabbit Oak > Issue Type: Bug > Components: lucene >Reporter: Vikas Saurabh >Assignee: Tommaso Teofili >Priority: Blocker > Fix For: 1.3.9 > > Attachments: LuceneIndexSuggestionTest.java, OAK-3156-take2.patch, > OAK-3156.patch > > > While performing a suggestor query like > {code} > SELECT [rep:suggest()] as suggestion FROM [nt:unstructured] WHERE > suggest('foo') > {code} > Suggestor does not provide any result. In current implementation, > [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions] > in Oak work only for index definitions for {{nt:base}} nodetype. 
> So, an index definition like: > {code:xml} > jcr:primaryType="oak:QueryIndexDefinition" > async="async" > compatVersion="{Long}2" > type="lucene"> > > > > jcr:primaryType="nt:unstructured" > analyzed="{Boolean}true" > name="description" > propertyIndex="{Boolean}true" > useInSuggest="{Boolean}true"/> > > > > > {code} > works, but if we change nodetype to {{nt:unstructured}} like: > {code:xml} > jcr:primaryType="oak:QueryIndexDefinition" > async="async" > compatVersion="{Long}2" > type="lucene"> > > > > jcr:primaryType="nt:unstructured" > analyzed="{Boolean}true" > name="description" > propertyIndex="{Boolean}true" > useInSuggest="{Boolean}true"/> > > > > > {code} > , it won't work. > The issue is that suggestor implementation essentially is passing a pseudo > row with path=/.: > {code:title=LucenePropertyIndex.java} > private boolean loadDocs() { > ... > queue.add(new LuceneResultRow(suggestedWords)); > ... > {code} > and > {code:title=LucenePropertyIndex.java} > LuceneResultRow(Iterable suggestWords) { > this.path = "/"; > this.score = 1.0d; > this.suggestWords = suggestWords; > } > {code} > Due to path being set to "/", {{SelectorImpl}} later filters out the result > as {{rep:root}} (primary type of "/") isn't a {{nt:unstructured}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3504) CopyOnRead directory should not schedule a copy task for non existent file
[ https://issues.apache.org/jira/browse/OAK-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chetan Mehrotra updated OAK-3504: - Description: CopyOnWriteDirectory currently schedules a copy task for a file which does not exist. This eventually results in FileNotFoundException and that gets logged as a warning. In normal usage this should not happen in the first place, i.e. a lookup of a non-existent file. However in case of corruption it is seen that Lucene tries to look up segment files with increasing generation numbers and they flood the log with such exceptions. In such a case no file copy should be scheduled; instead the lookup call should be delegated directly to the remote directory {noformat} 28.09.2015 12:11:08.745 *WARN* [oak-lucene-0] org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier Error occurred while copying file [segments_25y] from OakDirectory@5970597 lockFactory=org.apache.lucene.store.NoLockFactory@51b451b4 to NIOFSDirectory@/usr/cq/repository/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/2 lockFactory=NativeFSLockFactory@/usd/as67709a/data/v04.cqa.a/repository/_cqa/repository/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/2 java.io.FileNotFoundException: segments_25y at org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory.openInput(OakDirectory.java:133) at org.apache.lucene.store.Directory.copy(Directory.java:185) at org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnReadDirectory$1.run(IndexCopier.java:312) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:767) {noformat} was: CopyOnWriteDirectory currently schedules a copy task for a file which does not exist. This eventually results in FileNotFoundException and that gets logged as a warning. In normal usage this should not happen in the first place, i.e. 
a lookup of a non-existent file. However in case of corruption it is seen that Lucene tries to look up segment files with increasing generation numbers and they flood the log with such exceptions. In such a case no file copy should be scheduled; instead the lookup call should be delegated directly to the remote directory {noformat} 28.09.2015 12:11:08.745 *WARN* [oak-lucene-0] org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier Error occurred while copying file [segments_25y] from OakDirectory@5970597 lockFactory=org.apache.lucene.store.NoLockFactory@51b451b4 to NIOFSDirectory@/usd/as67709a/data/v04.cqa.a/repository/_cqa/repository/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/2 lockFactory=NativeFSLockFactory@/usd/as67709a/data/v04.cqa.a/repository/_cqa/repository/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/2 java.io.FileNotFoundException: segments_25y at org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory.openInput(OakDirectory.java:133) at org.apache.lucene.store.Directory.copy(Directory.java:185) at org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnReadDirectory$1.run(IndexCopier.java:312) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:767) {noformat} > CopyOnRead directory should not schedule a copy task for non existent file > -- > > Key: OAK-3504 > URL: https://issues.apache.org/jira/browse/OAK-3504 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene >Reporter: Chetan Mehrotra >Assignee: Chetan Mehrotra >Priority: Minor > Fix For: 1.3.9, 1.0.23, 1.2.8 > > > CopyOnWriteDirectory currently schedules a copy task for a file which does > not exist. This eventually results in FileNotFoundException and that gets logged > as a warning. In normal usage this should not happen in the first place, i.e. 
> a lookup of a non-existent file. However in case of corruption it is seen that > Lucene tries to look up segment files with increasing generation numbers and > they flood the log with such exceptions. > In such a case no file copy should be scheduled; instead the lookup > call should be delegated directly to the remote directory > {noformat} > 28.09.2015 12:11:08.745 *WARN* [oak-lucene-0] > org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier Error occurred > while copying file [segments_25y] from OakDirectory@5970597 > lockFactory=org.apache.lucene.store.NoLockFactory@51b451b4 to >
[jira] [Created] (OAK-3504) CopyOnRead directory should not schedule a copy task for non existent file
Chetan Mehrotra created OAK-3504: Summary: CopyOnRead directory should not schedule a copy task for non existent file Key: OAK-3504 URL: https://issues.apache.org/jira/browse/OAK-3504 Project: Jackrabbit Oak Issue Type: Improvement Components: lucene Reporter: Chetan Mehrotra Assignee: Chetan Mehrotra Priority: Minor Fix For: 1.3.9, 1.0.23, 1.2.8 CopyOnWriteDirectory currently schedules a copy task for a file which does not exist. This eventually results in FileNotFoundException and that gets logged as a warning. In normal usage this should not happen in the first place, i.e. a lookup of a non-existent file. However in case of corruption it is seen that Lucene tries to look up segment files with increasing generation numbers and they flood the log with such exceptions. In such a case no file copy should be scheduled; instead the lookup call should be delegated directly to the remote directory {noformat} 28.09.2015 12:11:08.745 *WARN* [oak-lucene-0] org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier Error occurred while copying file [segments_25y] from OakDirectory@5970597 lockFactory=org.apache.lucene.store.NoLockFactory@51b451b4 to NIOFSDirectory@/usd/as67709a/data/v04.cqa.a/repository/_cqa/repository/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/2 lockFactory=NativeFSLockFactory@/usd/as67709a/data/v04.cqa.a/repository/_cqa/repository/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/2 java.io.FileNotFoundException: segments_25y at org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory.openInput(OakDirectory.java:133) at org.apache.lucene.store.Directory.copy(Directory.java:185) at org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier$CopyOnReadDirectory$1.run(IndexCopier.java:312) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:767) {noformat} -- This 
message was sent by Atlassian JIRA (v6.3.4#6332)
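The behavior proposed in OAK-3504 can be sketched as follows (hypothetical classes, not Oak's IndexCopier API): check for the file in the remote directory first, and only schedule a local copy when it exists, so a missing file fails fast without a queued copy task.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch for OAK-3504: only schedule a local copy when the
// file exists remotely; otherwise delegate the lookup so the missing file
// fails fast without a queued copy task. Not Oak's IndexCopier API.
public class CopyOnReadSketch {
    private final Set<String> remoteFiles;
    private final Set<String> scheduledCopies = new HashSet<>();

    CopyOnReadSketch(Set<String> remoteFiles) {
        this.remoteFiles = remoteFiles;
    }

    /** Returns true and schedules a copy only when the file exists remotely. */
    boolean openInput(String name) {
        if (!remoteFiles.contains(name)) {
            // No copy task: the call would be delegated to the remote
            // directory, which reports FileNotFoundException itself.
            return false;
        }
        scheduledCopies.add(name);
        return true;
    }

    public static void main(String[] args) {
        Set<String> remote = new HashSet<>();
        remote.add("segments_1");
        CopyOnReadSketch dir = new CopyOnReadSketch(remote);
        System.out.println(dir.openInput("segments_25y"));
        System.out.println(dir.openInput("segments_1"));
    }
}
```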
[jira] [Created] (OAK-3506) Uniformization of compaction log messages
Valentin Olteanu created OAK-3506: - Summary: Uniformization of compaction log messages Key: OAK-3506 URL: https://issues.apache.org/jira/browse/OAK-3506 Project: Jackrabbit Oak Issue Type: Improvement Components: core Reporter: Valentin Olteanu The logs generated during different phases of tar garbage collection (compaction) are currently quite heterogeneous and difficult to grep/parse. I propose with the attached patch to uniformize these logs, changing the following: # all logs start with the prefix {{TarMK GarbageCollection \{\}#:}} # different phases of garbage collection are easier to identify by the first word after prefix, e.g. estimation, compaction, cleanup # all values are also printed in a standard unit, with the following format: {{ ()}}. This makes extraction of information much easier. # messages corresponding to the same cycle (run) can be grouped by including the runId in the prefix. Note1: I don't have enough visibility, but the changes might impact any system relying on the old format. Yet, I've seen they have changed before so this might not be a real concern. Note2: the runId is implemented as a static variable, which is reset every time the class is reloaded (e.g. at restart), so it is unique only during one run. 
Below you can find an excerpt of old logs and new logs to compare: NEW: {code} 12.10.2015 16:11:56.705 *INFO* [TarMK compaction thread [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK GarbageCollection #1: started 12.10.2015 16:11:56.707 *INFO* [TarMK compaction thread [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK GarbageCollection #1: estimation started 12.10.2015 16:11:59.275 *INFO* [TarMK compaction thread [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK GarbageCollection #1: estimation completed in 2.569 s (2567 ms). 
Gain is 16% or 1.1 GB/1.3 GB (1062364160/1269737472 bytes), so running compaction 12.10.2015 16:11:59.275 *INFO* [TarMK compaction thread [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK GarbageCollection #1: compaction started, strategy=CompactionStrategy{paused=false, cloneBinaries=false, cleanupType=CLEAN_OLD, olderThan=3600, memoryThreshold=5, persistedCompactionMap=true, retryCount=5, forceAfterFail=true, compactionStart=1444659116706, offlineCompaction=false} 12.10.2015 16:12:05.839 *INFO* [TarMK compaction thread [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] org.apache.jackrabbit.oak.plugins.segment.Compactor Finished compaction: 420022 nodes, 772259 properties, 20544 binaries. 12.10.2015 16:12:07.459 *INFO* [TarMK compaction thread [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK GarbageCollection #1: compaction completed in 8.184 s (8183 ms), after 0 cycles 12.10.2015 16:12:11.912 *INFO* [TarMK flush thread [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], active since Mon Oct 12 16:12:11 CEST 2015, previous max duration 10ms] org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK GarbageCollection #1: cleanup started. 
Current repository size is 1.4 GB (1368899584 bytes) 12.10.2015 16:12:12.368 *INFO* [TarMK flush thread [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], active since Mon Oct 12 16:12:11 CEST 2015, previous max duration 10ms] org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK GarbageCollection #1: cleanup marking file for deletion: data8a.tar 12.10.2015 16:12:12.434 *INFO* [TarMK flush thread [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], active since Mon Oct 12 16:12:11 CEST 2015, previous max duration 10ms] org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK GarbageCollection #1: cleanup completed in 522.8 ms (522 ms). Post cleanup size
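The message shape proposed above (run-id prefix, phase keyword first, sizes printed as "human-readable (raw)") can be sketched with two small helpers; the method names are illustrative, not those used in compaction_logs.git.patch:

```java
import java.util.Locale;

// Sketch of the proposed uniform log shape for OAK-3506: a run-id prefix,
// the phase keyword first, and sizes printed as "human-readable (raw)".
// Helper names are illustrative, not those used in the attached patch.
public class GcLogFormatSketch {
    static String prefix(int runId) {
        return "TarMK GarbageCollection #" + runId + ": ";
    }

    static String size(long bytes) {
        return String.format(Locale.ROOT, "%.1f GB (%d bytes)", bytes / 1e9, bytes);
    }

    public static void main(String[] args) {
        System.out.println(prefix(1) + "cleanup started. Current repository size is "
                + size(1368899584L));
    }
}
```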
[jira] [Created] (OAK-3505) Provide an optionally stricter policy for missing synchronous index editor providers
Alex Parvulescu created OAK-3505: Summary: Provide an optionally stricter policy for missing synchronous index editor providers Key: OAK-3505 URL: https://issues.apache.org/jira/browse/OAK-3505 Project: Jackrabbit Oak Issue Type: Improvement Components: core Reporter: Alex Parvulescu Assignee: Alex Parvulescu Currently a missing synchronous index editor provider will mark the specific index type with a reindex flag (OAK-3366). I'd like to provide the option to setup a strategy to fail the commit until a specific impl comes online. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
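A hypothetical sketch of the stricter policy proposed in OAK-3505 (Oak would presumably throw CommitFailedException; a plain Exception stands in here, and all names are illustrative): a configurable strategy either keeps today's lenient reindex behavior (OAK-3366) or fails the commit until the missing provider comes online.

```java
// Hypothetical sketch for OAK-3505 (Oak would presumably use
// CommitFailedException; a plain Exception stands in here): a configurable
// strategy either keeps today's lenient reindex behavior or fails the
// commit until the missing editor provider comes online.
public class MissingProviderPolicySketch {
    enum Policy { LENIENT_REINDEX, FAIL_COMMIT }

    static void onMissingProvider(String type, Policy policy) throws Exception {
        if (policy == Policy.FAIL_COMMIT) {
            throw new Exception("Missing index editor provider of type [" + type
                    + "], refusing commit until it is available");
        }
        // Lenient path: set the reindex flag as today (OAK-3366).
    }

    public static void main(String[] args) throws Exception {
        onMissingProvider("lucene", Policy.LENIENT_REINDEX);
        try {
            onMissingProvider("lucene", Policy.FAIL_COMMIT);
        } catch (Exception e) {
            System.out.println("commit failed: " + e.getMessage());
        }
    }
}
```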
[jira] [Updated] (OAK-3506) Uniformization of compaction log messages
[ https://issues.apache.org/jira/browse/OAK-3506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Valentin Olteanu updated OAK-3506: -- Attachment: compaction_logs.git.patch patch to be applied with: patch -p0 -i compaction_logs.git.patch > Uniformization of compaction log messages > - > > Key: OAK-3506 > URL: https://issues.apache.org/jira/browse/OAK-3506 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Valentin Olteanu > Attachments: compaction_logs.git.patch > > > The logs generated during different phases of tar garbage collection > (compaction) are currently quite heterogeneous and difficult to grep/parse. > I propose with the attached patch to uniformize these logs, changing the > following: > # all logs start with the prefix {{TarMK GarbageCollection \{\}#:}} > # different phases of garbage collection are easier to identify by the first > word after prefix, e.g. estimation, compaction, cleanup > # all values are also printed in a standard unit, with the following format: > {{ ()}}. This makes extraction of > information much easier. > # messages corresponding to the same cycle (run) can be grouped by including > the runId in the prefix. > Note1: I don't have enough visibility, but the changes might impact any > system relying on the old format. Yet, I've seen they have changed before so > this might not be a real concern. > Note2: the runId is implemented as a static variable, which is reset every > time the class is reloaded (e.g. at restart), so it is unique only during one > run. 
> Below you can find an excerpt of old logs and new logs to compare: > NEW: > {code} > 12.10.2015 16:11:56.705 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: started > 12.10.2015 16:11:56.707 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: estimation started > 12.10.2015 16:11:59.275 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: estimation completed in 2.569 s (2567 ms). 
Gain is 16% > or 1.1 GB/1.3 GB (1062364160/1269737472 bytes), so running compaction > 12.10.2015 16:11:59.275 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: compaction started, > strategy=CompactionStrategy{paused=false, cloneBinaries=false, > cleanupType=CLEAN_OLD, olderThan=3600, memoryThreshold=5, > persistedCompactionMap=true, retryCount=5, forceAfterFail=true, > compactionStart=1444659116706, offlineCompaction=false} > 12.10.2015 16:12:05.839 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.Compactor Finished compaction: > 420022 nodes, 772259 properties, 20544 binaries. > 12.10.2015 16:12:07.459 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: compaction completed in 8.184 s (8183 ms), after 0 > cycles > 12.10.2015 16:12:11.912 *INFO* [TarMK flush thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:12:11 CEST 2015, previous max duration 10ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: cleanup started. 
Current repository size is 1.4 GB > (1368899584 bytes) > 12.10.2015 16:12:12.368 *INFO* [TarMK flush thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:12:11 CEST 2015, previous max duration 10ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore
[jira] [Commented] (OAK-3506) Uniformization of compaction log messages
[ https://issues.apache.org/jira/browse/OAK-3506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14953208#comment-14953208 ] Valentin Olteanu commented on OAK-3506: --- /cc [~alex.parvulescu], [~mduerig], [~frm] could you please have a look and tell if this is a good idea? > Uniformization of compaction log messages > - > > Key: OAK-3506 > URL: https://issues.apache.org/jira/browse/OAK-3506 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Valentin Olteanu > Attachments: compaction_logs.git.patch > > > The logs generated during different phases of tar garbage collection > (compaction) are currently quite heterogeneous and difficult to grep/parse. > I propose with the attached patch to uniformize these logs, changing the > following: > # all logs start with the prefix {{TarMK GarbageCollection \{\}#:}} > # different phases of garbage collection are easier to identify by the first > word after prefix, e.g. estimation, compaction, cleanup > # all values are also printed in a standard unit, with the following format: > {{ ()}}. This makes extraction of > information much easier. > # messages corresponding to the same cycle (run) can be grouped by including > the runId in the prefix. > Note1: I don't have enough visibility, but the changes might impact any > system relying on the old format. Yet, I've seen they have changed before so > this might not be a real concern. > Note2: the runId is implemented as a static variable, which is reset every > time the class is reloaded (e.g. at restart), so it is unique only during one > run. 
> Below you can find an excerpt of old logs and new logs to compare: > NEW: > {code} > 12.10.2015 16:11:56.705 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: started > 12.10.2015 16:11:56.707 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: estimation started > 12.10.2015 16:11:59.275 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: estimation completed in 2.569 s (2567 ms). 
Gain is 16% > or 1.1 GB/1.3 GB (1062364160/1269737472 bytes), so running compaction > 12.10.2015 16:11:59.275 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: compaction started, > strategy=CompactionStrategy{paused=false, cloneBinaries=false, > cleanupType=CLEAN_OLD, olderThan=3600, memoryThreshold=5, > persistedCompactionMap=true, retryCount=5, forceAfterFail=true, > compactionStart=1444659116706, offlineCompaction=false} > 12.10.2015 16:12:05.839 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.Compactor Finished compaction: > 420022 nodes, 772259 properties, 20544 binaries. > 12.10.2015 16:12:07.459 *INFO* [TarMK compaction thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:11:56 CEST 2015, previous max duration 0ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: compaction completed in 8.184 s (8183 ms), after 0 > cycles > 12.10.2015 16:12:11.912 *INFO* [TarMK flush thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:12:11 CEST 2015, previous max duration 10ms] > org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK > GarbageCollection #1: cleanup started. 
Current repository size is 1.4 GB > (1368899584 bytes) > 12.10.2015 16:12:12.368 *INFO* [TarMK flush thread > [/Users/volteanu/workspace/test/qp/quickstart-author-4502/crx-quickstart/repository/segmentstore], > active since Mon Oct 12 16:12:11 CEST 2015, previous max duration 10ms] >
[jira] [Updated] (OAK-3508) External login module should reduce LDAP lookups for pre-authenticated users
[ https://issues.apache.org/jira/browse/OAK-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tobias Bocanegra updated OAK-3508: -- Affects Version/s: 1.4 1.2 1.0.22 > External login module should reduce LDAP lookups for pre-authenticated users > > > Key: OAK-3508 > URL: https://issues.apache.org/jira/browse/OAK-3508 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: auth-external >Affects Versions: 1.2, 1.4, 1.0.22 >Reporter: Tobias Bocanegra > > consider the following JAAS setup: > - *sufficient* SSO Login Module > - *optional* Default Login Module > - *sufficient* External Login Module > This causes each login() to reach the external login module (which is > desired) but causes an IDP lookup for each login, even if the user is already > synced with the repository. > ideally the login module could pass the {{ExternalIdentityRef}} to the sync > handler and to a tentative sync. the {{lastSyncTime}} should be respected in > this case. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3508) External login module should reduce LDAP lookups for pre-authenticated users
[ https://issues.apache.org/jira/browse/OAK-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tobias Bocanegra updated OAK-3508: -- Component/s: auth-external > External login module should reduce LDAP lookups for pre-authenticated users > > > Key: OAK-3508 > URL: https://issues.apache.org/jira/browse/OAK-3508 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: auth-external >Affects Versions: 1.2, 1.4, 1.0.22 >Reporter: Tobias Bocanegra > > consider the following JAAS setup: > - *sufficient* SSO Login Module > - *optional* Default Login Module > - *sufficient* External Login Module > This causes each login() to reach the external login module (which is > desired) but causes an IDP lookup for each login, even if the user is already > synced with the repository. > ideally the login module could pass the {{ExternalIdentityRef}} to the sync > handler and to a tentative sync. the {{lastSyncTime}} should be respected in > this case. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (OAK-3508) External login module should reduce LDAP lookups for pre-authenticated users
[ https://issues.apache.org/jira/browse/OAK-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tobias Bocanegra reassigned OAK-3508: - Assignee: Tobias Bocanegra > External login module should reduce LDAP lookups for pre-authenticated users > > > Key: OAK-3508 > URL: https://issues.apache.org/jira/browse/OAK-3508 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: auth-external >Affects Versions: 1.2, 1.4, 1.0.22 >Reporter: Tobias Bocanegra >Assignee: Tobias Bocanegra > > consider the following JAAS setup: > - *sufficient* SSO Login Module > - *optional* Default Login Module > - *sufficient* External Login Module > This causes each login() to reach the external login module (which is > desired) but causes an IDP lookup for each login, even if the user is already > synced with the repository. > ideally the login module could pass the {{ExternalIdentityRef}} to the sync > handler and to a tentative sync. the {{lastSyncTime}} should be respected in > this case. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-3508) External login module should reduce LDAP lookups for pre-authenticated users
Tobias Bocanegra created OAK-3508: - Summary: External login module should reduce LDAP lookups for pre-authenticated users Key: OAK-3508 URL: https://issues.apache.org/jira/browse/OAK-3508 Project: Jackrabbit Oak Issue Type: Improvement Reporter: Tobias Bocanegra consider the following JAAS setup: - *sufficient* SSO Login Module - *optional* Default Login Module - *sufficient* External Login Module This causes each login() to reach the external login module (which is desired) but causes an IDP lookup for each login, even if the user is already synced with the repository. ideally the login module could pass the {{ExternalIdentityRef}} to the sync handler and do a tentative sync. the {{lastSyncTime}} should be respected in this case. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
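The short-circuit proposed in OAK-3508 can be sketched as a pure decision function (hypothetical, not the SyncHandler API): a pre-authenticated login triggers an IDP/LDAP round trip only when the user was never synced or the stored {{lastSyncTime}} has expired.

```java
// Hypothetical decision function for OAK-3508 (not the SyncHandler API):
// a pre-authenticated login goes to the IDP/LDAP only when the user was
// never synced or the stored lastSyncTime has expired.
public class TentativeSyncSketch {
    static boolean needsIdpLookup(Long lastSyncTimeMillis, long expirationMillis,
                                  long nowMillis) {
        if (lastSyncTimeMillis == null) {
            return true; // never synced: must hit the IDP
        }
        return nowMillis - lastSyncTimeMillis > expirationMillis;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // Synced one minute ago with a one hour expiration window.
        System.out.println(needsIdpLookup(now - 60_000L, 3_600_000L, now));
    }
}
```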
[jira] [Created] (OAK-3509) Lucene suggestion results should have 1 row per suggestion with appropriate column names
Vikas Saurabh created OAK-3509: -- Summary: Lucene suggestion results should have 1 row per suggestion with appropriate column names Key: OAK-3509 URL: https://issues.apache.org/jira/browse/OAK-3509 Project: Jackrabbit Oak Issue Type: Improvement Components: lucene Reporter: Vikas Saurabh Priority: Minor Currently the suggest query returns just one row with a {{rep:suggest()}} column containing a string that needs to be parsed. It'd be better if each suggestion were returned as an individual row with column names such as {{suggestion}}, {{weight}}(???), etc. (cc [~teofili]) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
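The result shape requested in OAK-3509 can be sketched as follows (the Row type and the descending-rank weight are illustrative; real weights would come from the Lucene suggester):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the result shape proposed in OAK-3509: one row per suggestion
// with named columns. The Row type and the descending-rank weight are
// illustrative; real weights would come from the Lucene suggester.
public class SuggestionRowsSketch {
    static class Row {
        final String suggestion;
        final long weight;

        Row(String suggestion, long weight) {
            this.suggestion = suggestion;
            this.weight = weight;
        }
    }

    static List<Row> toRows(List<String> suggestions) {
        List<Row> rows = new ArrayList<>();
        long rank = suggestions.size();
        for (String s : suggestions) {
            rows.add(new Row(s, rank--)); // illustrative ranking only
        }
        return rows;
    }

    public static void main(String[] args) {
        for (Row r : toRows(Arrays.asList("foo", "foobar"))) {
            System.out.println(r.suggestion + " " + r.weight);
        }
    }
}
```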
[jira] [Commented] (OAK-3156) Lucene suggestions index definition can't be restricted to a specific type of node
[ https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14954233#comment-14954233 ]

Vikas Saurabh commented on OAK-3156:
------------------------------------

Opened OAK-3509.

> Lucene suggestions index definition can't be restricted to a specific type of
> node
> -----------------------------------------------------------------------------
>
>                 Key: OAK-3156
>                 URL: https://issues.apache.org/jira/browse/OAK-3156
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: lucene
>            Reporter: Vikas Saurabh
>            Assignee: Tommaso Teofili
>            Priority: Blocker
>             Fix For: 1.3.9
>
>         Attachments: LuceneIndexSuggestionTest.java, OAK-3156-take2.patch, OAK-3156.patch
>
>
> When performing a suggestor query like
> {code}
> SELECT [rep:suggest()] AS suggestion FROM [nt:unstructured] WHERE suggest('foo')
> {code}
> the suggestor does not provide any result. In the current implementation,
> [suggestions|http://jackrabbit.apache.org/oak/docs/query/lucene.html#Suggestions]
> in Oak work only for index definitions on the {{nt:base}} nodetype.
> So, an index definition like:
> {code:xml}
> <lucene jcr:primaryType="oak:QueryIndexDefinition"
>         async="async"
>         compatVersion="{Long}2"
>         type="lucene">
>   <indexRules jcr:primaryType="nt:unstructured">
>     <nt:base jcr:primaryType="nt:unstructured">
>       <properties jcr:primaryType="nt:unstructured">
>         <description jcr:primaryType="nt:unstructured"
>                      analyzed="{Boolean}true"
>                      name="description"
>                      propertyIndex="{Boolean}true"
>                      useInSuggest="{Boolean}true"/>
>       </properties>
>     </nt:base>
>   </indexRules>
> </lucene>
> {code}
> works, but if we change the nodetype to {{nt:unstructured}} like:
> {code:xml}
> <lucene jcr:primaryType="oak:QueryIndexDefinition"
>         async="async"
>         compatVersion="{Long}2"
>         type="lucene">
>   <indexRules jcr:primaryType="nt:unstructured">
>     <nt:unstructured jcr:primaryType="nt:unstructured">
>       <properties jcr:primaryType="nt:unstructured">
>         <description jcr:primaryType="nt:unstructured"
>                      analyzed="{Boolean}true"
>                      name="description"
>                      propertyIndex="{Boolean}true"
>                      useInSuggest="{Boolean}true"/>
>       </properties>
>     </nt:unstructured>
>   </indexRules>
> </lucene>
> {code}
> it won't work.
> The issue is that the suggestor implementation essentially passes up a pseudo
> row with path="/":
> {code:title=LucenePropertyIndex.java}
> private boolean loadDocs() {
>     ...
>     queue.add(new LuceneResultRow(suggestedWords));
>     ...
> {code}
> and
> {code:title=LucenePropertyIndex.java}
> LuceneResultRow(Iterable<String> suggestWords) {
>     this.path = "/";
>     this.score = 1.0d;
>     this.suggestWords = suggestWords;
> }
> {code}
> Because the path is set to "/", {{SelectorImpl}} later filters out the result,
> as {{rep:root}} (the primary type of "/") isn't an {{nt:unstructured}}.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
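The filtering described in the issue can be sketched as follows. This is a minimal, self-contained model of the behaviour, not Oak's actual {{SelectorImpl}} code; the type check is reduced to the one fact that matters here: {{nt:base}} is the supertype of every node type and therefore matches any row, while any other selector node type requires the row's primary type to match. The path-to-type map is a hypothetical stand-in for repository lookups.

```java
import java.util.Map;

// Sketch (not Oak code) of why the suggestion pseudo row with path "/"
// survives an nt:base selector but is dropped by an nt:unstructured one.
public class SelectorFilterSketch {

    // Hypothetical primary types by path; in Oak, "/" has primary type rep:root.
    static final Map<String, String> PRIMARY_TYPE = Map.of("/", "rep:root");

    // Simplified type check: nt:base is the supertype of every node type,
    // so it matches any row; otherwise require a matching primary type.
    static boolean matchesSelector(String path, String selectorNodeType) {
        if ("nt:base".equals(selectorNodeType)) {
            return true;
        }
        String primaryType = PRIMARY_TYPE.getOrDefault(path, "nt:unstructured");
        return selectorNodeType.equals(primaryType);
    }

    public static void main(String[] args) {
        // The suggestor emits its pseudo row with path "/".
        System.out.println(matchesSelector("/", "nt:base"));         // true: row kept
        System.out.println(matchesSelector("/", "nt:unstructured")); // false: row dropped
    }
}
```

Under this model, an index rule on {{nt:base}} returns suggestions only by accident of the supertype rule, which is why restricting the rule to {{nt:unstructured}} silently loses the row.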
[jira] [Updated] (OAK-3509) Lucene suggestion results should have 1 row per suggestion with appropriate column names
[ https://issues.apache.org/jira/browse/OAK-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tommaso Teofili updated OAK-3509:
---------------------------------

    Fix Version/s: 1.3.9

> Lucene suggestion results should have 1 row per suggestion with appropriate
> column names
> ---------------------------------------------------------------------------
>
>                 Key: OAK-3509
>                 URL: https://issues.apache.org/jira/browse/OAK-3509
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: lucene
>            Reporter: Vikas Saurabh
>            Assignee: Tommaso Teofili
>            Priority: Minor
>             Fix For: 1.3.9
>
>
> Currently a suggest query returns just one row, with the {{rep:suggest()}}
> column containing a string that needs to be parsed.
> It'd be better if each suggestion were returned as an individual row, with
> column names such as {{suggestion}}, {{weight}}(???), etc.
> (cc [~teofili])

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Assigned] (OAK-3509) Lucene suggestion results should have 1 row per suggestion with appropriate column names
[ https://issues.apache.org/jira/browse/OAK-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tommaso Teofili reassigned OAK-3509:
------------------------------------

    Assignee: Tommaso Teofili

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (OAK-3156) Lucene suggestions index definition can't be restricted to a specific type of node
[ https://issues.apache.org/jira/browse/OAK-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14954416#comment-14954416 ]

Tommaso Teofili commented on OAK-3156:
--------------------------------------

I'll apply this today unless there's any objection from Thomas.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (OAK-3509) Lucene suggestion results should have 1 row per suggestion with appropriate column names
[ https://issues.apache.org/jira/browse/OAK-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14954426#comment-14954426 ]

Chetan Mehrotra commented on OAK-3509:
--------------------------------------

+1 - Given that we already have the notion of rows and columns, such data can
be mapped onto those constructs rather than requiring the API user to parse it.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
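To illustrate the parsing burden the current single-row result imposes, here is a sketch of the client-side code a consumer has to write today. The serialized format "[{term=...,weight=...}, ...]" is an assumption for illustration, not the documented Oak output; the per-row API proposed in this issue would make such parsing unnecessary.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of client-side parsing of a single rep:suggest() string column.
// The "[{term=foo,weight=12}, ...]" format is assumed for illustration.
public class SuggestionParser {

    private static final Pattern ENTRY =
            Pattern.compile("\\{term=([^,}]+),weight=([0-9]+)\\}");

    // Parses a serialized suggestion list into an ordered term -> weight map.
    static Map<String, Long> parse(String raw) {
        Map<String, Long> suggestions = new LinkedHashMap<>();
        Matcher m = ENTRY.matcher(raw);
        while (m.find()) {
            suggestions.put(m.group(1), Long.parseLong(m.group(2)));
        }
        return suggestions;
    }

    public static void main(String[] args) {
        String raw = "[{term=foo,weight=12},{term=food,weight=7}]";
        System.out.println(parse(raw)); // {foo=12, food=7}
    }
}
```

With one row per suggestion and dedicated {{suggestion}}/{{weight}} columns, the values would come back already typed and this fragile string handling disappears.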