[jira] [Assigned] (OAK-3013) SQL2 query with union, limit and offset can return invalid results
[ https://issues.apache.org/jira/browse/OAK-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amit Jain reassigned OAK-3013: -- Assignee: Amit Jain > SQL2 query with union, limit and offset can return invalid results > -- > > Key: OAK-3013 > URL: https://issues.apache.org/jira/browse/OAK-3013 > Project: Jackrabbit Oak > Issue Type: Bug > Components: query >Reporter: Teodor Rosu >Assignee: Amit Jain > Fix For: 1.3.2 > > Attachments: OAK-3013-fix.patch, OAK-3013-test.patch > > > when using order, limit and offset, a SQL2 query that contains a union of > two subqueries that have common results can return invalid results > Example: assuming the content tree /test/a/b/c/d/e exists > {code:sql} > SELECT [jcr:path] FROM [nt:base] AS a WHERE ISDESCENDANTNODE(a, '/test') > UNION SELECT [jcr:path] FROM [nt:base] AS a WHERE ISDESCENDANTNODE(a, > '/test') ORDER BY [jcr:path] > {code} > with limit=3 and offset=2 this returns only one row (instead of 3) > the correct result set is > {noformat} > /test/a/b/c > /test/a/b/c/d > /test/a/b/c/d/e > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
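The failure described above comes down to when UNION de-duplication happens relative to offset/limit. The sketch below (plain Java collections, illustrative names only, not Oak's query engine code) shows the semantics the query must implement: merge, de-duplicate, order, and only then apply offset and limit.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Illustrative only: models why UNION must de-duplicate and order the
// combined rows *before* applying offset/limit. Names are hypothetical.
public class UnionLimitDemo {

    static List<String> unionWithLimit(List<String> left, List<String> right,
                                       int offset, int limit) {
        // UNION semantics: de-duplicate the combined result set first.
        LinkedHashSet<String> merged = new LinkedHashSet<>();
        merged.addAll(left);
        merged.addAll(right);
        // ORDER BY [jcr:path]: sort the distinct rows.
        List<String> sorted = new ArrayList<>(merged);
        sorted.sort(String::compareTo);
        // Only now apply offset and limit to the distinct, ordered rows.
        int from = Math.min(offset, sorted.size());
        int to = Math.min(from + limit, sorted.size());
        return sorted.subList(from, to);
    }

    public static void main(String[] args) {
        // Both subqueries return the same descendants of /test.
        List<String> sub = List.of(
                "/test/a", "/test/a/b", "/test/a/b/c",
                "/test/a/b/c/d", "/test/a/b/c/d/e");
        // limit=3, offset=2 must yield three rows, as in the issue.
        System.out.println(unionWithLimit(sub, sub, 2, 3));
        // -> [/test/a/b/c, /test/a/b/c/d, /test/a/b/c/d/e]
    }
}
```

Counting skipped or duplicate rows against the limit before de-duplication is one plausible way to end up with a single row instead of three.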
[jira] [Commented] (OAK-3027) NullPointerException from Tika if SupportedMediaType is null in LuceneIndexEditorContext
[ https://issues.apache.org/jira/browse/OAK-3027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598958#comment-14598958 ] Amit Jain commented on OAK-3027: bq. can we do a fix like this https://github.com/apache/jackrabbit/blob/trunk/jackrabbit-core/src/main/java/org/apache/jackrabbit/core/query/lucene/NodeIndexer.java#L935 Yes, this works on the current version of tika (1.5) used in oak as well. I'll make the change. > NullPointerException from Tika if SupportedMediaType is null in > LuceneIndexEditorContext > > > Key: OAK-3027 > URL: https://issues.apache.org/jira/browse/OAK-3027 > Project: Jackrabbit Oak > Issue Type: Bug > Components: lucene >Affects Versions: 1.3.0 >Reporter: Ashok Kumar > > Tested with Tika parser 1.7, POI version 3.11. > Related to OAK-2468; can we do a fix like this > https://github.com/apache/jackrabbit/blob/trunk/jackrabbit-core/src/main/java/org/apache/jackrabbit/core/query/lucene/NodeIndexer.java#L935 > > Stacktrace -- > 24.06.2015 11:01:45.536 *ERROR* [pool-7-thread-2] > org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job > execution of > org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@73cf679b : null > java.lang.NullPointerException: null > at > org.apache.tika.parser.ocr.TesseractOCRParser.getSupportedTypes(TesseractOCRParser.java:89) > at > org.apache.tika.parser.CompositeParser.getParsers(CompositeParser.java:81) > at > org.apache.tika.parser.DefaultParser.getParsers(DefaultParser.java:95) > at > org.apache.tika.parser.CompositeParser.getSupportedTypes(CompositeParser.java:229) > at > org.apache.tika.parser.DefaultParser.getParsers(DefaultParser.java:104) > at > org.apache.tika.parser.CompositeParser.getSupportedTypes(CompositeParser.java:229) > at > org.apache.tika.parser.CompositeParser.getParsers(CompositeParser.java:81) > at > org.apache.tika.parser.CompositeParser.getSupportedTypes(CompositeParser.java:229) > at > 
org.apache.tika.parser.CompositeParser.getParsers(CompositeParser.java:81) > at > org.apache.tika.parser.CompositeParser.getSupportedTypes(CompositeParser.java:229) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.isSupportedMediaType(LuceneIndexEditorContext.java:259) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.isSupportedMediaType(LuceneIndexEditor.java:802) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.newBinary(LuceneIndexEditor.java:525) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.indexProperty(LuceneIndexEditor.java:393) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.makeDocument(LuceneIndexEditor.java:330) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.addOrUpdate(LuceneIndexEditor.java:287) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:191) > at > org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:74) > at > org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63) > at > org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:130) > at > org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:161) > at > org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:434) > at > org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:125) > at > org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:161) > at > org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:434) > at > org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:125) > at > 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:576) > at > org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148) > at > org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:418) > at > org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:5
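The guard discussed above (a defensive check before consulting the parser, in the spirit of the referenced NodeIndexer.java#L935 fix) might look roughly like the following. The names are illustrative stand-ins, not the actual Oak or Tika types: a null media type, or any runtime failure while querying supported types, is treated as "unsupported" rather than being allowed to abort the async index update.

```java
import java.util.Set;

// Hedged sketch only: MediaTypeGuard and TypeSource are hypothetical
// names, not the real LuceneIndexEditorContext/Parser API.
public class MediaTypeGuard {

    interface TypeSource {             // stand-in for the Tika parser
        Set<String> supportedTypes();  // may throw at runtime
    }

    static boolean isSupported(String mediaType, TypeSource parser) {
        if (mediaType == null) {
            return false;              // nothing to look up
        }
        try {
            return parser.supportedTypes().contains(mediaType);
        } catch (RuntimeException e) { // e.g. NPE from a misbehaving parser
            return false;              // fail closed, keep indexing alive
        }
    }
}
```

Failing closed means a single binary with a missing or broken media type is skipped instead of stopping the whole AsyncIndexUpdate run.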
[jira] [Commented] (OAK-3026) test failures for oak-auth-ldap on Windows
[ https://issues.apache.org/jira/browse/OAK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598940#comment-14598940 ] Amit Jain commented on OAK-3026: I am on maven 3.2.3 > test failures for oak-auth-ldap on Windows > -- > > Key: OAK-3026 > URL: https://issues.apache.org/jira/browse/OAK-3026 > Project: Jackrabbit Oak > Issue Type: Bug > Components: auth-ldap >Reporter: Amit Jain > Fix For: 1.2.3, 1.3.2, 1.0.16 > > > testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest) > Time elapsed: 0.01 sec <<< ERROR! > java.io.IOException: Unable to delete file: > target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183) > at > org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33) > etc... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3026) test failures for oak-auth-ldap on Windows
[ https://issues.apache.org/jira/browse/OAK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598939#comment-14598939 ] Amit Jain commented on OAK-3026: Yeah, but I thought that the activation condition is an AND. Is it related to maven versions [1]? [1] http://stackoverflow.com/questions/4629140/maven-profile-activation-with-multiple-conditions > test failures for oak-auth-ldap on Windows > -- > > Key: OAK-3026 > URL: https://issues.apache.org/jira/browse/OAK-3026 > Project: Jackrabbit Oak > Issue Type: Bug > Components: auth-ldap >Reporter: Amit Jain > Fix For: 1.2.3, 1.3.2, 1.0.16 > > > testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest) > Time elapsed: 0.01 sec <<< ERROR! > java.io.IOException: Unable to delete file: > target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183) > at > org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33) > etc... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3026) test failures for oak-auth-ldap on Windows
[ https://issues.apache.org/jira/browse/OAK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598927#comment-14598927 ] Tobias Bocanegra commented on OAK-3026: --- well, it is related to windows, not to JDK 1.7 :-) > test failures for oak-auth-ldap on Windows > -- > > Key: OAK-3026 > URL: https://issues.apache.org/jira/browse/OAK-3026 > Project: Jackrabbit Oak > Issue Type: Bug > Components: auth-ldap >Reporter: Amit Jain > Fix For: 1.2.3, 1.3.2, 1.0.16 > > > testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest) > Time elapsed: 0.01 sec <<< ERROR! > java.io.IOException: Unable to delete file: > target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183) > at > org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33) > etc... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3027) NullPointerException from Tika if SupportedMediaType is null in LuceneIndexEditorContext
[ https://issues.apache.org/jira/browse/OAK-3027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashok Kumar updated OAK-3027: - Description: Tested with Tika parser 1.7, POI version 3.11. Related to OAK-2468; can we do a fix like this https://github.com/apache/jackrabbit/blob/trunk/jackrabbit-core/src/main/java/org/apache/jackrabbit/core/query/lucene/NodeIndexer.java#L935 Stacktrace -- 24.06.2015 11:01:45.536 *ERROR* [pool-7-thread-2] org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job execution of org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@73cf679b : null java.lang.NullPointerException: null at org.apache.tika.parser.ocr.TesseractOCRParser.getSupportedTypes(TesseractOCRParser.java:89) at org.apache.tika.parser.CompositeParser.getParsers(CompositeParser.java:81) at org.apache.tika.parser.DefaultParser.getParsers(DefaultParser.java:95) at org.apache.tika.parser.CompositeParser.getSupportedTypes(CompositeParser.java:229) at org.apache.tika.parser.DefaultParser.getParsers(DefaultParser.java:104) at org.apache.tika.parser.CompositeParser.getSupportedTypes(CompositeParser.java:229) at org.apache.tika.parser.CompositeParser.getParsers(CompositeParser.java:81) at org.apache.tika.parser.CompositeParser.getSupportedTypes(CompositeParser.java:229) at org.apache.tika.parser.CompositeParser.getParsers(CompositeParser.java:81) at org.apache.tika.parser.CompositeParser.getSupportedTypes(CompositeParser.java:229) at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.isSupportedMediaType(LuceneIndexEditorContext.java:259) at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.isSupportedMediaType(LuceneIndexEditor.java:802) at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.newBinary(LuceneIndexEditor.java:525) at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.indexProperty(LuceneIndexEditor.java:393) at 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.makeDocument(LuceneIndexEditor.java:330) at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.addOrUpdate(LuceneIndexEditor.java:287) at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:191) at org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:74) at org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:130) at org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:161) at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:434) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:125) at org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:161) at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:434) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:125) at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:576) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148) at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:418) at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148) at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:531) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148) at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:418) at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148) at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:487) at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compareBranch(MapRecord.java:565)
[jira] [Updated] (OAK-3020) Async Update fails after IllegalArgumentException
[ https://issues.apache.org/jira/browse/OAK-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amit Jain updated OAK-3020: --- Fix Version/s: 1.3.2 > Async Update fails after IllegalArgumentException > - > > Key: OAK-3020 > URL: https://issues.apache.org/jira/browse/OAK-3020 > Project: Jackrabbit Oak > Issue Type: Bug >Affects Versions: 1.2.2 >Reporter: Julian Sedding >Assignee: Amit Jain > Fix For: 1.3.2 > > Attachments: OAK-3020-stacktrace.txt, OAK-3020.test.patch > > > The async index update can fail due to a mismatch between an index definition > and the actual content. If that is the case, it seems that it can no longer > make any progress. Instead it re-indexes the latest changes over and over > again until it hits the problematic property. > Discussion at http://markmail.org/thread/42bixzkrkwv4s6tq > Stacktrace attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-3027) NullPointerException from Tika if SupportedMediaType is null in LuceneIndexEditorContext
Ashok Kumar created OAK-3027: Summary: NullPointerException from Tika if SupportedMediaType is null in LuceneIndexEditorContext Key: OAK-3027 URL: https://issues.apache.org/jira/browse/OAK-3027 Project: Jackrabbit Oak Issue Type: Bug Components: lucene Affects Versions: 1.3.0 Reporter: Ashok Kumar Tested with Tika parser 1.7, POI version 3.11. Stacktrace -- 24.06.2015 11:01:45.536 *ERROR* [pool-7-thread-2] org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job execution of org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@73cf679b : null java.lang.NullPointerException: null at org.apache.tika.parser.ocr.TesseractOCRParser.getSupportedTypes(TesseractOCRParser.java:89) at org.apache.tika.parser.CompositeParser.getParsers(CompositeParser.java:81) at org.apache.tika.parser.DefaultParser.getParsers(DefaultParser.java:95) at org.apache.tika.parser.CompositeParser.getSupportedTypes(CompositeParser.java:229) at org.apache.tika.parser.DefaultParser.getParsers(DefaultParser.java:104) at org.apache.tika.parser.CompositeParser.getSupportedTypes(CompositeParser.java:229) at org.apache.tika.parser.CompositeParser.getParsers(CompositeParser.java:81) at org.apache.tika.parser.CompositeParser.getSupportedTypes(CompositeParser.java:229) at org.apache.tika.parser.CompositeParser.getParsers(CompositeParser.java:81) at org.apache.tika.parser.CompositeParser.getSupportedTypes(CompositeParser.java:229) at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.isSupportedMediaType(LuceneIndexEditorContext.java:259) at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.isSupportedMediaType(LuceneIndexEditor.java:802) at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.newBinary(LuceneIndexEditor.java:525) at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.indexProperty(LuceneIndexEditor.java:393) at 
org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.makeDocument(LuceneIndexEditor.java:330) at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.addOrUpdate(LuceneIndexEditor.java:287) at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:191) at org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:74) at org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:130) at org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:161) at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:434) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:125) at org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:161) at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:434) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:125) at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:576) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148) at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:418) at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148) at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:531) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148) at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:418) at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583) at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148) at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:487) at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compar
[jira] [Commented] (OAK-3020) Async Update fails after IllegalArgumentException
[ https://issues.apache.org/jira/browse/OAK-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598915#comment-14598915 ] Amit Jain commented on OAK-3020: Fixed in trunk with http://svn.apache.org/r1687175 (includes a test case by [~jsedding]). [~chetanm] Could you please review the change? Once reviewed, I'll merge it to the branches. > Async Update fails after IllegalArgumentException > - > > Key: OAK-3020 > URL: https://issues.apache.org/jira/browse/OAK-3020 > Project: Jackrabbit Oak > Issue Type: Bug >Affects Versions: 1.2.2 >Reporter: Julian Sedding >Assignee: Amit Jain > Attachments: OAK-3020-stacktrace.txt, OAK-3020.test.patch > > > The async index update can fail due to a mismatch between an index definition > and the actual content. If that is the case, it seems that it can no longer > make any progress. Instead it re-indexes the latest changes over and over > again until it hits the problematic property. > Discussion at http://markmail.org/thread/42bixzkrkwv4s6tq > Stacktrace attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3026) test failures for oak-auth-ldap on Windows
[ https://issues.apache.org/jira/browse/OAK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amit Jain updated OAK-3026: --- Fix Version/s: (was: 1.0.15) (was: 1.4) 1.0.16 1.3.2 > test failures for oak-auth-ldap on Windows > -- > > Key: OAK-3026 > URL: https://issues.apache.org/jira/browse/OAK-3026 > Project: Jackrabbit Oak > Issue Type: Bug > Components: auth-ldap >Reporter: Amit Jain > Fix For: 1.2.3, 1.3.2, 1.0.16 > > > testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest) > Time elapsed: 0.01 sec <<< ERROR! > java.io.IOException: Unable to delete file: > target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183) > at > org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33) > etc... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3026) test failures for oak-auth-ldap on Windows
[ https://issues.apache.org/jira/browse/OAK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598911#comment-14598911 ] Amit Jain commented on OAK-3026: I get the same errors with JDK 1.7 on Windows too. cc [~tripod] > test failures for oak-auth-ldap on Windows > -- > > Key: OAK-3026 > URL: https://issues.apache.org/jira/browse/OAK-3026 > Project: Jackrabbit Oak > Issue Type: Bug > Components: auth-ldap >Reporter: Amit Jain > Fix For: 1.2.3, 1.3.2, 1.0.16 > > > testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest) > Time elapsed: 0.01 sec <<< ERROR! > java.io.IOException: Unable to delete file: > target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183) > at > org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33) > etc... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-3026) test failures for oak-auth-ldap on Windows
Amit Jain created OAK-3026: -- Summary: test failures for oak-auth-ldap on Windows Key: OAK-3026 URL: https://issues.apache.org/jira/browse/OAK-3026 Project: Jackrabbit Oak Issue Type: Bug Components: auth-ldap Reporter: Amit Jain Fix For: 1.2.3, 1.4, 1.0.15 testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest) Time elapsed: 0.01 sec <<< ERROR! java.io.IOException: Unable to delete file: target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) at org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264) at org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183) at org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33) etc... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
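A common workaround for this class of Windows failure (assumed here for illustration, not the committed fix) is to retry the recursive delete, since Windows refuses to delete files while another thread, e.g. ApacheDS's cache, still holds an open handle:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Sketch only: retry a recursive delete with a short backoff so that
// lingering file handles get a chance to close between attempts.
public class RetryingDelete {

    static boolean deleteRecursively(Path root, int attempts) {
        for (int i = 0; i < attempts; i++) {
            try {
                if (!Files.exists(root)) {
                    return true;                     // nothing left to do
                }
                try (Stream<Path> paths = Files.walk(root)) {
                    paths.sorted(Comparator.reverseOrder()) // children first
                         .forEach(p -> {
                             try {
                                 Files.delete(p);
                             } catch (IOException e) {
                                 throw new UncheckedIOException(e);
                             }
                         });
                }
                return true;
            } catch (IOException | UncheckedIOException e) {
                try {
                    Thread.sleep(50L * (i + 1));     // back off, then retry
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return !Files.exists(root);
    }
}
```

AbstractServer.doDelete could apply the same pattern instead of a single FileUtils.deleteDirectory call; whether that is the right fix depends on why the handle is still open.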
[jira] [Resolved] (OAK-2962) SegmentNodeStoreService fails to handle empty strings in the OSGi configuration
[ https://issues.apache.org/jira/browse/OAK-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amit Jain resolved OAK-2962. Resolution: Fixed Committed in trunk with http://svn.apache.org/r1687171 > SegmentNodeStoreService fails to handle empty strings in the OSGi > configuration > --- > > Key: OAK-2962 > URL: https://issues.apache.org/jira/browse/OAK-2962 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segmentmk >Reporter: Francesco Mari >Assignee: Amit Jain > Fix For: 1.3.2 > > Attachments: OAK-2962-01.patch, OAK-2962-02.patch > > > When an OSGi configuration property is removed from the dictionary associated > to a component, the default value assigned to it is an empty string. > When such an empty string is processed by {{SegmentNodeStoreService#lookup}}, > it is returned to its caller as a valid configuration value. The callers of > {{SegmentNodeStoreService#lookup}}, instead, expect {{null}} when such an > empty value is found. > The method {{SegmentNodeStoreService#lookup}} should check for empty strings > in the OSGi configuration, and treat them as {{null}} values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
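The behaviour described above (an empty OSGi string treated as "not configured", i.e. null) can be sketched as follows; `lookup` here is a simplified stand-in for SegmentNodeStoreService#lookup, and the property names are made up for the example:

```java
import java.util.Dictionary;
import java.util.Hashtable;

// Sketch of the fix: an OSGi property whose value is an empty (or
// blank) string is treated as absent and returned as null.
public class OsgiLookupDemo {

    static String lookup(Dictionary<String, ?> config, String name) {
        Object value = config.get(name);
        if (value == null) {
            return null;
        }
        String s = value.toString().trim();
        return s.isEmpty() ? null : s; // empty string == not configured
    }

    public static void main(String[] args) {
        Dictionary<String, Object> config = new Hashtable<>();
        config.put("repository.home", ""); // property was "removed"
        config.put("tarmk.size", "256");
        System.out.println(lookup(config, "repository.home")); // null
        System.out.println(lookup(config, "tarmk.size"));      // 256
    }
}
```

Callers can then keep their existing null checks and fall back to defaults, instead of treating "" as a valid configured value.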
[jira] [Assigned] (OAK-2962) SegmentNodeStoreService fails to handle empty strings in the OSGi configuration
[ https://issues.apache.org/jira/browse/OAK-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amit Jain reassigned OAK-2962: -- Assignee: Amit Jain (was: Francesco Mari) > SegmentNodeStoreService fails to handle empty strings in the OSGi > configuration > --- > > Key: OAK-2962 > URL: https://issues.apache.org/jira/browse/OAK-2962 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segmentmk >Reporter: Francesco Mari >Assignee: Amit Jain > Fix For: 1.3.2 > > Attachments: OAK-2962-01.patch, OAK-2962-02.patch > > > When an OSGi configuration property is removed from the dictionary associated > to a component, the default value assigned to it is an empty string. > When such an empty string is processed by {{SegmentNodeStoreService#lookup}}, > it is returned to its caller as a valid configuration value. The callers of > {{SegmentNodeStoreService#lookup}}, instead, expect {{null}} when such an > empty value is found. > The method {{SegmentNodeStoreService#lookup}} should check for empty strings > in the OSGi configuration, and treat them as {{null}} values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2874) [ldap] enable listUsers to work for more than 1000 external users
[ https://issues.apache.org/jira/browse/OAK-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597842#comment-14597842 ] Tobias Bocanegra commented on OAK-2874: --- Since the API is only used internally, and not exposed to a UI right now, I think it would make sense to only fix it transparently if possible. We can still add a paging or cursor based API later. > [ldap] enable listUsers to work for more than 1000 external users > - > > Key: OAK-2874 > URL: https://issues.apache.org/jira/browse/OAK-2874 > Project: Jackrabbit Oak > Issue Type: Bug > Components: auth-ldap >Affects Versions: 1.2.1 >Reporter: Nicolas Peltier > > LDAP servers are usually limited to returning 1000 search results. Currently > LdapIdentityProvider.listUsers() doesn't take care of that limitation and > prevents the client from retrieving more. (cc [~tripod]) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
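The "fix it transparently" idea can be sketched as a loop that keeps requesting pages until the server returns a short page, exposing the combined result as one list. Real LDAP paging would use a PagedResultsControl cookie rather than an offset; the helper below is an illustrative simplification with hypothetical names, not the LdapIdentityProvider API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Sketch only: transparently page through a size-limited search so the
// caller sees a single complete result list.
public class PagedFetchDemo {

    /** fetch.apply(offset, pageSize) models one paged LDAP search. */
    static List<String> fetchAll(BiFunction<Integer, Integer, List<String>> fetch,
                                 int pageSize) {
        List<String> all = new ArrayList<>();
        while (true) {
            List<String> page = fetch.apply(all.size(), pageSize);
            all.addAll(page);
            if (page.size() < pageSize) { // short page: no more results
                return all;
            }
        }
    }
}
```

Keeping the loop inside listUsers() preserves the current API, and a cursor-based API can still be layered on top later, as suggested in the comment.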
[jira] [Assigned] (OAK-3020) Async Update fails after IllegalArgumentException
[ https://issues.apache.org/jira/browse/OAK-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amit Jain reassigned OAK-3020: -- Assignee: Amit Jain > Async Update fails after IllegalArgumentException > - > > Key: OAK-3020 > URL: https://issues.apache.org/jira/browse/OAK-3020 > Project: Jackrabbit Oak > Issue Type: Bug >Affects Versions: 1.2.2 >Reporter: Julian Sedding >Assignee: Amit Jain > Attachments: OAK-3020-stacktrace.txt, OAK-3020.test.patch > > > The async index update can fail due to a mismatch between an index definition > and the actual content. If that is the case, it seems that it can no longer > make any progress. Instead it re-indexes the latest changes over and over > again until it hits the problematic property. > Discussion at http://markmail.org/thread/42bixzkrkwv4s6tq > Stacktrace attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3020) Async Update fails after IllegalArgumentException
[ https://issues.apache.org/jira/browse/OAK-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597755#comment-14597755 ] Amit Jain commented on OAK-3020: Yes I think we should ignore it. We already ignore the occurrence if the type of the property differs from what is configured and cannot be converted to it. [~chetanm] wdyt? > Async Update fails after IllegalArgumentException > - > > Key: OAK-3020 > URL: https://issues.apache.org/jira/browse/OAK-3020 > Project: Jackrabbit Oak > Issue Type: Bug >Affects Versions: 1.2.2 >Reporter: Julian Sedding > Attachments: OAK-3020-stacktrace.txt, OAK-3020.test.patch > > > The async index update can fail due to a mismatch between an index definition > and the actual content. If that is the case, it seems that it can no longer > make any progress. Instead it re-indexes the latest changes over and over > again until it hits the problematic property. > Discussion at http://markmail.org/thread/42bixzkrkwv4s6tq > Stacktrace attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (OAK-3022) DocumentNodeStoreService fails to handle empty strings in the OSGi configuration
[ https://issues.apache.org/jira/browse/OAK-3022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amit Jain reassigned OAK-3022: -- Assignee: Amit Jain > DocumentNodeStoreService fails to handle empty strings in the OSGi > configuration > > > Key: OAK-3022 > URL: https://issues.apache.org/jira/browse/OAK-3022 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core >Reporter: Francesco Mari >Assignee: Amit Jain > > When an OSGi configuration property is removed from the dictionary associated > to a component, the default value assigned to it is an empty string. > When such an empty string is processed by {{DocumentNodeStoreService#prop}}, > it is returned to its caller as a valid configuration value. The callers of > {{DocumentNodeStoreService#prop}}, instead, expect {{null}} when such an > empty value is found. > The method {{DocumentNodeStoreService#prop}} should check for empty strings > in the OSGi configuration, and treat them as {{null}} values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3022) DocumentNodeStoreService fails to handle empty strings in the OSGi configuration
[ https://issues.apache.org/jira/browse/OAK-3022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amit Jain updated OAK-3022: --- Assignee: Francesco Mari (was: Amit Jain) > DocumentNodeStoreService fails to handle empty strings in the OSGi > configuration > > > Key: OAK-3022 > URL: https://issues.apache.org/jira/browse/OAK-3022 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core >Reporter: Francesco Mari >Assignee: Francesco Mari > > When an OSGi configuration property is removed from the dictionary associated > to a component, the default value assigned to it is an empty string. > When such an empty string is processed by {{DocumentNodeStoreService#prop}}, > it is returned to its caller as a valid configuration value. The callers of > {{DocumentNodeStoreService#prop}}, instead, expect {{null}} when such an > empty value is found. > The method {{DocumentNodeStoreService#prop}} should check for empty strings > in the OSGi configuration, and treat them as {{null}} values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
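The fix described in the issue amounts to one extra check: when the OSGi dictionary yields an empty string (the default left behind after a property is removed), report the value as unset. A minimal sketch, assuming a Map-backed configuration; the method name mirrors DocumentNodeStoreService#prop but the signature here is hypothetical.

```java
import java.util.Map;

public class OsgiProps {

    // Returns the configured value, or null when the property is absent
    // or set to an empty/blank string.
    static String prop(Map<String, Object> config, String name) {
        Object value = config.get(name);
        if (value == null) {
            return null;
        }
        String s = value.toString().trim();
        // An empty string is what OSGi leaves behind when the property was
        // removed from the dictionary -- treat it as "not configured".
        return s.isEmpty() ? null : s;
    }
}
```

Callers can then keep their existing null checks unchanged, which is the contract they already expect.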
[jira] [Resolved] (OAK-3025) add test case simulating batched import of nodes
[ https://issues.apache.org/jira/browse/OAK-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke resolved OAK-3025. - Resolution: Fixed run with mvn test -Dtest=PackageImportIT -DPackageImportIT See source for additional parameters; output goes to INFO-level log. > add test case simulating batched import of nodes > > > Key: OAK-3025 > URL: https://issues.apache.org/jira/browse/OAK-3025 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: it >Affects Versions: 1.2.2, 1.3.1 >Reporter: Julian Reschke >Assignee: Julian Reschke > Fix For: 1.2.3, 1.3.2 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3025) add test case simulating batched import of nodes
[ https://issues.apache.org/jira/browse/OAK-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3025: Affects Version/s: 1.2.2 > add test case simulating batched import of nodes > > > Key: OAK-3025 > URL: https://issues.apache.org/jira/browse/OAK-3025 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: it >Affects Versions: 1.2.2, 1.3.1 >Reporter: Julian Reschke >Assignee: Julian Reschke > Fix For: 1.2.3, 1.3.2 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3025) add test case simulating batched import of nodes
[ https://issues.apache.org/jira/browse/OAK-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3025: Fix Version/s: 1.2.3 > add test case simulating batched import of nodes > > > Key: OAK-3025 > URL: https://issues.apache.org/jira/browse/OAK-3025 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: it >Affects Versions: 1.2.2, 1.3.1 >Reporter: Julian Reschke >Assignee: Julian Reschke > Fix For: 1.2.3, 1.3.2 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-3024) NodeStoreFixture: add "getName()" for diagnostics, allow config of RDB JDBC connection
[ https://issues.apache.org/jira/browse/OAK-3024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke resolved OAK-3024. - Resolution: Fixed > NodeStoreFixture: add "getName()" for diagnostics, allow config of RDB JDBC > connection > -- > > Key: OAK-3024 > URL: https://issues.apache.org/jira/browse/OAK-3024 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: it >Affects Versions: 1.2.2, 1.3.1 >Reporter: Julian Reschke >Assignee: Julian Reschke > Fix For: 1.2.3, 1.3.2 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3024) NodeStoreFixture: add "getName()" for diagnostics, allow config of RDB JDBC connection
[ https://issues.apache.org/jira/browse/OAK-3024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3024: Affects Version/s: (was: 1.0.15) > NodeStoreFixture: add "getName()" for diagnostics, allow config of RDB JDBC > connection > -- > > Key: OAK-3024 > URL: https://issues.apache.org/jira/browse/OAK-3024 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: it >Affects Versions: 1.2.2, 1.3.1 >Reporter: Julian Reschke >Assignee: Julian Reschke > Fix For: 1.2.3, 1.3.2 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3024) NodeStoreFixture: add "getName()" for diagnostics, allow config of RDB JDBC connection
[ https://issues.apache.org/jira/browse/OAK-3024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3024: Fix Version/s: 1.2.3 > NodeStoreFixture: add "getName()" for diagnostics, allow config of RDB JDBC > connection > -- > > Key: OAK-3024 > URL: https://issues.apache.org/jira/browse/OAK-3024 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: it >Affects Versions: 1.2.2, 1.3.1, 1.0.15 >Reporter: Julian Reschke >Assignee: Julian Reschke > Fix For: 1.2.3, 1.3.2 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3002) Optimize docCache and docChildrenCache invalidation by filtering using journal
[ https://issues.apache.org/jira/browse/OAK-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597662#comment-14597662 ] Stefan Egli commented on OAK-3002: -- [~chetanm], [~mreutegg], FYI: patch is ready for another round of review, thx > Optimize docCache and docChildrenCache invalidation by filtering using journal > -- > > Key: OAK-3002 > URL: https://issues.apache.org/jira/browse/OAK-3002 > Project: Jackrabbit Oak > Issue Type: Sub-task > Components: core, mongomk >Reporter: Stefan Egli >Assignee: Stefan Egli > Labels: scalability > Fix For: 1.2.3, 1.3.2 > > Attachments: JournalLoadTest.java, > OAK-3002-improved-doc-and-docChildren-cache-invaliation-and-junit.4.patch, > OAK-3002-improved-doc-and-docChildren-cache-invaliation.3.patch, > OAK-3002-improved-doc-cache-invaliation.2.patch > > > This subtask is about spawning out a > [comment|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14588114&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14588114] > on OAK-2829 re optimizing docCache invalidation using the newly introduced > external diff journal: > {quote} > Attached OAK-2829-improved-doc-cache-invaliation.patch which is a suggestion > on how to avoid invalidating the entire document cache when doing a > {{backgroundRead}} but instead making use of the new journal: ie only > invalidate from the document cache what has actually changed. > I'd like to get an opinion ([~mreutegg], [~chetanm]?) on this first, I have a > load test pending locally which found invalidation of the document cache to > be the slowest part thus wanted to optimize this first. > Open still/next: > * also invalidate only necessary parts from the docChildrenCache > * junits for all of these > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
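The core idea of the patch under review above is to replace wholesale cache clearing with targeted eviction: on a background read, only the paths the journal reports as externally changed are dropped from the document cache. The sketch below models that with a plain map standing in for the cache; names are illustrative, not the actual MongoDocumentStore internals.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class JournalInvalidation {

    final Map<String, String> docCache = new ConcurrentHashMap<>();

    // Only entries listed in the journal are dropped; the rest of the
    // cache stays warm, which is the point of the optimization versus
    // invalidating the entire document cache on every backgroundRead.
    void invalidateFromJournal(Set<String> changedPaths) {
        for (String path : changedPaths) {
            docCache.remove(path);
        }
    }
}
```

The same filtering would apply to the docChildrenCache, which the comment lists as the remaining open item.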
[jira] [Updated] (OAK-3025) add test case simulating batched import of nodes
[ https://issues.apache.org/jira/browse/OAK-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3025: Fix Version/s: 1.3.2 > add test case simulating batched import of nodes > > > Key: OAK-3025 > URL: https://issues.apache.org/jira/browse/OAK-3025 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: it >Affects Versions: 1.3.1 >Reporter: Julian Reschke >Assignee: Julian Reschke > Fix For: 1.3.2 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3024) NodeStoreFixture: add "getName()" for diagnostics, allow config of RDB JDBC connection
[ https://issues.apache.org/jira/browse/OAK-3024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3024: Fix Version/s: 1.3.2 > NodeStoreFixture: add "getName()" for diagnostics, allow config of RDB JDBC > connection > -- > > Key: OAK-3024 > URL: https://issues.apache.org/jira/browse/OAK-3024 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: it >Affects Versions: 1.2.2, 1.3.1, 1.0.15 >Reporter: Julian Reschke >Assignee: Julian Reschke > Fix For: 1.3.2 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2974) Broken link on documentation site
[ https://issues.apache.org/jira/browse/OAK-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597619#comment-14597619 ] Marcel Reutegger commented on OAK-2974: --- Added it again in http://svn.apache.org/r1687047 > Broken link on documentation site > - > > Key: OAK-2974 > URL: https://issues.apache.org/jira/browse/OAK-2974 > Project: Jackrabbit Oak > Issue Type: Bug > Components: doc >Reporter: Michael Dürig >Assignee: Marcel Reutegger >Priority: Minor > Labels: documentation > > http://jackrabbit.apache.org/oak/docs/command_line.html points to > http://jackrabbit.apache.org/oak/docs/oak-mongo-js/oak.html, which doesn't > exist. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3020) Async Update fails after IllegalArgumentException
[ https://issues.apache.org/jira/browse/OAK-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597601#comment-14597601 ] Julian Sedding commented on OAK-3020: - Another point of view could be to ignore the occurrence of the property, because it does not match the index definition. Not sure what would be the most intuitive behaviour. > Async Update fails after IllegalArgumentException > - > > Key: OAK-3020 > URL: https://issues.apache.org/jira/browse/OAK-3020 > Project: Jackrabbit Oak > Issue Type: Bug >Affects Versions: 1.2.2 >Reporter: Julian Sedding > Attachments: OAK-3020-stacktrace.txt, OAK-3020.test.patch > > > The async index update can fail due to a mismatch between an index definition > and the actual content. If that is the case, it seems that it can no longer > make any progress. Instead it re-indexes the latest changes over and over > again until it hits the problematic property. > Discussion at http://markmail.org/thread/42bixzkrkwv4s6tq > Stacktrace attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3020) Async Update fails after IllegalArgumentException
[ https://issues.apache.org/jira/browse/OAK-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597596#comment-14597596 ] Davide Giannella commented on OAK-3020: --- Thanks [~jsedding]. Now the question is: should we drop the node, or index, let's say, only the first occurrence of the property? I think the document should be indexed under both property values, but then I don't know how to cope with the ordering. [~chetanm] thoughts? > Async Update fails after IllegalArgumentException > - > > Key: OAK-3020 > URL: https://issues.apache.org/jira/browse/OAK-3020 > Project: Jackrabbit Oak > Issue Type: Bug >Affects Versions: 1.2.2 >Reporter: Julian Sedding > Attachments: OAK-3020-stacktrace.txt, OAK-3020.test.patch > > > The async index update can fail due to a mismatch between an index definition > and the actual content. If that is the case, it seems that it can no longer > make any progress. Instead it re-indexes the latest changes over and over > again until it hits the problematic property. > Discussion at http://markmail.org/thread/42bixzkrkwv4s6tq > Stacktrace attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3020) Async Update fails after IllegalArgumentException
[ https://issues.apache.org/jira/browse/OAK-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Sedding updated OAK-3020: Description: The async index update can fail due to a mismatch between an index definition and the actual content. If that is the case, it seems that it can no longer make any progress. Instead it re-indexes the latest changes over and over again until it hits the problematic property. Discussion at http://markmail.org/thread/42bixzkrkwv4s6tq Stacktrace attached. was: The async index update can fail due to specific documents. If that is the case, it seems that it can not make any progress any longer. Instead it re-indexes the latest changes over and over again until it hits the problematic document. Discussion at http://markmail.org/thread/42bixzkrkwv4s6tq Stacktrace attached. > Async Update fails after IllegalArgumentException > - > > Key: OAK-3020 > URL: https://issues.apache.org/jira/browse/OAK-3020 > Project: Jackrabbit Oak > Issue Type: Bug >Affects Versions: 1.2.2 >Reporter: Julian Sedding > Attachments: OAK-3020-stacktrace.txt, OAK-3020.test.patch > > > The async index update can fail due to a mismatch between an index definition > and the actual content. If that is the case, it seems that it can no longer > make any progress. Instead it re-indexes the latest changes over and over > again until it hits the problematic property. > Discussion at http://markmail.org/thread/42bixzkrkwv4s6tq > Stacktrace attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-3025) add test case simulating batched import of nodes
Julian Reschke created OAK-3025: --- Summary: add test case simulating batched import of nodes Key: OAK-3025 URL: https://issues.apache.org/jira/browse/OAK-3025 Project: Jackrabbit Oak Issue Type: Improvement Components: it Affects Versions: 1.3.1 Reporter: Julian Reschke Assignee: Julian Reschke -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3024) NodeStoreFixture: add "getName()" for diagnostics, allow config of RDB JDBC connection
[ https://issues.apache.org/jira/browse/OAK-3024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3024: Summary: NodeStoreFixture: add "getName()" for diagnostics, allow config of RDB JDBC connection (was: NodeStoreFixture: add "getName()" for diagnostics) > NodeStoreFixture: add "getName()" for diagnostics, allow config of RDB JDBC > connection > -- > > Key: OAK-3024 > URL: https://issues.apache.org/jira/browse/OAK-3024 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: it >Affects Versions: 1.2.2, 1.3.1, 1.0.15 >Reporter: Julian Reschke >Assignee: Julian Reschke > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-3024) NodeStoreFixture: add "getName()" for diagnostics
Julian Reschke created OAK-3024: --- Summary: NodeStoreFixture: add "getName()" for diagnostics Key: OAK-3024 URL: https://issues.apache.org/jira/browse/OAK-3024 Project: Jackrabbit Oak Issue Type: Improvement Components: it Affects Versions: 1.0.15, 1.2.2, 1.3.1 Reporter: Julian Reschke Assignee: Julian Reschke -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3020) Async Update fails after IllegalArgumentException
[ https://issues.apache.org/jira/browse/OAK-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Sedding updated OAK-3020: Attachment: OAK-3020.test.patch Test case to reproduce the issue. It turns out that the issue is not related to the document. Instead the issue happens when a property is indexed as "ordered" in a lucene index and content with a multi-value property is created. > Async Update fails after IllegalArgumentException > - > > Key: OAK-3020 > URL: https://issues.apache.org/jira/browse/OAK-3020 > Project: Jackrabbit Oak > Issue Type: Bug >Affects Versions: 1.2.2 >Reporter: Julian Sedding > Attachments: OAK-3020-stacktrace.txt, OAK-3020.test.patch > > > The async index update can fail due to specific documents. If that is the > case, it seems that it can not make any progress any longer. Instead it > re-indexes the latest changes over and over again until it hits the > problematic document. > Discussion at http://markmail.org/thread/42bixzkrkwv4s6tq > Stacktrace attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
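The "ignore the mismatched property" option discussed in the comments can be sketched as a guard in the indexer: when a property configured as an ordered (single-valued) field turns out to be multi-valued, skip it instead of throwing, so the async update can make progress past the problematic node. All names below are hypothetical, not the actual Lucene index editor API.

```java
import java.util.List;

public class OrderedFieldIndexer {

    // Returns the single value to index for an ordered field, or null
    // when the property cannot be indexed that way and must be skipped.
    static String valueForOrderedField(List<String> propertyValues) {
        if (propertyValues == null || propertyValues.size() != 1) {
            // Multi-valued (or empty) property on a single-valued ordered
            // field: skipping it -- rather than throwing
            // IllegalArgumentException -- keeps the async update from
            // re-indexing the same changes over and over.
            return null;
        }
        return propertyValues.get(0);
    }
}
```

This mirrors the existing behaviour the comments mention for properties whose type cannot be converted to the configured one.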
[jira] [Updated] (OAK-2810) Cannot copy a node from a parent with restricted access
[ https://issues.apache.org/jira/browse/OAK-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fernando Lemes updated OAK-2810: Labels: easytest patch-available security (was: easytest security) > Cannot copy a node from a parent with restricted access > --- > > Key: OAK-2810 > URL: https://issues.apache.org/jira/browse/OAK-2810 > Project: Jackrabbit Oak > Issue Type: Bug > Components: jcr >Affects Versions: 1.1.8 >Reporter: Fernando Lemes > Labels: easytest, patch-available, security > Attachments: patch_to_CopyTest_file.patch > > > If we try to copy a node, in which we have full access, but with no access on > the parent node, the copy operation will throw a PathNotFoundException when > evaluating checkProtectedNode(getParentPath("sourceNodePath")) on the copy() > method from org.apache.jackrabbit.oak.jcr.session.WorkspaceImpl -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2934) Certain searches cause lucene index to hit OutOfMemoryError
[ https://issues.apache.org/jira/browse/OAK-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Parvulescu updated OAK-2934: - Priority: Blocker (was: Major) > Certain searches cause lucene index to hit OutOfMemoryError > --- > > Key: OAK-2934 > URL: https://issues.apache.org/jira/browse/OAK-2934 > Project: Jackrabbit Oak > Issue Type: Bug > Components: lucene >Reporter: Alex Parvulescu >Assignee: Alex Parvulescu >Priority: Blocker > Labels: resilience > Fix For: 1.2.3, 1.3.2, 1.0.16 > > Attachments: LuceneIndex.java.patch > > > Certain search terms can get split into very small wildcard tokens that will > match a huge amount of items from the index, finally resulting in a OOME. > For example > {code} > /jcr:root//*[jcr:contains(., 'U=1*')] > {code} > will translate into the following lucene query > {code} > :fulltext:"u ( [set of all index terms stating with '1'] )" > {code} > this will break down when lucene will try to compute the score for the huge > set of tokens: > {code} > java.lang.OutOfMemoryError: Java heap space > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.(OakDirectory.java:201) > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.(OakDirectory.java:155) > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.(OakDirectory.java:340) > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.clone(OakDirectory.java:345) > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.clone(OakDirectory.java:329) > at > org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsAndPositionsEnum.(Lucene41PostingsReader.java:613) > at > org.apache.lucene.codecs.lucene41.Lucene41PostingsReader.docsAndPositions(Lucene41PostingsReader.java:252) > at > org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum.docsAndPositions(BlockTreeTermsReader.java:2233) > at > 
org.apache.lucene.search.UnionDocsAndPositionsEnum.(MultiPhraseQuery.java:492) > at > org.apache.lucene.search.MultiPhraseQuery$MultiPhraseWeight.scorer(MultiPhraseQuery.java:205) > at > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618) > at > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491) > at > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448) > at > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281) > at > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.loadDocs(LuceneIndex.java:352) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:289) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:280) > at > com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor$1.hasNext(LuceneIndex.java:1026) > at > com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) > at > com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > org.apache.jackrabbit.oak.spi.query.Cursors$PathCursor.hasNext(Cursors.java:198) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor.hasNext(LuceneIndex.java:1047) > at > org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.fetchNext(AggregationCursor.java:88) > at > org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.hasNext(AggregationCursor.java:75) > at > org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.fetchNext(Cursors.java:474) > at > 
org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.hasNext(Cursors.java:466) > at > org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.fetchNext(Cursors.java:474) > at > org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.hasNext(Cursors.java:466) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
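The failure mode above is unbounded term expansion: a wildcard like 'U=1*' matches every index term starting with '1'. The standard guard (the one Lucene's multi-term query rewrites apply via a maxExpansions limit) is to cap how many terms a wildcard may expand to. The sketch below models that cap over a plain term list; it is a stand-alone illustration, not the LuceneIndex.java.patch attached to the issue.

```java
import java.util.ArrayList;
import java.util.List;

public class WildcardCap {

    // Collects at most maxExpansions index terms matching the prefix, so
    // the rewritten query stays bounded even when a huge number of terms
    // share the prefix (the situation that triggered the OOME).
    static List<String> expandPrefix(List<String> indexTerms, String prefix,
                                     int maxExpansions) {
        List<String> matches = new ArrayList<>();
        for (String term : indexTerms) {
            if (term.startsWith(prefix)) {
                matches.add(term);
                if (matches.size() >= maxExpansions) {
                    break; // stop early instead of materializing all matches
                }
            }
        }
        return matches;
    }
}
```

Capping trades a little recall on pathological wildcards for a hard upper bound on the memory the scorer has to touch.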
[jira] [Updated] (OAK-3021) UserValidator and AccessControlValidator must not process hidden nodes
[ https://issues.apache.org/jira/browse/OAK-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcel Reutegger updated OAK-3021: -- Fix Version/s: 1.0.16 1.2.3 > UserValidator and AccessControlValidator must not process hidden nodes > -- > > Key: OAK-3021 > URL: https://issues.apache.org/jira/browse/OAK-3021 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core, security >Reporter: Marcel Reutegger >Assignee: Marcel Reutegger > Fix For: 1.2.3, 1.3.2, 1.0.16 > > Attachments: OAK-3021.patch > > > This is similar to OAK-3019 but for {{UserValidator}} and > {{AccessControlValidator}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-3023) Long running MongoDB query may block other threads
Marcel Reutegger created OAK-3023: - Summary: Long running MongoDB query may block other threads Key: OAK-3023 URL: https://issues.apache.org/jira/browse/OAK-3023 Project: Jackrabbit Oak Issue Type: Bug Affects Versions: 1.0.15, 1.2.2 Reporter: Marcel Reutegger Assignee: Marcel Reutegger Fix For: 1.3.2 Most queries on MongoDB are usually rather fast and the TreeLock acquired in MongoDocumentStore (to ensure cache consistency) is released rather quickly. However there may be cases when a query is more expensive and a TreeLock is held for a long time. This may block other threads from querying MongoDB and limit concurrency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3023) Long running MongoDB query may block other threads
[ https://issues.apache.org/jira/browse/OAK-3023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcel Reutegger updated OAK-3023: -- Labels: concurrency (was: ) > Long running MongoDB query may block other threads > -- > > Key: OAK-3023 > URL: https://issues.apache.org/jira/browse/OAK-3023 > Project: Jackrabbit Oak > Issue Type: Bug >Affects Versions: 1.2.2, 1.0.15 >Reporter: Marcel Reutegger >Assignee: Marcel Reutegger > Labels: concurrency > Fix For: 1.3.2 > > > Most queries on MongoDB are usually rather fast and the TreeLock acquired in > MongoDocumentStore (to ensure cache consistency) is released rather quickly. > However there may be cases when a query is more expensive and a TreeLock is > held for a long time. This may block other threads from querying MongoDB and > limit concurrency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
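One common way to reduce the blocking described above is to move the expensive backend call outside the lock and acquire the lock only for the short cache update, rather than holding a TreeLock across the whole MongoDB round trip. A minimal sketch, with a Callable standing in for the slow query; the names here are illustrative, not the actual MongoDocumentStore locking scheme.

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class ShortLockQuery {

    final Map<String, String> cache = new ConcurrentHashMap<>();
    final ReentrantLock treeLock = new ReentrantLock();

    String query(String key, Callable<String> queryBackend) throws Exception {
        // Slow part: no lock held, so other threads querying the backend
        // are not blocked by this round trip.
        String result = queryBackend.call();
        // Fast part: lock held only while the cache is updated, keeping
        // the critical section independent of backend latency.
        treeLock.lock();
        try {
            cache.put(key, result);
        } finally {
            treeLock.unlock();
        }
        return result;
    }
}
```

The trade-off is that cache consistency now has to tolerate a stale result being written after a concurrent invalidation, which is why the real fix needs care around ordering.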
[jira] [Commented] (OAK-2974) Broken link on documentation site
[ https://issues.apache.org/jira/browse/OAK-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597405#comment-14597405 ] Thierry Ygé commented on OAK-2974: --- [~mreutegg] The link is still not valid, it ends in a 404. It was really useful to have the oak-mongo-js documentation. Would it be possible to fix it again? > Broken link on documentation site > - > > Key: OAK-2974 > URL: https://issues.apache.org/jira/browse/OAK-2974 > Project: Jackrabbit Oak > Issue Type: Bug > Components: doc >Reporter: Michael Dürig >Assignee: Marcel Reutegger >Priority: Minor > Labels: documentation > > http://jackrabbit.apache.org/oak/docs/command_line.html points to > http://jackrabbit.apache.org/oak/docs/oak-mongo-js/oak.html, which doesn't > exist. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2934) Certain searches cause lucene index to hit OutOfMemoryError
[ https://issues.apache.org/jira/browse/OAK-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597384#comment-14597384 ] Alex Parvulescu commented on OAK-2934: -- marking as a blocker for 1.0.16, this needs to get included in the release > Certain searches cause lucene index to hit OutOfMemoryError > --- > > Key: OAK-2934 > URL: https://issues.apache.org/jira/browse/OAK-2934 > Project: Jackrabbit Oak > Issue Type: Bug > Components: lucene >Reporter: Alex Parvulescu >Assignee: Alex Parvulescu >Priority: Blocker > Labels: resilience > Fix For: 1.2.3, 1.3.2, 1.0.16 > > Attachments: LuceneIndex.java.patch > > > Certain search terms can get split into very small wildcard tokens that will > match a huge amount of items from the index, finally resulting in a OOME. > For example > {code} > /jcr:root//*[jcr:contains(., 'U=1*')] > {code} > will translate into the following lucene query > {code} > :fulltext:"u ( [set of all index terms stating with '1'] )" > {code} > this will break down when lucene will try to compute the score for the huge > set of tokens: > {code} > java.lang.OutOfMemoryError: Java heap space > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.(OakDirectory.java:201) > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.(OakDirectory.java:155) > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.(OakDirectory.java:340) > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.clone(OakDirectory.java:345) > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.clone(OakDirectory.java:329) > at > org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsAndPositionsEnum.(Lucene41PostingsReader.java:613) > at > org.apache.lucene.codecs.lucene41.Lucene41PostingsReader.docsAndPositions(Lucene41PostingsReader.java:252) > at > 
org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum.docsAndPositions(BlockTreeTermsReader.java:2233) > at > org.apache.lucene.search.UnionDocsAndPositionsEnum.(MultiPhraseQuery.java:492) > at > org.apache.lucene.search.MultiPhraseQuery$MultiPhraseWeight.scorer(MultiPhraseQuery.java:205) > at > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618) > at > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491) > at > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448) > at > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281) > at > org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.loadDocs(LuceneIndex.java:352) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:289) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:280) > at > com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor$1.hasNext(LuceneIndex.java:1026) > at > com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) > at > com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > org.apache.jackrabbit.oak.spi.query.Cursors$PathCursor.hasNext(Cursors.java:198) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor.hasNext(LuceneIndex.java:1047) > at > org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.fetchNext(AggregationCursor.java:88) > at > org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.hasNext(AggregationCursor.java:75) > at > 
org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.fetchNext(Cursors.java:474) > at > org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.hasNext(Cursors.java:466) > at > org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.fetchNext(Cursors.java:474) > at > org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.hasNext(Cursors.java:466) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2962) SegmentNodeStoreService fails to handle empty strings in the OSGi configuration
[ https://issues.apache.org/jira/browse/OAK-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597333#comment-14597333 ] Francesco Mari commented on OAK-2962: - I created OAK-3022 for DocumentNodeStoreService. > SegmentNodeStoreService fails to handle empty strings in the OSGi > configuration > --- > > Key: OAK-2962 > URL: https://issues.apache.org/jira/browse/OAK-2962 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segmentmk >Reporter: Francesco Mari >Assignee: Francesco Mari > Fix For: 1.3.2 > > Attachments: OAK-2962-01.patch, OAK-2962-02.patch > > > When an OSGi configuration property is removed from the dictionary associated > to a component, the default value assigned to it is an empty string. > When such an empty string is processed by {{SegmentNodeStoreService#lookup}}, > it is returned to its caller as a valid configuration value. The callers of > {{SegmentNodeStoreService#lookup}}, instead, expect {{null}} when such an > empty value is found. > The method {{SegmentNodeStoreService#lookup}} should check for empty strings > in the OSGi configuration, and treat them as {{null}} values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3022) DocumentNodeStoreService fails to handle empty strings in the OSGi configuration
[ https://issues.apache.org/jira/browse/OAK-3022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597331#comment-14597331 ] Francesco Mari commented on OAK-3022:
The patch in OAK-2962 can be used to implement a solution for this issue.
> DocumentNodeStoreService fails to handle empty strings in the OSGi configuration
>
> Key: OAK-3022
> URL: https://issues.apache.org/jira/browse/OAK-3022
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: core
> Reporter: Francesco Mari
>
> When an OSGi configuration property is removed from the dictionary associated with a component, the default value assigned to it is an empty string.
> When such an empty string is processed by {{DocumentNodeStoreService#prop}}, it is returned to its caller as a valid configuration value. The callers of {{DocumentNodeStoreService#prop}}, however, expect {{null}} when such an empty value is found.
> The method {{DocumentNodeStoreService#prop}} should check for empty strings in the OSGi configuration and treat them as {{null}} values.
[jira] [Created] (OAK-3022) DocumentNodeStoreService fails to handle empty strings in the OSGi configuration
Francesco Mari created OAK-3022:
Summary: DocumentNodeStoreService fails to handle empty strings in the OSGi configuration
Key: OAK-3022
URL: https://issues.apache.org/jira/browse/OAK-3022
Project: Jackrabbit Oak
Issue Type: Bug
Components: core
Reporter: Francesco Mari
When an OSGi configuration property is removed from the dictionary associated with a component, the default value assigned to it is an empty string.
When such an empty string is processed by {{DocumentNodeStoreService#prop}}, it is returned to its caller as a valid configuration value. The callers of {{DocumentNodeStoreService#prop}}, however, expect {{null}} when such an empty value is found.
The method {{DocumentNodeStoreService#prop}} should check for empty strings in the OSGi configuration and treat them as {{null}} values.
[jira] [Updated] (OAK-3019) VersionablePathHook must not process hidden nodes
[ https://issues.apache.org/jira/browse/OAK-3019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcel Reutegger updated OAK-3019:
Fix Version/s: 1.0.16, 1.2.3
Merged into the 1.2 branch: http://svn.apache.org/r1686980 and the 1.0 branch: http://svn.apache.org/r1686986
> VersionablePathHook must not process hidden nodes
>
> Key: OAK-3019
> URL: https://issues.apache.org/jira/browse/OAK-3019
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: core
> Affects Versions: 1.2.2, 1.0.15
> Reporter: Marcel Reutegger
> Assignee: Marcel Reutegger
> Fix For: 1.2.3, 1.3.2, 1.0.16
>
> The VersionablePathHook also processes hidden nodes, e.g. index data, which adds considerable overhead to the merge phase.
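In Oak, hidden (oak-internal) items are those whose name starts with a colon, for example the {{:index}} data mentioned above; the fix is for a hook or validator to skip such nodes instead of descending into them. A minimal sketch of that check, using a plain string-based stand-in rather than Oak's actual utility classes:

```java
// Minimal sketch of the hidden-node check used to short-circuit processing
// in a commit hook or validator. In Oak, hidden (oak-internal) items have
// names starting with ':', e.g. ":index". This helper is a stand-in for
// illustration, not Oak's real implementation.
public class HiddenNodes {

    public static boolean isHidden(String name) {
        return !name.isEmpty() && name.charAt(0) == ':';
    }

    // A hook or validator would return early for hidden children,
    // avoiding a pointless traversal of large index data structures.
    public static boolean shouldProcess(String childName) {
        return !isHidden(childName);
    }

    public static void main(String[] args) {
        System.out.println(shouldProcess(":index"));  // false: skip index data
        System.out.println(shouldProcess("content")); // true: regular content
    }
}
```

Skipping hidden subtrees matters because index data can dwarf the actual content change, so traversing it on every merge adds the "considerable overhead" the issue describes.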
[jira] [Commented] (OAK-2989) Swap large commits to disk in order to avoid OOME
[ https://issues.apache.org/jira/browse/OAK-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597300#comment-14597300 ] Marcel Reutegger commented on OAK-2989:
bq. is the amount of change or the period configurable ?
The DocumentNodeStore persists changes to a branch when there are 10'000 changes. This includes calls to addNode() and setProperty(). The limit can be tweaked with the system property {{-Dupdate.limit}}.
bq. Do you have a mechanism already in place to reproduce issues due to large data set ?
We have LargeOperationIT in oak-jcr, but a standalone test class or a new test in oak-run is probably easier.
> Swap large commits to disk in order to avoid OOME
>
> Key: OAK-2989
> URL: https://issues.apache.org/jira/browse/OAK-2989
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: core
> Affects Versions: 1.2.2
> Reporter: Timothee Maret
> Fix For: 1.3.2
>
> As described in [0], large commits consume a fair amount of memory. With very large commits this becomes problematic, as a commit may eat up 100 GB or more, causing an OOME and aborting the commit.
> Instead of keeping the whole commit in memory, the implementation could store parts of it on disk once the heap memory consumption reaches a configurable threshold.
> This would solve the issue rather than simply mitigate it as in OAK-2968 and OAK-2969.
> The behaviour may already be supported for some configurations of Oak; at least the Mongo + DocumentStore setup seemed not to support it.
> [0] http://permalink.gmane.org/gmane.comp.apache.jackrabbit.oak.devel/8196
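The threshold behaviour described in the comment (persist pending changes to a branch once the number of in-memory changes reaches a limit, 10'000 by default and overridable via {{-Dupdate.limit}}) can be sketched roughly as follows. This is an illustration of the pattern only; the class and method names are invented and do not reflect DocumentNodeStore's real code.

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of threshold-based persistence: once the number of pending
// in-memory changes reaches a limit, they are flushed to a branch in the
// backing store to bound heap use. Names are hypothetical; this is not
// DocumentNodeStore's actual implementation.
public class BranchingCommit {

    // Mirrors the default of 10'000 changes, overridable via -Dupdate.limit.
    private final int updateLimit = Integer.getInteger("update.limit", 10_000);

    private final List<String> pending = new ArrayList<>();
    private int persistedBatches = 0;

    // Each addNode()/setProperty() call counts as one change.
    public void addChange(String change) {
        pending.add(change);
        if (pending.size() >= updateLimit) {
            persistToBranch();
        }
    }

    private void persistToBranch() {
        // The real system would write the batch to the DocumentStore on a
        // branch; here we just drop the batch to keep memory use bounded.
        pending.clear();
        persistedBatches++;
    }

    public int getPersistedBatches() {
        return persistedBatches;
    }

    public int getPendingCount() {
        return pending.size();
    }
}
```

With this pattern, heap consumption stays proportional to the update limit rather than to the total commit size, which is the point of the issue: only the unflushed tail of a very large commit lives in memory.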
[jira] [Updated] (OAK-3021) UserValidator and AccessControlValidator must not process hidden nodes
[ https://issues.apache.org/jira/browse/OAK-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcel Reutegger updated OAK-3021:
Fix Version/s: 1.3.2
> UserValidator and AccessControlValidator must not process hidden nodes
>
> Key: OAK-3021
> URL: https://issues.apache.org/jira/browse/OAK-3021
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: core, security
> Reporter: Marcel Reutegger
> Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
> Attachments: OAK-3021.patch
>
> This is similar to OAK-3019 but for {{UserValidator}} and {{AccessControlValidator}}.
[jira] [Updated] (OAK-3021) UserValidator and AccessControlValidator must not process hidden nodes
[ https://issues.apache.org/jira/browse/OAK-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcel Reutegger updated OAK-3021:
Attachment: OAK-3021.patch
Proposed changes with test cases and fixes.
> UserValidator and AccessControlValidator must not process hidden nodes
>
> Key: OAK-3021
> URL: https://issues.apache.org/jira/browse/OAK-3021
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: core, security
> Reporter: Marcel Reutegger
> Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
> Attachments: OAK-3021.patch
>
> This is similar to OAK-3019 but for {{UserValidator}} and {{AccessControlValidator}}.
[jira] [Commented] (OAK-2989) Swap large commits to disk in order to avoid OOME
[ https://issues.apache.org/jira/browse/OAK-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597288#comment-14597288 ] Timothee Maret commented on OAK-2989:
bq. when there are many changes and persists changes periodically
[~mreutegg] is the amount of change or the period configurable? Maybe it plays a role in my case.
bq. Can you please provide a test to reproduce the issue?
Do you have a mechanism already in place to reproduce issues due to large data sets?
> Swap large commits to disk in order to avoid OOME
>
> Key: OAK-2989
> URL: https://issues.apache.org/jira/browse/OAK-2989
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: core
> Affects Versions: 1.2.2
> Reporter: Timothee Maret
> Fix For: 1.3.2
>
> As described in [0], large commits consume a fair amount of memory. With very large commits this becomes problematic, as a commit may eat up 100 GB or more, causing an OOME and aborting the commit.
> Instead of keeping the whole commit in memory, the implementation could store parts of it on disk once the heap memory consumption reaches a configurable threshold.
> This would solve the issue rather than simply mitigate it as in OAK-2968 and OAK-2969.
> The behaviour may already be supported for some configurations of Oak; at least the Mongo + DocumentStore setup seemed not to support it.
> [0] http://permalink.gmane.org/gmane.comp.apache.jackrabbit.oak.devel/8196
[jira] [Updated] (OAK-2989) Swap large commits to disk in order to avoid OOME
[ https://issues.apache.org/jira/browse/OAK-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcel Reutegger updated OAK-2989:
Fix Version/s: (was: 1.3.1) 1.3.2
Changed the fix version; the vote for 1.3.1 is already out.
> Swap large commits to disk in order to avoid OOME
>
> Key: OAK-2989
> URL: https://issues.apache.org/jira/browse/OAK-2989
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: core
> Affects Versions: 1.2.2
> Reporter: Timothee Maret
> Fix For: 1.3.2
>
> As described in [0], large commits consume a fair amount of memory. With very large commits this becomes problematic, as a commit may eat up 100 GB or more, causing an OOME and aborting the commit.
> Instead of keeping the whole commit in memory, the implementation could store parts of it on disk once the heap memory consumption reaches a configurable threshold.
> This would solve the issue rather than simply mitigate it as in OAK-2968 and OAK-2969.
> The behaviour may already be supported for some configurations of Oak; at least the Mongo + DocumentStore setup seemed not to support it.
> [0] http://permalink.gmane.org/gmane.comp.apache.jackrabbit.oak.devel/8196
[jira] [Commented] (OAK-2989) Swap large commits to disk in order to avoid OOME
[ https://issues.apache.org/jira/browse/OAK-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597278#comment-14597278 ] Marcel Reutegger commented on OAK-2989:
The DocumentNodeStore already creates a branch when there are many changes, and persists changes periodically to the DocumentStore. Can you please provide a test to reproduce the issue?
> Swap large commits to disk in order to avoid OOME
>
> Key: OAK-2989
> URL: https://issues.apache.org/jira/browse/OAK-2989
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: core
> Affects Versions: 1.2.2
> Reporter: Timothee Maret
> Fix For: 1.3.1
>
> As described in [0], large commits consume a fair amount of memory. With very large commits this becomes problematic, as a commit may eat up 100 GB or more, causing an OOME and aborting the commit.
> Instead of keeping the whole commit in memory, the implementation could store parts of it on disk once the heap memory consumption reaches a configurable threshold.
> This would solve the issue rather than simply mitigate it as in OAK-2968 and OAK-2969.
> The behaviour may already be supported for some configurations of Oak; at least the Mongo + DocumentStore setup seemed not to support it.
> [0] http://permalink.gmane.org/gmane.comp.apache.jackrabbit.oak.devel/8196
[jira] [Updated] (OAK-2989) Swap large commits to disk in order to avoid OOME
[ https://issues.apache.org/jira/browse/OAK-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-2989:
Fix Version/s: 1.3.1
> Swap large commits to disk in order to avoid OOME
>
> Key: OAK-2989
> URL: https://issues.apache.org/jira/browse/OAK-2989
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: core
> Affects Versions: 1.2.2
> Reporter: Timothee Maret
> Fix For: 1.3.1
>
> As described in [0], large commits consume a fair amount of memory. With very large commits this becomes problematic, as a commit may eat up 100 GB or more, causing an OOME and aborting the commit.
> Instead of keeping the whole commit in memory, the implementation could store parts of it on disk once the heap memory consumption reaches a configurable threshold.
> This would solve the issue rather than simply mitigate it as in OAK-2968 and OAK-2969.
> The behaviour may already be supported for some configurations of Oak; at least the Mongo + DocumentStore setup seemed not to support it.
> [0] http://permalink.gmane.org/gmane.comp.apache.jackrabbit.oak.devel/8196