[jira] [Commented] (OAK-2569) LuceneIndex#loadDocs can throw IllegalArgumentException

2015-04-21 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506543#comment-14506543
 ] 

Thomas Mueller commented on OAK-2569:
-

I think this is an important issue and needs to be fixed for 1.2.2.

> catching the exception

I think that's the wrong solution. There are multiple problems with this: one is 
that we should not use exceptions for flow control. But more importantly, if 
many documents were added and removed at the same time, some of them might not 
be returned. That means the query result would be wrong (the result could miss 
data that was present in both the old and the new revision).

> key thing would be to come up with a test case reproducing the issue

Yes, that is probably quite tricky, but it should be possible, e.g. by adding 
some delay before reading the result, and/or by using a synchronous Lucene index.

> We would need to hold on to IndexSearcher from within Cursor

That would be a possible solution. What are the disadvantages?
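To make that proposal concrete, here is a minimal sketch (invented names and stand-in types, not the actual Oak or Lucene API) of what holding on to the searcher from within the cursor would look like: the cursor pins the searcher it started with, so every paging call sees the same reader and `lastDoc` can never point past the end of a newer, smaller index.

```java
import java.util.ArrayList;
import java.util.List;

class PinnedSearcherCursor {
    /** Stand-in for a Lucene IndexSearcher over a fixed snapshot of docs. */
    static class Searcher {
        final int maxDoc;
        Searcher(int maxDoc) { this.maxDoc = maxDoc; }
        /** Returns up to batchSize doc ids strictly after 'after'. */
        List<Integer> searchAfter(int after, int batchSize) {
            if (after >= maxDoc) {
                throw new IllegalArgumentException(
                    "after.doc exceeds the number of documents: after.doc="
                    + after + " limit=" + maxDoc);
            }
            List<Integer> batch = new ArrayList<>();
            for (int d = after + 1; d < maxDoc && batch.size() < batchSize; d++) {
                batch.add(d);
            }
            return batch;
        }
    }

    private final Searcher searcher;   // pinned once, never swapped mid-iteration
    private int lastDoc = -1;

    PinnedSearcherCursor(Searcher searcher) { this.searcher = searcher; }

    List<Integer> nextBatch(int batchSize) {
        if (lastDoc + 1 >= searcher.maxDoc) {
            return List.of();          // exhausted; never page past the limit
        }
        List<Integer> batch = searcher.searchAfter(lastDoc, batchSize);
        if (!batch.isEmpty()) {
            lastDoc = batch.get(batch.size() - 1);
        }
        return batch;
    }

    public static void main(String[] args) {
        // Even if the index shrinks between batches, the cursor keeps using
        // its original searcher, so iteration stays internally consistent.
        PinnedSearcherCursor cursor = new PinnedSearcherCursor(new Searcher(10));
        int seen = 0;
        List<Integer> batch;
        while (!(batch = cursor.nextBatch(4)).isEmpty()) {
            seen += batch.size();
        }
        System.out.println(seen);      // all 10 docs of the pinned snapshot
    }
}
```

One disadvantage, presumably, is that the cursor would keep a (potentially stale) index snapshot and its resources open for its whole lifetime.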


> LuceneIndex#loadDocs can throw IllegalArgumentException
> ---
>
> Key: OAK-2569
> URL: https://issues.apache.org/jira/browse/OAK-2569
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.1.6, 1.2.1
>Reporter: Alex Parvulescu
> Fix For: 1.3.0, 1.2.2
>
>
> It looks like in some cases an IllegalArgumentException will be thrown if the 
> _nextBatchSize_ passed to the _searchAfter_ method call is smaller than the 
> current searcher limit (_lastDoc.doc_).
> Stack trace for the failure:
> {code}
> java.lang.IllegalArgumentException: after.doc exceeds the number of documents in the reader: after.doc=387045 limit=386661
>  at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:442)
>  at org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:243)
>  at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.loadDocs(LuceneIndex.java:352)
>  at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:292)
>  at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:283)
>  at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>  at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor$1.hasNext(LuceneIndex.java:1055)
>  at com.google.common.collect.Iterators$7.computeNext(Iterators.java:645)
>  at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>  at org.apache.jackrabbit.oak.spi.query.Cursors$PathCursor.hasNext(Cursors.java:198)
>  at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor.hasNext(LuceneIndex.java:1076)
>  at org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.fetchNext(AggregationCursor.java:88)
>  at org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.hasNext(AggregationCursor.java:75)
>  at org.apache.jackrabbit.oak.spi.query.Cursors$IntersectionCursor.fetchNext(Cursors.java:389)
>  at org.apache.jackrabbit.oak.spi.query.Cursors$IntersectionCursor.hasNext(Cursors.java:381)
>  at org.apache.jackrabbit.oak.spi.query.Cursors$IntersectionCursor.fetchNext(Cursors.java:389)
>  at org.apache.jackrabbit.oak.spi.query.Cursors$IntersectionCursor.hasNext(Cursors.java:381)
>  at org.apache.jackrabbit.oak.spi.query.Cursors$IntersectionCursor.fetchNext(Cursors.java:389)
>  at org.apache.jackrabbit.oak.spi.query.Cursors$IntersectionCursor.hasNext(Cursors.java:381)
>  at org.apache.jackrabbit.oak.query.ast.SelectorImpl.next(SelectorImpl.java:401)
>  at org.apache.jackrabbit.oak.query.QueryImpl$RowIterator.fetchNext(QueryImpl.java:664)
>  at org.apache.jackrabbit.oak.query.QueryImpl$RowIterator.hasNext(QueryImpl.java:689)
>  at org.apache.jackrabbit.oak.query.FilterIterators$SortIterator.init(FilterIterators.java:203)
>  at org.apache.jackrabbit.oak.query.FilterIterators$SortIterator.hasNext(FilterIterators.java:237)
>  at org.apache.jackrabbit.oak.jcr.query.QueryResultImpl$1.fetch(QueryResultImpl.java:108)
>  at org.apache.jackrabbit.oak.jcr.query.QueryResultImpl$1.<init>(QueryResultImpl.java:104)
>  at org.apache.jackrabbit.oak.jcr.query.QueryResultImpl.getRows(QueryResultImpl.java:91)
>  at org.apache.jackrabbit.commons.query.GQL.executeJcrQuery(GQL.java:440)
>  at org.apache.jackrabbit.commons.query.GQL.execute(GQL.java:434)
>  at org.apache.jackrabbit.commons.query.GQL.execute(GQL.java:327)
>  at org.apache.jackrabbit.commons.query.GQL.execute(GQL.java:310)
>  at org.apache.jackrabbit.commons.query.GQL.execute(GQL.java:296)
> {code}

[jira] [Updated] (OAK-2569) LuceneIndex#loadDocs can throw IllegalArgumentException

2015-04-21 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2569:

Affects Version/s: 1.2.1




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2569) LuceneIndex#loadDocs can throw IllegalArgumentException

2015-04-21 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2569:

Fix Version/s: 1.2.2



[jira] [Commented] (OAK-2793) Time limit for HierarchicalInvalidator

2015-04-21 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506487#comment-14506487
 ] 

Vikas Saurabh commented on OAK-2793:


Umm, actually I was under the impression that we are now tending towards using 
the persistent cache. But, sure, it'd be useful to get numbers in both cases. 

> Time limit for HierarchicalInvalidator
> --
>
> Key: OAK-2793
> URL: https://issues.apache.org/jira/browse/OAK-2793
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Marcel Reutegger
> Fix For: 1.3.0
>
>
> This issue is related to OAK-2646. Every now and then I see reports of 
> background reads with a cache invalidation that takes a rather long time. 
> Sometimes minutes. It would be good to give the HierarchicalInvalidator an 
> upper limit for the time it may take to perform the invalidation. When the 
> time is up, the implementation should simply invalidate the remaining 
> documents.
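The description above can be sketched as follows; this is only an illustration with invented names (not the actual HierarchicalInvalidator code): ids are invalidated one by one until a time budget runs out, at which point whatever is left is handed to a single bulk fallback.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

class TimeBoxedInvalidator {
    /**
     * Processes ids one by one until the time budget is used up, then hands
     * all remaining ids to a cheap blanket invalidation in a single call.
     * Returns the number of ids that were invalidated individually.
     */
    static int invalidate(Queue<String> ids, long budgetMillis,
                          Consumer<String> invalidateOne,
                          Consumer<Queue<String>> invalidateRemaining) {
        long deadline = System.currentTimeMillis() + budgetMillis;
        int done = 0;
        while (!ids.isEmpty()) {
            if (System.currentTimeMillis() >= deadline) {
                invalidateRemaining.accept(ids);   // time is up: bulk fallback
                return done;
            }
            invalidateOne.accept(ids.poll());
            done++;
        }
        return done;
    }

    public static void main(String[] args) {
        Queue<String> ids = new ArrayDeque<>();
        for (int i = 0; i < 5; i++) ids.add("/node" + i);
        // Generous budget: everything is invalidated individually.
        int done = invalidate(ids, 10_000,
                id -> { /* invalidate a single cache entry */ },
                rest -> { /* blanket-invalidate the rest */ });
        System.out.println(done);
    }
}
```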





[jira] [Commented] (OAK-2792) Investigate feasibility/advantages of log-tailing mongo op-log to invalidate cache

2015-04-21 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506485#comment-14506485
 ] 

Vikas Saurabh commented on OAK-2792:


[~mmarth], the current idea was around invalidation, so I didn't think about the 
persistent cache. But, yes, we can certainly look at doing a proactive fetch 
apart from just invalidation. 
About abstracting the log-tailing logic so it is not tied too strongly to Mongo 
- sure, I'll take care of that. 
[~mreutegg], Chetan and I thought that we can use op-log changes just to get the 
ids to invalidate; the actual invalidation can remain in the background thread. 
In the worst case, we might invalidate excessively for changes that are not yet 
visible - but even then this might be cheaper than a blanket invalidation when 
the time limit is exceeded (as being discussed in OAK-2793).
Btw, I'm currently looking at OAK-2793 as it is fairly straightforward, and its 
results might get us some data about access patterns and the usefulness of the 
in-memory cache. 

> Investigate feasibility/advantages of log-tailing mongo op-log to invalidate 
> cache
> --
>
> Key: OAK-2792
> URL: https://issues.apache.org/jira/browse/OAK-2792
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Vikas Saurabh
>
> It is observed that current cache invalidation logic has some issues 
> (OAK-2187) and there is an Oak issue (OAK-2646) to investigate alternative 
> cache invalidation logic. One such option is to log-tail mongo op-log 
> (specific to mongo back-end obviously :)).
> This task is for investigating that approach.





[jira] [Resolved] (OAK-2794) Lucene index: after.doc exceeds the number of documents in the reader

2015-04-21 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-2794.
--
Resolution: Duplicate

Resolving this as a duplicate of OAK-2569.

> Lucene index: after.doc exceeds the number of documents in the reader
> -
>
> Key: OAK-2794
> URL: https://issues.apache.org/jira/browse/OAK-2794
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene, query
>Reporter: Thomas Mueller
>
> The following exception can occur:
> {noformat}
> java.lang.IllegalArgumentException: after.doc exceeds the number of documents in the reader: after.doc=2027462 limit=2027455
> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:442)
> at org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:243)
> at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.loadDocs(LuceneIndex.java:352)
> at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:292)
> at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:283)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor$1.hasNext(LuceneIndex.java:1062)
> at com.google.common.collect.Iterators$7.computeNext(Iterators.java:645)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at org.apache.jackrabbit.oak.spi.query.Cursors$PathCursor.hasNext(Cursors.java:198)
> at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor.hasNext(LuceneIndex.java:1083)
> at org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.fetchNext(AggregationCursor.java:88)
> at org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.hasNext(AggregationCursor.java:75)
> at org.apache.jackrabbit.oak.spi.query.Cursors$IntersectionCursor.fetchNext(Cursors.java:403)
> at org.apache.jackrabbit.oak.spi.query.Cursors$IntersectionCursor.hasNext(Cursors.java:381)
> at org.apache.jackrabbit.oak.query.ast.SelectorImpl.next(SelectorImpl.java:401)
> at org.apache.jackrabbit.oak.query.QueryImpl$RowIterator.fetchNext(QueryImpl.java:664)
> at org.apache.jackrabbit.oak.query.QueryImpl$RowIterator.hasNext(QueryImpl.java:689)
> at org.apache.jackrabbit.oak.query.FilterIterators$SortIterator.init(FilterIterators.java:203)
> at org.apache.jackrabbit.oak.query.FilterIterators$SortIterator.hasNext(FilterIterators.java:237)
> at org.apache.jackrabbit.oak.jcr.query.QueryResultImpl$1.fetch(QueryResultImpl.java:108)
> at org.apache.jackrabbit.oak.jcr.query.QueryResultImpl$1.<init>(QueryResultImpl.java:104)
> at org.apache.jackrabbit.oak.jcr.query.QueryResultImpl.getRows(QueryResultImpl.java:91)
> at org.apache.jackrabbit.commons.query.GQL.executeJcrQuery(GQL.java:440)
> at org.apache.jackrabbit.commons.query.GQL.execute(GQL.java:434)
> at org.apache.jackrabbit.commons.query.GQL.execute(GQL.java:327)
> at org.apache.jackrabbit.commons.query.GQL.execute(GQL.java:310)
> at org.apache.jackrabbit.commons.query.GQL.execute(GQL.java:296)
> {noformat}
> I guess the reason is that the "lastDoc" is part of the iterator, but the 
> "IndexSearcher" is not always the same, so that if there were changes in the 
> meantime, the "lastDoc" does not correspond to this "IndexSearcher". This 
> might mean the query result is sometimes wrong (matches are not returned, or 
> returned twice).
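The suspected mismatch can be modelled in a few lines (invented stand-ins, not Lucene itself): the iterator remembers "lastDoc" from one searcher, but the next batch may run against a re-opened searcher whose reader has fewer documents, which trips exactly this argument check.

```java
class SearchAfterMismatch {
    /** Mimics the argument check that IndexSearcher.searchAfter performs. */
    static void searchAfter(int afterDoc, int readerMaxDoc) {
        if (afterDoc >= readerMaxDoc) {
            throw new IllegalArgumentException(
                "after.doc exceeds the number of documents in the reader: "
                + "after.doc=" + afterDoc + " limit=" + readerMaxDoc);
        }
        // ... would collect the next batch here ...
    }

    public static void main(String[] args) {
        int lastDoc = 387045;   // cursor position taken from the old searcher
        int newLimit = 386661;  // re-opened reader shrank in the meantime
        try {
            searchAfter(lastDoc, newLimit);
        } catch (IllegalArgumentException e) {
            System.out.println("reproduced: " + e.getMessage());
        }
    }
}
```

This also shows why merely catching the exception is not enough: when the reader grew instead of shrank, no exception is thrown, but the stale "lastDoc" silently skips or repeats matches.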





[jira] [Updated] (OAK-2726) Avoid repository traversal for trivial node type changes

2015-04-21 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2726:

Attachment: OAK-2726.diff

(work-in-progress; need to decide how to handle the new code duplication with 
Jackrabbit)

> Avoid repository traversal for trivial node type changes
> 
>
> Key: OAK-2726
> URL: https://issues.apache.org/jira/browse/OAK-2726
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
> Fix For: 1.4
>
> Attachments: OAK-2726.diff
>
>
> The {{TypeEditor}} in oak-core checks the repository content when a node type 
> changes to make sure the content still conforms to the updated node type. For 
> a repository with a lot of nodes, this check can take quite a bit of time.
> Jackrabbit has an optimization in place for trivial node type changes. E.g. 
> it will not check the content if a non-mandatory item is added to an existing 
> node type definition. This optimization could be implemented in Oak as well.
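The "trivial change" test described above could be sketched like this (hypothetical types and a deliberately simplified model, not the real TypeEditor): a full repository scan is skippable when the only difference is newly added non-mandatory items.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class NodeTypeDiff {
    /**
     * 'before' and 'after' map item-definition name -> mandatory? for one
     * node type. Returns true when the content check can safely be skipped.
     */
    static boolean isTrivialChange(Map<String, Boolean> before,
                                   Map<String, Boolean> after) {
        // Removing or altering an existing item may invalidate content.
        for (Map.Entry<String, Boolean> e : before.entrySet()) {
            if (!e.getValue().equals(after.get(e.getKey()))) {
                return false;
            }
        }
        // New items are harmless only if none of them is mandatory.
        Set<String> added = new HashSet<>(after.keySet());
        added.removeAll(before.keySet());
        for (String name : added) {
            if (after.get(name)) {
                return false;   // new mandatory item: content must be checked
            }
        }
        return true;            // safe to skip the repository traversal
    }
}
```

The real check would of course have to consider more than a mandatory flag (types, constraints, supertypes), but the shape of the decision is the same.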





[jira] [Assigned] (OAK-2726) Avoid repository traversal for trivial node type changes

2015-04-21 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke reassigned OAK-2726:
---

Assignee: Julian Reschke



[jira] [Updated] (OAK-2795) TarMK revision cleanup message too verbose.

2015-04-21 Thread Zygmunt Wiercioch (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zygmunt Wiercioch updated OAK-2795:
---
Summary: TarMK revision cleanup message too verbose.  (was: TarML revision 
cleanup message too verbose.)

> TarMK revision cleanup message too verbose.
> ---
>
> Key: OAK-2795
> URL: https://issues.apache.org/jira/browse/OAK-2795
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Zygmunt Wiercioch
>Priority: Minor
>
> The segment GC message can be thousands of lines long on a system which 
> experienced a lot of activity.  For example, in my test, I had:
> {noformat}
> org.apache.jackrabbit.oak.plugins.segment.file.TarReader-GC Cleaned segments 
> from data8a.tar: 
>  4b6882e7-babe-4854-a966-c3a630c338c6, 506a2eb3-92a6-4b83-a320-5cc5ac304302, 
> f179e4b1-acec-4a5d-a686-bf5e97828563, 8c6b3a4d-c327-4701-affb-fcd055c4aa67, 
> {noformat}
> followed by approximately 40,000 more lines.
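One way to tame such a message (a sketch with invented names, not the actual TarReader code) is to log only a one-line summary at the default level and keep the full id list behind a debug guard:

```java
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

class GcLogging {
    private static final Logger LOG = Logger.getLogger("TarReader-GC");

    static String summary(String file, List<String> segmentIds) {
        return "Cleaned " + segmentIds.size() + " segments from " + file;
    }

    static void logCleaned(String file, List<String> segmentIds) {
        LOG.info(summary(file, segmentIds));   // one short line at INFO
        if (LOG.isLoggable(Level.FINE)) {      // 40,000 ids only on demand
            LOG.fine("Cleaned segments from " + file + ": "
                     + String.join(", ", segmentIds));
        }
    }
}
```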





[jira] [Updated] (OAK-2795) TarML revision cleanup message too verbose.

2015-04-21 Thread Zygmunt Wiercioch (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zygmunt Wiercioch updated OAK-2795:
---
Description: 
The segment GC message can be thousands of lines long on a system which 
experienced a lot of activity.  For example, in my test, I had:
{noformat}
org.apache.jackrabbit.oak.plugins.segment.file.TarReader-GC Cleaned segments 
from data8a.tar: 
 4b6882e7-babe-4854-a966-c3a630c338c6, 506a2eb3-92a6-4b83-a320-5cc5ac304302, 
f179e4b1-acec-4a5d-a686-bf5e97828563, 8c6b3a4d-c327-4701-affb-fcd055c4aa67, 
{noformat}
followed by approximately 40,000 more lines.


  was:
The segment GC message can be thousands of lines long on a system which 
experience a lot of activity.  For example, in my test, I had:
{noformat}
org.apache.jackrabbit.oak.plugins.segment.file.TarReader-GC Cleaned segments 
from data8a.tar: 
 4b6882e7-babe-4854-a966-c3a630c338c6, 506a2eb3-92a6-4b83-a320-5cc5ac304302, 
f179e4b1-acec-4a5d-a686-bf5e97828563, 8c6b3a4d-c327-4701-affb-fcd055c4aa67, 
{noformat}
followed by approximately 40,000 more lines.





[jira] [Created] (OAK-2795) TarML revision cleanup message too verbose.

2015-04-21 Thread Zygmunt Wiercioch (JIRA)
Zygmunt Wiercioch created OAK-2795:
--

 Summary: TarML revision cleanup message too verbose.
 Key: OAK-2795
 URL: https://issues.apache.org/jira/browse/OAK-2795
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Zygmunt Wiercioch
Priority: Minor




[jira] [Commented] (OAK-2792) Investigate feasibility/advantages of log-tailing mongo op-log to invalidate cache

2015-04-21 Thread Michael Marth (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505002#comment-14505002
 ] 

Michael Marth commented on OAK-2792:


[~catholicon], some comments:

# if we find this to be useful (which I am quite convinced we will), then we 
should consider implementing it in a way that can also be applied to 
non-MongoDB persistence layers (some cloud DBs that have concepts similar to 
log tailing come to mind) - so it should basically not be specific to MongoMK
# afaik there are 2 possibilities for which cache to update: the in-memory 
cache within the DocumentMK or the persistent cache. We also have 2 
possibilities for what to do: either flush invalid items or proactively write 
new items into the cache. If we go with the persistent cache (which is 
immutable), then the latter decision is simple: it would be to proactively fill.
# *If* we go with proactive filling, IMO this makes sense under the underlying 
assumption that content written on one Oak instance is likely to be read on 
other instances soon after. That's a big assumption - so we need to make sure 
we have test cases that model our read/write assumptions (or intended 
improvements). Maybe we can extend the scalability test framework for such 
tests?



[jira] [Commented] (OAK-2794) Lucene index: after.doc exceeds the number of documents in the reader

2015-04-21 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504987#comment-14504987
 ] 

Alex Parvulescu commented on OAK-2794:
--

duplicate of OAK-2569?



[jira] [Created] (OAK-2794) Lucene index: after.doc exceeds the number of documents in the reader

2015-04-21 Thread Thomas Mueller (JIRA)
Thomas Mueller created OAK-2794:
---

 Summary: Lucene index: after.doc exceeds the number of documents 
in the reader
 Key: OAK-2794
 URL: https://issues.apache.org/jira/browse/OAK-2794
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: lucene, query
Reporter: Thomas Mueller


The following exception can occur:

{noformat}
java.lang.IllegalArgumentException: after.doc exceeds the number of documents in the reader: after.doc=2027462 limit=2027455
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:442)
at org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:243)
at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.loadDocs(LuceneIndex.java:352)
at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:292)
at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:283)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor$1.hasNext(LuceneIndex.java:1062)
at com.google.common.collect.Iterators$7.computeNext(Iterators.java:645)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.apache.jackrabbit.oak.spi.query.Cursors$PathCursor.hasNext(Cursors.java:198)
at org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor.hasNext(LuceneIndex.java:1083)
at org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.fetchNext(AggregationCursor.java:88)
at org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.hasNext(AggregationCursor.java:75)
at org.apache.jackrabbit.oak.spi.query.Cursors$IntersectionCursor.fetchNext(Cursors.java:403)
at org.apache.jackrabbit.oak.spi.query.Cursors$IntersectionCursor.hasNext(Cursors.java:381)
at org.apache.jackrabbit.oak.query.ast.SelectorImpl.next(SelectorImpl.java:401)
at org.apache.jackrabbit.oak.query.QueryImpl$RowIterator.fetchNext(QueryImpl.java:664)
at org.apache.jackrabbit.oak.query.QueryImpl$RowIterator.hasNext(QueryImpl.java:689)
at org.apache.jackrabbit.oak.query.FilterIterators$SortIterator.init(FilterIterators.java:203)
at org.apache.jackrabbit.oak.query.FilterIterators$SortIterator.hasNext(FilterIterators.java:237)
at org.apache.jackrabbit.oak.jcr.query.QueryResultImpl$1.fetch(QueryResultImpl.java:108)
at org.apache.jackrabbit.oak.jcr.query.QueryResultImpl$1.<init>(QueryResultImpl.java:104)
at org.apache.jackrabbit.oak.jcr.query.QueryResultImpl.getRows(QueryResultImpl.java:91)
at org.apache.jackrabbit.commons.query.GQL.executeJcrQuery(GQL.java:440)
at org.apache.jackrabbit.commons.query.GQL.execute(GQL.java:434)
at org.apache.jackrabbit.commons.query.GQL.execute(GQL.java:327)
at org.apache.jackrabbit.commons.query.GQL.execute(GQL.java:310)
at org.apache.jackrabbit.commons.query.GQL.execute(GQL.java:296)
{noformat}

I guess the reason is that the "lastDoc" is part of the iterator, but the 
"IndexSearcher" is not always the same, so that if there were changes in the 
meantime, the "lastDoc" does not correspond to this "IndexSearcher". This might 
mean the query result is sometimes wrong (matches are not returned, or returned 
twice).
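To make the failure mode concrete, here is a minimal, self-contained Java sketch (not Oak's or Lucene's actual code; the class and method names are illustrative) of why holding one index snapshot for the whole cursor avoids the "after.doc exceeds the number of documents" error: a resume position taken from an older, larger snapshot can point past the end of a newer, smaller one.

```java
import java.util.ArrayList;
import java.util.List;

public class SnapshotPaging {

    /** A fake index "snapshot": an immutable copy of the doc list. */
    static List<Integer> snapshot(List<Integer> index) {
        return new ArrayList<>(index);
    }

    /**
     * Reads docs after position 'afterDoc' from the given snapshot,
     * mimicking IndexSearcher.searchAfter. Throws if the resume point
     * lies beyond the snapshot, like Lucene's IllegalArgumentException.
     */
    static List<Integer> searchAfter(List<Integer> snap, int afterDoc, int batch) {
        if (afterDoc > snap.size()) {
            throw new IllegalArgumentException(
                "after.doc exceeds the number of documents: after.doc="
                + afterDoc + " limit=" + snap.size());
        }
        int to = Math.min(afterDoc + batch, snap.size());
        return snap.subList(afterDoc, to);
    }

    /** Safe variant: pin one snapshot for the whole cursor lifetime. */
    static List<Integer> readAll(List<Integer> index, int batch) {
        List<Integer> snap = snapshot(index);   // pinned exactly once
        List<Integer> out = new ArrayList<>();
        int pos = 0;
        while (pos < snap.size()) {
            List<Integer> docs = searchAfter(snap, pos, batch);
            out.addAll(docs);
            pos += docs.size();
        }
        return out;
    }
}
```

Holding on to the IndexSearcher for the lifetime of the cursor, as suggested in the OAK-2569 comments, corresponds to pinning the snapshot once in readAll instead of re-acquiring it per batch.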





[jira] [Updated] (OAK-2792) Investigate feasibility/advantages of log-tailing mongo op-log to invalidate cache

2015-04-21 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2792:
---
Summary: Investigate feasibility/advantages of log-tailing mongo op-log to 
invalidate cache  (was: Investigate feasibility/advantages of log-taling mongo 
op-log to invalidate cache)

> Investigate feasibility/advantages of log-tailing mongo op-log to invalidate 
> cache
> --
>
> Key: OAK-2792
> URL: https://issues.apache.org/jira/browse/OAK-2792
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Vikas Saurabh
>
> It is observed that current cache invalidation logic has some issues 
> (OAK-2187) and there is an Oak issue (OAK-2646) to investigate alternative 
> cache invalidation logic. One such option is to log-tail mongo op-log 
> (specific to mongo back-end obviously :)).
> This task is for investigating that approach.





[jira] [Commented] (OAK-2682) Introduce time difference detection for DocumentNodeStore

2015-04-21 Thread Robert Munteanu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504888#comment-14504888
 ] 

Robert Munteanu commented on OAK-2682:
--

{quote}Not necessarily. Wouldn't it be sufficient for each DocumentStore 
instance to check with the clock on the backend? This would guarantee the clock 
difference between each two DocumentStore is less than 2 times the maximum 
allowed clock difference between the backend and a DocumentStore.{quote}

Sorry, I was imprecise with my comment. 'This solution' -> [~chetanm]'s 
proposal. My preference is to check with the backend, which means that we don't 
need peer-to-peer connections between the cluster nodes.

> Introduce time difference detection for DocumentNodeStore
> -
>
> Key: OAK-2682
> URL: https://issues.apache.org/jira/browse/OAK-2682
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Stefan Egli
> Fix For: 1.3.0
>
>
> Currently the lease mechanism in DocumentNodeStore/mongoMk is based on the 
> assumption that the clocks are in perfect sync between all nodes of the 
> cluster. The lease is valid for 60sec with a timeout of 30sec. If clocks are 
> off by too much, and background operations happen to take a couple of seconds, you 
> run the risk of timing out a lease. So introducing a check which WARNs if the 
> clocks in a cluster are off by too much (1st threshold, eg 5sec?) would help 
> increase awareness. Further drastic measure could be to prevent a startup of 
> Oak at all if the difference is for example higher than a 2nd threshold 
> (optional I guess, but could be 20sec?).





[jira] [Commented] (OAK-2793) Time limit for HierarchicalInvalidator

2015-04-21 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504882#comment-14504882
 ] 

Marcel Reutegger commented on OAK-2793:
---

For testing purposes it would probably be better to use a bigger document 
cache. Otherwise the time limit may not kick in.

> Time limit for HierarchicalInvalidator
> --
>
> Key: OAK-2793
> URL: https://issues.apache.org/jira/browse/OAK-2793
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Marcel Reutegger
> Fix For: 1.3.0
>
>
> This issue is related to OAK-2646. Every now and then I see reports of 
> background reads with a cache invalidation that takes a rather long time. 
> Sometimes minutes. It would be good to give the HierarchicalInvalidator an 
> upper limit for the time it may take to perform the invalidation. When the 
> time is up, the implementation should simply invalidate the remaining 
> documents.





[jira] [Commented] (OAK-2682) Introduce time difference detection for DocumentNodeStore

2015-04-21 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504877#comment-14504877
 ] 

Marcel Reutegger commented on OAK-2682:
---

bq. This solution requires some sort of communication avenue between the Oak 
cluster nodes.

Not necessarily. Wouldn't it be sufficient for each DocumentStore instance to 
check with the clock on the backend? This would guarantee the clock difference 
between each two DocumentStore is less than 2 times the maximum allowed clock 
difference between the backend and a DocumentStore. 
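The bound described above can be sketched in a few lines of self-contained Java. The thresholds are the hypothetical values floated in the issue description (5s warn, 20s refuse startup), not actual Oak constants; if every node keeps its skew against the backend at most d, the triangle inequality bounds any pairwise skew by 2*d, so no peer-to-peer channel is needed.

```java
public class ClockDiffCheck {

    // Hypothetical thresholds from the issue discussion (not Oak constants).
    static final long WARN_MILLIS = 5_000;   // 1st threshold: log a warning
    static final long FAIL_MILLIS = 20_000;  // 2nd threshold: refuse startup

    /** Absolute difference between this node's clock and the backend's. */
    static long skewMillis(long localTimeMillis, long backendTimeMillis) {
        return Math.abs(localTimeMillis - backendTimeMillis);
    }

    /**
     * If every node's skew against the backend is at most d, the skew
     * between any two nodes is at most 2*d (triangle inequality).
     */
    static long pairwiseBoundMillis(long maxBackendSkewMillis) {
        return 2 * maxBackendSkewMillis;
    }

    /** Classify a measured skew against the two thresholds. */
    static String verdict(long skewMillis) {
        if (skewMillis > FAIL_MILLIS) return "FAIL";
        if (skewMillis > WARN_MILLIS) return "WARN";
        return "OK";
    }
}
```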

> Introduce time difference detection for DocumentNodeStore
> -
>
> Key: OAK-2682
> URL: https://issues.apache.org/jira/browse/OAK-2682
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Stefan Egli
> Fix For: 1.3.0
>
>
> Currently the lease mechanism in DocumentNodeStore/mongoMk is based on the 
> assumption that the clocks are in perfect sync between all nodes of the 
> cluster. The lease is valid for 60sec with a timeout of 30sec. If clocks are 
> off by too much, and background operations happen to take a couple of seconds, you 
> run the risk of timing out a lease. So introducing a check which WARNs if the 
> clocks in a cluster are off by too much (1st threshold, eg 5sec?) would help 
> increase awareness. Further drastic measure could be to prevent a startup of 
> Oak at all if the difference is for example higher than a 2nd threshold 
> (optional I guess, but could be 20sec?).





[jira] [Commented] (OAK-2792) Investigate feasibility/advantages of log-taling mongo op-log to invalidate cache

2015-04-21 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504845#comment-14504845
 ] 

Marcel Reutegger commented on OAK-2792:
---

Sounds interesting. We'd have to be careful to ensure the information we obtain 
from the op-log is consistent with the current view of the root document. The 
root document contains the head revisions of the other cluster nodes and 
defines what changes further down the tree must be considered visible. The 
cache invalidation based on the asynchronous (\!) op-log must ensure it 
invalidates all changes to documents that happened before the update of the 
root document. I think we will have to extract this information from the op-log 
as well, to tell up to what revision on the root document we've seen changes.

> Investigate feasibility/advantages of log-taling mongo op-log to invalidate 
> cache
> -
>
> Key: OAK-2792
> URL: https://issues.apache.org/jira/browse/OAK-2792
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Vikas Saurabh
>
> It is observed that current cache invalidation logic has some issues 
> (OAK-2187) and there is an Oak issue (OAK-2646) to investigate alternative 
> cache invalidation logic. One such option is to log-tail mongo op-log 
> (specific to mongo back-end obviously :)).
> This task is for investigating that approach.





[jira] [Updated] (OAK-2792) Investigate feasibility/advantages of log-taling mongo op-log to invalidate cache

2015-04-21 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-2792:
-
Priority: Major  (was: Critical)

> Investigate feasibility/advantages of log-taling mongo op-log to invalidate 
> cache
> -
>
> Key: OAK-2792
> URL: https://issues.apache.org/jira/browse/OAK-2792
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Vikas Saurabh
>
> It is observed that current cache invalidation logic has some issues 
> (OAK-2187) and there is an Oak issue (OAK-2646) to investigate alternative 
> cache invalidation logic. One such option is to log-tail mongo op-log 
> (specific to mongo back-end obviously :)).
> This task is for investigating that approach.





[jira] [Updated] (OAK-2792) Investigate feasibility/advantages of log-taling mongo op-log to invalidate cache

2015-04-21 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-2792:
-
Issue Type: Improvement  (was: Bug)

> Investigate feasibility/advantages of log-taling mongo op-log to invalidate 
> cache
> -
>
> Key: OAK-2792
> URL: https://issues.apache.org/jira/browse/OAK-2792
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Vikas Saurabh
>Priority: Critical
>
> It is observed that current cache invalidation logic has some issues 
> (OAK-2187) and there is an Oak issue (OAK-2646) to investigate alternative 
> cache invalidation logic. One such option is to log-tail mongo op-log 
> (specific to mongo back-end obviously :)).
> This task is for investigating that approach.





[jira] [Commented] (OAK-2793) Time limit for HierarchicalInvalidator

2015-04-21 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504826#comment-14504826
 ] 

Vikas Saurabh commented on OAK-2793:


[~mreutegg], I'd prototype this and post a rough comparison of performance drop 
due to excessive cache invalidation if any. BTW, the setup I'd be using it with 
would have a fairly small doc cache and would have persistent cache enabled. 
Would that be fine?

> Time limit for HierarchicalInvalidator
> --
>
> Key: OAK-2793
> URL: https://issues.apache.org/jira/browse/OAK-2793
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Marcel Reutegger
> Fix For: 1.3.0
>
>
> This issue is related to OAK-2646. Every now and then I see reports of 
> background reads with a cache invalidation that takes a rather long time. 
> Sometimes minutes. It would be good to give the HierarchicalInvalidator an 
> upper limit for the time it may take to perform the invalidation. When the 
> time is up, the implementation should simply invalidate the remaining 
> documents.





[jira] [Created] (OAK-2793) Time limit for HierarchicalInvalidator

2015-04-21 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-2793:
-

 Summary: Time limit for HierarchicalInvalidator
 Key: OAK-2793
 URL: https://issues.apache.org/jira/browse/OAK-2793
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
 Fix For: 1.3.0


This issue is related to OAK-2646. Every now and then I see reports of 
background reads with a cache invalidation that takes a rather long time. 
Sometimes minutes. It would be good to give the HierarchicalInvalidator an 
upper limit for the time it may take to perform the invalidation. When the time 
is up, the implementation should simply invalidate the remaining documents.
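The proposed behavior can be sketched in self-contained Java (illustrative only, not the actual HierarchicalInvalidator): check documents selectively until the deadline passes, then invalidate whatever is left wholesale, trading some cache hits for a bounded invalidation time.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.function.LongSupplier;
import java.util.function.Predicate;

public class TimeLimitedInvalidator {

    /**
     * Checks each cached id against 'isStale' (the expensive, selective
     * part) until 'deadlineMillis' passes on the supplied clock; whatever
     * remains unchecked is invalidated wholesale.
     */
    static Set<String> invalidate(List<String> cachedIds,
                                  Predicate<String> isStale,
                                  LongSupplier clock,
                                  long deadlineMillis) {
        Set<String> toInvalidate = new LinkedHashSet<>();
        int i = 0;
        for (; i < cachedIds.size(); i++) {
            if (clock.getAsLong() > deadlineMillis) {
                break;  // time is up: stop the selective check
            }
            String id = cachedIds.get(i);
            if (isStale.test(id)) {
                toInvalidate.add(id);
            }
        }
        // Invalidate all remaining documents unchecked.
        toInvalidate.addAll(cachedIds.subList(i, cachedIds.size()));
        return toInvalidate;
    }
}
```

Past the deadline, correctness is preserved (stale entries are never kept); only cache efficiency degrades, since fresh entries after the cutoff are invalidated too.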





[jira] [Commented] (OAK-2682) Introduce time difference detection for DocumentNodeStore

2015-04-21 Thread Robert Munteanu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504822#comment-14504822
 ] 

Robert Munteanu commented on OAK-2682:
--

Coming back to this discussion, some comments:

1. Completely agree with [~mreutegg] - this should be implemented at the 
DocumentStore level so we can implement it distinctly for the RDB NodeStore. 
Note however that getting the current time in SQL is not to my knowledge 
something universal - we would have to implement different checks for each 
database. So my proposal is to make this an optional operation and start with 
the Mongo implementation.

2. This solution requires some sort of communication avenue between the Oak 
cluster nodes. My first thought was to use the DocumentStore implementation 
since it is an already existing mechanism for communication between the nodes. 

An alternative would be to introduce another communication protocol - e.g. HTTP 
or NTP. The disadvantage here is that it would require the instances to be able 
to communicate with each other and that would introduce additional hurdles from 
an operational point of view. There is no guarantee (or requirement) at the 
moment that the cluster node instances are able to communicate with each 
other.

> Introduce time difference detection for DocumentNodeStore
> -
>
> Key: OAK-2682
> URL: https://issues.apache.org/jira/browse/OAK-2682
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Stefan Egli
> Fix For: 1.3.0
>
>
> Currently the lease mechanism in DocumentNodeStore/mongoMk is based on the 
> assumption that the clocks are in perfect sync between all nodes of the 
> cluster. The lease is valid for 60sec with a timeout of 30sec. If clocks are 
> off by too much, and background operations happen to take a couple of seconds, you 
> run the risk of timing out a lease. So introducing a check which WARNs if the 
> clocks in a cluster are off by too much (1st threshold, eg 5sec?) would help 
> increase awareness. Further drastic measure could be to prevent a startup of 
> Oak at all if the difference is for example higher than a 2nd threshold 
> (optional I guess, but could be 20sec?).





[jira] [Commented] (OAK-2792) Investigate feasibility/advantages of log-taling mongo op-log to invalidate cache

2015-04-21 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504820#comment-14504820
 ] 

Vikas Saurabh commented on OAK-2792:


[~mreutegg], can you please review if such an approach makes sense?

(cc [~chetanm])

> Investigate feasibility/advantages of log-taling mongo op-log to invalidate 
> cache
> -
>
> Key: OAK-2792
> URL: https://issues.apache.org/jira/browse/OAK-2792
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Reporter: Vikas Saurabh
>Priority: Critical
>
> It is observed that current cache invalidation logic has some issues 
> (OAK-2187) and there is an Oak issue (OAK-2646) to investigate alternative 
> cache invalidation logic. One such option is to log-tail mongo op-log 
> (specific to mongo back-end obviously :)).
> This task is for investigating that approach.





[jira] [Created] (OAK-2792) Investigate feasibility/advantages of log-taling mongo op-log to invalidate cache

2015-04-21 Thread Vikas Saurabh (JIRA)
Vikas Saurabh created OAK-2792:
--

 Summary: Investigate feasibility/advantages of log-taling mongo 
op-log to invalidate cache
 Key: OAK-2792
 URL: https://issues.apache.org/jira/browse/OAK-2792
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Vikas Saurabh
Priority: Critical


It is observed that current cache invalidation logic has some issues (OAK-2187) 
and there is an Oak issue (OAK-2646) to investigate alternative cache 
invalidation logic. One such option is to log-tail mongo op-log (specific to 
mongo back-end obviously :)).

This task is for investigating that approach.





[jira] [Resolved] (OAK-2690) Add optional UserConfiguration#getUserPrincipalProvider()

2015-04-21 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-2690.
-
Resolution: Fixed

Committed revision 1675089.

> Add optional UserConfiguration#getUserPrincipalProvider()
> -
>
> Key: OAK-2690
> URL: https://issues.apache.org/jira/browse/OAK-2690
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: angela
>Assignee: angela
> Fix For: 1.3.0
>
> Attachments: OAK-2690.patch, getgroupmembership.txt, 
> loginmembership_compare_userprincipalprovider.txt
>
>
> while playing around with overall group principal resolution during the 
> repository login, I thought that having a principal provider that knows about 
> the details of the user management implementation might be a slight 
> improvement compared to the generic default implementation as present in 
> {{org.apache.jackrabbit.oak.security.principal.PrincipalProviderImpl}}, which 
> just acts on the {{UserManager}} interface and thus always creates 
> intermediate {{Authorizable}} objects.
> in order to be able to get there (without having the default principal mgt 
> implementation rely on implementation details of the user mgt module), we 
> would need an addition to the {{UserConfiguration}} that allows to optionally 
> obtain a {{PrincipalProvider}}; the fallback in the default  
> {{PrincipalConfiguration}} in case the user configuration does not expose a 
> specific principal provider would be the current (generic) solution.





[jira] [Updated] (OAK-2714) Test failures on Jenkins

2015-04-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2714:
---
Description: 
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.solr.configuration.DefaultAnalyzersConfigurationTest | 61, 63, 92, 94, 103, 106, 111 | DOCUMENT_RDB, SEGMENT_MK | 1.8, 1.7 |
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder | 81, 87, 92, 95, 96 | DOCUMENT_NS, DOCUMENT_RDB (2) | 1.6, 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.setPrimaryType | 69, 83, 97, 105 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest.observation | 48, 55 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems | 41, 88, 109 | DOCUMENT_RDB | 1.7, 1.8 |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 |




  was:
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.solr.configuration.DefaultAnalyzersConfigurationTest | 61, 63, 92, 94, 103, 106 | DOCUMENT_RDB, SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder | 81, 87, 92, 95, 96 | DOCUMENT_NS, DOCUMENT_RDB (2) | 1.6, 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.setPrimaryType | 69, 83, 97, 105 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest.observation | 48, 55 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems | 41, 88, 109 | DOCUMENT_RDB | 1.7, 1.8 |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 |





> Test failures on Jenkins
> 
>
> Key: OAK-2714
> URL: https://issues.apache.org/

[jira] [Updated] (OAK-2784) Remove Nullable annotation in Predicates of BackgroundObserver

2015-04-21 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2784:
--
Summary: Remove Nullable annotation in Predicates of BackgroundObserver  
(was: Potential NPEs in BackgroundObserverMBean)

> Remove Nullable annotation in Predicates of BackgroundObserver
> --
>
> Key: OAK-2784
> URL: https://issues.apache.org/jira/browse/OAK-2784
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: angela
>Assignee: Chetan Mehrotra
>  Labels: technical_debt
> Fix For: 1.3.0
>
> Attachments: OAK-2784.patch
>
>
> {code}
> @Override
> public int getLocalEventCount() {
>     return size(filter(queue, new Predicate<ContentChange>() {
>         @Override
>         public boolean apply(@Nullable ContentChange input) {
>             return input.info != null;
>         }
>     }));
> }
>
> @Override
> public int getExternalEventCount() {
>     return size(filter(queue, new Predicate<ContentChange>() {
>         @Override
>         public boolean apply(@Nullable ContentChange input) {
>             return input.info == null;
>         }
>     }));
> }
> {code}
> both methods should probably check for {{input}} being null before accessing 
> {{input.info}}
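For illustration, here is a null-safe variant of the counting logic, written with plain java.util.function instead of Guava so it is self-contained. The ContentChange stand-in is hypothetical, and note that the actual fix in this issue went the other way: the @Nullable annotation was removed because the queue never contains null.

```java
import java.util.List;
import java.util.Objects;
import java.util.function.Predicate;

public class EventCounts {

    /** Stand-in for BackgroundObserver's ContentChange (illustrative only). */
    static final class ContentChange {
        final Object info;  // null is taken to mean an external change
        ContentChange(Object info) { this.info = info; }
    }

    /** Null-safe count: a null queue element matches neither predicate. */
    static long count(List<ContentChange> queue, Predicate<ContentChange> p) {
        return queue.stream().filter(Objects::nonNull).filter(p).count();
    }

    static long localEventCount(List<ContentChange> queue) {
        return count(queue, c -> c.info != null);
    }

    static long externalEventCount(List<ContentChange> queue) {
        return count(queue, c -> c.info == null);
    }
}
```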





[jira] [Commented] (OAK-2281) Support for conditional indexingRules

2015-04-21 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504703#comment-14504703
 ] 

Thomas Mueller commented on OAK-2281:
-

What is the reason (use case) for this request? Is it to 

(a) save disk space / speed up indexing
(b) filter query results

If it's (b), I think the _index_ itself does not need to support such rules, if 
that means the query returns the incorrect data. Instead, the _query_ should be 
changed to filter out the nodes you don't want (" and @priority = 'high'" for 
example).

> Support for conditional indexingRules
> -
>
> Key: OAK-2281
> URL: https://issues.apache.org/jira/browse/OAK-2281
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>
> JR2 supported conditional indexing rules like below. Similar support should 
> be provided within Oak
> {code:xml}
> <configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0">
>   <index-rule nodeType="nt:unstructured"
>               boost="2.0"
>               condition="ancestor::*/@priority = 'high'">
>     <property>Text</property>
>   </index-rule>
>   <index-rule nodeType="nt:unstructured"
>               boost="0.5"
>               condition="parent::foo/@priority = 'low'">
>     <property>Text</property>
>   </index-rule>
>   <index-rule nodeType="nt:unstructured"
>               boost="1.5"
>               condition="bar/@priority = 'medium'">
>     <property>Text</property>
>   </index-rule>
>   <index-rule nodeType="nt:unstructured">
>     <property>Text</property>
>   </index-rule>
> </configuration>
> {code}





[jira] [Comment Edited] (OAK-2784) Potential NPEs in BackgroundObserverMBean

2015-04-21 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504695#comment-14504695
 ] 

Chetan Mehrotra edited comment on OAK-2784 at 4/21/15 9:48 AM:
---

Thanks [~mreutegg]! This was the right fix as {{Nullable}} annotation was 
giving wrong indication. With this patch Findbugs seems to be happy as well

Also thanks to [~anchela] for taking note of this and following up. In future 
I will be more careful about how this annotation is used and also keep an eye 
on Findbugs.

{code}

{code}

Applied the patch to trunk in http://svn.apache.org/r1675074


was (Author: chetanm):
Thanks [~mreutegg]! This was the right fix as {{Nullable}} annotation was 
giving wrong indication. With this patch Findbugs seems to be happy as well

{code}

{code}

Applied the patch to trunk in http://svn.apache.org/r1675074

> Potential NPEs in BackgroundObserverMBean
> -
>
> Key: OAK-2784
> URL: https://issues.apache.org/jira/browse/OAK-2784
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: angela
>Assignee: Chetan Mehrotra
>  Labels: technical_debt
> Fix For: 1.3.0
>
> Attachments: OAK-2784.patch
>
>
> {code}
> @Override
> public int getLocalEventCount() {
>     return size(filter(queue, new Predicate<ContentChange>() {
>         @Override
>         public boolean apply(@Nullable ContentChange input) {
>             return input.info != null;
>         }
>     }));
> }
>
> @Override
> public int getExternalEventCount() {
>     return size(filter(queue, new Predicate<ContentChange>() {
>         @Override
>         public boolean apply(@Nullable ContentChange input) {
>             return input.info == null;
>         }
>     }));
> }
> {code}
> both methods should probably check for {{input}} being null before accessing 
> {{input.info}}





[jira] [Resolved] (OAK-2784) Potential NPEs in BackgroundObserverMBean

2015-04-21 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-2784.
--
Resolution: Fixed

Thanks [~mreutegg]! This was the right fix as {{Nullable}} annotation was 
giving wrong indication. With this patch Findbugs seems to be happy as well

{code}

{code}

Applied the patch to trunk in http://svn.apache.org/r1675074

> Potential NPEs in BackgroundObserverMBean
> -
>
> Key: OAK-2784
> URL: https://issues.apache.org/jira/browse/OAK-2784
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: angela
>Assignee: Chetan Mehrotra
>  Labels: technical_debt
> Fix For: 1.3.0
>
> Attachments: OAK-2784.patch
>
>
> {code}
> @Override
> public int getLocalEventCount() {
>     return size(filter(queue, new Predicate<ContentChange>() {
>         @Override
>         public boolean apply(@Nullable ContentChange input) {
>             return input.info != null;
>         }
>     }));
> }
>
> @Override
> public int getExternalEventCount() {
>     return size(filter(queue, new Predicate<ContentChange>() {
>         @Override
>         public boolean apply(@Nullable ContentChange input) {
>             return input.info == null;
>         }
>     }));
> }
> {code}
> both methods should probably check for {{input}} being null before accessing 
> {{input.info}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2760) HttpServer in Oak creates multiple instance of ContentRepository

2015-04-21 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-2760:

Attachment: OAK-2760-01.patch

[~chetanm], [~mduerig]: this patch caches a {{ContentRepository}} instance in 
the {{Oak}} builder and a {{Repository}} instance in the {{Jcr}} builder. This 
should be sufficient to guarantee that repository instances are created and 
initialized only once. Moreover, the {{ContentRepository}} instance is also 
accessible through the {{Jcr}} builder.
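The caching Francesco describes can be illustrated with a minimal sketch. The class and method names below are simplified stand-ins, not the actual Oak builder API: the builder memoizes the repository it creates, so every later call returns the cached instance instead of constructing and initializing a new one.

```java
// Hypothetical sketch of builder-level caching (simplified names, not the
// real Oak/Jcr builder classes): the first createContentRepository() call
// performs the expensive construction, later calls return the same instance.
public class OakBuilder {

    /** Stand-in for org.apache.jackrabbit.oak.api.ContentRepository. */
    public static class ContentRepository {
    }

    private ContentRepository contentRepository;

    /** Creates the repository once and returns the cached instance thereafter. */
    public synchronized ContentRepository createContentRepository() {
        if (contentRepository == null) {
            // expensive initialization happens exactly once per builder
            contentRepository = new ContentRepository();
        }
        return contentRepository;
    }

    public static void main(String[] args) {
        OakBuilder oak = new OakBuilder();
        ContentRepository first = oak.createContentRepository();
        ContentRepository second = oak.createContentRepository();
        System.out.println(first == second); // true: same cached instance
    }
}
```

With this shape, the oak-run scenario quoted below no longer matters: both the OakServlet and the WebDAV server would observe the same repository instance.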

> HttpServer in Oak creates multiple instance of ContentRepository
> 
>
> Key: OAK-2760
> URL: https://issues.apache.org/jira/browse/OAK-2760
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: run
>Reporter: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.0
>
> Attachments: OAK-2760-01.patch
>
>
> The HTTP server in oak-run constructs multiple repository instances of 
> ContentRepository [1]
> {code}
> Jcr jcr = new Jcr(oak);
> // 1 - OakServer
> ContentRepository repository = oak.createContentRepository();
> ServletHolder holder = new ServletHolder(new 
> OakServlet(repository));
> context.addServlet(holder, path + "/*");
> // 2 - Webdav Server on JCR repository
> final Repository jcrRepository = jcr.createRepository();
> {code}
> In the above code a repository instance is created twice via the same Oak 
> instance: (1) in OakServer and (2) for WebDAV. 
> [1] 
> https://github.com/apache/jackrabbit-oak/blob/trunk/oak-run/src/main/java/org/apache/jackrabbit/oak/run/Main.java#L1125-1133



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2714) Test failures on Jenkins

2015-04-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2714:
---
Description: 
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.solr.configuration.DefaultAnalyzersConfigurationTest | 61, 63, 92, 94, 103, 106 | DOCUMENT_RDB, SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder | 81, 87, 92, 95, 96 | DOCUMENT_NS, DOCUMENT_RDB (2) | 1.6, 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.setPrimaryType | 69, 83, 97, 105 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest.observation | 48, 55 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems | 41, 88, 109 | DOCUMENT_RDB | 1.7, 1.8 |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 |




  was:
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.solr.configuration.DefaultAnalyzersConfigurationTest | 61, 63, 92, 94, 103, 106 | DOCUMENT_RDB, SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder | 81, 87, 92, 95, 96 | DOCUMENT_NS, DOCUMENT_RDB (2) | 1.6, 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.setPrimaryType | 69, 83, 97, 105 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest.observation | 48, 55 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems | 41, 88, 109 | DOCUMENT_RDB | 1.7, 1.8 |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |



> Test failures on Jenkins
> 
>
> Key: OAK-2714
> URL: https://issues.apache.org/jira/browse/OAK-2714
> Project: Jackrabbit Oak
>  Issue Type: Bug
> Environment: Jenki

[jira] [Commented] (OAK-2784) Potential NPEs in BackgroundObserverMBean

2015-04-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504627#comment-14504627
 ] 

Michael Dürig commented on OAK-2784:


+1

> Potential NPEs in BackgroundObserverMBean
> -----------------------------------------
>
> Key: OAK-2784
> URL: https://issues.apache.org/jira/browse/OAK-2784
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: angela
>Assignee: Chetan Mehrotra
>  Labels: technical_debt
> Fix For: 1.3.0
>
> Attachments: OAK-2784.patch
>
>
> {code}
> @Override
> public int getLocalEventCount() {
> return size(filter(queue, new Predicate() {
> @Override
> public boolean apply(@Nullable ContentChange input) {
> return input.info != null;
> }
> }));
> }
> @Override
> public int getExternalEventCount() {
> return size(filter(queue, new Predicate() {
> @Override
> public boolean apply(@Nullable ContentChange input) {
> return input.info == null;
> }
> }));
> }
> {code}
> both methods should probably check for {{input}} being null before accessing 
> {{input.info}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2784) Potential NPEs in BackgroundObserverMBean

2015-04-21 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2784:
--
Attachment: OAK-2784.patch

I'd suggest we remove the Nullable annotation and adjust the summary of this 
issue accordingly. See attached patch.

I think this is in line with the {{Predicate}} contract and makes it clear that 
the implementations in BackgroundObserver expect a non-null input.
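A minimal sketch of the direction described above (this is an illustration, not the attached patch): the null-permissive annotation is gone, so the contract states that queue entries are never null, and the counts simply filter on {{info}}. {{ContentChange}} is a simplified stand-in for the class in {{BackgroundObserver}}, and {{java.util.stream}} stands in for the Guava helpers to keep the example self-contained.

```java
import java.util.Arrays;
import java.util.List;

// Sketch only: simplified stand-ins for BackgroundObserver internals.
public class EventCounts {

    /** Stand-in for BackgroundObserver.ContentChange; info is null for external changes. */
    static class ContentChange {
        final Object info;

        ContentChange(Object info) {
            this.info = info;
        }
    }

    // The queue never contains null entries, so no null check on the element
    // itself is needed; only info may be null.
    static int localEventCount(List<ContentChange> queue) {
        return (int) queue.stream().filter(c -> c.info != null).count();
    }

    static int externalEventCount(List<ContentChange> queue) {
        return (int) queue.stream().filter(c -> c.info == null).count();
    }

    public static void main(String[] args) {
        List<ContentChange> queue = Arrays.asList(
                new ContentChange("local"), new ContentChange(null));
        System.out.println(localEventCount(queue));    // 1
        System.out.println(externalEventCount(queue)); // 1
    }
}
```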

> Potential NPEs in BackgroundObserverMBean
> -----------------------------------------
>
> Key: OAK-2784
> URL: https://issues.apache.org/jira/browse/OAK-2784
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: angela
>Assignee: Chetan Mehrotra
>  Labels: technical_debt
> Fix For: 1.3.0
>
> Attachments: OAK-2784.patch
>
>
> {code}
> @Override
> public int getLocalEventCount() {
> return size(filter(queue, new Predicate() {
> @Override
> public boolean apply(@Nullable ContentChange input) {
> return input.info != null;
> }
> }));
> }
> @Override
> public int getExternalEventCount() {
> return size(filter(queue, new Predicate() {
> @Override
> public boolean apply(@Nullable ContentChange input) {
> return input.info == null;
> }
> }));
> }
> {code}
> both methods should probably check for {{input}} being null before accessing 
> {{input.info}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2690) Add optional UserConfiguration#getUserPrincipalProvider()

2015-04-21 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2690:

Attachment: getgroupmembership.txt

updated benchmark results for 
- PrincipalManager#getGroupMembership
- Repository#login

apart from some performance improvement, optionally exposing an 
implementation-specific principal provider from the user management 
implementation would allow us to add further optimizations that rely on the 
very details of that implementation, which is not possible with the generic 
default principal management implementation.


> Add optional UserConfiguration#getUserPrincipalProvider()
> ---------------------------------------------------------
>
> Key: OAK-2690
> URL: https://issues.apache.org/jira/browse/OAK-2690
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: angela
>Assignee: angela
> Fix For: 1.3.0
>
> Attachments: OAK-2690.patch, getgroupmembership.txt, 
> loginmembership_compare_userprincipalprovider.txt
>
>
> while playing around with overall group principal resolution during the 
> repository login, I thought that having a principal provider that knows about 
> the details of the user management implementation might be a slight 
> improvement compared to the generic default implementation as present in 
> {{org.apache.jackrabbit.oak.security.principal.PrincipalProviderImpl}}, which 
> just acts on the {{UserManager}} interface and thus always creates 
> intermediate {{Authorizable}} objects.
> in order to be able to get there (without having the default principal mgt 
> implementation rely on implementation details of the user mgt module), we 
> would need an addition to the {{UserConfiguration}} that allows to optionally 
> obtain a {{PrincipalProvider}}; the fallback in the default  
> {{PrincipalConfiguration}} in case the user configuration does not expose a 
> specific principal provider would be the current (generic) solution.
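The proposed fallback can be sketched as follows. The interfaces below are simplified stand-ins, not the actual Oak SPI signatures: the principal configuration first asks the user configuration for an implementation-specific provider and only falls back to the generic one when none is exposed.

```java
// Hypothetical sketch of the optional getUserPrincipalProvider() with a
// generic fallback (simplified names, not the real Oak interfaces).
public class PrincipalLookup {

    interface PrincipalProvider {
    }

    interface UserConfiguration {
        /** Optional: returns null when no impl-specific provider is available. */
        PrincipalProvider getUserPrincipalProvider();
    }

    /** Stand-in for the current generic PrincipalProviderImpl. */
    static class GenericPrincipalProvider implements PrincipalProvider {
    }

    /** Prefer the impl-specific provider; fall back to the generic solution. */
    static PrincipalProvider getPrincipalProvider(UserConfiguration uc) {
        PrincipalProvider specific = uc.getUserPrincipalProvider();
        return specific != null ? specific : new GenericPrincipalProvider();
    }

    public static void main(String[] args) {
        PrincipalProvider impl = new PrincipalProvider() { };
        // user configuration exposing a specific provider: it is used directly
        System.out.println(getPrincipalProvider(() -> impl) == impl);
        // no specific provider exposed: the generic fallback kicks in
        System.out.println(getPrincipalProvider(() -> null)
                instanceof GenericPrincipalProvider);
    }
}
```

The point of the design is that the default principal management never needs to know user-management internals; it only probes for the optional provider.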



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-2690) Add optional UserConfiguration#getUserPrincipalProvider()

2015-04-21 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela reassigned OAK-2690:
---

Assignee: angela

> Add optional UserConfiguration#getUserPrincipalProvider()
> ---------------------------------------------------------
>
> Key: OAK-2690
> URL: https://issues.apache.org/jira/browse/OAK-2690
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: angela
>Assignee: angela
> Fix For: 1.3.0
>
> Attachments: OAK-2690.patch, 
> loginmembership_compare_userprincipalprovider.txt
>
>
> while playing around with overall group principal resolution during the 
> repository login, I thought that having a principal provider that knows about 
> the details of the user management implementation might be a slight 
> improvement compared to the generic default implementation as present in 
> {{org.apache.jackrabbit.oak.security.principal.PrincipalProviderImpl}}, which 
> just acts on the {{UserManager}} interface and thus always creates 
> intermediate {{Authorizable}} objects.
> in order to be able to get there (without having the default principal mgt 
> implementation rely on implementation details of the user mgt module), we 
> would need an addition to the {{UserConfiguration}} that allows to optionally 
> obtain a {{PrincipalProvider}}; the fallback in the default  
> {{PrincipalConfiguration}} in case the user configuration does not expose a 
> specific principal provider would be the current (generic) solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2690) Add optional UserConfiguration#getUserPrincipalProvider()

2015-04-21 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2690:

Fix Version/s: 1.3.0

> Add optional UserConfiguration#getUserPrincipalProvider()
> ---------------------------------------------------------
>
> Key: OAK-2690
> URL: https://issues.apache.org/jira/browse/OAK-2690
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: angela
>Assignee: angela
> Fix For: 1.3.0
>
> Attachments: OAK-2690.patch, 
> loginmembership_compare_userprincipalprovider.txt
>
>
> while playing around with overall group principal resolution during the 
> repository login, I thought that having a principal provider that knows about 
> the details of the user management implementation might be a slight 
> improvement compared to the generic default implementation as present in 
> {{org.apache.jackrabbit.oak.security.principal.PrincipalProviderImpl}}, which 
> just acts on the {{UserManager}} interface and thus always creates 
> intermediate {{Authorizable}} objects.
> in order to be able to get there (without having the default principal mgt 
> implementation rely on implementation details of the user mgt module), we 
> would need an addition to the {{UserConfiguration}} that allows to optionally 
> obtain a {{PrincipalProvider}}; the fallback in the default  
> {{PrincipalConfiguration}} in case the user configuration does not expose a 
> specific principal provider would be the current (generic) solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2791) Change default for oak.mongo.maxDeltaForModTimeIdxSecs

2015-04-21 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2791:
--
Fix Version/s: 1.2.2

Merged into 1.2 branch: http://svn.apache.org/r1675063

> Change default for oak.mongo.maxDeltaForModTimeIdxSecs
> ------------------------------------------------------
>
> Key: OAK-2791
> URL: https://issues.apache.org/jira/browse/OAK-2791
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.0, 1.2.2
>
>
> OAK-1966 introduced a hint for MongoDB to either use the _id or _modified
> index, depending on the time delta of the _modified constraint. The default
> value is currently -1, which means MongoDocumentStore will always pass an _id
> hint. I would like to enable the _modified hint for small deltas less than
> 60 seconds. Now that OAK-2131 is implemented, the number of background updates
> is much lower and a query with a _modified constraint will better match
> documents that actually have been modified since a given timestamp.
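The delta-based hint selection described above can be sketched as follows; the method and parameter names are illustrative, not the actual MongoDocumentStore code. If the window covered by the _modified constraint is smaller than the configured maximum delta, the _modified index is selective enough to hint; otherwise (or with the old -1 default) the _id hint is used.

```java
// Sketch of choosing an index hint based on the _modified time delta
// (assumed names; mirrors the oak.mongo.maxDeltaForModTimeIdxSecs idea).
public class IndexHint {

    /**
     * @param nowSecs           current time in seconds
     * @param modifiedSinceSecs lower bound of the query's _modified constraint
     * @param maxDeltaSecs      threshold; -1 disables the _modified hint
     * @return the name of the index to hint
     */
    static String chooseHint(long nowSecs, long modifiedSinceSecs, long maxDeltaSecs) {
        long delta = nowSecs - modifiedSinceSecs;
        if (maxDeltaSecs >= 0 && delta < maxDeltaSecs) {
            return "_modified"; // small window: the time-based index is selective
        }
        return "_id"; // large window or hint disabled: fall back to the _id index
    }

    public static void main(String[] args) {
        System.out.println(chooseHint(1000, 970, 60)); // 30s window -> _modified
        System.out.println(chooseHint(1000, 900, 60)); // 100s window -> _id
        System.out.println(chooseHint(1000, 990, -1)); // hint disabled -> _id
    }
}
```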



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2784) Potential NPEs in BackgroundObserverMBean

2015-04-21 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504525#comment-14504525
 ] 

angela commented on OAK-2784:
-

while i agree that the code currently works, i don't agree with the conclusion: 
as the code base evolves and other people refactor or adjust it, this is prone 
to causing issues. i would at least add a comment explicitly stating that the 
argument must never be null, to make it totally unambiguous for someone else 
who might work on this in the future.

maybe i just did too many code reviews during the last year, but what i saw 
was exactly this kind of pattern: someone writes code that originally worked 
and was clear to the original author. unfortunately he/she suffered from code 
amnesia, or someone else had to take over, fixing issues here and there or 
adding new features. what happens is that later on the code is no longer 
clear, devs usually don't have time to properly fix it or simply don't get the 
original intention, and after years this results in code that is really poor, 
not maintainable and not fixable except by a complete rewrite (which nobody is 
willing to do).

> Potential NPEs in BackgroundObserverMBean
> -----------------------------------------
>
> Key: OAK-2784
> URL: https://issues.apache.org/jira/browse/OAK-2784
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: angela
>Assignee: Chetan Mehrotra
>  Labels: technical_debt
> Fix For: 1.3.0
>
>
> {code}
> @Override
> public int getLocalEventCount() {
> return size(filter(queue, new Predicate() {
> @Override
> public boolean apply(@Nullable ContentChange input) {
> return input.info != null;
> }
> }));
> }
> @Override
> public int getExternalEventCount() {
> return size(filter(queue, new Predicate() {
> @Override
> public boolean apply(@Nullable ContentChange input) {
> return input.info == null;
> }
> }));
> }
> {code}
> both methods should probably check for {{input}} being null before accessing 
> {{input.info}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2791) Change default for oak.mongo.maxDeltaForModTimeIdxSecs

2015-04-21 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-2791.
---
Resolution: Fixed

Done in trunk: http://svn.apache.org/r1675055

> Change default for oak.mongo.maxDeltaForModTimeIdxSecs
> ------------------------------------------------------
>
> Key: OAK-2791
> URL: https://issues.apache.org/jira/browse/OAK-2791
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.0
>
>
> OAK-1966 introduced a hint for MongoDB to either use the _id or _modified
> index, depending on the time delta of the _modified constraint. The default
> value is currently -1, which means MongoDocumentStore will always pass an _id
> hint. I would like to enable the _modified hint for small deltas less than
> 60 seconds. Now that OAK-2131 is implemented, the number of background updates
> is much lower and a query with a _modified constraint will better match
> documents that actually have been modified since a given timestamp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2778) DocumentNodeState is null for revision rx-x-x

2015-04-21 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504503#comment-14504503
 ] 

Marcel Reutegger commented on OAK-2778:
---

Added test (currently ignored) to reproduce the exception: 
http://svn.apache.org/r1675054

> DocumentNodeState is null for revision rx-x-x
> ---------------------------------------------
>
> Key: OAK-2778
> URL: https://issues.apache.org/jira/browse/OAK-2778
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, mongomk
>Affects Versions: 1.0, 1.2
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.0
>
>
> On a system running Oak 1.0.12 the following exception is seen repeatedly 
> when the async index update tries to update a lucene index:
> {noformat}
> org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job 
> execution of 
> org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@6be42cde : 
> DocumentNodeState is null for revision r14cbbd50ad2-0-1 of 
> /oak:index/lucene/:data/_1co.cfe (aborting getChildNodes())
> org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
> DocumentNodeState is null for revision r14cbbd50ad2-0-1 of 
> /oak:index/lucene/:data/_1co.cfe (aborting getChildNodes())
> at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$6.apply(DocumentNodeStore.java:925)
> at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$6.apply(DocumentNodeStore.java:919)
> at com.google.common.collect.Iterators$8.transform(Iterators.java:794)
> at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
> at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
> at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeState$ChildNodeEntryIterator.next(DocumentNodeState.java:618)
> at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeState$ChildNodeEntryIterator.next(DocumentNodeState.java:587)
> at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
> at com.google.common.collect.Iterators.addAll(Iterators.java:357)
> at com.google.common.collect.Lists.newArrayList(Lists.java:146)
> at com.google.common.collect.Iterables.toCollection(Iterables.java:334)
> at com.google.common.collect.Iterables.toArray(Iterables.java:312)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory.listAll(OakDirectory.java:69)
> at 
> org.apache.lucene.index.DirectoryReader.indexExists(DirectoryReader.java:339)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:720)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.getWriter(LuceneIndexEditorContext.java:134)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.addOrUpdate(LuceneIndexEditor.java:260)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:171)
> at 
> org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:74)
> at 
> org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
> at 
> org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:130)
> at 
> org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:160)
> {noformat}
> A similar issue was already fixed with OAK-2420.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)