[jira] [Resolved] (OAK-3058) Backport OAK-2872 to 1.0 and 1.2 branches

2015-07-01 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-3058.
--
Resolution: Fixed

* merged to 1.2 http://svn.apache.org/r1688565
* merged to 1.0 http://svn.apache.org/r1688567

> Backport OAK-2872 to 1.0 and 1.2 branches
> -
>
> Key: OAK-3058
> URL: https://issues.apache.org/jira/browse/OAK-3058
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: auth-external
>Reporter: Michael Dürig
>Assignee: Alex Parvulescu
> Fix For: 1.2.3, 1.0.17
>
>
> Our customer confirmed that OAK-2872 fixed a {{SNFE}} 
> (SegmentNotFoundException) on their system. We thus need to backport the fix 
> to the branches. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-3059:
-

 Summary: Manage versions of all exported packages
 Key: OAK-3059
 URL: https://issues.apache.org/jira/browse/OAK-3059
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: commons, core
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.3.2


There are still some exported packages, which do not have a package-info.java 
with explicitly managed export versions.

We should add those to prevent excessive version increase of exported packages.
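For illustration, an explicitly managed export version is typically declared with a version annotation in the package's package-info.java. A minimal sketch (the annotation class and the package name here are examples only; check Oak's existing package-info.java files for the exact convention used):

```java
// Illustrative sketch of a package-info.java pinning the exported version of
// a package. The bnd version annotation is shown; the package name is only an
// example, not necessarily one of the affected packages.
@aQute.bnd.annotation.Version("1.3.2")
package org.apache.jackrabbit.oak.commons.example;
```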





[jira] [Commented] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609711#comment-14609711
 ] 

Marcel Reutegger commented on OAK-3059:
---

The list of exported packages without explicitly managed version is:

- 
oak-authorization-cug/src/main/java/org/apache/jackrabbit/oak/spi/security/authorization/cug
 (suggested: 1.2.2)
- oak-commons/src/main/java/org/apache/jackrabbit/oak/commons/benchmark 
(suggested: 1.3.1)
- 
oak-core/src/main/java/org/apache/jackrabbit/oak/spi/security/authorization/accesscontrol
 (suggested: 1.3.1)
- oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/score 
(suggested: 1.2.2)

Please check the package if you are familiar with the component and let me know 
if this makes sense.

> Manage versions of all exported packages
> 
>
> Key: OAK-3059
> URL: https://issues.apache.org/jira/browse/OAK-3059
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: commons, core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> There are still some exported packages, which do not have a package-info.java 
> with explicitly managed export versions.
> We should add those to prevent excessive version increase of exported 
> packages.





[jira] [Commented] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609732#comment-14609732
 ] 

Marcel Reutegger commented on OAK-3059:
---

Related to this issue: I would like to fail the build if the baseline plugin 
issues a warning. This way we can identify exported packages without an 
explicit version earlier.

Any objections?
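As a sketch of what this could look like in the parent pom, assuming the Apache Felix maven-bundle-plugin's baseline goal is used (plugin version and exact parameter names should be verified against the actual Oak parent pom):

```xml
<!-- Sketch: fail the build when the baseline check reports warnings,
     e.g. an exported package without an explicitly managed version. -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <executions>
    <execution>
      <id>baseline</id>
      <goals>
        <goal>baseline</goal>
      </goals>
      <configuration>
        <failOnWarning>true</failOnWarning>
      </configuration>
    </execution>
  </executions>
</plugin>
```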

> Manage versions of all exported packages
> 
>
> Key: OAK-3059
> URL: https://issues.apache.org/jira/browse/OAK-3059
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: commons, core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> There are still some exported packages, which do not have a package-info.java 
> with explicitly managed export versions.
> We should add those to prevent excessive version increase of exported 
> packages.





[jira] [Updated] (OAK-3007) SegmentStore cache does not take "string" map into account

2015-07-01 Thread Michael Dürig (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3007:
---
Attachment: OAK-3007-2.patch

Slightly edited version of Thomas' patch retaining encapsulation. 

> SegmentStore cache does not take "string" map into account
> --
>
> Key: OAK-3007
> URL: https://issues.apache.org/jira/browse/OAK-3007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Thomas Mueller
> Fix For: 1.3.2
>
> Attachments: OAK-3007-2.patch, OAK-3007.patch
>
>
> The SegmentStore cache size calculation ignores the size of the field 
> Segment.string (a concurrent hash map). It looks like a regular segment in a 
> memory mapped file has the size 1024, no matter how many strings are loaded 
> in memory. This can lead to out of memory. There seems to be no way to limit 
> (configure) the amount of memory used by strings. In one example, 100'000 
> segments are loaded in memory, and 5 GB are used for Strings in that map.
> We need a way to configure the amount of memory used for that. This seems to 
> be basically a cache. OAK-2688 does this, but it would be better to have one 
> cache with a configurable size limit.
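To illustrate the kind of cache the description asks for, here is a minimal, self-contained sketch (not Oak's actual implementation) of a weight-bounded LRU string cache with a configurable memory limit; the weigher is a rough footprint estimate:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: a weight-bounded, access-ordered (LRU) string cache with a
// configurable memory limit. Oak's real StringCache may differ; the weigher
// below is a rough per-entry footprint estimate, not an exact measurement.
class BoundedStringCache {
    private final long maxWeight;  // configured limit in bytes
    private long currentWeight;
    private final LinkedHashMap<String, String> map =
            new LinkedHashMap<>(16, 0.75f, true);  // true = access order

    BoundedStringCache(long maxWeight) {
        this.maxWeight = maxWeight;
    }

    // Rough estimate: object header/overhead plus two bytes per char.
    private static long weigh(String s) {
        return 40 + 2L * s.length();
    }

    synchronized String get(String key) {
        return map.get(key);
    }

    synchronized void put(String key, String value) {
        String old = map.put(key, value);
        if (old != null) {
            currentWeight -= weigh(old);
        }
        currentWeight += weigh(value);
        // Evict least-recently-used entries until under the limit again.
        Iterator<Map.Entry<String, String>> it = map.entrySet().iterator();
        while (currentWeight > maxWeight && it.hasNext()) {
            currentWeight -= weigh(it.next().getValue());
            it.remove();
        }
    }

    synchronized int size() {
        return map.size();
    }
}
```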





[jira] [Commented] (OAK-3007) SegmentStore cache does not take "string" map into account

2015-07-01 Thread Michael Dürig (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609783#comment-14609783
 ] 

Michael Dürig commented on OAK-3007:


I think this approach makes a lot of sense and we should give it a spin. 

[~tmueller], could you also add a unit test for {{StringCache}}?

> SegmentStore cache does not take "string" map into account
> --
>
> Key: OAK-3007
> URL: https://issues.apache.org/jira/browse/OAK-3007
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Thomas Mueller
> Fix For: 1.3.2
>
> Attachments: OAK-3007-2.patch, OAK-3007.patch
>
>
> The SegmentStore cache size calculation ignores the size of the field 
> Segment.string (a concurrent hash map). It looks like a regular segment in a 
> memory mapped file has the size 1024, no matter how many strings are loaded 
> in memory. This can lead to out-of-memory errors. There seems to be no way to 
> limit 
> (configure) the amount of memory used by strings. In one example, 100'000 
> segments are loaded in memory, and 5 GB are used for Strings in that map.
> We need a way to configure the amount of memory used for that. This seems to 
> be basically a cache. OAK-2688 does this, but it would be better to have one 
> cache with a configurable size limit.





[jira] [Created] (OAK-3060) Release Oak 1.3.2

2015-07-01 Thread Davide Giannella (JIRA)
Davide Giannella created OAK-3060:
-

 Summary: Release Oak 1.3.2
 Key: OAK-3060
 URL: https://issues.apache.org/jira/browse/OAK-3060
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Davide Giannella
Assignee: Davide Giannella


- release oak
- update website
- update javadoc





[jira] [Resolved] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-3059.
---
Resolution: Fixed

Added the package-info.java files to exported packages and configured the 
baseline plugin to fail the build on warnings: http://svn.apache.org/r1688606

> Manage versions of all exported packages
> 
>
> Key: OAK-3059
> URL: https://issues.apache.org/jira/browse/OAK-3059
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: commons, core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> There are still some exported packages, which do not have a package-info.java 
> with explicitly managed export versions.
> We should add those to prevent excessive version increase of exported 
> packages.





[jira] [Resolved] (OAK-2829) Comparing node states for external changes is too slow

2015-07-01 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-2829.
---
   Resolution: Fixed
Fix Version/s: (was: 1.2.3)

I'm resolving this issue as fixed. All sub-tasks but one (OAK-3001) have been 
implemented, and tests look OK. The remaining sub-task is a further 
optimization, which IMO can be tracked separately.

> Comparing node states for external changes is too slow
> --
>
> Key: OAK-2829
> URL: https://issues.apache.org/jira/browse/OAK-2829
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Blocker
>  Labels: scalability
> Fix For: 1.3.2
>
> Attachments: CompareAgainstBaseStateTest.java, 
> OAK-2829-JournalEntry.patch, OAK-2829-gc-bug.patch, 
> OAK-2829-improved-doc-cache-invaliation.2.patch, 
> OAK-2829-improved-doc-cache-invaliation.patch, graph-1.png, graph.png
>
>
> Comparing node states for local changes has been improved already with 
> OAK-2669. But in a clustered setup generating events for external changes 
> cannot make use of the introduced cache and is therefore slower. This can 
> result in a growing observation queue, eventually reaching the configured 
> limit. See also OAK-2683.





[jira] [Updated] (OAK-3001) Simplify JournalGarbageCollector using a dedicated timestamp property

2015-07-01 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-3001:
--
Issue Type: Improvement  (was: Sub-task)
Parent: (was: OAK-2829)

> Simplify JournalGarbageCollector using a dedicated timestamp property
> -
>
> Key: OAK-3001
> URL: https://issues.apache.org/jira/browse/OAK-3001
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Stefan Egli
>Priority: Critical
>  Labels: scalability
> Fix For: 1.2.3, 1.3.2
>
>
> This subtask is about spawning out a 
> [comment|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585733&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585733]
>  from [~chetanm] re JournalGC:
> {quote}
> Further looking at JournalGarbageCollector ... it would be simpler if you 
> record the journal entry timestamp as an attribute in JournalEntry document 
> and then you can delete all the entries which are older than some time by a 
> simple query. This would avoid fetching all the entries to be deleted on the 
> Oak side
> {quote}
> and a corresponding 
> [reply|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585870&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585870]
>  from myself:
> {quote}
> Re querying by timestamp: that would indeed be simpler. With the current set 
> of DocumentStore API however, I believe this is not possible. But: 
> [DocumentStore.query|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentStore.java#L127]
>  comes quite close: it would probably just require the opposite of that 
> method too: 
> {code}
> public <T extends Document> List<T> query(Collection<T> collection,
>                                           String fromKey,
>                                           String toKey,
>                                           String indexedProperty,
>                                           long endValue,
>                                           int limit) {
> {code}
> .. or what about generalizing this method to have both a {{startValue}} and 
> an {{endValue}} - with {{-1}} indicating when one of them is not used?
> {quote}
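A minimal, self-contained sketch of the generalized lookup discussed above, with {{-1}} marking an unused bound (illustrative only; the real DocumentStore works against persisted documents, not an in-memory map):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;

// Sketch: select documents with keys in [fromKey, toKey) whose indexed
// property (e.g. a journal entry timestamp) lies in [startValue, endValue).
// A bound of -1 means "not used". Illustrative only; not the Oak API.
class RangeQuerySketch {
    static List<String> query(SortedMap<String, Long> collection,
                              String fromKey, String toKey,
                              long startValue, long endValue, int limit) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Long> e
                : collection.subMap(fromKey, toKey).entrySet()) {
            long v = e.getValue();
            if (startValue != -1 && v < startValue) continue;  // below range
            if (endValue != -1 && v >= endValue) continue;     // above range
            result.add(e.getKey());
            if (result.size() >= limit) break;
        }
        return result;
    }
}
```

The journal GC case would then be a single call with only the end bound set, selecting all entries older than the cut-off timestamp.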





[jira] [Commented] (OAK-3001) Simplify JournalGarbageCollector using a dedicated timestamp property

2015-07-01 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609836#comment-14609836
 ] 

Marcel Reutegger commented on OAK-3001:
---

I converted this issue from a sub-task of OAK-2829 into a separate improvement. 
I think this is an important optimization, but shouldn't block OAK-2829.

> Simplify JournalGarbageCollector using a dedicated timestamp property
> -
>
> Key: OAK-3001
> URL: https://issues.apache.org/jira/browse/OAK-3001
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Stefan Egli
>Priority: Critical
>  Labels: scalability
> Fix For: 1.2.3, 1.3.2
>
>
> This subtask is about spawning out a 
> [comment|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585733&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585733]
>  from [~chetanm] re JournalGC:
> {quote}
> Further looking at JournalGarbageCollector ... it would be simpler if you 
> record the journal entry timestamp as an attribute in JournalEntry document 
> and then you can delete all the entries which are older than some time by a 
> simple query. This would avoid fetching all the entries to be deleted on the 
> Oak side
> {quote}
> and a corresponding 
> [reply|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585870&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585870]
>  from myself:
> {quote}
> Re querying by timestamp: that would indeed be simpler. With the current set 
> of DocumentStore API however, I believe this is not possible. But: 
> [DocumentStore.query|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentStore.java#L127]
>  comes quite close: it would probably just require the opposite of that 
> method too: 
> {code}
> public <T extends Document> List<T> query(Collection<T> collection,
>                                           String fromKey,
>                                           String toKey,
>                                           String indexedProperty,
>                                           long endValue,
>                                           int limit) {
> {code}
> .. or what about generalizing this method to have both a {{startValue}} and 
> an {{endValue}} - with {{-1}} indicating when one of them is not used?
> {quote}





[jira] [Reopened] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger reopened OAK-3059:
---

Re-opening. Davide reported that the baseline plugin now fails on his machine 
and complains about the cug package. The expected version is 1.3.1, which 
actually makes sense, given that the package didn't have an explicit version 
set before and is present in 1.3.1. The same is probably true for the lucene 
score package.

> Manage versions of all exported packages
> 
>
> Key: OAK-3059
> URL: https://issues.apache.org/jira/browse/OAK-3059
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: commons, core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> There are still some exported packages, which do not have a package-info.java 
> with explicitly managed export versions.
> We should add those to prevent excessive version increase of exported 
> packages.





[jira] [Created] (OAK-3061) oak-authorization-cug uses wrong parent pom

2015-07-01 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-3061:
-

 Summary: oak-authorization-cug uses wrong parent pom
 Key: OAK-3061
 URL: https://issues.apache.org/jira/browse/OAK-3061
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: authorization-cug
Affects Versions: 1.3.1
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.3.2


As a side effect, the artifact is not deployed when we release Oak!





[jira] [Comment Edited] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609711#comment-14609711
 ] 

Marcel Reutegger edited comment on OAK-3059 at 7/1/15 10:14 AM:


The list of exported packages without explicitly managed version is:

- 
oak-authorization-cug/src/main/java/org/apache/jackrabbit/oak/spi/security/authorization/cug
 (suggested: 1.2.2)
- oak-commons/src/main/java/org/apache/jackrabbit/oak/commons/benchmark 
(suggested: 1.3.1)
- 
oak-core/src/main/java/org/apache/jackrabbit/oak/spi/security/authorization/accesscontrol
 (suggested: 1.3.1)
- oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/score 
(suggested: 1.3.1)

Please check the package if you are familiar with the component and let me know 
if this makes sense.


was (Author: mreutegg):
The list of exported packages without explicitly managed version is:

- 
oak-authorization-cug/src/main/java/org/apache/jackrabbit/oak/spi/security/authorization/cug
 (suggested: 1.2.2)
- oak-commons/src/main/java/org/apache/jackrabbit/oak/commons/benchmark 
(suggested: 1.3.1)
- 
oak-core/src/main/java/org/apache/jackrabbit/oak/spi/security/authorization/accesscontrol
 (suggested: 1.3.1)
- oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/score 
(suggested: 1.2.2)

Please check the package if you are familiar with the component and let me know 
if this makes sense.

> Manage versions of all exported packages
> 
>
> Key: OAK-3059
> URL: https://issues.apache.org/jira/browse/OAK-3059
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: commons, core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> There are still some exported packages, which do not have a package-info.java 
> with explicitly managed export versions.
> We should add those to prevent excessive version increase of exported 
> packages.





[jira] [Commented] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609873#comment-14609873
 ] 

Marcel Reutegger commented on OAK-3059:
---

The issue is caused by OAK-3061.

I actually set the oak-lucene score package to 1.3.1. The version in the 
initial comment was a typo.

> Manage versions of all exported packages
> 
>
> Key: OAK-3059
> URL: https://issues.apache.org/jira/browse/OAK-3059
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: commons, core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> There are still some exported packages, which do not have a package-info.java 
> with explicitly managed export versions.
> We should add those to prevent excessive version increase of exported 
> packages.





[jira] [Commented] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609896#comment-14609896
 ] 

Davide Giannella commented on OAK-3059:
---

I gave a quick look at 
{{oak-commons/src/main/java/org/apache/jackrabbit/oak/commons/benchmark 
(suggested: 1.3.1)}}.

We could even stop exporting it since, as far as I remember, we use it only in 
oak-run for the micro-benchmarks, which is not OSGi. In any case, I gave a 
quick look at the history and it seems there were only minor changes, nothing 
I would consider API.

So IMO we could leave it at 1.0.0.

[~mduerig] you were the last one AFAICS to update the package. WDYT?

> Manage versions of all exported packages
> 
>
> Key: OAK-3059
> URL: https://issues.apache.org/jira/browse/OAK-3059
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: commons, core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> There are still some exported packages, which do not have a package-info.java 
> with explicitly managed export versions.
> We should add those to prevent excessive version increase of exported 
> packages.





[jira] [Created] (OAK-3062) VersionGC failing on Mongo with CursorNotFoundException

2015-07-01 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-3062:


 Summary: VersionGC failing on Mongo with CursorNotFoundException
 Key: OAK-3062
 URL: https://issues.apache.org/jira/browse/OAK-3062
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.2.3, 1.3.3, 1.0.17


At times the VersionGC on a big repository fails with the following exception:

{noformat}
30.06.2015 03:55:59.253 *INFO* [pool-7-thread-132] 
org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector Iterated 
through 44 documents so far. 410668 found to be deleted
com.mongodb.MongoException$CursorNotFound: Cursor 78740863820 not found on 
server mongo2.aem.lan.tpa.foxnews.com:27017
at 
com.mongodb.QueryResultIterator.throwOnQueryFailure(QueryResultIterator.java:218)
at com.mongodb.QueryResultIterator.init(QueryResultIterator.java:198)
at 
com.mongodb.QueryResultIterator.initFromQueryResponse(QueryResultIterator.java:176)
at com.mongodb.QueryResultIterator.getMore(QueryResultIterator.java:141)
at com.mongodb.QueryResultIterator.hasNext(QueryResultIterator.java:127)
at com.mongodb.DBCursor._hasNext(DBCursor.java:551)
at com.mongodb.DBCursor.hasNext(DBCursor.java:571)
at 
com.google.common.collect.TransformedIterator.hasNext(TransformedIterator.java:43)
at 
org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector.collectDeletedDocuments(VersionGarbageCollector.java:110)
at 
org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector.gc(VersionGarbageCollector.java:85)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService$2.run(DocumentNodeStoreService.java:503)
at 
org.apache.jackrabbit.oak.spi.state.RevisionGC$1.call(RevisionGC.java:68)
at 
org.apache.jackrabbit.oak.spi.state.RevisionGC$1.call(RevisionGC.java:64)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}





[jira] [Updated] (OAK-3062) VersionGC failing on Mongo with CursorNotFoundException

2015-07-01 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3062:
-
Attachment: cursor-78740863820.log
versiongc-oak.log

Attaching
* [Oak Logs|^versiongc-oak.log]
* [Mongod Logs|^cursor-78740863820.log]

The key thing to observe is that Mongo is using the index on {{_modified}} 
instead of the index on {{_deletedOnce}}:

{noformat}
2015-06-30T03:27:20.318-0400 [conn11654] query aem-author.nodes query: { 
_deletedOnce: true, _modified: { $lt: 1435562840 } } planSummary: IXSCAN { 
_modified: -1 } cursorid:78740863820 ntoreturn:0 ntoskip:0 nscanned:131 
nscannedObjects:131 keyUpdates:0 numYields:32 locks(micros) r:58967 
nreturned:101 reslen:73159 147ms
{noformat}

As a fix, we should always send a hint to make use of the {{_deletedOnce}} 
index.
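To illustrate why the hint helps (toy model only, not MongoDB internals): without a hint the planner picks whatever index it estimates cheapest, while an explicit hint overrides that choice deterministically.

```java
import java.util.Map;

// Toy model of index selection: without a hint the planner takes the index
// with the lowest estimated scan cost (mimicking Mongo wrongly preferring
// {_modified} above); an explicit hint bypasses the estimate entirely.
class IndexChoiceSketch {
    static String chooseIndex(Map<String, Integer> estimatedCost, String hint) {
        if (hint != null) {
            return hint;  // forced, regardless of cost estimates
        }
        String best = null;
        for (Map.Entry<String, Integer> e : estimatedCost.entrySet()) {
            if (best == null || e.getValue() < estimatedCost.get(best)) {
                best = e.getKey();
            }
        }
        return best;
    }
}
```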



> VersionGC failing on Mongo with CursorNotFoundException
> ---
>
> Key: OAK-3062
> URL: https://issues.apache.org/jira/browse/OAK-3062
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.2.3, 1.3.3, 1.0.17
>
> Attachments: cursor-78740863820.log, versiongc-oak.log
>
>
> At times the VersionGC on a big repository fails with the following exception:
> {noformat}
> 30.06.2015 03:55:59.253 *INFO* [pool-7-thread-132] 
> org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector Iterated 
> through 44 documents so far. 410668 found to be deleted
> com.mongodb.MongoException$CursorNotFound: Cursor 78740863820 not found on 
> server mongo2.aem.lan.tpa.foxnews.com:27017
>   at 
> com.mongodb.QueryResultIterator.throwOnQueryFailure(QueryResultIterator.java:218)
>   at com.mongodb.QueryResultIterator.init(QueryResultIterator.java:198)
>   at 
> com.mongodb.QueryResultIterator.initFromQueryResponse(QueryResultIterator.java:176)
>   at com.mongodb.QueryResultIterator.getMore(QueryResultIterator.java:141)
>   at com.mongodb.QueryResultIterator.hasNext(QueryResultIterator.java:127)
>   at com.mongodb.DBCursor._hasNext(DBCursor.java:551)
>   at com.mongodb.DBCursor.hasNext(DBCursor.java:571)
>   at 
> com.google.common.collect.TransformedIterator.hasNext(TransformedIterator.java:43)
>   at 
> org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector.collectDeletedDocuments(VersionGarbageCollector.java:110)
>   at 
> org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector.gc(VersionGarbageCollector.java:85)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService$2.run(DocumentNodeStoreService.java:503)
>   at 
> org.apache.jackrabbit.oak.spi.state.RevisionGC$1.call(RevisionGC.java:68)
>   at 
> org.apache.jackrabbit.oak.spi.state.RevisionGC$1.call(RevisionGC.java:64)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}





[jira] [Resolved] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-3059.
---
Resolution: Fixed

Updated to the correct 1.3.1 version of the most recently released 
oak-authorization-cug. Disabled failOnWarning again to avoid failed builds on 
machines where the Oak 1.3.1 release was not installed from sources.

Done in trunk: http://svn.apache.org/r1688615

> Manage versions of all exported packages
> 
>
> Key: OAK-3059
> URL: https://issues.apache.org/jira/browse/OAK-3059
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: commons, core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> There are still some exported packages, which do not have a package-info.java 
> with explicitly managed export versions.
> We should add those to prevent excessive version increase of exported 
> packages.





[jira] [Resolved] (OAK-3061) oak-authorization-cug uses wrong parent pom

2015-07-01 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-3061.
---
Resolution: Fixed

Fixed in trunk: http://svn.apache.org/r1688616

> oak-authorization-cug uses wrong parent pom
> ---
>
> Key: OAK-3061
> URL: https://issues.apache.org/jira/browse/OAK-3061
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: authorization-cug
>Affects Versions: 1.3.1
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> As a side effect, the artifact is not deployed when we release Oak!





[jira] [Commented] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609929#comment-14609929
 ] 

Marcel Reutegger commented on OAK-3059:
---

Correction to the above: the baseline plugin initially suggested 1.2.2 for the 
cug package because it didn't find a 1.3.1 version in the public Maven 
repository. The most recently released version of that package, however, is 
1.3.1.

> Manage versions of all exported packages
> 
>
> Key: OAK-3059
> URL: https://issues.apache.org/jira/browse/OAK-3059
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: commons, core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> There are still some exported packages, which do not have a package-info.java 
> with explicitly managed export versions.
> We should add those to prevent excessive version increase of exported 
> packages.





[jira] [Comment Edited] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609711#comment-14609711
 ] 

Marcel Reutegger edited comment on OAK-3059 at 7/1/15 10:57 AM:


The list of exported packages without explicitly managed version is:

- 
oak-authorization-cug/src/main/java/org/apache/jackrabbit/oak/spi/security/authorization/cug
 (suggested: --1.2.2-- 1.3.1)
- oak-commons/src/main/java/org/apache/jackrabbit/oak/commons/benchmark 
(suggested: 1.3.1)
- 
oak-core/src/main/java/org/apache/jackrabbit/oak/spi/security/authorization/accesscontrol
 (suggested: 1.3.1)
- oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/score 
(suggested: 1.3.1)

Please check the package if you are familiar with the component and let me know 
if this makes sense.


was (Author: mreutegg):
The list of exported packages without explicitly managed version is:

- 
oak-authorization-cug/src/main/java/org/apache/jackrabbit/oak/spi/security/authorization/cug
 (suggested: 1.2.2)
- oak-commons/src/main/java/org/apache/jackrabbit/oak/commons/benchmark 
(suggested: 1.3.1)
- 
oak-core/src/main/java/org/apache/jackrabbit/oak/spi/security/authorization/accesscontrol
 (suggested: 1.3.1)
- oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/score 
(suggested: 1.3.1)

Please check the package if you are familiar with the component and let me know 
if this makes sense.

> Manage versions of all exported packages
> 
>
> Key: OAK-3059
> URL: https://issues.apache.org/jira/browse/OAK-3059
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: commons, core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> There are still some exported packages, which do not have a package-info.java 
> with explicitly managed export versions.
> We should add those to prevent excessive version increase of exported 
> packages.





[jira] [Comment Edited] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609929#comment-14609929
 ] 

Marcel Reutegger edited comment on OAK-3059 at 7/1/15 10:58 AM:


Correction to above: the baseline plugin initially suggested 1.2.2 for the cug 
package, because it didn't find a 1.3.1 version in the public maven repository 
(caused by OAK-3061). The most recent released version of that package however 
is 1.3.1.


was (Author: mreutegg):
Correction to above: the baseline plugin initially suggested 1.2.2 for the cug 
package, because it didn't find a 1.3.1 version in the public maven repository. 
The most recent released version of that package however is 1.3.1.

> Manage versions of all exported packages
> 
>
> Key: OAK-3059
> URL: https://issues.apache.org/jira/browse/OAK-3059
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: commons, core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> There are still some exported packages, which do not have a package-info.java 
> with explicitly managed export versions.
> We should add those to prevent excessive version increase of exported 
> packages.





[jira] [Commented] (OAK-3059) Manage versions of all exported packages

2015-07-01 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609936#comment-14609936
 ] 

Marcel Reutegger commented on OAK-3059:
---

bq. So IMO we could leave it 1.0.0

That's not an option. The package didn't have an explicit version set before, 
which means the exported version in the previous release was 1.3.1.

> Manage versions of all exported packages
> 
>
> Key: OAK-3059
> URL: https://issues.apache.org/jira/browse/OAK-3059
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: commons, core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.2
>
>
> There are still some exported packages, which do not have a package-info.java 
> with explicitly managed export versions.
> We should add those to prevent excessive version increase of exported 
> packages.





[jira] [Updated] (OAK-3061) oak-authorization-cug uses wrong parent pom

2015-07-01 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-3061:
--
Fix Version/s: 1.2.3

Merged into 1.2 branch: http://svn.apache.org/r1688623

> oak-authorization-cug uses wrong parent pom
> ---
>
> Key: OAK-3061
> URL: https://issues.apache.org/jira/browse/OAK-3061
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: authorization-cug
>Affects Versions: 1.3.1
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.2.3, 1.3.2
>
>
> As a side effect, the artifact is not deployed when we release Oak!





[jira] [Commented] (OAK-3061) oak-authorization-cug uses wrong parent pom

2015-07-01 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14609960#comment-14609960
 ] 

Marcel Reutegger commented on OAK-3061:
---

Note: the 1.0 branch does not have this module.

> oak-authorization-cug uses wrong parent pom
> ---
>
> Key: OAK-3061
> URL: https://issues.apache.org/jira/browse/OAK-3061
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: authorization-cug
>Affects Versions: 1.3.1
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.2.3, 1.3.2
>
>
> As a side effect, the artifact is not deployed when we release Oak!





[jira] [Resolved] (OAK-3062) VersionGC failing on Mongo with CursorNotFoundException

2015-07-01 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-3062.
--
Resolution: Fixed

Added support for sending the hint by default. If required, this can be disabled 
by setting the system property {{oak.mongo.disableVersionGCIndexHint}} to {{true}}.
* trunk - http://svn.apache.org/r1688622
* 1.0 - http://svn.apache.org/r1688625
* 1.2 - http://svn.apache.org/r1688626
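The kill switch described above can be sketched as follows (only the system property name comes from this comment; the class and method names are hypothetical, and the actual fix wires the flag into the Mongo query as an index hint):

```java
// Hypothetical sketch of the kill switch described above. Only the system
// property name comes from the fix; class and method names are invented.
public class VersionGCHint {

    static final String DISABLE_PROP = "oak.mongo.disableVersionGCIndexHint";

    // Boolean.getBoolean(name) is true only when the property is set to
    // "true", so the index hint stays enabled by default.
    static boolean useVersionGCIndexHint() {
        return !Boolean.getBoolean(DISABLE_PROP);
    }

    public static void main(String[] args) {
        System.out.println("index hint enabled: " + useVersionGCIndexHint());
    }
}
```

In the actual code the returned flag would decide whether to attach an index hint to the version GC query; that wiring is not shown here.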

> VersionGC failing on Mongo with CursorNotFoundException
> ---
>
> Key: OAK-3062
> URL: https://issues.apache.org/jira/browse/OAK-3062
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.2.3, 1.3.3, 1.0.17
>
> Attachments: cursor-78740863820.log, versiongc-oak.log
>
>
> At times the VersionGC on a big repository fails with the following exception
> {noformat}
> 30.06.2015 03:55:59.253 *INFO* [pool-7-thread-132] 
> org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector Iterated 
> through 44 documents so far. 410668 found to be deleted
> com.mongodb.MongoException$CursorNotFound: Cursor 78740863820 not found on 
> server mongo2.aem.lan.tpa.foxnews.com:27017
>   at 
> com.mongodb.QueryResultIterator.throwOnQueryFailure(QueryResultIterator.java:218)
>   at com.mongodb.QueryResultIterator.init(QueryResultIterator.java:198)
>   at 
> com.mongodb.QueryResultIterator.initFromQueryResponse(QueryResultIterator.java:176)
>   at com.mongodb.QueryResultIterator.getMore(QueryResultIterator.java:141)
>   at com.mongodb.QueryResultIterator.hasNext(QueryResultIterator.java:127)
>   at com.mongodb.DBCursor._hasNext(DBCursor.java:551)
>   at com.mongodb.DBCursor.hasNext(DBCursor.java:571)
>   at 
> com.google.common.collect.TransformedIterator.hasNext(TransformedIterator.java:43)
>   at 
> org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector.collectDeletedDocuments(VersionGarbageCollector.java:110)
>   at 
> org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector.gc(VersionGarbageCollector.java:85)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService$2.run(DocumentNodeStoreService.java:503)
>   at 
> org.apache.jackrabbit.oak.spi.state.RevisionGC$1.call(RevisionGC.java:68)
>   at 
> org.apache.jackrabbit.oak.spi.state.RevisionGC$1.call(RevisionGC.java:64)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}





[jira] [Updated] (OAK-3062) VersionGC failing on Mongo with CursorNotFoundException

2015-07-01 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3062:
-
Fix Version/s: (was: 1.3.3)
   1.3.2

> VersionGC failing on Mongo with CursorNotFoundException
> ---
>
> Key: OAK-3062
> URL: https://issues.apache.org/jira/browse/OAK-3062
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.2.3, 1.3.2, 1.0.17
>
> Attachments: cursor-78740863820.log, versiongc-oak.log
>
>
> At times the VersionGC on a big repository fails with the following exception
> {noformat}
> 30.06.2015 03:55:59.253 *INFO* [pool-7-thread-132] 
> org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector Iterated 
> through 44 documents so far. 410668 found to be deleted
> com.mongodb.MongoException$CursorNotFound: Cursor 78740863820 not found on 
> server mongo2.aem.lan.tpa.foxnews.com:27017
>   at 
> com.mongodb.QueryResultIterator.throwOnQueryFailure(QueryResultIterator.java:218)
>   at com.mongodb.QueryResultIterator.init(QueryResultIterator.java:198)
>   at 
> com.mongodb.QueryResultIterator.initFromQueryResponse(QueryResultIterator.java:176)
>   at com.mongodb.QueryResultIterator.getMore(QueryResultIterator.java:141)
>   at com.mongodb.QueryResultIterator.hasNext(QueryResultIterator.java:127)
>   at com.mongodb.DBCursor._hasNext(DBCursor.java:551)
>   at com.mongodb.DBCursor.hasNext(DBCursor.java:571)
>   at 
> com.google.common.collect.TransformedIterator.hasNext(TransformedIterator.java:43)
>   at 
> org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector.collectDeletedDocuments(VersionGarbageCollector.java:110)
>   at 
> org.apache.jackrabbit.oak.plugins.document.VersionGarbageCollector.gc(VersionGarbageCollector.java:85)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService$2.run(DocumentNodeStoreService.java:503)
>   at 
> org.apache.jackrabbit.oak.spi.state.RevisionGC$1.call(RevisionGC.java:68)
>   at 
> org.apache.jackrabbit.oak.spi.state.RevisionGC$1.call(RevisionGC.java:64)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}





[jira] [Updated] (OAK-3041) Baseline plugin suggests version increase for unmodified class

2015-07-01 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-3041:
--
Fix Version/s: 1.2.3

Merged into 1.2 branch: http://svn.apache.org/r1688627

> Baseline plugin suggests version increase for unmodified class 
> ---
>
> Key: OAK-3041
> URL: https://issues.apache.org/jira/browse/OAK-3041
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: parent
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.2.3, 1.3.2
>
>
> The baseline plugin suggests a version increase even though the class didn't 
> change. See e.g.: 
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/228/
> This is caused by a bug in the BND tool: 
> https://github.com/bndtools/bnd/issues/639





[jira] [Updated] (OAK-2850) Flag states from revision of an external change

2015-07-01 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2850:
--
Fix Version/s: 1.2.3

Merged into 1.2 branch: http://svn.apache.org/r1688633

> Flag states from revision of an external change
> ---
>
> Key: OAK-2850
> URL: https://issues.apache.org/jira/browse/OAK-2850
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
> Fix For: 1.3.0, 1.2.3
>
>
> OAK-2685 introduced a root revision on the DocumentNodeState. This is the 
> revision of the root node state from where the tree traversal started. For 
> OAK-2829 we also need the information about whether the root revision was 
> created for an external change or a local commit.





[jira] [Resolved] (OAK-3011) Add name of lucene-property index to cost debug log

2015-07-01 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-3011.
--
Resolution: Fixed
  Assignee: Alex Parvulescu

thanks Chetan for the very detailed explanation!

fixed on trunk with r1688634

> Add name of lucene-property index to cost debug log
> ---
>
> Key: OAK-3011
> URL: https://issues.apache.org/jira/browse/OAK-3011
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene, query
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.2.3, 1.3.2
>
> Attachments: OAK-3011-v2.patch, OAK-3011.patch
>
>
> Currently the cost debug log only contains the type and the cost, but if 
> there are multiple lucene-property indexes, there's no way of knowing which 
> index has what cost so it would be really nice to have the index name 
> included with the cost output.
> Now:
> {code}
> org.apache.jackrabbit.oak.query.QueryImpl cost for lucene-property is 1.5
> {code}
> Nice to have:
> {code}
> org.apache.jackrabbit.oak.query.QueryImpl cost for lucene-property [name] is 
> 1.5
> {code}





[jira] [Updated] (OAK-3011) Add name of lucene-property index to cost debug log

2015-07-01 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-3011:
-
Fix Version/s: (was: 1.2.3)

> Add name of lucene-property index to cost debug log
> ---
>
> Key: OAK-3011
> URL: https://issues.apache.org/jira/browse/OAK-3011
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene, query
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.3.2
>
> Attachments: OAK-3011-v2.patch, OAK-3011.patch
>
>
> Currently the cost debug log only contains the type and the cost, but if 
> there are multiple lucene-property indexes, there's no way of knowing which 
> index has what cost so it would be really nice to have the index name 
> included with the cost output.
> Now:
> {code}
> org.apache.jackrabbit.oak.query.QueryImpl cost for lucene-property is 1.5
> {code}
> Nice to have:
> {code}
> org.apache.jackrabbit.oak.query.QueryImpl cost for lucene-property [name] is 
> 1.5
> {code}





[jira] [Commented] (OAK-3011) Add name of lucene-property index to cost debug log

2015-07-01 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14610044#comment-14610044
 ] 

Alex Parvulescu commented on OAK-3011:
--

Removing the 1.2.x fix version for now; there are some merge errors on the OSGi 
package version of "spi/query/package-info.java".

> Add name of lucene-property index to cost debug log
> ---
>
> Key: OAK-3011
> URL: https://issues.apache.org/jira/browse/OAK-3011
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene, query
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.3.2
>
> Attachments: OAK-3011-v2.patch, OAK-3011.patch
>
>
> Currently the cost debug log only contains the type and the cost, but if 
> there are multiple lucene-property indexes, there's no way of knowing which 
> index has what cost so it would be really nice to have the index name 
> included with the cost output.
> Now:
> {code}
> org.apache.jackrabbit.oak.query.QueryImpl cost for lucene-property is 1.5
> {code}
> Nice to have:
> {code}
> org.apache.jackrabbit.oak.query.QueryImpl cost for lucene-property [name] is 
> 1.5
> {code}





[jira] [Commented] (OAK-3055) Improve segment cache in SegmentTracker

2015-07-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14610050#comment-14610050
 ] 

Michael Dürig commented on OAK-3055:


[~tmueller], do you think it would make sense to replace this with a LIRS cache?

> Improve segment cache in SegmentTracker
> ---
>
> Key: OAK-3055
> URL: https://issues.apache.org/jira/browse/OAK-3055
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: resilience, scalability
> Fix For: 1.3.5
>
>
> The hand-crafted segment cache in {{SegmentTracker}} is prone to lock 
> contention in concurrent access scenarios. As {{SegmentNodeStore#merge}} 
> might also end up acquiring this lock while holding the commit semaphore, the 
> situation can easily lead to many threads being blocked on the commit 
> semaphore. The {{SegmentTracker}} cache doesn't differentiate between read 
> and write access, which means that reader threads can block writer threads. 





[jira] [Resolved] (OAK-2934) Certain searches cause lucene index to hit OutOfMemoryError

2015-07-01 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-2934.
--
Resolution: Fixed

fixed on trunk with r1688636

> Certain searches cause lucene index to hit OutOfMemoryError
> ---
>
> Key: OAK-2934
> URL: https://issues.apache.org/jira/browse/OAK-2934
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Blocker
>  Labels: resilience
> Fix For: 1.2.3, 1.3.2, 1.0.17
>
> Attachments: LuceneIndex.java.patch
>
>
> Certain search terms can get split into very small wildcard tokens that will 
> match a huge number of items from the index, eventually resulting in an OOME.
> For example
> {code}
> /jcr:root//*[jcr:contains(., 'U=1*')]
> {code}
> will translate into the following lucene query
> {code}
> :fulltext:"u ( [set of all index terms starting with '1'] )"
> {code}
> This breaks down when Lucene tries to compute the score for the huge 
> set of tokens:
> {code}
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.<init>(OakDirectory.java:201)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.<init>(OakDirectory.java:155)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.<init>(OakDirectory.java:340)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.clone(OakDirectory.java:345)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.clone(OakDirectory.java:329)
> at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsAndPositionsEnum.<init>(Lucene41PostingsReader.java:613)
> at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader.docsAndPositions(Lucene41PostingsReader.java:252)
> at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum.docsAndPositions(BlockTreeTermsReader.java:2233)
> at 
> org.apache.lucene.search.UnionDocsAndPositionsEnum.<init>(MultiPhraseQuery.java:492)
> at 
> org.apache.lucene.search.MultiPhraseQuery$MultiPhraseWeight.scorer(MultiPhraseQuery.java:205)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.loadDocs(LuceneIndex.java:352)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:289)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:280)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor$1.hasNext(LuceneIndex.java:1026)
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:645)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.jackrabbit.oak.spi.query.Cursors$PathCursor.hasNext(Cursors.java:198)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor.hasNext(LuceneIndex.java:1047)
> at 
> org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.fetchNext(AggregationCursor.java:88)
> at 
> org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.hasNext(AggregationCursor.java:75)
> at 
> org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.fetchNext(Cursors.java:474)
> at 
> org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.hasNext(Cursors.java:466)
> at 
> org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.fetchNext(Cursors.java:474)
> at 
> org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.hasNext(Cursors.java:466)
> {code}





[jira] [Updated] (OAK-3054) IndexStatsMBean should provide some details if the async indexing is failing

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-3054:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> IndexStatsMBean should provide some details if the async indexing is failing
> 
>
> Key: OAK-3054
> URL: https://issues.apache.org/jira/browse/OAK-3054
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.2.3, 1.3.3, 1.0.17
>
>
> If the background indexing fails for some reason, the exception is logged the 
> first time; subsequent failures are logged only as _The index update failed ..._, 
> and after that no further logging is done, so as to avoid creating noise.
> This poses a problem on long-running systems, where the original exception might 
> not be noticed and the index does not show updated results. For such cases we 
> should expose the indexing health as part of {{IndexStatsMBean}}. We can also 
> provide the last recorded exception. 
> Administrators can then check the MBean and enable debug logs for further 
> troubleshooting.





[jira] [Updated] (OAK-2999) Index updation fails on updating multivalued property

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2999:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Index updation fails on updating multivalued property
> -
>
> Key: OAK-2999
> URL: https://issues.apache.org/jira/browse/OAK-2999
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.2, 1.0.15
>Reporter: Rishabh Maurya
>Assignee: Amit Jain
> Fix For: 1.2.3, 1.3.3, 1.0.17
>
>
> On emptying a multivalued property, the full-text index update fails and one can 
> still search on old values. The following test demonstrates the issue.
> Added below test in 
> [LuceneIndexQueryTest.java|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/test/java/org/apache/jackrabbit/oak/plugins/index/lucene/LuceneIndexQueryTest.java]
>  which should pass - 
> {code}
> @Test
> public void testMultiValuedPropUpdate() throws Exception {
> Tree test = root.getTree("/").addChild("test");
> String child = "child";
> String mulValuedProp = "prop";
> test.addChild(child).setProperty(mulValuedProp, of("foo","bar"), 
> Type.STRINGS);
> root.commit();
> assertQuery(
> "/jcr:root//*[jcr:contains(@" + mulValuedProp + ", 'foo')]",
> "xpath", ImmutableList.of("/test/" + child));
> test.getChild(child).setProperty(mulValuedProp, new 
> ArrayList<String>(), Type.STRINGS);
> root.commit();
> assertQuery(
> "/jcr:root//*[jcr:contains(@" + mulValuedProp + ", 'foo')]",
> "xpath", new ArrayList<String>());
> test.getChild(child).setProperty(mulValuedProp, of("bar"), 
> Type.STRINGS);
> root.commit();
> assertQuery(
> "/jcr:root//*[jcr:contains(@" + mulValuedProp + ", 'foo')]",
> "xpath", new ArrayList<String>());
> }
> {code}





[jira] [Updated] (OAK-2934) Certain searches cause lucene index to hit OutOfMemoryError

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2934:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Certain searches cause lucene index to hit OutOfMemoryError
> ---
>
> Key: OAK-2934
> URL: https://issues.apache.org/jira/browse/OAK-2934
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Blocker
>  Labels: resilience
> Fix For: 1.2.3, 1.3.3, 1.0.17
>
> Attachments: LuceneIndex.java.patch
>
>
> Certain search terms can get split into very small wildcard tokens that will 
> match a huge number of items from the index, eventually resulting in an OOME.
> For example
> {code}
> /jcr:root//*[jcr:contains(., 'U=1*')]
> {code}
> will translate into the following lucene query
> {code}
> :fulltext:"u ( [set of all index terms starting with '1'] )"
> {code}
> This breaks down when Lucene tries to compute the score for the huge 
> set of tokens:
> {code}
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.<init>(OakDirectory.java:201)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.<init>(OakDirectory.java:155)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.<init>(OakDirectory.java:340)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.clone(OakDirectory.java:345)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.clone(OakDirectory.java:329)
> at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsAndPositionsEnum.<init>(Lucene41PostingsReader.java:613)
> at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader.docsAndPositions(Lucene41PostingsReader.java:252)
> at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum.docsAndPositions(BlockTreeTermsReader.java:2233)
> at 
> org.apache.lucene.search.UnionDocsAndPositionsEnum.<init>(MultiPhraseQuery.java:492)
> at 
> org.apache.lucene.search.MultiPhraseQuery$MultiPhraseWeight.scorer(MultiPhraseQuery.java:205)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.loadDocs(LuceneIndex.java:352)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:289)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$1.computeNext(LuceneIndex.java:280)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor$1.hasNext(LuceneIndex.java:1026)
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:645)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.jackrabbit.oak.spi.query.Cursors$PathCursor.hasNext(Cursors.java:198)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndex$LucenePathCursor.hasNext(LuceneIndex.java:1047)
> at 
> org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.fetchNext(AggregationCursor.java:88)
> at 
> org.apache.jackrabbit.oak.plugins.index.aggregate.AggregationCursor.hasNext(AggregationCursor.java:75)
> at 
> org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.fetchNext(Cursors.java:474)
> at 
> org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.hasNext(Cursors.java:466)
> at 
> org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.fetchNext(Cursors.java:474)
> at 
> org.apache.jackrabbit.oak.spi.query.Cursors$ConcatCursor.hasNext(Cursors.java:466)
> {code}





[jira] [Updated] (OAK-3026) test failures for oak-auth-ldap on Windows

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-3026:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> test failures for oak-auth-ldap on Windows
> --
>
> Key: OAK-3026
> URL: https://issues.apache.org/jira/browse/OAK-3026
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-ldap
>Reporter: Amit Jain
>Assignee: Tobias Bocanegra
> Fix For: 1.2.3, 1.3.3, 1.0.17
>
>
> testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest)
>   Time elapsed: 0.01 sec  <<< ERROR!
> java.io.IOException: Unable to delete file: 
> target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at 
> org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264)
> at 
> org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183)
> at 
> org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33)
> etc...





[jira] [Updated] (OAK-3001) Simplify JournalGarbageCollector using a dedicated timestamp property

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-3001:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Simplify JournalGarbageCollector using a dedicated timestamp property
> -
>
> Key: OAK-3001
> URL: https://issues.apache.org/jira/browse/OAK-3001
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Stefan Egli
>Priority: Critical
>  Labels: scalability
> Fix For: 1.2.3, 1.3.3
>
>
> This subtask is about spawning out a 
> [comment|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585733&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585733]
>  from [~chetanm] re JournalGC:
> {quote}
> Further looking at JournalGarbageCollector ... it would be simpler if you 
> record the journal entry timestamp as an attribute in JournalEntry document 
> and then you can delete all the entries which are older than some time by a 
> simple query. This would avoid fetching all the entries to be deleted on the 
> Oak side
> {quote}
> and a corresponding 
> [reply|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585870&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585870]
>  from myself:
> {quote}
> Re querying by timestamp: that would indeed be simpler. With the current set 
> of DocumentStore API however, I believe this is not possible. But: 
> [DocumentStore.query|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentStore.java#L127]
>  comes quite close: it would probably just require the opposite of that 
> method too: 
> {code}
> public <T extends Document> List<T> query(Collection<T> collection,
>   String fromKey,
>   String toKey,
>   String indexedProperty,
>   long endValue,
>   int limit) {
> {code}
> .. or what about generalizing this method to have both a {{startValue}} and 
> an {{endValue}} - with {{-1}} indicating when one of them is not used?
> {quote}
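The generalization proposed in the quote above — a single range query taking both a start and an end bound, with {{-1}} marking an unused bound — could be sketched roughly like this. This is a minimal, self-contained illustration over an in-memory map; the class and method names are made up here and are not the actual DocumentStore API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative stand-in for the proposed generalized query(..., startValue,
// endValue, ...): -1 on either bound means "no restriction on that side".
public class RangeQuerySketch {

    // documents keyed by id, each carrying one indexed long property
    private final TreeMap<String, Long> docs = new TreeMap<>();

    public void put(String id, long indexedValue) {
        docs.put(id, indexedValue);
    }

    public List<String> query(String fromKey, String toKey,
                              long startValue, long endValue, int limit) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Long> e : docs.subMap(fromKey, toKey).entrySet()) {
            long v = e.getValue();
            if (startValue != -1 && v < startValue) continue; // below lower bound
            if (endValue != -1 && v > endValue) continue;     // above upper bound
            result.add(e.getKey());
            if (result.size() >= limit) break;
        }
        return result;
    }

    public static void main(String[] args) {
        RangeQuerySketch store = new RangeQuerySketch();
        store.put("journal/a", 100L);
        store.put("journal/b", 200L);
        store.put("journal/c", 300L);
        // "delete entries older than 250" maps to endValue = 250, no startValue
        System.out.println(store.query("journal/", "journal0", -1, 250L, 10));
        // prints [journal/a, journal/b]
    }
}
```

With such a method, the journal GC described in the issue could delete old entries with one bounded query on the timestamp property instead of fetching all candidate entries first.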



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2977) Fast result size estimate: OSGi configuration

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2977:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Fast result size estimate: OSGi configuration
> -
>
> Key: OAK-2977
> URL: https://issues.apache.org/jira/browse/OAK-2977
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: doc-impacting
> Fix For: 1.2.3, 1.3.3
>
>
> The fast result size option in OAK-2926 should be configurable, for example 
> over OSGi.





[jira] [Updated] (OAK-2892) Speed up lucene indexing post migration by pre extracting the text content from binaries

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2892:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Move to 1.3.3.

> Speed up lucene indexing post migration by pre extracting the text content 
> from binaries
> 
>
> Key: OAK-2892
> URL: https://issues.apache.org/jira/browse/OAK-2892
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: lucene, run
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: performance
> Fix For: 1.3.3, 1.0.17
>
>
> While migrating large repositories, say with 3M docs (250k PDFs), Lucene 
> indexing takes a long time to complete (at times 4 days!). Currently the text 
> extraction logic is coupled with Lucene indexing and hence is performed in a 
> single threaded mode which slows down the indexing process. Further if the 
> reindexing has to be triggered it has to be done all over again.
> To speed up the Lucene indexing we can decouple the text extraction
> from actual indexing. It is partly based on discussion on OAK-2787
> # Introduce a new ExtractedTextProvider which can provide extracted text for 
> a given Blob instance
> # In oak-run introduce a new indexer mode - This would take a path in 
> repository and would then traverse the repository and look for existing 
> binaries and extract text from that
> So before or after migration is done one can run this oak-run tool to create 
> this store which has the text already extracted. Then post startup we need to 
> wire up the ExtractedTextProvider instance (which is backed by the BlobStore 
> populated before) and indexing logic can just get content from that. This 
> would avoid performing expensive text extraction in the indexing thread.
> See discussion thread http://markmail.org/thread/ndlfpkwfgpey6o66





[jira] [Updated] (OAK-2875) Namespaces keep references to old node states

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2875:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Namespaces keep references to old node states
> -
>
> Key: OAK-2875
> URL: https://issues.apache.org/jira/browse/OAK-2875
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: core, jcr
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.3.3
>
> Attachments: OAK-2875-v1.patch, OAK-2875-v2.patch
>
>
> As described on the parent issue OAK-2849, the session namespaces keep a 
> reference to a Tree instance, which makes GC inefficient.





[jira] [Updated] (OAK-2735) MongoDiffCacheTest.sizeLimit() uses too much memory

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2735:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> MongoDiffCacheTest.sizeLimit() uses too much memory
> ---
>
> Key: OAK-2735
> URL: https://issues.apache.org/jira/browse/OAK-2735
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: core
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: CI
> Fix For: 1.3.3
>
>
> The diff created by the test uses a lot of memory. Either the test should be 
> changed or the implementation should ignore further changes once a threshold 
> is reached.





[jira] [Updated] (OAK-2689) Test failure: QueryResultTest.testGetSize

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2689:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Test failure: QueryResultTest.testGetSize
> -
>
> Key: OAK-2689
> URL: https://issues.apache.org/jira/browse/OAK-2689
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
> Environment: Jenkins, Ubuntu: 
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/
>Reporter: Michael Dürig
>Assignee: Thomas Mueller
>  Labels: CI, Jenkins
> Fix For: 1.3.3
>
>
> {{org.apache.jackrabbit.core.query.QueryResultTest.testGetSize}} fails every 
> couple of builds:
> {noformat}
> junit.framework.AssertionFailedError: Wrong size of NodeIterator in result 
> expected:<48> but was:<-1>
>   at junit.framework.Assert.fail(Assert.java:50)
>   at junit.framework.Assert.failNotEquals(Assert.java:287)
>   at junit.framework.Assert.assertEquals(Assert.java:67)
>   at junit.framework.Assert.assertEquals(Assert.java:134)
>   at 
> org.apache.jackrabbit.core.query.QueryResultTest.testGetSize(QueryResultTest.java:47)
> {noformat}
> Failure seen at builds: 29, 39, 59, 61, 114, 117, 118, 120, 139, 142
> See e.g. 
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/59/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_NS,profile=unittesting/testReport/junit/org.apache.jackrabbit.core.query/QueryResultTest/testGetSize/





[jira] [Updated] (OAK-2660) wrong resultset multiple ORs, lucene index, full-text

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2660:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> wrong resultset multiple ORs, lucene index, full-text
> -
>
> Key: OAK-2660
> URL: https://issues.apache.org/jira/browse/OAK-2660
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Affects Versions: 1.1.7
>Reporter: Davide Giannella
>Assignee: Davide Giannella
> Fix For: 1.3.3
>
>
> When executing a query like 
> {code}
> SELECT * 
> FROM [nt:unstructured] AS c
>  WHERE ( c.[name] = 'yes' 
> OR CONTAINS(c.[surname], 'yes') 
> OR CONTAINS(c.[description], 'yes') ) 
> AND ISDESCENDANTNODE(c, '/content') 
> ORDER BY added DESC 
> {code}
> and a lucene property index is serving all the properties: {{name, surname, 
> description, added}} the full index is returned and no extra condition is 
> applied.





[jira] [Updated] (OAK-2063) Index creation: interruption resilience

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2063:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Index creation: interruption resilience
> ---
>
> Key: OAK-2063
> URL: https://issues.apache.org/jira/browse/OAK-2063
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>  Labels: resilience
> Fix For: 1.3.3
>
>
> Creating an index can take a long time. If it is interrupted (for example 
> because the process was stopped or died), then it would be nice if after a 
> restart reindexing would continue where it was stopped. I'm not sure how 
> complicated this is.
> There are some more potential problems that should be documented / tested: 
> * When creating a new index in a cluster, which instance creates the index?
> * When creating multiple indexes at the same time, is the repository only 
> scanned once (and not once per index)? 
> * The same when manually triggering a reindex using the "reindex" flag.





[jira] [Updated] (OAK-1744) GQL queries with "jcr:primaryType='x'" don't use the node type index

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-1744:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> GQL queries with "jcr:primaryType='x'" don't use the node type index
> 
>
> Key: OAK-1744
> URL: https://issues.apache.org/jira/browse/OAK-1744
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.3.3
>
>
> GQL queries (org.apache.jackrabbit.commons.query.GQL) with type restrictions 
> are converted to the XPath condition "jcr:primaryType = 'x'". This condition 
> is not currently interpreted as a regular node type restriction in the query 
> engine or the node type index, as one would expect. 
> Such restrictions could still be processed efficiently using the property 
> index on "jcr:primaryType", but if that one is disabled (by setting the cost 
> manually very high, as it is done now), then such queries don't use the 
> expected index.
> I'm not sure yet where this should be best fixed.





[jira] [Updated] (OAK-2656) Test failures in LDAP authentication: Failed to bind an LDAP service

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2656:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Test failures in LDAP authentication: Failed to bind an LDAP service
> 
>
> Key: OAK-2656
> URL: https://issues.apache.org/jira/browse/OAK-2656
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-ldap
> Environment: Jenkins, Ubuntu: 
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/
>Reporter: Michael Dürig
>Assignee: Tobias Bocanegra
>Priority: Minor
>  Labels: CI, Jenkins, technical_debt
> Fix For: 1.3.3
>
>
> The following tests all fail with the same error message "Failed to bind an 
> LDAP service (1024) to the service registry.". 
> {noformat} 
> testAuthenticateFail(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest):
>  Failed to bind an LDAP service (1024) to the service registry.
> testGetGroups2(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest):
>  Failed to bind an LDAP service (1024) to the service registry.
> org.apache.jackrabbit.oak.security.authentication.ldap.LdapDefaultLoginModuleTest:
>  Failed to bind an LDAP service (1024) to the service registry.
> testGetUserByUserId(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest):
>  Failed to bind an LDAP service (1024) to the service registry.
> {noformat} 
> The stacktrace is always similar:
> {noformat}
> java.net.BindException: Address already in use]
>   at 
> org.apache.directory.server.ldap.LdapServer.startNetwork(LdapServer.java:528)
>   at 
> org.apache.directory.server.ldap.LdapServer.start(LdapServer.java:394)
>   at 
> org.apache.directory.server.unit.AbstractServerTest.setUp(AbstractServerTest.java:273)
>   at 
> org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:37)
>   at 
> org.apache.jackrabbit.oak.security.authentication.ldap.LdapLoginTestBase.beforeClass(LdapLoginTestBase.java:86)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> Caused by: java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:198)
>   at 
> org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:51)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcc

[jira] [Updated] (OAK-1828) Improved SegmentWriter

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-1828:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Improved SegmentWriter
> --
>
> Key: OAK-1828
> URL: https://issues.apache.org/jira/browse/OAK-1828
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: segmentmk
>Reporter: Jukka Zitting
>Assignee: Alex Parvulescu
>Priority: Minor
>  Labels: technical_debt
> Fix For: 1.3.3
>
>
> At about 1kLOC and dozens of methods, the SegmentWriter class is currently a 
> bit too complex for one of the key components of the TarMK. It also uses a 
> somewhat non-obvious mix of synchronized and unsynchronized code to 
> coordinate multiple concurrent threads that may be writing content at the 
> same time. The synchronization blocks are also broader than what really would 
> be needed, which in some cases causes unnecessary lock contention in 
> concurrent write loads.
> To improve the readability and maintainability of the code, and to increase 
> performance of concurrent writes, it would be useful to split part of the 
> SegmentWriter functionality to a separate RecordWriter class that would be 
> responsible for writing individual records into a segment. The 
> SegmentWriter.prepare() method would return a new RecordWriter instance, and 
> the higher-level SegmentWriter methods would use the returned instance for 
> all the work that's currently guarded in synchronization blocks.





[jira] [Updated] (OAK-2460) Resolve the base directory path of persistent cache against repository home

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2460:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Resolve the base directory path of persistent cache against repository home
> ---
>
> Key: OAK-2460
> URL: https://issues.apache.org/jira/browse/OAK-2460
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Chetan Mehrotra
>Priority: Minor
>  Labels: technical_debt
> Fix For: 1.3.3
>
>
> Currently PersistentCache uses the directory path directly. Various other 
> parts in Oak which need access to the filesystem currently make use of the 
> {{repository.home}} framework property in an OSGi environment [1].
> The same should also be used in PersistentCache.
> [1] http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore 





[jira] [Updated] (OAK-2023) Optimal index usage for XPath queries with "order by" combined with "or"

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2023:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Optimal index usage for XPath queries with "order by" combined with "or"
> 
>
> Key: OAK-2023
> URL: https://issues.apache.org/jira/browse/OAK-2023
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: performance
> Fix For: 1.3.3
>
>
> XPath queries with "or" are converted to union, even if there is an "order 
> by" clause. In such cases, sorting is done in memory. See also OAK-2022.
> For some queries, it might be better to not use union, but use an ordered 
> index instead. This is tricky to decide up-front, but it would be possible to 
> estimate the cost of both variants and pick the one that seems better.
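The cost-based decision described above — estimate both variants and pick the cheaper plan — can be sketched as follows. This is only an illustration of the idea; the method names and cost figures are invented and do not reflect Oak's actual query engine:

```java
// Hedged sketch: choose between the union plan (which must also sort its
// merged result in memory) and a single ordered-index plan, by comparing
// estimated costs. All numbers below are arbitrary examples.
public class PlanChoiceSketch {

    public static String pickPlan(double unionCost, double sortCost,
                                  double orderedIndexCost) {
        // the union variant pays for the union itself plus the in-memory sort
        double unionTotal = unionCost + sortCost;
        return unionTotal <= orderedIndexCost ? "union+sort" : "ordered-index";
    }

    public static void main(String[] args) {
        System.out.println(pickPlan(10, 5, 40));  // prints union+sort
        System.out.println(pickPlan(10, 50, 40)); // prints ordered-index
    }
}
```

The hard part, as the issue notes, is producing reliable estimates up-front; the comparison itself is trivial once both costs are available.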





[jira] [Updated] (OAK-1501) Property index on "jcr:primaryType" returns the wrong cost

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-1501:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Property index on "jcr:primaryType" returns the wrong cost
> --
>
> Key: OAK-1501
> URL: https://issues.apache.org/jira/browse/OAK-1501
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.3.3
>
>
> For queries of this type, the property index on jcr:primaryType is used, even if 
> only a subset of all node types are indexed:
> {noformat}
> /jcr:root//element(*,rep:User)[xyz/@jcr:primaryType]
> {noformat}
> The problem is that this index returns the wrong cost. It should return 
> "infinity", because the index doesn't have enough data if not all node types 
> and mixins are indexed.





[jira] [Updated] (OAK-3032) LDAP test failures

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-3032:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> LDAP test failures
> --
>
> Key: OAK-3032
> URL: https://issues.apache.org/jira/browse/OAK-3032
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-ldap
>Reporter: Marcel Reutegger
>Assignee: Tobias Bocanegra
> Fix For: 1.3.3
>
>
> There are various test failures in the oak-auth-ldap module:
> Failed tests:   
> testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.DefaultLdapLoginModuleTest):
>  
> expected:<[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
>  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
> Bar,ou=users,ou=system, everyone principal]> but was:<[]>
>   
> testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.GuestTokenDefaultLdapLoginModuleTest):
>  
> expected:<[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
>  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
> Bar,ou=users,ou=system, everyone principal]> but was:<[]>
>   
> testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.TokenDefaultLdapLoginModuleTest):
>  
> expected:<[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
>  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
> Bar,ou=users,ou=system, everyone principal]> but was:<[]>
> The tests also fail on travis. E.g.: 
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/68124560/log.txt





[jira] [Updated] (OAK-2853) Use default codec for fulltext index

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2853:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Use default codec for fulltext index
> 
>
> Key: OAK-2853
> URL: https://issues.apache.org/jira/browse/OAK-2853
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.3
>
>
> Currently OakCodec is used by default if full text indexing is enabled for 
> that index. OakCodec disables compression; this was done because performance 
> issues were observed around the 1.0 release (see OAK-1737). 
> Post 1.0 we introduced CopyOnRead which should provide better performance 
> even with compression enabled. We should revisit the default usage of 
> OakCodec to see whether the default codec gives comparable performance, and 
> hence whether we can benefit from a smaller index size.
> Changing the default would require a change in the index format version, as 
> this change would not be compatible with the default.
> Note that one can still change the codec by setting the {{codec}} value in 
> the index config to a codec name like {{Lucene46}}.





[jira] [Updated] (OAK-2185) Fix intermittent failure in JaasConfigSpiTest

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2185:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Fix intermittent failure in JaasConfigSpiTest
> -
>
> Key: OAK-2185
> URL: https://issues.apache.org/jira/browse/OAK-2185
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: pojosr
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: CI, buildbot, test
> Fix For: 1.3.3
>
>
> Intermittent failures on windows are observed in JaasConfigSpiTest with 
> following exception
> {noformat}
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.841 sec <<< 
> FAILURE!
> defaultConfigSpiAuth(org.apache.jackrabbit.oak.run.osgi.JaasConfigSpiTest)  
> Time elapsed: 3.835 sec  <<< ERROR!
> java.lang.reflect.UndeclaredThrowableException
>   at $Proxy7.login(Unknown Source)
>   at javax.jcr.Repository$login.call(Unknown Source)
>   at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
>   at 
> org.apache.jackrabbit.oak.run.osgi.JaasConfigSpiTest.defaultConfigSpiAuth(JaasConfigSpiTest.groovy:75)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.jackrabbit.oak.run.osgi.OakOSGiRepositoryFactory$RepositoryProxy.invoke(OakOSGiRepositoryFactory.java:325)
>   ... 37 more
> Caused by: javax.jcr.LoginException: No LoginModules configured for 
> jackrabbit.oak
>   a

[jira] [Updated] (OAK-2682) Introduce time difference detection for DocumentNodeStore

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2682:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Introduce time difference detection for DocumentNodeStore
> -
>
> Key: OAK-2682
> URL: https://issues.apache.org/jira/browse/OAK-2682
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Stefan Egli
>  Labels: resilience
> Fix For: 1.3.3
>
>
> Currently the lease mechanism in DocumentNodeStore/mongoMk is based on the 
> assumption that the clocks are in perfect sync between all nodes of the 
> cluster. The lease is valid for 60sec with a timeout of 30sec. If clocks are 
> off by too much and background operations happen to take a couple of seconds, 
> you run the risk of timing out a lease. Introducing a check which WARNs if the 
> clocks in a cluster are off by more than a first threshold (e.g. 5 sec) would 
> help increase awareness. A further, more drastic measure could be to prevent 
> startup of Oak entirely if the difference exceeds a second threshold 
> (optional, but could be 20 sec).
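The two-threshold behavior proposed above can be sketched as a small classifier. The threshold values, class name, and method name are illustrative only, not actual Oak code:

```java
// Hedged sketch of the proposed clock-difference check: WARN above a first
// threshold, refuse startup above a second. Values are example figures
// taken from the discussion, not a fixed specification.
public class ClockSkewCheck {

    public static final long WARN_MILLIS = 5_000;   // 1st threshold (e.g. 5s)
    public static final long FAIL_MILLIS = 20_000;  // 2nd threshold (e.g. 20s)

    /** Classifies the absolute time difference between two cluster nodes. */
    public static String classify(long localMillis, long remoteMillis) {
        long diff = Math.abs(localMillis - remoteMillis);
        if (diff > FAIL_MILLIS) {
            return "FAIL";  // prevent startup: lease timeouts are likely
        } else if (diff > WARN_MILLIS) {
            return "WARN";  // log a warning to raise awareness
        }
        return "OK";
    }

    public static void main(String[] args) {
        System.out.println(classify(1_000_000, 1_003_000)); // 3s off: prints OK
        System.out.println(classify(1_000_000, 1_010_000)); // 10s off: prints WARN
        System.out.println(classify(1_000_000, 1_030_000)); // 30s off: prints FAIL
    }
}
```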





[jira] [Updated] (OAK-2634) QueryEngine should expose name query as property restriction

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2634:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> QueryEngine should expose name query as property restriction
> 
>
> Key: OAK-2634
> URL: https://issues.apache.org/jira/browse/OAK-2634
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: query
>Reporter: Chetan Mehrotra
> Fix For: 1.3.3
>
> Attachments: OAK-2634-with-test.patch, OAK-2634.patch
>
>
> Currently {{NodeNameImpl}} and {{NodeLocalNameImpl}} do not add a restriction 
> to the filter, hence a query index cannot handle such queries.
> To allow faster execution, name-related restrictions can be converted to a 
> property restriction.





[jira] [Updated] (OAK-2835) TARMK Cold Standby inefficient cleanup

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2835:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> TARMK Cold Standby inefficient cleanup
> --
>
> Key: OAK-2835
> URL: https://issues.apache.org/jira/browse/OAK-2835
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Critical
>  Labels: compaction, gc, production, resilience
> Fix For: 1.3.3
>
> Attachments: OAK-2835.patch
>
>
> Following OAK-2817, it turns out that patching the data corruption issue 
> revealed an inefficiency of the cleanup method. Similar to the online 
> compaction situation, the standby has issues clearing some of the in-memory 
> references to old revisions.





[jira] [Updated] (OAK-2828) Jcr builder class does not allow overriding most of its dependencies

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2828:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Jcr builder class does not allow overriding most of its dependencies
> 
>
> Key: OAK-2828
> URL: https://issues.apache.org/jira/browse/OAK-2828
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: jcr
>Affects Versions: 1.2.2
>Reporter: Robert Munteanu
>  Labels: modularization, technical_debt
> Fix For: 1.3.3
>
> Attachments: 
> 0001-OAK-2828-Jcr-builder-class-does-not-allow-overriding.patch
>
>
> The {{Jcr}} class is the entry point for configuring a JCR repository using 
> an Oak backend. However, it always uses a hardcoded set of dependencies 
> (IndexEditorProvider, SecurityProvider, etc.), which cannot be reset, as they 
> are defined in the constructor and the builder {{with}} methods eagerly 
> configure the backing {{Oak}} instance with those dependencies.
> As an example
> {code:java|title=Jcr.java}
> @Nonnull
> public final Jcr with(@Nonnull SecurityProvider securityProvider) {
> oak.with(checkNotNull(securityProvider));
> this.securityProvider = securityProvider;
> return this;
> }
> {code}
> injects the security provider which in turn starts configuring the Oak 
> repository provider
> {code:java|title=Oak.java}
> @Nonnull
> public Oak with(@Nonnull SecurityProvider securityProvider) {
> this.securityProvider = checkNotNull(securityProvider);
> if (securityProvider instanceof WhiteboardAware) {
> ((WhiteboardAware) securityProvider).setWhiteboard(whiteboard);
> }
> for (SecurityConfiguration sc : securityProvider.getConfigurations()) 
> {
> RepositoryInitializer ri = sc.getRepositoryInitializer();
> if (ri != RepositoryInitializer.DEFAULT) {
> initializers.add(ri);
> }
> }
> return this;
> }
> {code}
> Instead, the {{Jcr}} class should store the configured dependencies and only 
> configure the {{Oak}} instance when {{createRepository}} is invoked.
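The deferred approach proposed in the last paragraph could be sketched roughly as follows. This is illustrative Java only, not Oak's actual {{Jcr}}/{{Oak}} classes; the names {{DeferredBuilder}} and {{Backend}} are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the builder only records the configured dependency and applies it
// when build() is invoked, so a later with(...) call can still override it.
public class DeferredBuilder {

    /** Stand-in for the backing Oak instance. */
    public static class Backend {
        public final List<String> applied = new ArrayList<>();
        void apply(String dependency) { applied.add(dependency); }
    }

    private String securityProvider = "default";

    public DeferredBuilder withSecurityProvider(String provider) {
        // Nothing has touched the backend yet, so the value can still be replaced.
        this.securityProvider = provider;
        return this;
    }

    public Backend build() {
        Backend backend = new Backend();
        backend.apply(securityProvider);  // applied exactly once, with the final value
        return backend;
    }
}
```

With this shape, calling {{withSecurityProvider}} twice leaves only the last value in effect, which is exactly what the eager variant cannot do.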



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2808) Active deletion of 'deleted' Lucene index files from DataStore without relying on full scale Blob GC

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2808:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

>   Active deletion of 'deleted' Lucene index files from DataStore without 
> relying on full scale Blob GC
> -
>
> Key: OAK-2808
> URL: https://issues.apache.org/jira/browse/OAK-2808
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Chetan Mehrotra
>  Labels: datastore, performance
> Fix For: 1.3.3
>
> Attachments: copyonread-stats.png
>
>
> With the storing of Lucene index files within the DataStore, our usage pattern
> of the DataStore has changed between JR2 and Oak.
> With JR2 the writes were mostly application driven, i.e. if the application
> stored a pdf/image file then that would be stored in the DataStore; JR2 by
> default would not write its own data to the DataStore. Further, in deployments
> where a large amount of binary content is present, systems tend to
> share the DataStore to avoid duplication of storage. In such cases
> running Blob GC is a non-trivial task, as it involves a manual step and
> coordination across multiple deployments. Due to this, systems tend to
> run GC less frequently.
> Now with Oak, apart from the application, the Oak system itself *actively*
> uses the DataStore to store the index files for Lucene, and there the
> churn can be much higher, i.e. the frequency of creation and deletion of
> index files is a lot higher. This accelerates the rate of garbage
> generation and thus puts a lot more pressure on the DataStore storage
> requirements.
> Discussion thread http://markmail.org/thread/iybd3eq2bh372zrl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2545) oak-core IT run out of memory

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2545:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> oak-core IT run out of memory
> -
>
> Key: OAK-2545
> URL: https://issues.apache.org/jira/browse/OAK-2545
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: core
>Reporter: Marcel Reutegger
>Assignee: Alex Parvulescu
>  Labels: CI, travis
> Fix For: 1.3.3
>
>
> Seen on the 1.0 branch only so far when running ITs on my local machine, but 
> travis reports the same:
> https://travis-ci.org/apache/jackrabbit-oak/builds/51589769
> It doesn't necessarily mean the problem is with SegmentReferenceLimitTestIT 
> even though the heap dump shows most of the memory consumed by Segments and 
> SegmentWriter. A recent build on trunk was successful for me where we have 
> the same test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2065) JMX stats for operations being performed in DocumentStore

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2065:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> JMX stats for operations being performed in DocumentStore
> -
>
> Key: OAK-2065
> URL: https://issues.apache.org/jira/browse/OAK-2065
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: tooling
> Fix For: 1.3.3
>
> Attachments: 
> 0001-OAK-2065-JMX-stats-for-operations-being-performed-in.patch, 
> OAK-2065-1.patch
>
>
> Currently the DocumentStore performs various background operations like
> # Cache consistency check
> # Pushing the lastRev updates
> # Synchronizing the root node version
> We should capture some stats, like the time taken by the various tasks, and expose 
> them over JMX to determine whether those background operations are performing well. 
> For example, it is important that all work performed in the background task 
> completes in under 1 sec (the default polling interval); if the time taken 
> increases, that would be a cause for concern.
> See http://markmail.org/thread/57fax4nyabbubbef
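The kind of stat the ticket asks for could be accumulated as below. This is a minimal sketch, not Oak's MBean API; the class and method names are assumptions:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: accumulate time spent in a background task and expose average/max
// so a monitor can flag runs that exceed the polling interval.
public class BackgroundOpStats {

    private final AtomicLong totalNanos = new AtomicLong();
    private final AtomicLong count = new AtomicLong();
    private final AtomicLong maxNanos = new AtomicLong();

    public void record(long elapsedNanos) {
        totalNanos.addAndGet(elapsedNanos);
        count.incrementAndGet();
        maxNanos.accumulateAndGet(elapsedNanos, Math::max);
    }

    public long averageMillis() {
        long n = count.get();
        return n == 0 ? 0 : totalNanos.get() / n / 1_000_000;
    }

    // e.g. limitMillis = 1000 for the default 1s polling interval
    public boolean exceeded(long limitMillis) {
        return maxNanos.get() / 1_000_000 > limitMillis;
    }
}
```

An MBean wrapper would then simply publish {{averageMillis()}} and the exceeded flag as attributes.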



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2472) Add support for atomic counters on cluster solutions

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2472:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Add support for atomic counters on cluster solutions
> 
>
> Key: OAK-2472
> URL: https://issues.apache.org/jira/browse/OAK-2472
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: scalability
> Fix For: 1.3.3
>
>
> As of OAK-2220 we added support for atomic counters in a non-clustered 
> situation. 
> This ticket is about covering the clustered case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1842) ISE: "Unexpected value record type: f2" is thrown when FileBlobStore is used

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-1842:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> ISE: "Unexpected value record type: f2" is thrown when FileBlobStore is used
> 
>
> Key: OAK-1842
> URL: https://issues.apache.org/jira/browse/OAK-1842
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.0
>Reporter: Konrad Windszus
>Assignee: Francesco Mari
>  Labels: resilience
> Fix For: 1.3.3
>
>
> The stacktrace of the call shows something like
> {code}
> 20.05.2014 11:13:07.428 *ERROR* [OsgiInstallerImpl] 
> com.adobe.granite.installer.factory.packages.impl.PackageTransformer Error 
> while processing install task.
> java.lang.IllegalStateException: Unexpected value record type: f2
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.length(SegmentBlob.java:101)
> at 
> org.apache.jackrabbit.oak.plugins.value.BinaryImpl.getSize(BinaryImpl.java:74)
> at 
> org.apache.jackrabbit.oak.jcr.session.PropertyImpl.getLength(PropertyImpl.java:435)
> at 
> org.apache.jackrabbit.oak.jcr.session.PropertyImpl.getLength(PropertyImpl.java:376)
> at 
> org.apache.jackrabbit.vault.packaging.impl.JcrPackageImpl.getPackage(JcrPackageImpl.java:324)
> {code}
> The blob store was configured correctly and according to the log also 
> correctly initialized
> {code}
> 20.05.2014 11:11:07.029 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService 
> Initializing SegmentNodeStore with BlobStore 
> [org.apache.jackrabbit.oak.spi.blob.FileBlobStore@7e3dec43]
> 20.05.2014 11:11:07.029 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService Component 
> still not activated. Ignoring the initialization call
> 20.05.2014 11:11:07.077 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK opened: 
> crx-quickstart/repository/segmentstore (mmap=true)
> {code}
> Under which circumstances can the length within the SegmentBlob be invalid?
> This only happens if a File Blob Store is configured 
> (http://jackrabbit.apache.org/oak/docs/osgi_config.html). If a file datastore 
> is used, there is no such exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1221) Query fails unexpectedly when property conversion is not possible for joins and "in(...)"

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-1221:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Query fails unexpectedly when property conversion is not possible for joins 
> and "in(...)"
> -
>
> Key: OAK-1221
> URL: https://issues.apache.org/jira/browse/OAK-1221
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.3.3
>
>
> The same as for OAK-1171, however the fix there only solves the problem for 
> comparisons but not joins and conditions of the form "in(x, y)" (the two 
> other places where values are converted).
> I guess those cases are less common than OAK-1171, but it should be quite 
> easy to come up with a simple test case. I will try to do that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1591) org.apache.jackrabbit.oak.plugins.document.mongo.CacheInvalidationIT fails

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-1591:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> org.apache.jackrabbit.oak.plugins.document.mongo.CacheInvalidationIT fails
> --
>
> Key: OAK-1591
> URL: https://issues.apache.org/jira/browse/OAK-1591
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: mongomk
>Reporter: Julian Reschke
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: CI
> Fix For: 1.3.3
>
>
> Fails frequently on my W7 desktop:
> testCacheInvalidationHierarchicalNotExist(org.apache.jackrabbit.oak.plugins.document.mongo.CacheInvalidationIT)
>   Time elapsed: 0.04 sec  <<< FAILURE!
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertTrue(Assert.java:54)
> at 
> org.apache.jackrabbit.oak.plugins.document.mongo.CacheInvalidationIT.testCacheInvalidationHierarchicalNotExist(CacheInvalidationIT.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2932) Limit the scope of exported packages

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2932:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Limit the scope of exported packages
> 
>
> Key: OAK-2932
> URL: https://issues.apache.org/jira/browse/OAK-2932
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>Reporter: Michael Dürig
>Assignee: Francesco Mari
>  Labels: modularization, osgi, technical_debt
> Fix For: 1.3.3
>
>
> Oak currently exports *a lot* of packages even though those are only used by 
> Oak itself. We should probably leverage OSGi subsystems here and only export 
> the bare minimum to the outside world. This will simplify the evolution of Oak's 
> internal APIs, as with the current approach changes to such APIs always leak 
> to the outside world. 
> That is, we should have an Oak OSGi sub-system as a deployment option. 
> Clients would then only need to deploy that into their OSGi container and 
> would only see APIs actually meant to be exported for everyone (like e.g. the 
> JCR API). At the same time Oak could go on leveraging OSGi inside this 
> subsystem.
> cc [~bosschaert] as you introduced us to this idea. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2613) Do versionGC more frequently and at adjusted speed

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2613:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Do versionGC more frequently and at adjusted speed
> --
>
> Key: OAK-2613
> URL: https://issues.apache.org/jira/browse/OAK-2613
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Affects Versions: 1.0.12
>Reporter: Stefan Egli
>  Labels: observation, resilience
> Fix For: 1.3.3
>
>
> This is a follow-up ticket from 
> [here|https://issues.apache.org/jira/browse/OAK-2557?focusedCommentId=14355322&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14355322]
>  mixed with an offline discussion with [~mreutegg]:
>  * we could make the VersionGC play nicely with existing load on the system: it 
> could progress more slowly when the load is higher and vice versa. One simple 
> measure could be: if the observation queue is small (e.g. below 10) then the 
> load is low and it could progress at full speed; otherwise it could add some 
> artificial sleeping in between.
>  * we could run versionGC more often than once a day and instead let it run more 
> or less 'continuously' in the background. While the speed of the GC would 
> be adjusted to the load, it also would have to be ensured that it doesn't 
> run too slowly (and would never finish if the instance is under constant load)
> Note that 'adjusted speed' would also imply some intelligence about the 
> system load, as pointed out by [~chetanm] on OAK-2557:
> {quote}Version GC currently ensures that query fired is made against the 
> Secondary (if present). However having some throttling in such background 
> task would be good thing to have. But first we need to have some 
> SystemLoadIndicator notion in Oak which can be provide details say in 
> percentage 1..100 about system load. We can then expose configurable 
> threshold which VersionGC would listen for and adjust its working accordingly.
> It can be a JMX bean which emits notification and we have our components 
> listen to those notification (or use OSGi SR/Events). That can be used in 
> other places like Observation processing, Blob GC etc
> {quote}
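The pacing idea from the first bullet above could be sketched as follows. The threshold and delay values are made up for illustration; this is not Oak code:

```java
// Sketch: GC batches run at full speed while the observation queue is short,
// and back off proportionally as it grows.
public class AdaptiveGcThrottle {

    private final int lowLoadQueueSize;
    private final long delayPerExcessItemMillis;

    public AdaptiveGcThrottle(int lowLoadQueueSize, long delayPerExcessItemMillis) {
        this.lowLoadQueueSize = lowLoadQueueSize;
        this.delayPerExcessItemMillis = delayPerExcessItemMillis;
    }

    /** Delay to insert before the next GC batch, given the current queue length. */
    public long pauseMillis(int observationQueueLength) {
        if (observationQueueLength <= lowLoadQueueSize) {
            return 0;  // low load: progress at full speed
        }
        int excess = observationQueueLength - lowLoadQueueSize;
        return excess * delayPerExcessItemMillis;
    }
}
```

A real implementation would also cap the pause so GC still finishes under sustained load, as the second bullet requires.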



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2885) Enable saveDirListing by default

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2885:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Enable saveDirListing by default
> 
>
> Key: OAK-2885
> URL: https://issues.apache.org/jira/browse/OAK-2885
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.3
>
>
> OAK-2809 introduced support for saving the directory listing. Once this feature 
> is found to be stable, we should enable it by default.
> As a start, we can enable it by default on trunk for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2686) Persistent cache: log activity and timing data, and possible optimizations

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2686:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Persistent cache: log activity and timing data, and possible optimizations
> --
>
> Key: OAK-2686
> URL: https://issues.apache.org/jira/browse/OAK-2686
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: tooling
> Fix For: 1.3.3
>
>
> The persistent cache most likely reduces performance in some use cases, but 
> currently it's hard to find out whether that's the case.
> Activity should be captured (and logged with debug level) if possible, for 
> example writing, reading, writing in the foreground / background, opening and 
> closing, switching the generation, moving entries from old to new generation.
> Adding entries to the cache could be completely decoupled from the foreground 
> thread, if they are added to the persistent cache in a separate thread.
> It might be better to only write entries if they were accessed often. To do 
> this, entries could be put in the persistent cache once they are evicted from 
> the in-memory cache, instead of when they are added to the cache. If that's 
> done, we would maintain some data (for example access count) on which we can 
> filter.
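The write-on-eviction idea from the last paragraph can be illustrated with a small LRU that persists entries only when they are evicted from memory. This is a toy sketch, not the actual persistent cache; the in-memory {{HashMap}} standing in for the persistent layer is an assumption:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: entries reach the "persistent" layer only when the in-memory LRU
// evicts them, rather than on every put.
public class EvictionWriteCache<K, V> {

    private final Map<K, V> persistent = new HashMap<>();  // stand-in for disk
    private final LinkedHashMap<K, V> memory;

    public EvictionWriteCache(int maxInMemory) {
        // access-order LinkedHashMap gives LRU eviction
        this.memory = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > maxInMemory) {
                    // the evicted entry lived in memory, so persist it on the way out
                    persistent.put(eldest.getKey(), eldest.getValue());
                    return true;
                }
                return false;
            }
        };
    }

    public void put(K key, V value) { memory.put(key, value); }

    public V get(K key) {
        V v = memory.get(key);
        return v != null ? v : persistent.get(key);
    }

    public int persistedCount() { return persistent.size(); }
}
```

Filtering by access count, as suggested above, would only require consulting a counter inside {{removeEldestEntry}} before persisting.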



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2920) RDBDocumentStore: fail init when database config seems to be inadequate

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2920:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> RDBDocumentStore: fail init when database config seems to be inadequate
> ---
>
> Key: OAK-2920
> URL: https://issues.apache.org/jira/browse/OAK-2920
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: rdbmk
>Reporter: Julian Reschke
>Priority: Minor
>  Labels: resilience
> Fix For: 1.3.3
>
>
> It has been suggested that the implementation should fail to start (rather 
> than warn) when it detects a DB configuration that is likely to cause 
> problems (such as with respect to character encoding or collation sequences).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2631) Use buffered variants for IndexInput and IndexOutput

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2631:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Use buffered variants for IndexInput and IndexOutput
> 
>
> Key: OAK-2631
> URL: https://issues.apache.org/jira/browse/OAK-2631
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: performance
> Fix For: 1.3.3
>
> Attachments: OAK-2631.patch
>
>
> Lucene provides buffered variants of {{IndexInput}} and {{IndexOutput}}. 
> Currently Oak extends these classes directly. For better performance it should 
> extend the buffered variants.
> As discussed 
> [here|https://issues.apache.org/jira/browse/OAK-?focusedCommentId=14178265#comment-14178265]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2891) Use more efficient approach to manage in memory map in LengthCachingDataStore

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2891:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Use more efficient approach to manage in memory map in LengthCachingDataStore
> -
>
> Key: OAK-2891
> URL: https://issues.apache.org/jira/browse/OAK-2891
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.3
>
>
> The LengthCachingDataStore introduced in OAK-2882 has an in-memory map for 
> keeping the mapping between blobId and length. This poses a problem when the 
> number of binaries is very large.
> Instead of an in-memory map we should use some off-heap store like MVStore or 
> MapDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2392) [DocumentMK] Garbage Collect older revisions of binary properties in main document

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2392:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> [DocumentMK] Garbage Collect older revisions of binary properties in main 
> document
> --
>
> Key: OAK-2392
> URL: https://issues.apache.org/jira/browse/OAK-2392
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.3
>
>
> The current GC logic for DocumentMK only collects certain types of garbage (see 
> OAK-1981); currently only split documents are removed. While a complete, 
> full-blown GC would take time and is not yet fully implemented, we should handle 
> those documents which have binary properties that get updated a few times (but 
> not very frequently).
> For example, performing a reindex of a Lucene index leads to the removal of the 
> index file nodes and the creation of nodes with the same name again. In such a 
> case the older revision of the binary property remains in the main document and 
> is not eligible for GC as per the current implementation.
> As a fix, the GC logic should look for documents which might have binaries and 
> then remove the older revisions of binary properties. We already scan all such 
> documents for Blob GC.
> So this can be done either as part of Revision GC or Blob GC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-937) Query engine index selection tweaks: shortcut and hint

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-937:
-
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Query engine index selection tweaks: shortcut and hint
> --
>
> Key: OAK-937
> URL: https://issues.apache.org/jira/browse/OAK-937
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, query
>Reporter: Alex Parvulescu
>Priority: Minor
>  Labels: performance
> Fix For: 1.3.3
>
>
> This issue covers 2 different changes related to the way the QueryEngine 
> selects a query index:
>  Firstly there could be a way to end the index selection process early via a 
> known constant value: if an index returns a known value token (like -1000) 
> then the query engine would effectively stop iterating through the existing 
> index impls and use that index directly.
>  Secondly it would be nice to be able to specify a desired index (if one is 
> known to perform better) thus skipping the existing selection mechanism (cost 
> calculation and comparison). This could be done via certain query hints [0].
> [0] http://en.wikipedia.org/wiki/Hint_(SQL)
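The two tweaks could look roughly like this. {{QueryIndex}} here is a toy stand-in, not Oak's {{org.apache.jackrabbit.oak.spi.query.QueryIndex}} interface, and the sentinel value is illustrative:

```java
import java.util.List;

// Sketch: a sentinel cost ends cost-based selection early, and an optional
// hint naming a preferred index skips the comparison entirely.
public class IndexSelector {

    public static final double SHORTCUT_COST = -1000;  // "use me, stop looking"

    public interface QueryIndex {
        String name();
        double cost();
    }

    public static QueryIndex select(List<QueryIndex> indexes, String hint) {
        QueryIndex best = null;
        for (QueryIndex index : indexes) {
            if (hint != null && hint.equals(index.name())) {
                return index;  // hint bypasses cost calculation and comparison
            }
            double cost = index.cost();
            if (cost == SHORTCUT_COST) {
                return index;  // sentinel stops iteration over remaining impls
            }
            if (best == null || cost < best.cost()) {
                best = index;  // otherwise keep the usual cheapest-wins rule
            }
        }
        return best;
    }
}
```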



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2894) RepositoryImpl should not manage the lifecycle of ContentRepository

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2894:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> RepositoryImpl should not manage the lifecycle of ContentRepository
> ---
>
> Key: OAK-2894
> URL: https://issues.apache.org/jira/browse/OAK-2894
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: jcr
>Affects Versions: 1.2.2
>Reporter: Francesco Mari
>  Labels: modularization, resilience, technical_debt
> Fix For: 1.3.3
>
>
> {{RepositoryImpl}} uses an instance of {{ContentRepository}} that is passed 
> as an external dependency in its constructor.
> {{RepositoryImpl}} is not responsible for the creation of the 
> {{ContentRepository}} instance and, as such, should not manage its lifecycle. 
> In particular, the {{ContentRepository#close}} method should not be called 
> when the {{RepositoryImpl#shutdown}} method is executed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2847) Dependency cleanup

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2847:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Dependency cleanup 
> ---
>
> Key: OAK-2847
> URL: https://issues.apache.org/jira/browse/OAK-2847
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Michael Dürig
>Assignee: Vikas Saurabh
>  Labels: technical_debt
> Fix For: 1.3.3
>
>
> Early in the next release cycle we should go through the list of Oak's 
> dependencies and decide whether we have candidates we want to upgrade and 
> remove orphaned dependencies. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2902) Code coverage

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2902:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Code coverage
> -
>
> Key: OAK-2902
> URL: https://issues.apache.org/jira/browse/OAK-2902
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: technical_debt
> Fix For: 1.3.3
>
>
> We should have automated code coverage results, and then decide upon minimum 
> numbers we want to achieve (for example, initially 100% package or class 
> coverage). Once we reached the goal, we can increase the minimum coverage on 
> a module-by-module basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2793) Time limit for HierarchicalInvalidator

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2793:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Time limit for HierarchicalInvalidator
> --
>
> Key: OAK-2793
> URL: https://issues.apache.org/jira/browse/OAK-2793
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: resilience
> Fix For: 1.3.3
>
> Attachments: OAK-2793-Time-limit-for-HierarchicalInvalidator.patch
>
>
> This issue is related to OAK-2646. Every now and then I see reports of 
> background reads with a cache invalidation that takes a rather long time. 
> Sometimes minutes. It would be good to give the HierarchicalInvalidator an 
> upper limit for the time it may take to perform the invalidation. When the 
> time is up, the implementation should simply invalidate the remaining 
> documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2736) Oak instance does not close the executors created upon ContentRepository creation

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2736:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Oak instance does not close the executors created upon ContentRepository 
> creation
> -
>
> Key: OAK-2736
> URL: https://issues.apache.org/jira/browse/OAK-2736
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: CI, Jenkins
> Fix For: 1.3.3
>
> Attachments: OAK-2736-2.patch, OAK-2736.patch
>
>
> Oak.createContentRepository does not close the executors it creates. Upon 
> close, it should shut down an executor only if it created it itself and not 
> if it was passed in from outside.
> Also see recent [thread|http://markmail.org/thread/rryydj7vpua5qbub].
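The ownership rule described above is a common pattern; a minimal sketch (not Oak code, names are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: remember whether the executor was created internally, and shut it
// down on close() only in that case.
public class RepositoryLifecycle implements AutoCloseable {

    private final ExecutorService executor;
    private final boolean ownsExecutor;  // true only if we created it ourselves

    public RepositoryLifecycle(ExecutorService external) {
        if (external != null) {
            this.executor = external;
            this.ownsExecutor = false;   // the caller manages its lifecycle
        } else {
            this.executor = Executors.newSingleThreadExecutor();
            this.ownsExecutor = true;
        }
    }

    public boolean executorShutdown() { return executor.isShutdown(); }

    @Override
    public void close() {
        if (ownsExecutor) {
            executor.shutdown();         // only stop what we started
        }
    }
}
```

An externally supplied executor thus survives {{close()}} and remains usable by its owner.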



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3020) Async Update fails after IllegalArgumentException

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-3020:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Async Update fails after IllegalArgumentException
> -
>
> Key: OAK-3020
> URL: https://issues.apache.org/jira/browse/OAK-3020
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Affects Versions: 1.2.2
>Reporter: Julian Sedding
>Assignee: Amit Jain
> Fix For: 1.3.3
>
> Attachments: OAK-3020-stacktrace.txt, OAK-3020.patch, 
> OAK-3020.test.patch
>
>
> The async index update can fail due to a mismatch between an index definition 
> and the actual content. If that is the case, it seems that it can no longer 
> make any progress. Instead it re-indexes the latest changes over and over 
> again until it hits the problematic property.
> Discussion at http://markmail.org/thread/42bixzkrkwv4s6tq
> Stacktrace attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1328) refactor MongoMK caching

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-1328:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> refactor MongoMK caching
> 
>
> Key: OAK-1328
> URL: https://issues.apache.org/jira/browse/OAK-1328
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Julian Reschke
>  Labels: technical_debt
> Fix For: 1.3.3
>
>
> Caching currently is part of the DocumentStore API; consider to refactor the 
> code so that caching lives inside MongoMK and automatically applies to all 
> DocumentStore implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2733) Option to convert "like" queries to range queries

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2733:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Option to convert "like" queries to range queries
> -
>
> Key: OAK-2733
> URL: https://issues.apache.org/jira/browse/OAK-2733
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: performance
> Fix For: 1.3.3
>
>
> Queries with "like" conditions of the form "x like 'abc%'" are currently 
> always converted to range queries. With Apache Lucene, using "like" in some 
> cases is a bit faster (but not much, according to our tests).
> Converting "like" to range queries should be disabled by default.
> Potential patch:
> {noformat}
> --- src/main/java/org/apache/jackrabbit/oak/query/ast/ComparisonImpl.java 
> (revision 1672070)
> +++ src/main/java/org/apache/jackrabbit/oak/query/ast/ComparisonImpl.java 
> (working copy)
> @@ -31,11 +31,21 @@
>  import org.apache.jackrabbit.oak.query.fulltext.LikePattern;
>  import org.apache.jackrabbit.oak.query.index.FilterImpl;
>  import org.apache.jackrabbit.oak.spi.query.PropertyValues;
> +import org.slf4j.Logger;
> +import org.slf4j.LoggerFactory;
>  
>  /**
>   * A comparison operation (including "like").
>   */
>  public class ComparisonImpl extends ConstraintImpl {
> +
> +static final Logger LOG = LoggerFactory.getLogger(ComparisonImpl.class);
> +
> +private final static boolean CONVERT_LIKE_TO_RANGE = 
> Boolean.getBoolean("oak.convertLikeToRange");
> +
> +static {
> +LOG.info("Converting like to range queries is " + 
> (CONVERT_LIKE_TO_RANGE ? "enabled" : "disabled"));
> +}
>  
>  private final DynamicOperandImpl operand1;
>  private final Operator operator;
> @@ -193,7 +203,7 @@
>  if (lowerBound.equals(upperBound)) {
>  // no wildcards
>  operand1.restrict(f, Operator.EQUAL, v);
> -} else if (operand1.supportsRangeConditions()) {
> +} else if (operand1.supportsRangeConditions() && 
> CONVERT_LIKE_TO_RANGE) {
>  if (lowerBound != null) {
>  PropertyValue pv = 
> PropertyValues.newString(lowerBound);
>  operand1.restrict(f, Operator.GREATER_OR_EQUAL, 
> pv);
> @@ -203,7 +213,7 @@
>  operand1.restrict(f, Operator.LESS_OR_EQUAL, pv);
>  }
>  } else {
> -// path conditions
> +// path conditions, or conversion is disabled
>  operand1.restrict(f, operator, v);
>  }
>  } else {
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
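[Editorial aside] The conversion guarded by the patch above can be sketched in isolation as follows. This is an illustrative sketch only: the class and method names are invented (not Oak's API), and using {{'\uffff'}} as the upper bound is a simplification of how a trailing-% prefix pattern maps to range bounds.

```java
public class LikeToRange {
    // Derive inclusive lower/upper bounds for a "prefix%" LIKE pattern;
    // returns null when the pattern has wildcards other than a trailing '%'.
    static String[] toRange(String likePattern) {
        int pct = likePattern.indexOf('%');
        if (pct < 0) {
            // no wildcard at all: the condition degenerates to equality
            return new String[] { likePattern, likePattern };
        }
        if (pct != likePattern.length() - 1 || likePattern.indexOf('_') >= 0) {
            return null; // not a plain prefix pattern, cannot convert
        }
        String prefix = likePattern.substring(0, pct);
        // simplistic upper bound: the prefix followed by the highest char
        return new String[] { prefix, prefix + '\uffff' };
    }
}
```

With conversion enabled, "x like 'abc%'" becomes the two range restrictions GREATER_OR_EQUAL "abc" and LESS_OR_EQUAL "abc\uffff"; a pattern such as "a%b" cannot be converted and falls through to the plain like operator, matching the else-branch in the patch.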


[jira] [Updated] (OAK-2466) DataStoreBlobStore: chunk ids should not contain the size

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2466:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> DataStoreBlobStore: chunk ids should not contain the size
> -
>
> Key: OAK-2466
> URL: https://issues.apache.org/jira/browse/OAK-2466
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: datastore, performance
> Fix For: 1.3.3
>
>
> The blob store garbage collection (data store garbage collection) uses the 
> chunk ids to identify binaries to be deleted. The blob ids contain the size 
> now (#), and the blob id is currently equal to the chunk 
> id.
> It would be more efficient to _not_ use the size, and instead just use the 
> content hash, for the chunk ids. That way, enumerating the entries that are 
> in the store is potentially faster. Also, it allows us to change the blob id 
> in the future, for example add more information to it (for example the 
> creation time, or the first few bytes of the content) if we ever want to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
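[Editorial aside] A minimal sketch of the proposed id scheme: the chunk id is just a hex-encoded content hash with no "#&lt;size&gt;" suffix. SHA-256 here is an assumption for illustration; the issue does not prescribe a particular digest, and {{ChunkIds}} is not an Oak class.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ChunkIds {
    // Chunk id = hex digest of the content only; nothing size-dependent,
    // so the id format can evolve independently of the stored length.
    static String chunkId(byte[] content) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(content)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```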


[jira] [Updated] (OAK-2989) Swap large commits to disk in order to avoid OOME

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2989:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Swap large commits to disk in order to avoid OOME
> -
>
> Key: OAK-2989
> URL: https://issues.apache.org/jira/browse/OAK-2989
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.2.2
>Reporter: Timothee Maret
> Fix For: 1.3.3
>
>
> As described in [0], large commits consume a fair amount of memory. With very 
> large commits this becomes problematic, as a commit may eat up 100 GB or more, 
> causing an OOME and aborting the commit.
> Instead of keeping the whole commit in memory, the implementation could store 
> parts of it on disk once heap memory consumption reaches a configurable 
> threshold.
> This would solve the issue rather than simply mitigating it as in OAK-2968 
> and OAK-2969.
> The behaviour may already be supported for some configurations of Oak; at 
> least the Mongo + DocumentStore setup did not seem to support it.
> [0] http://permalink.gmane.org/gmane.comp.apache.jackrabbit.oak.devel/8196



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
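[Editorial aside] The threshold idea above can be sketched as a buffer that keeps data on the heap until a configurable byte limit is crossed and spills further writes to a temporary file. This is an illustrative sketch under stated assumptions, not Oak's design; the class name is invented.

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SpillingBuffer implements Closeable {
    private final long threshold;
    private final ByteArrayOutputStream memory = new ByteArrayOutputStream();
    private OutputStream spill;   // created lazily once the threshold is crossed
    private Path spillFile;
    private long written;

    public SpillingBuffer(long thresholdBytes) {
        this.threshold = thresholdBytes;
    }

    // Buffer in memory below the threshold, on disk above it.
    public void write(byte[] data) {
        try {
            if (spill == null && written + data.length > threshold) {
                spillFile = Files.createTempFile("commit-spill", ".bin");
                spill = new BufferedOutputStream(Files.newOutputStream(spillFile));
            }
            if (spill != null) {
                spill.write(data);
            } else {
                memory.write(data);
            }
            written += data.length;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public boolean spilled() {
        return spill != null;
    }

    @Override
    public void close() {
        try {
            if (spill != null) {
                spill.close();
                Files.deleteIfExists(spillFile);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```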


[jira] [Updated] (OAK-2622) dynamic cache allocation

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2622:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> dynamic cache allocation
> 
>
> Key: OAK-2622
> URL: https://issues.apache.org/jira/browse/OAK-2622
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Affects Versions: 1.0.12
>Reporter: Stefan Egli
>  Labels: resilience
> Fix For: 1.3.3
>
>
> At the moment mongoMk's various caches are configurable (OAK-2546) but 
> otherwise static in size. Different use cases may require different 
> allocations of the sub-caches, and it might not always be possible to find a 
> good configuration upfront for all use cases.
> We might be able to dynamically allocate the overall cache size to the 
> different sub-caches, based for example on how heavily loaded or how well 
> performing each cache is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2539) SQL2 query not working with filter (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2539:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> SQL2 query not working with filter (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> 
>
> Key: OAK-2539
> URL: https://issues.apache.org/jira/browse/OAK-2539
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Reporter: Calvin Wong
>Assignee: Davide Giannella
> Fix For: 1.3.3
>
>
> Create node /content/usergenerated/qtest with jcr:primaryType nt:unstructured.
> Add 2 String properties: stringa = "a", stringb = "b".
> Use query tool in CRX/DE to do SQL2 search:
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (CONTAINS(s.[stringb], 'b'))
> This search will not find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'x' OR 
> CONTAINS(s.[stringb], 'b'))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2797) Closeable aspect of Analyzer should be accounted for

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2797:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Closeable aspect of Analyzer should be accounted for
> 
>
> Key: OAK-2797
> URL: https://issues.apache.org/jira/browse/OAK-2797
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Chetan Mehrotra
>  Labels: technical_debt
> Fix For: 1.3.3
>
>
> Lucene's {{Analyzer}} implements the {{Closeable}} interface [1] and 
> internally keeps ThreadLocal storage of some persistent resources.
> So far in oak-lucene we do not take care of closing any analyzer; in fact we 
> use a singleton Analyzer in all cases. Opening this bug to think about this 
> aspect and to check whether our usage of Analyzer follows best practices.
> [1] 
> http://lucene.apache.org/core/4_7_0/core/org/apache/lucene/analysis/Analyzer.html#close%28%29
> /cc [~teofili] [~alex.parvulescu]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2805) oak-run: register JMX beans

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2805:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> oak-run: register JMX beans 
> 
>
> Key: OAK-2805
> URL: https://issues.apache.org/jira/browse/OAK-2805
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: run
>Reporter: Robert Munteanu
>Priority: Minor
>  Labels: tooling
> Fix For: 1.3.3
>
> Attachments: OAK-2805-1.patch
>
>
> When starting up Oak with oak-run, the JMX beans are not registered; it 
> would be convenient for the registration to happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2037) Define standards for plan output

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2037:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Define standards for plan output
> 
>
> Key: OAK-2037
> URL: https://issues.apache.org/jira/browse/OAK-2037
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Justin Edelson
>Assignee: Thomas Mueller
>Priority: Minor
>  Labels: tooling
> Fix For: 1.3.3
>
>
> Currently the syntax of the plan output is chaotic, as it varies 
> significantly from index to index. While some of this is expected (each 
> index type has different data to output), Oak should provide some standards 
> for how a plan appears.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2665) RDB: Tool/scripts for repair, recovery and sub-tree deletion

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2665:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> RDB: Tool/scripts for repair, recovery and sub-tree deletion
> 
>
> Key: OAK-2665
> URL: https://issues.apache.org/jira/browse/OAK-2665
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: rdbmk
>Reporter: Amit Jain
>  Labels: production, tools
> Fix For: 1.3.3
>
>
> Scripts and/or support should be added to the oak-run console for the repair 
> and recovery options already supported for Mongo.
> Also needed are the options supported in the oak-mongo.js file, especially 
> sub-tree deletion, which has proven useful for deleting corrupted indexes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2761) Persistent cache: add data in a different thread

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2761:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
> Fix For: 1.3.3
>
>
> The persistent cache usually stores data in a background thread, but 
> sometimes (if a lot of data is added quickly) the foreground thread is 
> blocked.
> Even worse, switching the cache file can happen in a foreground thread, with 
> the following stack trace.
> {noformat}
> "127.0.0.1 [1428931262206] POST /bin/replicate.json HTTP/1.1" prio=5 
> tid=0x7fe5df819800 nid=0x9907 runnable [0x000113fc4000]
>java.lang.Thread.State: RUNNABLE
> ...
>   at org.h2.mvstore.MVStoreTool.compact(MVStoreTool.java:404)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.closeStore(PersistentCache.java:213)
>   - locked <0x000782483050> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.switchGenerationIfNeeded(PersistentCache.java:350)
>   - locked <0x000782455710> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.write(NodeCache.java:85)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.put(NodeCache.java:130)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.applyChanges(DocumentNodeStore.java:1060)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Commit.applyToCache(Commit.java:599)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.afterTrunkCommit(CommitQueue.java:127)
>   - locked <0x000781890788> (a 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.done(CommitQueue.java:83)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.done(DocumentNodeStore.java:637)
> {noformat}
> To avoid blocking the foreground thread, one solution is to store all data 
> in a separate thread. If too much data is added, some of it is not stored: 
> preferably data that was rarely referenced, and/or old revisions of 
> documents (if newer revisions are available).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
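[Editorial aside] The "store in a different thread" idea can be sketched as a bounded queue drained by a single background thread; when the queue is full, the entry is dropped rather than blocking the foreground thread. Class and method names are illustrative, not Oak's persistent cache API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AsyncCacheWriter {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
    private final Thread writer;
    private volatile boolean running = true;

    public AsyncCacheWriter() {
        writer = new Thread(() -> {
            // Drain entries until shutdown is requested and the queue is empty.
            while (running || !queue.isEmpty()) {
                String entry = queue.poll();
                if (entry != null) {
                    persist(entry);
                } else {
                    try {
                        Thread.sleep(1);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        });
        writer.start();
    }

    // Never blocks the caller: when the queue is full the entry is dropped,
    // trading cache completeness for foreground latency.
    public boolean offer(String entry) {
        return queue.offer(entry);
    }

    // Placeholder for the actual write to the cache store.
    private void persist(String entry) {
    }

    public void close() {
        running = false;
        try {
            writer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```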


[jira] [Updated] (OAK-2683) the "hitting the observation queue limit" problem

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2683:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> the "hitting the observation queue limit" problem
> -
>
> Key: OAK-2683
> URL: https://issues.apache.org/jira/browse/OAK-2683
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk, segmentmk
>Reporter: Stefan Egli
>  Labels: observation, resilience
> Fix For: 1.3.3
>
>
> There are several tickets in this area:
> * OAK-2587: threading with observation being too eager, causing the 
> observation queue to grow
> * OAK-2669: avoiding diffing from mongo by using the persistent cache instead
> * OAK-2349: possibly a duplicate of, or at least similar to, OAK-2669
> * OAK-2562: diffcache is inefficient
> Yet I think it makes sense to create this summarizing ticket, describing 
> again what happens when the observation queue hits the limit, and eventually 
> how this can be improved.
> Consider the following scenario (also compare with OAK-2587, which focused 
> more on the eagerness of threading):
> * the rate of incoming commits is large and starts to generate many changes 
> in the observation queues, hence those queues become somewhat filled/loaded
> * depending on the underlying nodestore used, the calculation of diffs is 
> more or less expensive; at least for mongomk it is important that the diff 
> can be served from the cache
> ** in the case of mongomk it can happen that diffs are no longer found in 
> the cache and thus require a round-trip to mongo, which is orders of 
> magnitude slower than the cache. The queue then grows even faster, as 
> dequeuing becomes slower.
> ** not sure about tarmk - I believe it should always be fast there
> * so based on the above, there can be a situation where the queue grows and 
> hits the configured limit
> * if this limit is reached, the current mechanism collapses any subsequent 
> change into one big change marked as external; let's call this a 
> collapsed-change
> * this collapsed-change becomes part of the normal queue and eventually 
> 'walks down the queue' to be processed normally, opening a high chance that 
> yet another collapsed-change is created should the queue hit the limit 
> again. This game can be played for a while, until the queue contains mostly 
> such collapsed-changes.
> * an additional assumption is that diffing such collapsed-changes is more 
> expensive than normal diffing; moreover it is almost guaranteed that the 
> diff cannot be shared between observation listeners, since the exact 
> 'collapse borders' depend on the timing of each listener's queue, i.e. the 
> collapse diffs are unique and thus not cacheable
> * as a result, once you have those collapse-diffs you can hardly get rid of 
> them: they are heavy to process, hence dequeuing is very slow
> * at the same time, there are almost always commits happening in a typical 
> system; e.g. with Sling on top you have Sling discovery, which does 
> heartbeats every now and then, so there are always new commits adding to the 
> load
> * this creates a situation where quite a small additional commit rate can 
> keep all the queues filled, because the queues are full of 'heavy collapse 
> diffs' that have to be calculated for each and every listener (of which you 
> could have e.g. 150-200) individually
> So again, possible solutions for this:
> * OAK-2669: tune diffing via the persistent cache
> * OAK-2587: use more threads to remain longer 'in the cache zone'
> * tune your input speed explicitly to avoid filling the observation queues 
> (specific to your use case, but can be seen as explicit throttling on the 
> input side)
> * increase the relevant caches to the max
> * but I think we will come up with a broader improvement to this observation 
> queue limit problem, by either
> ** doing flow control, e.g. via the commit rate limiter (also see OAK-1659)
> ** moving the handling of observation changes out to a messaging subsystem, 
> be it to handle local events only (since handling external events makes the 
> system problematic wrt scalability if not done right) - also see the 
> [corresponding suggestion on the dev 
> list|http://markmail.org/message/b5trr6csyn4zzuj7]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2879) Compaction should check for required disk space before running

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2879:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Compaction should check for required disk space before running
> --
>
> Key: OAK-2879
> URL: https://issues.apache.org/jira/browse/OAK-2879
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: compaction, doc-impacting, gc, resilience
> Fix For: 1.3.3
>
>
> In the worst case compaction doubles the repository size while running. As 
> this is somewhat unexpected we should check whether there is enough free disk 
> space before running compaction and log a warning otherwise. This is to avoid 
> a common source of running out of disk space and ending up with a corrupted 
> repository. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2857) Run background read and write operation concurrently

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2857:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Run background read and write operation concurrently
> 
>
> Key: OAK-2857
> URL: https://issues.apache.org/jira/browse/OAK-2857
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: technical_debt
> Fix For: 1.3.3
>
>
> OAK-2624 decoupled the background read from the background write but the 
> methods implementing the operations are synchronized. This means they cannot 
> run at the same time and e.g. an expensive background write may unnecessarily 
> block a background read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
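[Editorial aside] The decoupling asked for above amounts to replacing a single intrinsic lock (two synchronized methods on the same object) with one lock per operation, so a slow background write no longer blocks a background read. A minimal sketch with hypothetical names, not Oak's actual code:

```java
public class BackgroundOperations {
    // Separate monitors: previously both methods would have been declared
    // synchronized, serializing them on the same intrinsic lock.
    private final Object readLock = new Object();
    private final Object writeLock = new Object();

    int reads;
    int writes;

    public void backgroundRead() {
        synchronized (readLock) {
            reads++; // pull external changes here
        }
    }

    public void backgroundWrite() {
        synchronized (writeLock) {
            writes++; // push pending changes here
        }
    }
}
```

Each method still serializes with itself (two concurrent background writes cannot interleave), but reads and writes may now proceed concurrently.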


[jira] [Updated] (OAK-2739) take appropriate action when lease cannot be renewed (in time)

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2739:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> take appropriate action when lease cannot be renewed (in time)
> --
>
> Key: OAK-2739
> URL: https://issues.apache.org/jira/browse/OAK-2739
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: mongomk
>Affects Versions: 1.2
>Reporter: Stefan Egli
>  Labels: resilience
> Fix For: 1.3.3
>
>
> Currently, in an oak-cluster, when (e.g.) one oak-client stops renewing its 
> lease (ClusterNodeInfo.renewLease()), this will eventually be noticed by the 
> others in the same oak-cluster. They then mark this client as {{inactive}}, 
> start recovering, and subsequently exclude that node from any further merge 
> etc. operations.
> Now, whatever the reason that client stopped renewing the lease (an 
> exception, a deadlock, whatever), the client itself still considers itself 
> {{active}} and continues to take part in cluster activity.
> This results in an unbalanced situation where that one client 'sees' 
> everybody as {{active}} while the others see it as {{inactive}}.
> If this ClusterNodeInfo state is to be something that can be built upon, and 
> to avoid any inconsistency due to unbalanced handling, the inactive node 
> should probably retire gracefully, or some other appropriate action should 
> be taken, rather than just continuing as today.
> This ticket is to keep track of ideas and actions taken in this regard.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2871) Registering NodeTypes should fail if there are broken NodeDefinitionTemplates

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2871:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Registering NodeTypes should fail if there are broken NodeDefinitionTemplates
> -
>
> Key: OAK-2871
> URL: https://issues.apache.org/jira/browse/OAK-2871
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.0.13, 1.2.2
>Reporter: Manfred Baedke
>Priority: Minor
> Fix For: 1.3.3
>
>
> Using ReadWriteNodeTypeManager, it is possible to register NodeTypes that 
> contain ChildNodeDefinitions without a name or without a required primary 
> type. This may cause problems later that are difficult to detect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2744) Change default cache distribution ratio if persistent cache is enabled

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2744:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Change default cache distribution ratio if persistent cache is enabled
> --
>
> Key: OAK-2744
> URL: https://issues.apache.org/jira/browse/OAK-2744
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: performance
> Fix For: 1.3.3
>
>
> By default the cache memory in DocumentNodeStore is distributed in the 
> following ratio:
> * nodeCache - 25%
> * childrenCache - 10%
> * docChildrenCache - 3%
> * diffCache - 5%
> * documentCache - is given the rest, i.e. 57%
> However, lately we have found that with the persistent cache enabled we can 
> lower the share allocated to the document cache. That would reduce the time 
> spent invalidating cache entries during periodic reads. So far we have been 
> using the following ratio in a few setups and it is turning out well:
> * nodeCachePercentage=35
> * childrenCachePercentage=20
> * diffCachePercentage=30
> * docChildrenCachePercentage=10
> * documentCache - is given the rest, i.e. 5%
> We should use the above distribution by default if the persistent cache is 
> found to be enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
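[Editorial aside] The split described in the message above is simple percentage arithmetic, with the document cache taking the remainder so nothing is lost to integer rounding. Class and method names are illustrative, not Oak's configuration API.

```java
public class CacheDistribution {
    // Returns {node, children, diff, docChildren, document} sizes in bytes,
    // using the percentages proposed for the persistent-cache case.
    public static long[] distribute(long totalBytes) {
        long node = totalBytes * 35 / 100;
        long children = totalBytes * 20 / 100;
        long diff = totalBytes * 30 / 100;
        long docChildren = totalBytes * 10 / 100;
        // document cache gets whatever is left, roughly 5%
        long document = totalBytes - node - children - diff - docChildren;
        return new long[] { node, children, diff, docChildren, document };
    }
}
```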


[jira] [Updated] (OAK-1340) Backup and restore for the SQL DocumentStore

2015-07-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-1340:
--
Fix Version/s: (was: 1.3.2)
   1.3.3

Bulk move to 1.3.3.

> Backup and restore for the SQL DocumentStore
> 
>
> Key: OAK-1340
> URL: https://issues.apache.org/jira/browse/OAK-1340
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core, rdbmk
>Reporter: Alex Parvulescu
>  Labels: production, tools
> Fix For: 1.3.3
>
>
> Similar to OAK-1159, but specific to the SQL DocumentStore implementation.
> The backup could leverage the existing backup bits and back up to the file 
> system (sql-to-tarmk backup), but the restore functionality is missing 
> (tar-to-sql).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

