[jira] [Comment Edited] (OAK-2557) VersionGC uses way too much memory if there is a large pile of garbage

2015-03-09 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352867#comment-14352867
 ] 

Chetan Mehrotra edited comment on OAK-2557 at 3/9/15 12:07 PM:
---

After an offline discussion with [~mreutegg] we came to the following conclusions:

# Deletion logic has to work like creation logic and perform deletion 
bottom-up. Creation currently works top-down, i.e. it ensures that the 
parent gets created first and the children thereafter, so deletion logic 
has to complement that
# The current logic also has a potential issue: if the system performing GC 
crashes in between, it might leave a state where a parent got removed 
before its child, and such a child document can then never be GCed. So 
as a fix we should first sort the batch via {{PathComparator}} in reverse and 
then perform deletion from child to parent. {color:brown}Open a new issue for 
that{color}
# For large deletions (like the current case) we make use of {{ExternalSort}}, 
where sorting is performed on disk and paths are read back from the file. This 
would reuse the support developed for Blob GC in 
{{MarkSweepGarbageCollector}}

All in all this would not be the simple fix that I initially thought :(
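The child-before-parent ordering from point 2 can be sketched as below. This is a minimal illustration, not Oak's actual {{PathComparator}}: the comparator here simply orders paths by depth, so reversing it yields a deepest-first order in which every child is deleted before its parent.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Minimal sketch of child-before-parent deletion ordering (point 2).
// The comparator orders paths by depth, shallowest first; reversing it
// yields deepest-first, so every child is deleted before its parent.
public class ChildFirstDeletion {

    // "/" has depth 0, "/x" depth 1, "/x/child0" depth 2
    public static int depth(String path) {
        return path.equals("/") ? 0
                : path.length() - path.replace("/", "").length();
    }

    public static final Comparator<String> BY_DEPTH =
            Comparator.comparingInt(ChildFirstDeletion::depth)
                      .thenComparing(Comparator.naturalOrder());

    // Sort garbage paths so deletion proceeds child -> parent.
    public static List<String> deletionOrder(List<String> garbagePaths) {
        List<String> sorted = new ArrayList<>(garbagePaths);
        sorted.sort(BY_DEPTH.reversed());
        return sorted;
    }
}
```

With this ordering a crash mid-batch can only leave orphaned parents, which a later GC run can still collect, rather than unreachable children.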


was (Author: chetanm):
After an offline discussion with [~mreutegg] we came to the following conclusions:

# Deletion logic has to work like creation logic and perform deletion 
bottom-up
# The current logic also has a potential issue: if the system performing GC 
crashes in between, it might leave a state where a parent got removed 
before its child, and such a child document can then never be GCed. So 
as a fix we should first sort the batch via {{PathComparator}} in reverse and 
then perform deletion from child to parent. {color:brown}Open a new issue for 
that{color}
# For large deletions (like the current case) we make use of {{ExternalSort}}, 
where sorting is performed on disk and paths are read back from the file. This 
would reuse the support developed for Blob GC in 
{{MarkSweepGarbageCollector}}

All in all this would not be the simple fix that I initially thought :(

 VersionGC uses way too much memory if there is a large pile of garbage
 --

 Key: OAK-2557
 URL: https://issues.apache.org/jira/browse/OAK-2557
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.11
Reporter: Stefan Egli
Priority: Blocker
 Fix For: 1.2, 1.0.13

 Attachments: OAK-2557.patch


 It has been noticed that on a system where the revision GC 
 (VersionGarbageCollector of mongomk) did not run for a few days (so as not to 
 interfere with some tests/large bulk operations), such a large pile of garbage 
 accumulated that the following code
 {code}
 VersionGarbageCollector.collectDeletedDocuments
 {code}
 creates, in its for loop, such a large list of NodeDocuments to delete 
 (docIdsToDelete) that it uses up too much memory, causing the JVM's GC to 
 constantly spin in Full GCs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2582) RDB: improve memory cache handling

2015-03-09 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2582:
--
Fix Version/s: (was: 1.1.7)

 RDB: improve memory cache handling
 --

 Key: OAK-2582
 URL: https://issues.apache.org/jira/browse/OAK-2582
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
Affects Versions: 1.0.11, 1.1.6
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.1.8


 Improve memory cache handling:
 - invalidateCache should just mark cache entries as to be revalidated
 - to-be revalidated cache entries can be used for conditional retrieval from 
 DB
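A minimal sketch of that scheme, with hypothetical types (the real RDBDocumentStore classes differ): an invalidated entry is only flagged, and a flagged entry is reused for a conditional, modcount-based read so the DB can answer "unchanged" instead of shipping the full document.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of "invalidate = mark for revalidation" cache handling.
// All types here are hypothetical stand-ins for RDBDocumentStore internals.
public class RevalidatingCache {

    public static class CacheEntry {
        public final String data;
        public final long modCount;
        public volatile boolean needsRevalidation;
        public CacheEntry(String data, long modCount) {
            this.data = data;
            this.modCount = modCount;
        }
    }

    public interface Backend {
        // Conditional read: returns null when the stored modcount still
        // matches knownModCount (i.e. the cached entry is still fresh).
        CacheEntry readIfModified(String id, long knownModCount);
    }

    private final Map<String, CacheEntry> cache = new ConcurrentHashMap<>();
    private final Backend backend;

    public RevalidatingCache(Backend backend) { this.backend = backend; }

    public void put(String id, CacheEntry e) { cache.put(id, e); }

    // invalidateCache: only flag the entry, keep it for conditional retrieval
    public void invalidate(String id) {
        CacheEntry e = cache.get(id);
        if (e != null) e.needsRevalidation = true;
    }

    public CacheEntry get(String id) {
        CacheEntry e = cache.get(id);
        if (e != null && e.needsRevalidation) {
            CacheEntry fresh = backend.readIfModified(id, e.modCount);
            if (fresh == null) {
                e.needsRevalidation = false;   // DB unchanged: reuse cached copy
            } else {
                cache.put(id, fresh);          // DB changed: replace entry
                e = fresh;
            }
        }
        return e;
    }
}
```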





[jira] [Commented] (OAK-2588) MultiDocumentStoreTest.testInvalidateCache failing for Mongo

2015-03-09 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352812#comment-14352812
 ] 

Julian Reschke commented on OAK-2588:
-

The test case used a document ID that didn't conform to the n:/... pattern; 
that apparently confused the hierarchical cache invalidation. I have now fixed 
this, but we really should clarify unstated assumptions in the DocumentStore 
like these.
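For illustration, the "n:/..." convention is the DocumentStore ID scheme in which a node document's ID is its path depth, a colon, and the path (e.g. 2:/x/child0); hierarchical cache invalidation relies on that shape, so an ID outside the pattern falls through it. The helpers below are hypothetical, not Oak API:

```java
// Sketch of the "n:/..." DocumentStore ID convention: a node document's
// ID is "<depth>:<path>", e.g. "1:/x" or "2:/x/child0". An ID that does
// not follow this pattern (as in the failing test) is invisible to
// path-based, hierarchical cache invalidation. Helpers are illustrative.
public class DocumentIds {

    public static String idFromPath(String path) {
        int depth = path.equals("/") ? 0
                : path.length() - path.replace("/", "").length();
        return depth + ":" + path;
    }

    public static boolean isPathId(String id) {
        return id.matches("\\d+:/.*");
    }
}
```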

 MultiDocumentStoreTest.testInvalidateCache failing for Mongo
 

 Key: OAK-2588
 URL: https://issues.apache.org/jira/browse/OAK-2588
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Julian Reschke
Priority: Minor
 Fix For: 1.1.8


 {{MultiDocumentStoreTest.testInvalidateCache}} failing for Mongo
 {noformat}
 Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.255 sec  
 FAILURE!
 testInvalidateCache[0](org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest)
   Time elapsed: 0.343 sec   FAILURE!
 java.lang.AssertionError: modcount should have incremented again expected:<3> 
 but was:<2>
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.failNotEquals(Assert.java:647)
 at org.junit.Assert.assertEquals(Assert.java:128)
 at org.junit.Assert.assertEquals(Assert.java:472)
 at 
 org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest.testInvalidateCache(MultiDocumentStoreTest.java:184)
 {noformat}





[jira] [Comment Edited] (OAK-2588) MultiDocumentStoreTest.testInvalidateCache failing for Mongo

2015-03-09 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352812#comment-14352812
 ] 

Julian Reschke edited comment on OAK-2588 at 3/9/15 11:07 AM:
--

The test case used a document ID that didn't conform to the n:/... pattern; 
that apparently confused the hierarchical cache invalidation. I have now fixed 
this, but we really should clarify unstated assumptions in the DocumentStore 
API like these.


was (Author: reschke):
The test case used a document ID that didn't conform to the n:/... pattern; 
that apparently confused the hierarchical cache invalidation. I have now fixed 
this, but we really should clarify unstated assumptions in the DocumentStore 
like these.

 MultiDocumentStoreTest.testInvalidateCache failing for Mongo
 

 Key: OAK-2588
 URL: https://issues.apache.org/jira/browse/OAK-2588
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Julian Reschke
Priority: Minor
 Fix For: 1.1.8


 {{MultiDocumentStoreTest.testInvalidateCache}} failing for Mongo
 {noformat}
 Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.255 sec  
 FAILURE!
 testInvalidateCache[0](org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest)
   Time elapsed: 0.343 sec   FAILURE!
 java.lang.AssertionError: modcount should have incremented again expected:<3> 
 but was:<2>
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.failNotEquals(Assert.java:647)
 at org.junit.Assert.assertEquals(Assert.java:128)
 at org.junit.Assert.assertEquals(Assert.java:472)
 at 
 org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest.testInvalidateCache(MultiDocumentStoreTest.java:184)
 {noformat}





[jira] [Commented] (OAK-2557) VersionGC uses way too much memory if there is a large pile of garbage

2015-03-09 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352819#comment-14352819
 ] 

Marcel Reutegger commented on OAK-2557:
---

The revision GC needs to remove documents starting at the leaves; otherwise the 
store ends up in an inconsistent state, as you noticed. I assume the revision GC 
considers /x/child501 as not yet committed because the commit flag is missing, 
hence it cannot be removed.

 VersionGC uses way too much memory if there is a large pile of garbage
 --

 Key: OAK-2557
 URL: https://issues.apache.org/jira/browse/OAK-2557
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.11
Reporter: Stefan Egli
Priority: Blocker
 Fix For: 1.2, 1.0.13

 Attachments: OAK-2557.patch


 It has been noticed that on a system where the revision GC 
 (VersionGarbageCollector of mongomk) did not run for a few days (so as not to 
 interfere with some tests/large bulk operations), such a large pile of garbage 
 accumulated that the following code
 {code}
 VersionGarbageCollector.collectDeletedDocuments
 {code}
 creates, in its for loop, such a large list of NodeDocuments to delete 
 (docIdsToDelete) that it uses up too much memory, causing the JVM's GC to 
 constantly spin in Full GCs.





[jira] [Updated] (OAK-2594) Backport spellcheck support to 1.0

2015-03-09 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated OAK-2594:
-
Description: 
As spellcheck was available in JR2, 1.0 not having that support results in a 
regression.
OAK-2175 should be backported to the 1.0 branch, which may imply backporting other 
issues from trunk to the branch.

  was:As spellcheck was available in JR2, 1.0 not having that support results 
in a regression.


 Backport spellcheck support to 1.0
 --

 Key: OAK-2594
 URL: https://issues.apache.org/jira/browse/OAK-2594
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-core, oak-lucene, oak-solr
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.0.13


 As spellcheck was available in JR2, 1.0 not having that support results in a 
 regression.
 OAK-2175 should be backported to the 1.0 branch, which may imply backporting other 
 issues from trunk to the branch.





[jira] [Updated] (OAK-1356) Expose the preferred transient space size as repository descriptor

2015-03-09 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-1356:
-
Fix Version/s: (was: 1.2)
   1.4

 Expose the preferred transient space size as repository descriptor 
 ---

 Key: OAK-1356
 URL: https://issues.apache.org/jira/browse/OAK-1356
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, jcr
Reporter: Tobias Bocanegra
Assignee: Chetan Mehrotra
  Labels: api

 The problem is that the different stores have different transient space 
 characteristics. For example, MongoMK is very slow when handling large 
 saves.
 I suggest exposing a repository descriptor that can be used to estimate the 
 preferred transient space, for example when importing content.
 So either a boolean like:
   {{option.infinite.transientspace}}
 or a number like:
   {{option.transientspace.preferred.size}}
 The latter would denote the average number of modified node states that should 
 be put in the transient space before persistence starts to degrade.





[jira] [Updated] (OAK-1356) Expose the preferred transient space size as repository descriptor

2015-03-09 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-1356:
-
Fix Version/s: (was: 1.4)

 Expose the preferred transient space size as repository descriptor 
 ---

 Key: OAK-1356
 URL: https://issues.apache.org/jira/browse/OAK-1356
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, jcr
Reporter: Tobias Bocanegra
Assignee: Chetan Mehrotra
  Labels: api

 The problem is that the different stores have different transient space 
 characteristics. For example, MongoMK is very slow when handling large 
 saves.
 I suggest exposing a repository descriptor that can be used to estimate the 
 preferred transient space, for example when importing content.
 So either a boolean like:
   {{option.infinite.transientspace}}
 or a number like:
   {{option.transientspace.preferred.size}}
 The latter would denote the average number of modified node states that should 
 be put in the transient space before persistence starts to degrade.





[jira] [Commented] (OAK-2594) Backport spellcheck support to 1.0

2015-03-09 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352878#comment-14352878
 ] 

Tommaso Teofili commented on OAK-2594:
--

OAK-2175 backported in r1665206

 Backport spellcheck support to 1.0
 --

 Key: OAK-2594
 URL: https://issues.apache.org/jira/browse/OAK-2594
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-core, oak-lucene, oak-solr
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.0.13


 As spellcheck was available in JR2, 1.0 not having that support results in a 
 regression.
 OAK-2175 should be backported to the 1.0 branch, which may imply backporting other 
 issues from trunk to the branch.





[jira] [Commented] (OAK-2594) Backport spellcheck support to 1.0

2015-03-09 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352839#comment-14352839
 ] 

Tommaso Teofili commented on OAK-2594:
--

backported rep:similar support in r1665191 (see OAK-1658) and whitelisted 
properties in r166518[2,3,5] (see OAK-2181).

 Backport spellcheck support to 1.0
 --

 Key: OAK-2594
 URL: https://issues.apache.org/jira/browse/OAK-2594
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-core, oak-lucene, oak-solr
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.0.13


 As spellcheck was available in JR2, 1.0 not having that support results in a 
 regression.
 OAK-2175 should be backported to the 1.0 branch, which may imply backporting other 
 issues from trunk to the branch.





[jira] [Created] (OAK-2594) Backport spellcheck support to 1.0

2015-03-09 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created OAK-2594:


 Summary: Backport spellcheck support to 1.0
 Key: OAK-2594
 URL: https://issues.apache.org/jira/browse/OAK-2594
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-core, oak-lucene, oak-solr
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.0.13


As spellcheck was available in JR2, 1.0 not having that support results in a 
regression.





[jira] [Commented] (OAK-2557) VersionGC uses way too much memory if there is a large pile of garbage

2015-03-09 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352867#comment-14352867
 ] 

Chetan Mehrotra commented on OAK-2557:
--

After an offline discussion with [~mreutegg] we came to the following conclusions:

# Deletion logic has to work like creation logic and perform deletion 
bottom-up
# The current logic also has a potential issue: if the system performing GC 
crashes in between, it might leave a state where a parent got removed 
before its child, and such a child document can then never be GCed. So 
as a fix we should first sort the batch via {{PathComparator}} in reverse and 
then perform deletion from child to parent. {color:brown}Open a new issue for 
that{color}
# For large deletions (like the current case) we make use of {{ExternalSort}}, 
where sorting is performed on disk and paths are read back from the file. This 
would reuse the support developed for Blob GC in 
{{MarkSweepGarbageCollector}}

All in all this would not be the simple fix that I initially thought :(

 VersionGC uses way too much memory if there is a large pile of garbage
 --

 Key: OAK-2557
 URL: https://issues.apache.org/jira/browse/OAK-2557
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.11
Reporter: Stefan Egli
Priority: Blocker
 Fix For: 1.2, 1.0.13

 Attachments: OAK-2557.patch


 It has been noticed that on a system where the revision GC 
 (VersionGarbageCollector of mongomk) did not run for a few days (so as not to 
 interfere with some tests/large bulk operations), such a large pile of garbage 
 accumulated that the following code
 {code}
 VersionGarbageCollector.collectDeletedDocuments
 {code}
 creates, in its for loop, such a large list of NodeDocuments to delete 
 (docIdsToDelete) that it uses up too much memory, causing the JVM's GC to 
 constantly spin in Full GCs.





[jira] [Reopened] (OAK-2499) Expose mongo and db versions for reporting purposes

2015-03-09 Thread Marius Petria (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marius Petria reopened OAK-2499:


I found a small issue with the current implementation. The description is set 
on the SegmentNodeStoreService component but should be set in the 
registerNodeStore method on the NodeStore service. 


[1] 
http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/segment/SegmentNodeStoreService.java?view=markup&pathrev=1663854

 Expose mongo and db versions for reporting purposes
 ---

 Key: OAK-2499
 URL: https://issues.apache.org/jira/browse/OAK-2499
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk, rdbmk, segmentmk
Reporter: Marius Petria
Assignee: Chetan Mehrotra
 Fix For: 1.1.8

 Attachments: OAK-2499.1.diff, OAK-2499.2.diff


 For reporting purposes I need to find out from Java the MK type, and also 
 the Mongo version and info about the DB make/version used.
 [~chetanm] suggested that such information can be exposed: “when we register a 
 {{NodeStore}} instance within OSGi we can also add a service property 
 which captures such implementation details. Possibly use 
 service.description or define a new one which provides a comma-separated 
 list of attributes.”





[jira] [Commented] (OAK-2594) Backport spellcheck support to 1.0

2015-03-09 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352974#comment-14352974
 ] 

Tommaso Teofili commented on OAK-2594:
--

backported the fix for spellchecking of OAK-2483 in r1665233

 Backport spellcheck support to 1.0
 --

 Key: OAK-2594
 URL: https://issues.apache.org/jira/browse/OAK-2594
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-core, oak-lucene, oak-solr
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.0.13


 As spellcheck was available in JR2, 1.0 not having that support results in a 
 regression.
 OAK-2175 should be backported to the 1.0 branch, which may imply backporting other 
 issues from trunk to the branch.





[jira] [Comment Edited] (OAK-2557) VersionGC uses way too much memory if there is a large pile of garbage

2015-03-09 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352768#comment-14352768
 ] 

Chetan Mehrotra edited comment on OAK-2557 at 3/9/15 11:06 AM:
---

Attached a [patch|^OAK-2557.patch] which flushes the deletes in batches of 450 
(kept it less than the MongoDocumentStore batch size of 500). 

However, I saw a strange issue where the test fails. The test case creates nodes like 
{noformat}
/x
/x/child0
/x/child1
...
/x/child950
{noformat}

When the initial batch is deleted it also picks up {{/x}}. Due to this, when the 
deletion logic later tries to delete, say, /x/child501, it is not able to locate 
the commit root for the delete and assumes that the node is visible.

[~mreutegg] Can you have a look? It looks like a potential bug in the 
DocumentStore logic, or we need to ensure that we delete nodes going from child 
to parent. Further, this problem would only arise if we try to delete ids in 
batches; it would not occur if we pre-emptively determine all the ids to be removed.
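The batching in the patch can be sketched roughly as below; the {{Consumer}} stands in for a bulk remove call on the document store, and all names here are illustrative rather than the actual patch code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the batched flushing described in the attached patch: ids
// are accumulated and flushed in batches of 450, staying below the
// MongoDocumentStore batch size of 500. The Consumer stands in for a
// bulk remove call; all names here are illustrative.
public class BatchedDeleter {

    public static final int BATCH_SIZE = 450;

    private final List<String> batch = new ArrayList<>();
    private final Consumer<List<String>> remover;
    private int deleted;

    public BatchedDeleter(Consumer<List<String>> remover) {
        this.remover = remover;
    }

    public void add(String id) {
        batch.add(id);
        if (batch.size() >= BATCH_SIZE) {
            flush();
        }
    }

    // Flush any pending ids; must also be called once at the end.
    public void flush() {
        if (!batch.isEmpty()) {
            remover.accept(new ArrayList<>(batch));
            deleted += batch.size();
            batch.clear();
        }
    }

    public int deletedCount() { return deleted; }
}
```

Because each flush happens before the full garbage set is known, the ordering problem above can strike: an early batch may contain a parent whose children are only deleted in a later batch.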




was (Author: chetanm):
Attached a [patch|^OAK-2557.patch] which flushes the deletes in batches of 450 
(kept it less than the MongoDocumentStore batch size of 500). 

However, I saw a strange issue where the test fails. The test case creates nodes like 
{noformat}
/x
/x/child0
/x/child1
...
/x/child950
{noformat}

When the initial batch is deleted it also picks up {{/x}}. Due to this, when the 
deletion logic later tries to delete, say, /x/child501, it is not able to locate 
the commit root for the delete and assumes that the node is visible.

[~mreutegg] Can you have a look? It looks like a potential bug in the 
DocumentStore logic, or we need to ensure that we delete nodes going from child 
to parent.


 VersionGC uses way too much memory if there is a large pile of garbage
 --

 Key: OAK-2557
 URL: https://issues.apache.org/jira/browse/OAK-2557
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.11
Reporter: Stefan Egli
Priority: Blocker
 Fix For: 1.2, 1.0.13

 Attachments: OAK-2557.patch


 It has been noticed that on a system where the revision GC 
 (VersionGarbageCollector of mongomk) did not run for a few days (so as not to 
 interfere with some tests/large bulk operations), such a large pile of garbage 
 accumulated that the following code
 {code}
 VersionGarbageCollector.collectDeletedDocuments
 {code}
 creates, in its for loop, such a large list of NodeDocuments to delete 
 (docIdsToDelete) that it uses up too much memory, causing the JVM's GC to 
 constantly spin in Full GCs.





[jira] [Commented] (OAK-2594) Backport spellcheck support to 1.0

2015-03-09 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352892#comment-14352892
 ] 

Tommaso Teofili commented on OAK-2594:
--

backported the fix for spellchecking of OAK-2548 in r1665211 

 Backport spellcheck support to 1.0
 --

 Key: OAK-2594
 URL: https://issues.apache.org/jira/browse/OAK-2594
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-core, oak-lucene, oak-solr
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.0.13


 As spellcheck was available in JR2, 1.0 not having that support results in a 
 regression.
 OAK-2175 should be backported to the 1.0 branch, which may imply backporting other 
 issues from trunk to the branch.





[jira] [Updated] (OAK-2593) Release Oak 1.1.8

2015-03-09 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2593:
--
Priority: Minor  (was: Major)

 Release Oak 1.1.8
 -

 Key: OAK-2593
 URL: https://issues.apache.org/jira/browse/OAK-2593
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Davide Giannella
Assignee: Davide Giannella
Priority: Minor

 - release oak
 - update website
 - update javadoc





[jira] [Updated] (OAK-2560) SegmentStore deadlock on shutdown

2015-03-09 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-2560:
-
Fix Version/s: 1.2

 SegmentStore deadlock on shutdown
 -

 Key: OAK-2560
 URL: https://issues.apache.org/jira/browse/OAK-2560
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
 Fix For: 1.2


 We've run into a deadlock in the Oak SegmentNodeStore between
 _org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService.deactivate(SegmentNodeStoreService.java:300)_
  and _ 
 org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:120)_
 This was seen on Oak 1.0.11.
 {code}
 Java stack information for the threads listed above 
  
 pool-9-thread-5:
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:656)
   - waiting to lock 0x000680024468 (a 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:120)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:101)
   - locked 0x0006b0b7ae28 (a 
 org.apache.jackrabbit.oak.plugins.segment.SegmentId)
   at 
 org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
   at 
 org.apache.jackrabbit.oak.plugins.segment.MapRecord.size(MapRecord.java:128)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeMapBucket(SegmentWriter.java:462)
   - locked 0x000680c634a8 (a 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeMapBucket(SegmentWriter.java:460)
   - locked 0x000680c634a8 (a 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeMap(SegmentWriter.java:660)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1020)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1022)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1003)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:994)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1003)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:994)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:91)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.updated(SegmentNodeBuilder.java:77)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.updated(MemoryNodeBuilder.java:204)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.updated(SegmentNodeBuilder.java:73)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:331)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:323)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:311)
   at 
 org.apache.jackrabbit.oak.plugins.index.property.strategy.ContentMirrorStoreStrategy.insert(ContentMirrorStoreStrategy.java:112)
   at 
 org.apache.jackrabbit.oak.plugins.index.property.strategy.ContentMirrorStoreStrategy.update(ContentMirrorStoreStrategy.java:81)
   at 
 org.apache.jackrabbit.oak.plugins.index.property.PropertyIndexEditor.leave(PropertyIndexEditor.java:261)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:74)
   at 
 org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:130)
   at 
 org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:160)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:391)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:125)
   at 
 org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:160)
   at 
 

[jira] [Commented] (OAK-1752) Node name queries should use an index

2015-03-09 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14353138#comment-14353138
 ] 

Chetan Mehrotra commented on OAK-1752:
--

Not yet. It's on my TODO list.

 Node name queries should use an index
 -

 Key: OAK-1752
 URL: https://issues.apache.org/jira/browse/OAK-1752
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Alex Parvulescu
 Fix For: 1.2


 The node name queries don't use any index currently, making them really slow 
 and triggering a lot of traversal warnings.
 Simply adding node names to a property index would mean too much content 
 indexed, but as Lucene already indexes the node names, using that index would 
 be one viable option.
 {code}
 /jcr:root//*[fn:name() = 'jcr:content']
 /jcr:root//*[jcr:like(fn:name(), 'jcr:con%')] 
 {code}





[jira] [Updated] (OAK-1558) Expose FileStoreBackupRestoreMBean for supported NodeStores

2015-03-09 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-1558:
-
Assignee: (was: Alex Parvulescu)

 Expose FileStoreBackupRestoreMBean for supported NodeStores
 ---

 Key: OAK-1558
 URL: https://issues.apache.org/jira/browse/OAK-1558
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk, segmentmk
Reporter: Michael Dürig
  Labels: monitoring

 {{NodeStore}} implementations should expose the 
 {{FileStoreBackupRestoreMBean}} in order to be interoperable with 
 {{RepositoryManagementMBean}}. See OAK-1160.





[jira] [Commented] (OAK-2594) Backport spellcheck support to 1.0

2015-03-09 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14353140#comment-14353140
 ] 

Tommaso Teofili commented on OAK-2594:
--

backported OAK-2508 in r1665267

 Backport spellcheck support to 1.0
 --

 Key: OAK-2594
 URL: https://issues.apache.org/jira/browse/OAK-2594
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-core, oak-lucene, oak-solr
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.0.13


 As spellcheck was available in JR2, 1.0 not having that support results in a 
 regression.
 OAK-2175 should be backported to the 1.0 branch, which may imply backporting other 
 issues from trunk to the branch.





[jira] [Updated] (OAK-1451) Expose index size

2015-03-09 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-1451:
-
Assignee: (was: Alex Parvulescu)

 Expose index size
 -

 Key: OAK-1451
 URL: https://issues.apache.org/jira/browse/OAK-1451
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Michael Marth
Priority: Minor
  Labels: production, resilience
 Fix For: 1.2


 At the moment some MKs' disk needs come largely from the indexes. Maybe we can 
 do something about this, but in the meantime it would be helpful if we could 
 expose the index sizes (number of indexed nodes) via JMX so that they could be 
 easily monitored.
 This would also be helpful to see at which point an index becomes useless (if 
 the majority of content nodes are indexed, one might as well not have an index)





[jira] [Updated] (OAK-1077) XPath failures for escaped quotes in full-text search

2015-03-09 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-1077:
-
Assignee: (was: Alex Parvulescu)

 XPath failures for escaped quotes in full-text search
 -

 Key: OAK-1077
 URL: https://issues.apache.org/jira/browse/OAK-1077
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, query
Reporter: Alex Parvulescu
Priority: Minor
 Fix For: 1.2

 Attachments: OAK-1077-test.patch


 It seems that the query doesn't match when a property is _exact\quote_ or 
 _exact\'quote_





[jira] [Commented] (OAK-2589) Provide progress indication when reindexing is being performed

2015-03-09 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14353051#comment-14353051
 ] 

Alex Parvulescu commented on OAK-2589:
--

+1 looks good

bq. Would it be fine to report for every 1000
I find that low; I'd opt for 10k even. Less noise is best: does it help in any 
way to have this much granularity in the logs?

bq. Add javadoc to indexUpdate in IndexUpdateCallback to call out that this 
should be invoked mostly when an entry is made to the index.
Agreed, makes perfect sense.
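The interval-based reporting discussed above could be sketched as follows; the 10k interval and all class/method names are illustrative assumptions, not Oak's actual implementation:

```java
// Hypothetical sketch of interval-based reindex progress logging; the
// interval and names are assumptions from the discussion, not Oak code.
class ReindexProgressLogger {
    static final long LOG_INTERVAL = 10_000; // report every 10k indexed nodes

    private long indexedNodes = 0;

    // would be invoked from the IndexUpdateCallback each time an index entry is made
    void indexUpdate() {
        indexedNodes++;
        if (indexedNodes % LOG_INTERVAL == 0) {
            System.out.println("Reindexing in progress: " + indexedNodes
                    + " nodes indexed so far");
        }
    }

    long getIndexedNodes() {
        return indexedNodes;
    }
}
```

A coarser interval keeps the log quiet on large repositories while still showing that reindexing is alive.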


 Provide progress indication when reindexing is being performed
 -

 Key: OAK-2589
 URL: https://issues.apache.org/jira/browse/OAK-2589
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.1.8, 1.0.13

 Attachments: OAK-2589.patch


 When reindex property of an oak:QueryIndexDefinition node is set to true 
 there is a message logged which says:
 29.09.2014 15:04:08.904 *INFO* [pool-9-thread-3] 
 org.apache.jackrabbit.oak.plugins.index.IndexUpdate Reindexing would be 
 performed for following indexes [/oak:index/searchFacets]
 However, after that, there is no indication of progress for this reindexing. 
 The only way to determine if the reindexing is progressing is to check back 
 and see that the reindex property on your oak:QueryIndexDefinition node is 
 now set to false.
 If something goes wrong with the reindexing it is hard to know if the 
 reindexing is happening at all. 
 If there could be some logging to indicate that this reindexing is happening 
 successfully and how complete it is, it would be of a great help while 
 troubleshooting indexing problems.





[jira] [Commented] (OAK-1752) Node name queries should use an index

2015-03-09 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14353136#comment-14353136
 ] 

Alex Parvulescu commented on OAK-1752:
--

[~chetanm] I think this is already taken care of by the latest Lucene Property 
Index changes?

 Node name queries should use an index
 -

 Key: OAK-1752
 URL: https://issues.apache.org/jira/browse/OAK-1752
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Alex Parvulescu
 Fix For: 1.2


 The node name queries don't use any index currently, making them really slow 
 and triggering a lot of traversal warnings.
 Simply adding node names to a property index would be too much content 
 indexed, but as Lucene already indexes the node names, using this index would 
 be one viable option.
 {code}
 /jcr:root//*[fn:name() = 'jcr:content']
 /jcr:root//*[jcr:like(fn:name(), 'jcr:con%')] 
 {code}





[jira] [Updated] (OAK-318) Excerpt support

2015-03-09 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-318:

Assignee: (was: Alex Parvulescu)

 Excerpt support
 ---

 Key: OAK-318
 URL: https://issues.apache.org/jira/browse/OAK-318
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core
Reporter: Alex Parvulescu
 Fix For: 1.2


 Test class: ExcerptTest.
 Right now I only see parse errors:
 Caused by: java.text.ParseException: Query:
 {noformat}
 testroot/*[jcr:contains(., 'jackrabbit')]/rep:excerpt((*).); expected: end
 {noformat}





[jira] [Updated] (OAK-1879) Store record offset as short

2015-03-09 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-1879:
-
Fix Version/s: (was: 1.2)

 Store record offset as short
 

 Key: OAK-1879
 URL: https://issues.apache.org/jira/browse/OAK-1879
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu

 The offset value is actually a short but the RecordId keeps it as an int. 
 This is a leftover from the initial design and it would be beneficial to 
 refactor those parts of the code to use a short instead.





[jira] [Assigned] (OAK-2560) SegmentStore deadlock on shutdown

2015-03-09 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu reassigned OAK-2560:


Assignee: Alex Parvulescu

 SegmentStore deadlock on shutdown
 -

 Key: OAK-2560
 URL: https://issues.apache.org/jira/browse/OAK-2560
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
 Fix For: 1.2


 We've run into a deadlock in the Oak SegmentNodeStore between
 _org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService.deactivate(SegmentNodeStoreService.java:300)_
  and _ 
 org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:120)_
 This was seen on Oak 1.0.11.
 {code}
 Java stack information for the threads listed above 
  
 pool-9-thread-5:
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:656)
   - waiting to lock 0x000680024468 (a 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:120)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:101)
   - locked 0x0006b0b7ae28 (a 
 org.apache.jackrabbit.oak.plugins.segment.SegmentId)
   at 
 org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
   at 
 org.apache.jackrabbit.oak.plugins.segment.MapRecord.size(MapRecord.java:128)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeMapBucket(SegmentWriter.java:462)
   - locked 0x000680c634a8 (a 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeMapBucket(SegmentWriter.java:460)
   - locked 0x000680c634a8 (a 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeMap(SegmentWriter.java:660)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1020)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1022)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1003)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:994)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter$2.childNodeChanged(SegmentWriter.java:1003)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:994)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:91)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.updated(SegmentNodeBuilder.java:77)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.updated(MemoryNodeBuilder.java:204)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.updated(SegmentNodeBuilder.java:73)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:331)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setChildNode(MemoryNodeBuilder.java:323)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.child(MemoryNodeBuilder.java:311)
   at 
 org.apache.jackrabbit.oak.plugins.index.property.strategy.ContentMirrorStoreStrategy.insert(ContentMirrorStoreStrategy.java:112)
   at 
 org.apache.jackrabbit.oak.plugins.index.property.strategy.ContentMirrorStoreStrategy.update(ContentMirrorStoreStrategy.java:81)
   at 
 org.apache.jackrabbit.oak.plugins.index.property.PropertyIndexEditor.leave(PropertyIndexEditor.java:261)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeEditor.leave(CompositeEditor.java:74)
   at 
 org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:130)
   at 
 org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:160)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:391)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeAdded(EditorDiff.java:125)
   at 
 org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:160)
   at 
 

[jira] [Resolved] (OAK-2594) Backport spellcheck support to 1.0

2015-03-09 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved OAK-2594.
--
Resolution: Fixed

 Backport spellcheck support to 1.0
 --

 Key: OAK-2594
 URL: https://issues.apache.org/jira/browse/OAK-2594
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-core, oak-lucene, oak-solr
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.0.13


 As spellcheck was available in JR2, 1.0 not having that support results in a 
 regression.
 OAK-2175 should be backported to branch 1.0, that may imply backporting other 
 issues from trunk to branch.





[jira] [Updated] (OAK-903) User transactions support

2015-03-09 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-903:

Fix Version/s: (was: 1.2)

 User transactions support
 -

 Key: OAK-903
 URL: https://issues.apache.org/jira/browse/OAK-903
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core, jcr
Reporter: Alex Parvulescu

 There is no support in Oak for user transactions so far.
 The first step is to add the jta dependency (geronimo-jta) to 
 oak-jcr as a test dependency so we can enable more jackrabbit tests in oak (see OAK-900).





[jira] [Updated] (OAK-1340) Backup and restore for the SQL DocumentStore

2015-03-09 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-1340:
-
Assignee: (was: Alex Parvulescu)

 Backup and restore for the SQL DocumentStore
 

 Key: OAK-1340
 URL: https://issues.apache.org/jira/browse/OAK-1340
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core, mk
Reporter: Alex Parvulescu
  Labels: production, tools
 Fix For: 1.2


 Similar to OAK-1159 but specific to the SQL Document Store implementation.
 The backup could leverage the existing backup bits and backup to the file 
 system (sql-to-tarmk backup) but the restore functionality is missing 
 (tar-to-sql).





[jira] [Created] (OAK-2596) more (jmx) instrumentation for observation queue

2015-03-09 Thread Stefan Egli (JIRA)
Stefan Egli created OAK-2596:


 Summary: more (jmx) instrumentation for observation queue
 Key: OAK-2596
 URL: https://issues.apache.org/jira/browse/OAK-2596
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Affects Versions: 1.0.12
Reporter: Stefan Egli
 Fix For: 1.0.13


While debugging issues with the observation queue it would be handy to have 
more detailed information available. At the moment you can only see one value 
wrt the length of the queue: the maximum across all queues. It is unclear 
whether the queue is that long for only one listener or for many, and whether 
the listener is slow or the engine that produces the events for it.

So I'd suggest adding the following details, possibly exposed via JMX:
* add queue length details to each of the observation listeners
* keep a history of the last e.g. 1000 events per listener showing a) how long 
each event took to be created/generated and b) how long the listener took to 
process it. Sometimes averages are not detailed enough, so such in-depth 
information might become useful. (Not sure about the feasibility of '1000' here; 
maybe that could be configurable though - just putting the idea out here.)
** include some information about whether a listener is currently 'reading events 
from the cache' or whether it has to go to eg mongo 
* maybe show a 'top 10' of the listeners with the largest queues at the moment, to 
easily allow navigation instead of having to go through all (eg 200) listeners 
manually each time.
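The per-listener bookkeeping suggested above could look roughly like this sketch; every name is an illustrative assumption, not an existing Oak MBean:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical per-listener stats holder: a bounded history of processing
// times plus the listener's own queue length, suitable for exposure via JMX.
class ListenerQueueStats {
    static final int HISTORY_SIZE = 1000; // could be made configurable

    private final Deque<Long> processingTimesMillis = new ArrayDeque<>();
    private int queueLength;

    void recordProcessingTime(long millis) {
        if (processingTimesMillis.size() == HISTORY_SIZE) {
            processingTimesMillis.removeFirst(); // keep only the most recent events
        }
        processingTimesMillis.addLast(millis);
    }

    void setQueueLength(int length) {
        queueLength = length;
    }

    int getQueueLength() {
        return queueLength;
    }

    int getHistorySize() {
        return processingTimesMillis.size();
    }
}
```

A 'top 10' view would then just sort these holders by `getQueueLength()`.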






[jira] [Commented] (OAK-2464) Optimize read of known non-existing children

2015-03-09 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14353142#comment-14353142
 ] 

Stefan Egli commented on OAK-2464:
--

bq. [~chetanm], Would merge it later to branch
Will this make it into Oak 1.0.13?

 Optimize read of known non-existing children
 

 Key: OAK-2464
 URL: https://issues.apache.org/jira/browse/OAK-2464
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Vikas Saurabh
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.1.6

 Attachments: 
 0001-OAK-2464-Optimize-read-of-known-non-existing-childre.patch, 
 OAK-2464-Sort order test case.patch, OAK-2464-Take2.patch


 Reading a non-existing node always queries down to {{DocumentStore}}. 
 {{DocumentNodeStore}} should consider {{nodeChildrenCache}} to see if it 
 already knows about non-existence and avoid a look up into {{DocumentStore}}.
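The cache-first lookup described above can be sketched as below; all names are assumptions for illustration, not {{DocumentNodeStore}}'s actual API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: consult a cached set of children before going to the
// document store, so known non-existence avoids a store round trip.
class ChildLookup {
    private final Map<String, Set<String>> childrenCache = new HashMap<>();
    int storeLookups = 0; // how often we had to fall through to the store

    // cache the complete set of children known for a parent path
    void cacheChildren(String parentPath, Set<String> childNames) {
        childrenCache.put(parentPath, childNames);
    }

    boolean childExists(String parentPath, String name) {
        Set<String> known = childrenCache.get(parentPath);
        if (known != null) {
            // known (non-)existence: answer without touching the store
            return known.contains(name);
        }
        storeLookups++;
        return lookupInStore(parentPath, name);
    }

    // stand-in for a real DocumentStore read
    private boolean lookupInStore(String parentPath, String name) {
        return false;
    }
}
```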





[jira] [Commented] (OAK-2499) Expose mongo and db versions for reporting purposes

2015-03-09 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14353150#comment-14353150
 ] 

Chetan Mehrotra commented on OAK-2499:
--

Ah missed that. Fixed now with http://svn.apache.org/r1665272

 Expose mongo and db versions for reporting purposes
 ---

 Key: OAK-2499
 URL: https://issues.apache.org/jira/browse/OAK-2499
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk, rdbmk, segmentmk
Reporter: Marius Petria
Assignee: Chetan Mehrotra
 Fix For: 1.1.8

 Attachments: OAK-2499.1.diff, OAK-2499.2.diff


 For reporting purposes I need to find out from Java the MK type, and also the 
 mongo version and info about the DB make/version used.
 [~chetanm] suggested that such information can be exposed when we register a
 {{NodeStore}} instance within OSGi: we can also add a service property
 which captures such implementation details. Possibly use
 service.description or define a new one which provides a comma-separated
 list of attributes.





[jira] [Closed] (OAK-2522) Release Oak 1.1.7

2015-03-09 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella closed OAK-2522.
-

 Release Oak 1.1.7
 -

 Key: OAK-2522
 URL: https://issues.apache.org/jira/browse/OAK-2522
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Davide Giannella
Assignee: Davide Giannella
Priority: Minor

 - release oak
 - update website
 - update javadoc





[jira] [Resolved] (OAK-2522) Release Oak 1.1.7

2015-03-09 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella resolved OAK-2522.
---
Resolution: Fixed

 Release Oak 1.1.7
 -

 Key: OAK-2522
 URL: https://issues.apache.org/jira/browse/OAK-2522
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Davide Giannella
Assignee: Davide Giannella
Priority: Minor

 - release oak
 - update website
 - update javadoc





[jira] [Created] (OAK-2597) expose mongo's clusterNodes info more prominently

2015-03-09 Thread Stefan Egli (JIRA)
Stefan Egli created OAK-2597:


 Summary: expose mongo's clusterNodes info more prominently
 Key: OAK-2597
 URL: https://issues.apache.org/jira/browse/OAK-2597
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Affects Versions: 1.0.12
Reporter: Stefan Egli


Suggestion: {{db.clusterNodes}} contains very useful information wrt how many 
instances are currently (and have been) active in the oak-mongo-cluster. While 
this should in theory match the topology reported via sling's discovery api, it 
might differ. It could be very helpful if this information was exposed very 
prominently in a UI (assuming this is not yet the case) - eg in a 
/system/console page





[jira] [Resolved] (OAK-2595) High memory consumption of CompactionGainEstimate

2015-03-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-2595.

Resolution: Fixed

Fixed in trunk at http://svn.apache.org/r1665275
Merged into 1.0 at http://svn.apache.org/r1665298.

The fix makes use of a memory optimised {{RecordIdMap}}, which groups record 
ids by segment id and stores record offsets as a packed array of shorts along 
with them. 

With this fix I've seen significantly lower memory pressure and churn when 
running the estimation process on large repositories. 
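The grouping-and-packing idea can be illustrated roughly as follows; this is a simplified sketch under my own naming, not Oak's actual {{RecordIdMap}}:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Rough illustration of the memory optimisation: instead of one object per
// record id, offsets are grouped per segment id and packed into a short[]
// (a record offset fits in 16 bits).
class PackedRecordIds {
    private final Map<UUID, short[]> offsetsBySegment = new HashMap<>();
    private final Map<UUID, Integer> sizes = new HashMap<>();

    void add(UUID segmentId, short offset) {
        short[] offsets = offsetsBySegment.computeIfAbsent(segmentId, id -> new short[8]);
        int size = sizes.getOrDefault(segmentId, 0);
        if (size == offsets.length) {
            offsets = Arrays.copyOf(offsets, size * 2); // grow the packed array
            offsetsBySegment.put(segmentId, offsets);
        }
        offsets[size] = offset;
        sizes.put(segmentId, size + 1);
    }

    boolean contains(UUID segmentId, short offset) {
        short[] offsets = offsetsBySegment.get(segmentId);
        int size = sizes.getOrDefault(segmentId, 0);
        for (int i = 0; i < size; i++) {
            if (offsets[i] == offset) {
                return true;
            }
        }
        return false;
    }
}
```

Compared to a `Set` of per-record objects, this trades lookup speed for a far smaller per-entry footprint.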

 High memory consumption of CompactionGainEstimate
 -

 Key: OAK-2595
 URL: https://issues.apache.org/jira/browse/OAK-2595
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: compaction, gc
 Fix For: 1.1.8, 1.0.13


 {{CompactionGainEstimate}} keeps a set of the visited record ids. Each entry 
 in that set is represented by an instance of {{ThinRecordId}}. For big 
 repositories the per-instance overhead led to {{OOME}} while running the 
 compaction estimator. 





[jira] [Commented] (OAK-2262) Add metadata about the changed value to a PROPERTY_CHANGED event on a multivalued property

2015-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14353231#comment-14353231
 ] 

Michael Dürig commented on OAK-2262:


As Oak doesn't keep track of the operation performed we can only provide sort 
of a consolidated approximation of what has happened to a multi valued 
property. The algorithm for calculating the sets of the indices of the 
{{changed}}, {{added}} and {{removed}} properties might look something like 
this:

{code}
for (int k = 0; k < min(beforeCount, afterCount); k++) {
    if (!beforeValue[k].equals(afterValue[k])) {
        changed.add(k);
    }
}

if (beforeCount > afterCount) {
    for (int k = afterCount; k < beforeCount; k++) {
        removed.add(k);
    }
} else if (afterCount > beforeCount) {
    for (int k = beforeCount; k < afterCount; k++) {
        added.add(k);
    }
}
{code}

Of course this would reflect e.g. a deletion followed by an addition as a 
change. 

[~teofili], let me know whether this works for your case and whether we can 
tweak this in a way that better suits you.

 Add metadata about the changed value to a PROPERTY_CHANGED event on a 
 multivalued property
 --

 Key: OAK-2262
 URL: https://issues.apache.org/jira/browse/OAK-2262
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, jcr
Affects Versions: 1.1.2
Reporter: Tommaso Teofili
Assignee: Michael Dürig
  Labels: observation
 Fix For: 1.1.8


 When getting _PROPERTY_CHANGED_ events on non-multivalued properties only one 
 value can have actually changed so that handlers of such events do not need 
 any further information to process it and eventually work on the changed 
 value; on the other hand _PROPERTY_CHANGED_ events on multivalued properties 
 (e.g. String[]) may relate to any of the values and that brings a source of 
 uncertainty on event handlers processing such changes because there's no mean 
 to understand which property value had been changed and therefore to them to 
 react accordingly.
 A workaround for that is to create Oak specific _Observers_ which can deal 
 with the diff between before and after state and create a specific event 
 containing the diff, however this would add a non trivial load to the 
 repository because of the _Observer_ itself and because of the additional 
 events being generated while it'd be great if the 'default' events would have 
 metadata e.g. of the changed value index or similar information that can help 
 understanding which value has been changed (added, deleted, updated). 





[jira] [Commented] (OAK-2262) Add metadata about the changed value to a PROPERTY_CHANGED event on a multivalued property

2015-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14353596#comment-14353596
 ] 

Michael Dürig commented on OAK-2262:


It just occurred to me that a more general implementation might be more useful 
and have a lower performance impact: what about adding before and/or after property 
values to all property events? On one hand this would mean that the client 
would need to figure out the actual difference in multi valued properties 
itself. On the other hand the client could exercise full control over 
the process. No processing would take place at all if the client is not 
interested in the property values, and thus no CPU cycles would be wasted. 

 Add metadata about the changed value to a PROPERTY_CHANGED event on a 
 multivalued property
 --

 Key: OAK-2262
 URL: https://issues.apache.org/jira/browse/OAK-2262
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, jcr
Affects Versions: 1.1.2
Reporter: Tommaso Teofili
Assignee: Michael Dürig
  Labels: observation
 Fix For: 1.1.8







[jira] [Updated] (OAK-1956) Set correct OSGi package export version

2015-03-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1956:
---
Priority: Critical  (was: Major)

 Set correct OSGi package export version
 ---

 Key: OAK-1956
 URL: https://issues.apache.org/jira/browse/OAK-1956
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Michael Dürig
Priority: Critical
 Fix For: 1.2


 This issue serves as a reminder to set the correct OSGi package export 
 versions before we release 1.2.
 OAK-1536 added support for the BND baseline feature: the baseline.xml files 
 in the target directories should help us figuring out the correct versions. 





[jira] [Commented] (OAK-2262) Add metadata about the changed value to a PROPERTY_CHANGED event on a multivalued property

2015-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14353601#comment-14353601
 ] 

Michael Dürig commented on OAK-2262:


Further note that any such approach of using the info map to convey additional 
information to the client is, strictly speaking, in violation of the 
specification. It e.g. causes 
{{o.a.j.test.api.observation.GetInfoTest#testPropertyAdded}} to fail.

 Add metadata about the changed value to a PROPERTY_CHANGED event on a 
 multivalued property
 --

 Key: OAK-2262
 URL: https://issues.apache.org/jira/browse/OAK-2262
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, jcr
Affects Versions: 1.1.2
Reporter: Tommaso Teofili
Assignee: Michael Dürig
  Labels: observation
 Fix For: 1.1.8







[jira] [Updated] (OAK-2557) VersionGC uses way too much memory if there is a large pile of garbage

2015-03-09 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-2557:
-
Attachment: OAK-2557.patch

Attached a [patch|^OAK-2557.patch] which flushes the deletes in batches of 450 (kept 
below the MongoDocumentStore batch size of 500). 

However, I saw a strange issue where the test fails. The test case creates nodes like 
{noformat}
/x
/x/child0
/x/child1
...
/x/child950
{noformat}

When the initial batch is deleted it also picks up {{/x}}. Due to this, when the later 
deletion logic tries to delete, say, /x/child501, it is not able to locate 
the commit root for the delete and assumes that the node is visible

[~mreutegg] Can you have a look? Looks like a potential bug in the DocumentStore 
logic. Or do we need to ensure that we delete nodes going from child to parent?
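Deleting child-to-parent amounts to sorting the batch by path depth, deepest first; the comparator below is an illustration of that idea, not Oak's actual {{PathComparator}}:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of child-before-parent ordering: sorting paths deepest-first
// guarantees a child document is deleted before its parent.
class DeleteOrder {
    static String[] childrenFirst(String[] paths) {
        String[] sorted = paths.clone();
        Arrays.sort(sorted, Comparator
                // deeper paths (more '/' separators) come first
                .comparingLong((String p) -> p.chars().filter(c -> c == '/').count())
                .reversed()
                .thenComparing(Comparator.naturalOrder()));
        return sorted;
    }
}
```

With this ordering a crash mid-batch can no longer leave a child behind whose parent is already gone.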


 VersionGC uses way too much memory if there is a large pile of garbage
 --

 Key: OAK-2557
 URL: https://issues.apache.org/jira/browse/OAK-2557
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.11
Reporter: Stefan Egli
Priority: Blocker
 Fix For: 1.2, 1.0.13

 Attachments: OAK-2557.patch


 It has been noticed that on a system where revision-gc 
 (VersionGarbageCollector of mongomk) did not run for a few days (so as not to 
 interfere with some tests/large bulk operations), such a 
 large pile of garbage accumulated that the following code
 {code}
 VersionGarbageCollector.collectDeletedDocuments
 {code}
 creates, in its for loop, such a large list of NodeDocuments to delete 
 (docIdsToDelete) that it uses up too much memory, causing the JVM's GC to 
 constantly spin in full GCs.





[jira] [Commented] (OAK-2586) Support including and excluding paths during upgrade

2015-03-09 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14352769#comment-14352769
 ] 

Julian Sedding commented on OAK-2586:
-

Thanks for the feedback [~chetanm] and [~mduerig]. I agree that we need to make 
sure that performance does not suffer from this and we should measure 
performance to avoid any degradation.

I have tried to address any performance concerns by only wrapping NodeStates 
that need to be wrapped (i.e. only NodeStates that are themselves hidden or 
have hidden descendants). Thus, for the default case without any 
includes/excludes, no wrapping should occur at all.

I'll see if I can come up with a way to measure performance. Ideas welcome!
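The "wrap only when needed" check can be sketched as below; a node needs wrapping only if some hidden path lies at or under it, so an empty filter wraps nothing. Names are illustrative assumptions, not the actual patch:

```java
import java.util.Set;

// Minimal sketch of conditional wrapping: the default case (no hidden paths)
// never reports a node as needing a wrapper.
class PathFilter {
    private final Set<String> hiddenPaths;

    PathFilter(Set<String> hiddenPaths) {
        this.hiddenPaths = hiddenPaths;
    }

    boolean isHidden(String path) {
        return hiddenPaths.contains(path);
    }

    // wrapping is only necessary when a hidden path lies at or below this node
    boolean needsWrapping(String path) {
        String prefix = path.endsWith("/") ? path : path + "/";
        for (String hidden : hiddenPaths) {
            if (hidden.equals(path) || hidden.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }
}
```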

 Support including and excluding paths during upgrade
 

 Key: OAK-2586
 URL: https://issues.apache.org/jira/browse/OAK-2586
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: upgrade
Affects Versions: 1.1.6
Reporter: Julian Sedding
  Labels: patch
 Attachments: OAK-2586.patch


 When upgrading a Jackrabbit 2 to an Oak repository it can be desirable to 
 constrain which paths/sub-trees should be copied from the source repository. 
 Not least because this can (drastically) reduce the amount of content that 
 needs to be traversed, copied and indexed.
 I suggest to allow filtering the content visible from the source repository 
 by wrapping the JackrabbitNodeState instance and hiding selected paths.





[jira] [Resolved] (OAK-2591) Invoke indexUpdate only when new Document are added in LuceneIndexEditor

2015-03-09 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-2591.
--
Resolution: Fixed

Thanks Alex!
* trunk - http://svn.apache.org/r1665176
* 1.0 - http://svn.apache.org/r1665178

 Invoke indexUpdate only when new Document are added in LuceneIndexEditor
 

 Key: OAK-2591
 URL: https://issues.apache.org/jira/browse/OAK-2591
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.1.8, 1.0.13

 Attachments: OAK-2591.patch


 {{LuceneIndexEditor}} currently invokes {{indexUpdate}} whenever it adds 
 a new field to a Document. To reduce the noise it would be better to invoke 
 it whenever a new Document is added to the index, in line with other index 
 implementations.
 Such an implementation would allow the {{IndexUpdateCallback}} to more 
 accurately determine the progress an indexer is making in terms of the number 
 of nodes indexed.
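The proposed change can be illustrated with a small sketch. This is a hypothetical model, not the actual {{LuceneIndexEditor}} code: fields are accumulated silently, and the progress callback fires exactly once per document added to the index.

```python
# Hypothetical sketch of per-document (rather than per-field) progress
# reporting; the writer and callback here are illustrative stand-ins.

def index_document(doc_fields, writer, index_update_callback):
    document = {}
    for name, value in doc_fields:
        document[name] = value          # adding a field: no callback here
    writer.add(document)                # document goes into the index
    index_update_callback()             # exactly one callback per document
```

With this shape, a callback counter directly approximates "number of nodes indexed" instead of "number of fields written".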





[jira] [Updated] (OAK-2168) Make SolrIndex implement AdvanceQueryIndex

2015-03-09 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated OAK-2168:
-
Fix Version/s: (was: 1.0.12)
   1.0.13

 Make SolrIndex implement AdvanceQueryIndex
 --

 Key: OAK-2168
 URL: https://issues.apache.org/jira/browse/OAK-2168
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: oak-solr
Reporter: Chetan Mehrotra
Assignee: Tommaso Teofili
Priority: Minor
 Fix For: 1.1.4, 1.0.13


 Make SolrIndex implement AdvanceQueryIndex to allow it to make use of 
 * Sorting 
 * AggregateIndex which would only work with AdvanceQueryIndex post OAK-2119





[jira] [Updated] (OAK-2168) Make SolrIndex implement AdvanceQueryIndex

2015-03-09 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated OAK-2168:
-
Fix Version/s: 1.0.12

 Make SolrIndex implement AdvanceQueryIndex
 --

 Key: OAK-2168
 URL: https://issues.apache.org/jira/browse/OAK-2168
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: oak-solr
Reporter: Chetan Mehrotra
Assignee: Tommaso Teofili
Priority: Minor
 Fix For: 1.1.4, 1.0.13


 Make SolrIndex implement AdvanceQueryIndex to allow it to make use of 
 * Sorting 
 * AggregateIndex which would only work with AdvanceQueryIndex post OAK-2119





[jira] [Commented] (OAK-2168) Make SolrIndex implement AdvanceQueryIndex

2015-03-09 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14352793#comment-14352793
 ] 

Tommaso Teofili commented on OAK-2168:
--

merged in branch 1.0 in r1665181

 Make SolrIndex implement AdvanceQueryIndex
 --

 Key: OAK-2168
 URL: https://issues.apache.org/jira/browse/OAK-2168
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: oak-solr
Reporter: Chetan Mehrotra
Assignee: Tommaso Teofili
Priority: Minor
 Fix For: 1.1.4, 1.0.13


 Make SolrIndex implement AdvanceQueryIndex to allow it to make use of 
 * Sorting 
 * AggregateIndex which would only work with AdvanceQueryIndex post OAK-2119





[jira] [Updated] (OAK-2464) Optimize read of known non-existing children

2015-03-09 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-2464:
-
Fix Version/s: 1.0.13

 Optimize read of known non-existing children
 

 Key: OAK-2464
 URL: https://issues.apache.org/jira/browse/OAK-2464
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Vikas Saurabh
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.1.6, 1.0.13

 Attachments: 
 0001-OAK-2464-Optimize-read-of-known-non-existing-childre.patch, 
 OAK-2464-Sort order test case.patch, OAK-2464-Take2.patch


 Reading a non-existing node always queries down to {{DocumentStore}}. 
 {{DocumentNodeStore}} should consider {{nodeChildrenCache}} to see if it 
 already knows about non-existence and avoid a look up into {{DocumentStore}}.
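The optimization described above can be sketched roughly like this. The class and field names are hypothetical stand-ins for Oak's real ones: before hitting the backing store, the node store consults the cached child-name list of the parent; if the parent's children are fully known and the name is absent, the store round trip is skipped entirely.

```python
# Illustrative sketch of the OAK-2464 idea; not Oak's actual classes.

class DocumentNodeStore:
    def __init__(self, store):
        self.store = store                 # backing DocumentStore (a dict here)
        self.node_children_cache = {}      # parent path -> set of child names
        self.store_reads = 0

    def get_node(self, parent, name):
        cached = self.node_children_cache.get(parent)
        if cached is not None and name not in cached:
            return None                    # known non-existent: no store read
        self.store_reads += 1              # fall back to the document store
        return self.store.get(parent.rstrip("/") + "/" + name)
```

The important case is the second branch: repeated reads of a non-existing child cost nothing once the parent's child list is cached.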





[jira] [Commented] (OAK-2464) Optimize read of known non-existing children

2015-03-09 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354238#comment-14354238
 ] 

Chetan Mehrotra commented on OAK-2464:
--

Missed doing that. Now done with http://svn.apache.org/r1665401

 Optimize read of known non-existing children
 

 Key: OAK-2464
 URL: https://issues.apache.org/jira/browse/OAK-2464
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Vikas Saurabh
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.1.6, 1.0.13

 Attachments: 
 0001-OAK-2464-Optimize-read-of-known-non-existing-childre.patch, 
 OAK-2464-Sort order test case.patch, OAK-2464-Take2.patch


 Reading a non-existing node always queries down to {{DocumentStore}}. 
 {{DocumentNodeStore}} should consider {{nodeChildrenCache}} to see if it 
 already knows about non-existence and avoid a look up into {{DocumentStore}}.





[jira] [Resolved] (OAK-2589) Provide progress indication when reindexing is being performed

2015-03-09 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-2589.
--
Resolution: Fixed

Changed the logging threshold to 1.
* trunk - http://svn.apache.org/r1665271
* 1.0 - http://svn.apache.org/r1665398

 Provide progress indication when reindexing is being performed
 -

 Key: OAK-2589
 URL: https://issues.apache.org/jira/browse/OAK-2589
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.1.8, 1.0.13

 Attachments: OAK-2589.patch


 When the reindex property of an oak:QueryIndexDefinition node is set to true, 
 a message is logged which says:
 29.09.2014 15:04:08.904 *INFO* [pool-9-thread-3] 
 org.apache.jackrabbit.oak.plugins.index.IndexUpdate Reindexing would be 
 performed for following indexes [/oak:index/searchFacets]
 However, after that, there is no indication of progress for this reindexing. 
 The only way to determine whether the reindexing is progressing is to check 
 back and see that the reindex property on your oak:QueryIndexDefinition node 
 is now set to false.
 If something goes wrong, it is hard to know whether the reindexing is 
 happening at all.
 Some logging indicating that the reindexing is proceeding successfully, and 
 how complete it is, would be of great help while troubleshooting indexing 
 problems.





[jira] [Commented] (OAK-2037) Define standards for plan output

2015-03-09 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14352662#comment-14352662
 ] 

Thomas Mueller commented on OAK-2037:
-

Using JSON is a good idea.
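To make that concrete, a JSON-based plan output could look something like the sketch below. The field names here are purely illustrative (not an agreed-upon format): the point is that a structured, machine-readable plan is uniform across index types even when each index contributes different details.

```python
import json

# Hypothetical example of a standardized, JSON-based query plan; all field
# names are made up for illustration only.
plan = {
    "selector": "a",
    "index": {"type": "property", "path": "/oak:index/foo"},
    "estimatedCost": 2.0,
    "estimatedEntryCount": 10,
    "restrictions": [{"property": "foo", "operator": "=", "value": "bar"}],
}
print(json.dumps(plan, indent=2, sort_keys=True))
```

A consumer could then parse the plan and inspect, say, which index was chosen, without per-index string parsing.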

 Define standards for plan output
 

 Key: OAK-2037
 URL: https://issues.apache.org/jira/browse/OAK-2037
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: query
Reporter: Justin Edelson
Assignee: Thomas Mueller
 Fix For: 1.2, 1.1.8


 Currently, the syntax of the plan output is chaotic, as it varies 
 significantly from index to index. While some of this is expected, since each 
 index type will have different data to output, Oak should provide some 
 standards for how a plan will appear.





[jira] [Updated] (OAK-1981) Implement full scale Revision GC for DocumentNodeStore

2015-03-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-1981:
--
Fix Version/s: 1.2

 Implement full scale Revision GC for DocumentNodeStore
 --

 Key: OAK-1981
 URL: https://issues.apache.org/jira/browse/OAK-1981
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mongomk
Reporter: Chetan Mehrotra
 Fix For: 1.2


 So far we have implemented garbage collection in some form with OAK-1341. 
 Those approaches help us remove quite a bit of garbage (mostly due to deleted 
 nodes), but some part is still left.
 Full GC is still not performed, due to which some of the old revision 
 related data cannot be GCed, like:
 * Revision info present in revision maps of various commit roots
 * Revisions related to unmerged branches (OAK-1926)
 * Revision data created due to a property being modified by different cluster 
 nodes
 So having a tool which can perform the above GC would be helpful. For a start 
 we can have an implementation which takes a brute force approach and scans 
 the whole repo (which would take quite a bit of time) and later evolve it, or 
 allow system admins to determine to what level GC has to be done.





[jira] [Updated] (OAK-2549) Persistent Cache: support append-only mode

2015-03-09 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2549:

Summary: Persistent Cache: support append-only mode  (was: Persistent 
Cache: use append-only mode)

 Persistent Cache: support append-only mode
 --

 Key: OAK-2549
 URL: https://issues.apache.org/jira/browse/OAK-2549
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Thomas Mueller
Assignee: Thomas Mueller
 Fix For: 1.2, 1.1.8, 1.0.13


 The persistent cache mainly adds data (in some rare cases it overwrites 
 existing entries: the list of child nodes, if more child nodes are read). 
 Because of that, the storage backend (H2 MVStore currently) can operate in 
 append-only mode.
 This should also avoid the following problem: IllegalStateException: Chunk 
 ... no longer exists [1.4.185/9]. This is probably a bug in H2, where old 
 chunks are removed too early. This error is not (yet) reproducible, but 
 happened while testing.





[jira] [Commented] (OAK-2549) Persistent Cache: support append-only mode

2015-03-09 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14352655#comment-14352655
 ] 

Thomas Mueller commented on OAK-2549:
-

I changed the title to support instead of use append-only mode.

The reason is that at first I thought append-only mode was really needed to 
avoid errors, but I have since found out that the errors were caused by 
Thread.interrupt(), which is now solved thanks to OAK-2571.

 Persistent Cache: support append-only mode
 --

 Key: OAK-2549
 URL: https://issues.apache.org/jira/browse/OAK-2549
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Thomas Mueller
Assignee: Thomas Mueller
 Fix For: 1.2, 1.1.8, 1.0.13


 The persistent cache mainly adds data (in some rare cases it overwrites 
 existing entries: the list of child nodes, if more child nodes are read). 
 Because of that, the storage backend (H2 MVStore currently) can operate in 
 append-only mode.
 This should also avoid the following problem: IllegalStateException: Chunk 
 ... no longer exists [1.4.185/9]. This is probably a bug in H2, where old 
 chunks are removed too early. This error is not (yet) reproducible, but 
 happened while testing.





[jira] [Updated] (OAK-1506) Concurrent FlatTreeWithAceForSamePrincipalTest fails on Oak-Mongo

2015-03-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-1506:
--
Fix Version/s: (was: 1.2)
   1.4

 Concurrent FlatTreeWithAceForSamePrincipalTest fails on Oak-Mongo
 -

 Key: OAK-1506
 URL: https://issues.apache.org/jira/browse/OAK-1506
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk, run
Affects Versions: 0.17
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
  Labels: concurrency
 Fix For: 1.4


 The benchmark test fails when run concurrently in a cluster. Setting up the 
 test content fails with a conflict. I assume this happens because nodes in 
 the permission store are populated concurrently and may conflict.





[jira] [Updated] (OAK-1769) Better cooperation for conflicting updates across cluster nodes

2015-03-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-1769:
--
Fix Version/s: (was: 1.2)
   1.4

 Better cooperation for conflicting updates across cluster nodes
 ---

 Key: OAK-1769
 URL: https://issues.apache.org/jira/browse/OAK-1769
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.4


 Every now and then we see commit failures in a cluster when many sessions try 
 to update the same property or perform some other conflicting update.
 The current implementation will retry the merge after a delay, but chances 
 are some session on another cluster node again changed the property in the 
 meantime. This will lead to yet another retry until the limit is reached and 
 the commit fails. The conflict logic is quite unfair, because it favors the 
 winning session.
 The implementation should be improved to show fairer behavior across 
 cluster nodes when there are conflicts caused by competing sessions.





[jira] [Updated] (OAK-2574) Update mongo-java-driver to 2.13.0

2015-03-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2574:
--
Fix Version/s: 1.2

 Update mongo-java-driver to 2.13.0
 --

 Key: OAK-2574
 URL: https://issues.apache.org/jira/browse/OAK-2574
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.2


 MongoDB 3.0 was released yesterday and the current java driver (2.12.2) used 
 by Oak is marked as untested with 3.0:
 http://docs.mongodb.org/ecosystem/drivers/java/#compatibility
 We should update to 2.13.0, which is compatible with 3.0.





[jira] [Updated] (OAK-2357) Aggregate index: Abnormal behavior of and not(..) clause in XPath

2015-03-09 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2357:

Summary: Aggregate index: Abnormal behavior of and not(..) clause in 
XPath  (was: Abnormal behavior of and not(..) clause in XPath)

 Aggregate index: Abnormal behavior of and not(..) clause in XPath
 ---

 Key: OAK-2357
 URL: https://issues.apache.org/jira/browse/OAK-2357
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: query
Affects Versions: 1.0.8, 1.1.3
Reporter: Geoffroy Schneck
Assignee: Thomas Mueller
Priority: Minor
 Fix For: 1.2, 1.0.13


 Create a node {{/tmp/node1}} with property {{prop1 = 'foobar'}}. 
 Perform following query :
 {noformat}
 /jcr:root/tmp//*[prop1 = 'foobar' and not(prop1 = 'fooba')]
 {noformat}
 {{/tmp/node1}} is returned by the search.
 Now replace the = clause by the {{jcr:contains()}} :
 {noformat}
 /jcr:root/tmp//*[jcr:contains(., 'foobar') and not(jcr:contains(., 
 'foobar'))]
 {noformat}
 No result is returned. 
 Despite the presence of {{/tmp/node1}} in the results of 
 {{/jcr:root/tmp//\*\[not(jcr:contains(., 'foobar'))\]}}.





[jira] [Resolved] (OAK-2549) Persistent Cache: support append-only mode

2015-03-09 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-2549.
-
Resolution: Fixed

_Support_ for the append-only mode is already committed.

 Persistent Cache: support append-only mode
 --

 Key: OAK-2549
 URL: https://issues.apache.org/jira/browse/OAK-2549
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Thomas Mueller
Assignee: Thomas Mueller
 Fix For: 1.2, 1.1.8, 1.0.13


 The persistent cache mainly adds data (in some rare cases it overwrites 
 existing entries: the list of child nodes, if more child nodes are read). 
 Because of that, the storage backend (H2 MVStore currently) can operate in 
 append-only mode.
 This should also avoid the following problem: IllegalStateException: Chunk 
 ... no longer exists [1.4.185/9]. This is probably a bug in H2, where old 
 chunks are removed too early. This error is not (yet) reproducible, but 
 happened while testing.





[jira] [Assigned] (OAK-2037) Define standards for plan output

2015-03-09 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reassigned OAK-2037:
---

Assignee: Thomas Mueller

 Define standards for plan output
 

 Key: OAK-2037
 URL: https://issues.apache.org/jira/browse/OAK-2037
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: query
Reporter: Justin Edelson
Assignee: Thomas Mueller
 Fix For: 1.2, 1.1.8


 Currently, the syntax of the plan output is chaotic, as it varies 
 significantly from index to index. While some of this is expected, since each 
 index type will have different data to output, Oak should provide some 
 standards for how a plan will appear.





[jira] [Updated] (OAK-2395) RDB: MS SQL Server support

2015-03-09 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2395:
---
Assignee: Julian Reschke  (was: Amit Jain)

 RDB: MS SQL Server support
 --

 Key: OAK-2395
 URL: https://issues.apache.org/jira/browse/OAK-2395
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: mongomk
Reporter: Amit Jain
Assignee: Julian Reschke
Priority: Minor
 Fix For: 1.2

 Attachments: OAK-2395-v0.1.patch, sqlserver.diff








[jira] [Commented] (OAK-2395) RDB: MS SQL Server support

2015-03-09 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14352666#comment-14352666
 ] 

Amit Jain commented on OAK-2395:


[~julian.resc...@gmx.de] Assigning this to you since you started working on 
this.

 RDB: MS SQL Server support
 --

 Key: OAK-2395
 URL: https://issues.apache.org/jira/browse/OAK-2395
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: mongomk
Reporter: Amit Jain
Assignee: Julian Reschke
Priority: Minor
 Fix For: 1.2

 Attachments: OAK-2395-v0.1.patch, sqlserver.diff








[jira] [Commented] (OAK-2586) Support including and excluding paths during upgrade

2015-03-09 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14352709#comment-14352709
 ] 

Chetan Mehrotra commented on OAK-2586:
--

This looks useful! What I am not sure about is whether wrapping the 
{{NodeState}} would lead to an unoptimized code path, particularly in equals 
evaluation, where specific NodeState implementations take a faster approach if 
the instance passed is of the same type. Maybe equals can unwrap the NodeState 
and then perform the comparison.

[~mduerig] Thoughts?

 Support including and excluding paths during upgrade
 

 Key: OAK-2586
 URL: https://issues.apache.org/jira/browse/OAK-2586
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: upgrade
Affects Versions: 1.1.6
Reporter: Julian Sedding
  Labels: patch
 Attachments: OAK-2586.patch


 When upgrading a Jackrabbit 2 repository to Oak it can be desirable to 
 constrain which paths/sub-trees should be copied from the source repository. 
 Not least because this can (drastically) reduce the amount of content that 
 needs to be traversed, copied and indexed.
 I suggest allowing the content visible from the source repository to be 
 filtered by wrapping the JackrabbitNodeState instance and hiding selected paths.





[jira] [Commented] (OAK-2114) Aggregate index returns the ancestor node as well

2015-03-09 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14352651#comment-14352651
 ] 

Thomas Mueller commented on OAK-2114:
-

Yes, this is a bug.

 Aggregate index returns the ancestor node as well
 -

 Key: OAK-2114
 URL: https://issues.apache.org/jira/browse/OAK-2114
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, oak-lucene, query
Affects Versions: 1.1.0
Reporter: Davide Giannella
Assignee: Davide Giannella
 Attachments: OAK-2114-test.patch


 When performing a query like
 {noformat}
 //*[jcr:contains(., 'something') or jcr:contains(metadata, 'something')]
 {noformat}
 having a structure like {{/content/node/jcr:content}} where {{jcr:content}} 
 contains 'something' both {{/content/node}} and {{/content/node/jcr:content}} 
 are returned.





[jira] [Commented] (OAK-2557) VersionGC uses way too much memory if there is a large pile of garbage

2015-03-09 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14352714#comment-14352714
 ] 

Chetan Mehrotra commented on OAK-2557:
--

bq. Is that set becoming too large to cause an OOM

Indeed, that's the case. The array list had 12M entries taking 3 GB of space. 
So the GC logic should perform the deletion in batches along the way.
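The batched-deletion idea can be sketched like this. The function and store API are hypothetical stand-ins for the real `VersionGarbageCollector` code: instead of accumulating all candidate ids in one huge list, the collector flushes a fixed-size batch as soon as it fills up, so memory stays bounded regardless of how much garbage has piled up.

```python
# Sketch of bounded-memory garbage deletion in fixed-size batches;
# the function name and delete callback are illustrative only.

def collect_deleted_documents(candidate_ids, delete_batch, batch_size=1000):
    """Stream candidate ids and flush a delete batch whenever it fills up."""
    deleted = 0
    batch = []
    for doc_id in candidate_ids:       # candidate_ids may be a lazy iterator
        batch.append(doc_id)
        if len(batch) >= batch_size:
            delete_batch(batch)        # memory stays bounded by batch_size
            deleted += len(batch)
            batch = []
    if batch:                          # flush the final partial batch
        delete_batch(batch)
        deleted += len(batch)
    return deleted
```

With a batch size of a few thousand ids, the 12M-entry case above would never hold more than one batch in memory at a time.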

 VersionGC uses way too much memory if there is a large pile of garbage
 --

 Key: OAK-2557
 URL: https://issues.apache.org/jira/browse/OAK-2557
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.11
Reporter: Stefan Egli
Priority: Blocker
 Fix For: 1.2, 1.0.13


 It has been noticed that on a system where revision-gc 
 (VersionGarbageCollector of mongomk) did not run for a few days (due to not 
 interfering with some tests/large bulk operations) that there was such a 
 large pile of garbage accumulating, that the following code
 {code}
 VersionGarbageCollector.collectDeletedDocuments
 {code}
 in the for loop, creates such a large list of NodeDocuments to delete 
 (docIdsToDelete) that it uses up too much memory, causing the JVM's GC to 
 constantly spin in Full-GCs.





[jira] [Created] (OAK-2592) Commit does not ensure w:majority

2015-03-09 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-2592:
-

 Summary: Commit does not ensure w:majority
 Key: OAK-2592
 URL: https://issues.apache.org/jira/browse/OAK-2592
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.4


The MongoDocumentStore uses {{findAndModify()}} to commit a transaction. This 
operation does not allow an application-specified write concern and always uses 
the MongoDB default write concern {{Acknowledged}}. This means a commit may not 
make it to a majority of a replica set when the primary fails. From a 
MongoDocumentStore perspective it may appear as if a write was successful and 
later reverted. See also the test in OAK-1641.

To fix this, we'd probably have to change the MongoDocumentStore to avoid 
{{findAndModify()}} and use {{update()}} instead.
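The difference between the two write concerns can be illustrated with a toy model (this is deliberately not MongoDB client code): with {{Acknowledged}}, a write is reported successful as soon as one node has it, so it can be lost when that node fails before replication; with a majority write concern, success requires most replicas to hold the write.

```python
# Toy model of write-concern semantics; 'replicas' is a list of dicts like
# {"up": bool}. Returns True if the write satisfies the given concern.

def write(replicas, value, write_concern):
    acks = 0
    for replica in replicas:
        if replica["up"]:
            replica["data"] = value
            acks += 1
        if write_concern == "acknowledged" and acks >= 1:
            return True                # one ack suffices; may later be lost
    return acks > len(replicas) // 2   # 'majority': most replicas have it
```

In the failure scenario described above, the "acknowledged" write reports success with only one live node, while the "majority" write correctly fails.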





[jira] [Resolved] (OAK-1641) Mongo: Un-/CheckedExecutionException on replica-primary crash

2015-03-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-1641.
---
   Resolution: Fixed
Fix Version/s: (was: 1.2)
   1.1.8

The initially reported issue is resolved. The remaining issue with the write 
concern is tracked with OAK-2592.

 Mongo: Un-/CheckedExecutionException on replica-primary crash
 -

 Key: OAK-1641
 URL: https://issues.apache.org/jira/browse/OAK-1641
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.19
 Environment: 0.20-SNAPSHOT as of March 28, 2014
Reporter: Stefan Egli
Assignee: Marcel Reutegger
 Fix For: 1.1.8

 Attachments: ReplicaCrashResilienceTest.java, 
 ReplicaCrashResilienceTest.java, mongoUrl_fixture_patch_oak1641.diff


 Testing with a mongo replicaSet setup: 1 primary, 1 secondary and 1 
 secondary-arbiter-only.
 Running a simple test which has 2 threads: a writer thread and a reader 
 thread.
 The following exception occurs when the mongo primary crashes:
 {code}
 com.google.common.util.concurrent.UncheckedExecutionException: 
 com.google.common.util.concurrent.UncheckedExecutionException: 
 com.mongodb.MongoException$Network: Read operation to server 
 localhost/127.0.0.1:12322 failed on database resilienceTest
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
   at 
 com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.getNode(DocumentNodeStore.java:593)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.hasChildNode(DocumentNodeState.java:164)
   at 
 org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.hasChildNode(MemoryNodeBuilder.java:301)
   at 
 org.apache.jackrabbit.oak.core.SecureNodeBuilder.hasChildNode(SecureNodeBuilder.java:299)
   at 
 org.apache.jackrabbit.oak.plugins.tree.AbstractTree.hasChild(AbstractTree.java:267)
   at 
 org.apache.jackrabbit.oak.core.MutableTree.getChild(MutableTree.java:147)
   at org.apache.jackrabbit.oak.util.TreeUtil.getTree(TreeUtil.java:171)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.getTree(NodeDelegate.java:865)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.getChild(NodeDelegate.java:339)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:274)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:1)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
   at 
 org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:113)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:253)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:238)
   at 
 org.apache.jackrabbit.oak.run.ReplicaCrashResilienceTest$1.run(ReplicaCrashResilienceTest.java:103)
   at java.lang.Thread.run(Thread.java:695)
 Caused by: com.google.common.util.concurrent.UncheckedExecutionException: 
 com.mongodb.MongoException$Network: Read operation to server 
 localhost/127.0.0.1:12322 failed on database resilienceTest
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
   at 
 com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.find(MongoDocumentStore.java:267)
   at 
 org.apache.jackrabbit.oak.plugins.document.mongo.MongoDocumentStore.find(MongoDocumentStore.java:234)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.readNode(DocumentNodeStore.java:802)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$3.call(DocumentNodeStore.java:596)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore$3.call(DocumentNodeStore.java:1)
   at 
 com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
   at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
   at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
   at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
   ... 19 more
 Caused by: com.mongodb.MongoException$Network: Read operation to server 
 localhost/127.0.0.1:12322 failed on database resilienceTest
   at 

[jira] [Updated] (OAK-2263) Avoid unnecessary reads on index lookup

2015-03-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2263:
--
Fix Version/s: (was: 1.1.8)
   1.4

 Avoid unnecessary reads on index lookup
 ---

 Key: OAK-2263
 URL: https://issues.apache.org/jira/browse/OAK-2263
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.4


 The PropertyIndex looks up the matching index definitions multiple times. 
 This means each time all definitions are read to find out which one matches 
 the given property restriction. AFAICS this happens at least six times. The 
 first two times when the cost is calculated: 1) check if the property is 
 indexed and 2) calculate the cost. The next four times when the query is 
 executed: 3) check if the property is indexed, 4) calculate the cost, 5) 
 check if the property is indexed, 6) lookup index to create cursor for query.
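One straightforward way to avoid the repeated reads described above is to memoize the matched index definition per property restriction. This is a hypothetical sketch (not Oak's actual `PropertyIndexLookup`): the definitions are scanned once per property, and all six call sites then hit the cache.

```python
# Sketch of memoizing index-definition lookup per property; names are
# illustrative stand-ins for Oak's real classes.

class PropertyIndexLookup:
    def __init__(self, definitions):
        self.definitions = definitions  # property name -> index definition
        self.reads = 0                  # counts scans of all definitions
        self._cache = {}

    def find_definition(self, prop):
        if prop not in self._cache:
            self.reads += 1             # one scan, however often it's asked
            self._cache[prop] = self.definitions.get(prop)
        return self._cache[prop]
```

The cache would need to be scoped to a single query (or invalidated on index changes), but within one execution it collapses the six lookups into one.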





[jira] [Updated] (OAK-2575) improve documentation for DocumentStore.invalidateCache

2015-03-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2575:
--
Fix Version/s: 1.2

 improve documentation for DocumentStore.invalidateCache
 ---

 Key: OAK-2575
 URL: https://issues.apache.org/jira/browse/OAK-2575
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: mongomk
Affects Versions: 1.2
Reporter: Julian Reschke
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.2


 The JavaDoc currently just says invalidate the document cache; it would be 
 good to clarify 
 - that it's ok to keep cache entries as long as they get re-validated before 
 being used
 and 
 - whether it's ok to do the re-validation lazily (as opposed to immediately, 
 as done by MongoDS) 





[jira] [Updated] (OAK-1550) Incorrect handling of addExistingNode conflict in NodeStore

2015-03-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-1550:
--
Fix Version/s: (was: 1.2)
   1.4

 Incorrect handling of addExistingNode conflict in NodeStore
 ---

 Key: OAK-1550
 URL: https://issues.apache.org/jira/browse/OAK-1550
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Michael Dürig
Assignee: Marcel Reutegger
 Fix For: 1.4


 {{MicroKernel.rebase}} says: addExistingNode: a node has been added that is 
 different from a node of the same name that has been added to the trunk.
 However, the {{NodeStore}} implementation
 # throws a {{CommitFailedException}} itself instead of annotating the 
 conflict,
 # also treats equal children with the same name as a conflict. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1575) DocumentNS: Implement refined conflict resolution for addExistingNode conflicts

2015-03-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-1575:
--
Fix Version/s: (was: 1.2)
   1.4

 DocumentNS: Implement refined conflict resolution for addExistingNode 
 conflicts
 ---

 Key: OAK-1575
 URL: https://issues.apache.org/jira/browse/OAK-1575
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: mongomk
Reporter: Michael Dürig
Assignee: Marcel Reutegger
 Fix For: 1.4


 Implement refined conflict resolution for addExistingNode conflicts as 
 defined in the parent issue for the document NS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1974) Fail fast on branch conflict

2015-03-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-1974:
--
Fix Version/s: 1.2

 Fail fast on branch conflict
 

 Key: OAK-1974
 URL: https://issues.apache.org/jira/browse/OAK-1974
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.2


 The current MongoMK implementation performs retries when it runs into merge 
 conflicts caused by collisions. It may be possible to resolve a conflict by 
 resetting the branch to the state it was in before the merge and re-running 
 the commit hooks. This helps if the conflict was introduced by a commit 
 hook. At the moment the retries also happen when the conflict was 
 introduced before the merge. In this case a retry is useless and the commit 
 should fail fast.
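The proposed rule can be sketched roughly as follows; the enum and method names are hypothetical, not MongoMK internals:

```java
// Hypothetical sketch of the fail-fast rule: retry a merge only when the
// conflict was introduced by a commit hook during the merge attempt
// itself; a conflict that existed before the merge is deterministic and
// cannot be fixed by retrying.
public class MergeRetryPolicy {
    public enum ConflictOrigin { COMMIT_HOOK, BEFORE_MERGE }

    private final int maxRetries;

    public MergeRetryPolicy(int maxRetries) {
        this.maxRetries = maxRetries;
    }

    // Returns how many retries are worth attempting for a conflict.
    public int retriesFor(ConflictOrigin origin) {
        // Pre-existing conflict: fail fast, no retries.
        return origin == ConflictOrigin.BEFORE_MERGE ? 0 : maxRetries;
    }
}
```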



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2589) Provide progress indication when reindexing is being performed

2015-03-09 Thread Michael Marth (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14352727#comment-14352727
 ] 

Michael Marth commented on OAK-2589:


{quote}
Later we can expose those stats via JMX also 
{quote}

+1 to that!
Higher-level tooling could then build a UI on top of the progress information.

 Provide progress indication when reindexing is being performed
 -

 Key: OAK-2589
 URL: https://issues.apache.org/jira/browse/OAK-2589
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.1.8, 1.0.13

 Attachments: OAK-2589.patch


 When the reindex property of an oak:QueryIndexDefinition node is set to 
 true, a message is logged which says:
 29.09.2014 15:04:08.904 *INFO* [pool-9-thread-3] 
 org.apache.jackrabbit.oak.plugins.index.IndexUpdate Reindexing would be 
 performed for following indexes [/oak:index/searchFacets]
 However, after that there is no indication of progress for this reindexing. 
 The only way to determine whether the reindexing is progressing is to check 
 back and see that the reindex property on the oak:QueryIndexDefinition node 
 has been set back to false.
 If something goes wrong it is hard to know whether the reindexing is 
 happening at all. 
 Some logging to indicate that the reindexing is running and how complete it 
 is would be of great help while troubleshooting indexing problems.
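The kind of progress indication asked for here could look roughly like this sketch, where the indexer logs every N indexed nodes; the class and method names are illustrative, not the actual Oak callback API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: count traversed nodes during reindexing and emit a
// log line every `interval` nodes, so a long-running reindex is visibly
// making progress.
public class ReindexProgressLogger {
    private final int interval;
    private long indexed;
    private final List<String> logLines = new ArrayList<>();

    public ReindexProgressLogger(int interval) {
        this.interval = interval;
    }

    // Called once per node added to the index.
    public void indexUpdate() {
        indexed++;
        if (indexed % interval == 0) {
            logLines.add("Reindexing in progress, " + indexed
                    + " nodes indexed so far");
        }
    }

    public long getIndexedCount() { return indexed; }
    public List<String> getLogLines() { return logLines; }
}
```

The same counters could later be exposed via JMX, as suggested in the quoted comment.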



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2557) VersionGC uses way too much memory if there is a large pile of garbage

2015-03-09 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14352738#comment-14352738
 ] 

Chetan Mehrotra commented on OAK-2557:
--

bq. Could it be a side-effect of too many nodes being indexed due to OAK-2559 ?

Nope. The paths in the list were for some property index. Probably the GC was 
disabled or had not been run for a long time.

OAK-2559 is related to Lucene, which stores the content in binary files. Worst 
case I would expect 10k files that need to be deleted, but not 12M. 

 VersionGC uses way too much memory if there is a large pile of garbage
 --

 Key: OAK-2557
 URL: https://issues.apache.org/jira/browse/OAK-2557
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.11
Reporter: Stefan Egli
Priority: Blocker
 Fix For: 1.2, 1.0.13


 It has been noticed on a system where the revision GC 
 (VersionGarbageCollector of mongomk) did not run for a few days (so as not 
 to interfere with some tests/large bulk operations) that such a large pile 
 of garbage had accumulated that the for loop in
 {code}
 VersionGarbageCollector.collectDeletedDocuments
 {code}
 created such a large list of NodeDocuments to delete (docIdsToDelete) that 
 it used up too much memory, causing the JVM's GC to constantly spin in 
 Full-GCs.
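A minimal sketch of the bounded-memory alternative the report points toward, assuming the deletion candidates can be streamed from a cursor rather than materialised up front; all names are illustrative, not the VersionGarbageCollector internals:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch: instead of collecting every deletable document id
// into one in-memory list, drain the candidate iterator in fixed-size
// batches and delete each batch before reading the next, so memory use is
// bounded by the batch size regardless of how much garbage piled up.
public class BatchedDeleter {
    private final int batchSize;
    private int deleteCalls;
    private int deletedDocs;

    public BatchedDeleter(int batchSize) {
        this.batchSize = batchSize;
    }

    public void collectDeletedDocuments(Iterator<String> candidates) {
        List<String> batch = new ArrayList<>(batchSize);
        while (candidates.hasNext()) {
            batch.add(candidates.next());
            if (batch.size() == batchSize) {
                deleteBatch(batch);   // memory stays bounded by batchSize
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            deleteBatch(batch);       // flush the final partial batch
        }
    }

    private void deleteBatch(List<String> ids) {
        // Stand-in for a bulk remove() against the document store.
        deleteCalls++;
        deletedDocs += ids.size();
    }

    public int getDeleteCalls() { return deleteCalls; }
    public int getDeletedDocs() { return deletedDocs; }
}
```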



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2586) Support including and excluding paths during upgrade

2015-03-09 Thread Michael Dürig (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14352717#comment-14352717
 ] 

Michael Dürig commented on OAK-2586:


It might not be too bad, as {{JackrabbitNodeState}} doesn't have an optimised 
{{equals}} implementation either. But we should definitely benchmark it 
against the current implementation with a large repository. 

 Support including and excluding paths during upgrade
 

 Key: OAK-2586
 URL: https://issues.apache.org/jira/browse/OAK-2586
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: upgrade
Affects Versions: 1.1.6
Reporter: Julian Sedding
  Labels: patch
 Attachments: OAK-2586.patch


 When upgrading a Jackrabbit 2 repository to an Oak repository it can be 
 desirable to constrain which paths/sub-trees are copied from the source 
 repository, not least because this can (drastically) reduce the amount of 
 content that needs to be traversed, copied and indexed.
 I suggest allowing the content visible from the source repository to be 
 filtered by wrapping the JackrabbitNodeState instance and hiding selected 
 paths.
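The include/exclude idea can be sketched as a small path predicate; the semantics below (excludes win over includes, an empty include set means everything is visible) are an assumption for illustration, not the actual patch:

```java
import java.util.Set;

// Hypothetical sketch: decides, per path, whether source repository
// content should be visible to the upgrade copy. A wrapper around the
// source node state would consult this before exposing a child node.
public class PathFilter {
    private final Set<String> includes;
    private final Set<String> excludes;

    public PathFilter(Set<String> includes, Set<String> excludes) {
        this.includes = includes;
        this.excludes = excludes;
    }

    public boolean isVisible(String path) {
        if (matches(excludes, path)) {
            return false;                       // excludes always win
        }
        return includes.isEmpty() || matches(includes, path);
    }

    // A path matches when it equals a root or lives underneath it.
    private static boolean matches(Set<String> roots, String path) {
        for (String root : roots) {
            if (path.equals(root) || path.startsWith(root + "/")) {
                return true;
            }
        }
        return false;
    }
}
```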



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1838) NodeStore API: divergence in contract and implementations

2015-03-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-1838:
--
Fix Version/s: 1.4

 NodeStore API: divergence in contract and implementations
 -

 Key: OAK-1838
 URL: https://issues.apache.org/jira/browse/OAK-1838
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Michael Dürig
Assignee: Marcel Reutegger
 Fix For: 1.4


 Currently there is a gap between what the API mandates and what the document 
 and kernel node stores implement. This hinders further evolution of the Oak 
 stack, as implementers must always be aware of implicit design requirements. 
 We should look into ways to close this gap by bringing implementation and 
 specification closer to each other. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1185) DocumentNodeStore: annotate conflicts

2015-03-09 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-1185:
--
Fix Version/s: (was: 1.2)
   1.4

 DocumentNodeStore: annotate conflicts
 -

 Key: OAK-1185
 URL: https://issues.apache.org/jira/browse/OAK-1185
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.4


 Conflict handling is mostly implemented in MongoMK but it does not yet 
 annotate conflicts on rebase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-2593) Release Oak 1.1.8

2015-03-09 Thread Davide Giannella (JIRA)
Davide Giannella created OAK-2593:
-

 Summary: Release Oak 1.1.8
 Key: OAK-2593
 URL: https://issues.apache.org/jira/browse/OAK-2593
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Davide Giannella
Assignee: Davide Giannella


- release oak
- update website
- update javadoc




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2591) Invoke indexUpdate only when new Documents are added in LuceneIndexEditor

2015-03-09 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14352748#comment-14352748
 ] 

Alex Parvulescu commented on OAK-2591:
--

+1 looks good to me

 Invoke indexUpdate only when new Documents are added in LuceneIndexEditor
 

 Key: OAK-2591
 URL: https://issues.apache.org/jira/browse/OAK-2591
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.1.8, 1.0.13

 Attachments: OAK-2591.patch


 {{LuceneIndexEditor}} currently invokes {{indexUpdate}} whenever it adds 
 a new field to a Document. To reduce the noise it would be better to invoke 
 it whenever a new Document is added to the index, in line with other index 
 implementations.
 Such an implementation would allow the {{IndexUpdateCallback}} to more 
 accurately track the progress an indexer is making in terms of the number 
 of nodes indexed.
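A tiny sketch of the behavioural change described above, with hypothetical names: the callback fires once per document added instead of once per field, so its count tracks nodes indexed.

```java
import java.util.List;

// Hypothetical sketch: the old behaviour would invoke the callback once
// per field (fields.size() times per document); the proposed behaviour
// invokes it exactly once per document added to the index.
public class PerDocumentCallbackEditor {
    private int callbackInvocations;

    public void addDocument(List<String> fields) {
        // ... add all fields to the Lucene document here ...
        callbackInvocations++;   // single indexUpdate() per document
    }

    public int getCallbackInvocations() { return callbackInvocations; }
}
```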



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)