[jira] [Commented] (OAK-8440) Build Jackrabbit Oak #2235 failed

2019-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872812#comment-16872812
 ] 

Hudson commented on OAK-8440:
-

Build is still failing.
Failed run: [Jackrabbit Oak 
#2237|https://builds.apache.org/job/Jackrabbit%20Oak/2237/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/2237/console]

> Build Jackrabbit Oak #2235 failed
> -
>
> Key: OAK-8440
> URL: https://issues.apache.org/jira/browse/OAK-8440
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #2235 has failed.
> First failed run: [Jackrabbit Oak 
> #2235|https://builds.apache.org/job/Jackrabbit%20Oak/2235/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/2235/console]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (OAK-8437) direct children, exact, and parent path restrictions don't work when path transformation takes place

2019-06-25 Thread Vikas Saurabh (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh resolved OAK-8437.

   Resolution: Fixed
Fix Version/s: 1.16.0

Fixed on trunk at [r1862093|https://svn.apache.org/r1862093].

> direct children, exact, and parent path restrictions don't work when path 
> transformation takes place
> 
>
> Key: OAK-8437
> URL: https://issues.apache.org/jira/browse/OAK-8437
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Major
> Fix For: 1.16.0
>
>
> An index such as:
> {noformat}
> + /oak:index/ntbaseIdx
>- evaluatePathRestrictions = true
>+ indexRules/nt:base/properties
>   + prop
>  - propertyIndex = true
> {noformat}
> attempts to answer a query such as:
> {noformat}
> /jcr:root/path/element(*, some:Type)[par/@prop='bar']
> {noformat}
> but currently this query is planned by this index as
> {noformat}
> prop:bar :depth:[2 TO 2] :ancestors:/path
> {noformat}
> which won't return the intended result, because the depth constraint should've 
> been modified to {{:depth:\[3 TO 3]}}.
> -Do note that even {{:ancestors}} constraint is wrong (too lenient).-
> So, the correct plan should've looked like:
> {noformat}
> prop:bar :depth:[3 TO 3] :ancestors:/path
> {noformat}
> Similar issues exist for exact and parent path restrictions (these don't 
> require evaluatePathRestrictions either).
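The depth adjustment can be sketched as follows. This is a minimal illustration (the helper name and path handling are assumptions, not Oak's actual planner code): each non-property step in a relative property restriction like {{par/@prop}} pushes the indexed document one level below the node the query selects.

```python
def transformed_depth(query_parent_path, relative_property_path):
    """Depth the :depth constraint should target when the property
    restriction uses a relative path (hypothetical helper, for
    illustration only -- not Oak's actual planner code)."""
    # depth of the node selected by /jcr:root<query_parent_path>/element(*),
    # e.g. "/path" -> children of /path sit at depth 2
    selected_depth = len([s for s in query_parent_path.split("/") if s]) + 1
    # every non-property step ("par" in "par/@prop") adds one level
    extra = len([s for s in relative_property_path.split("/")
                 if s and not s.startswith("@")])
    return selected_depth + extra

# /jcr:root/path/element(*, some:Type)[par/@prop='bar']
# -> the plan should use :depth:[3 TO 3], not :depth:[2 TO 2]
assert transformed_depth("/path", "par/@prop") == 3
```

With a direct property restriction ({{@prop}}) no shift is needed, which matches the plan the index produced before the fix.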





[jira] [Updated] (OAK-8437) direct children, exact, and parent path restrictions don't work when path transformation takes place

2019-06-25 Thread Vikas Saurabh (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-8437:
---
Summary: direct children, exact, and parent path restrictions don't work 
when path transformation takes place  (was: direct child, exact, and parent 
path restrictions don't work when path transformation takes place)

> direct children, exact, and parent path restrictions don't work when path 
> transformation takes place
> 
>
> Key: OAK-8437
> URL: https://issues.apache.org/jira/browse/OAK-8437
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Major
>
> An index such as:
> {noformat}
> + /oak:index/ntbaseIdx
>- evaluatePathRestrictions = true
>+ indexRules/nt:base/properties
>   + prop
>  - propertyIndex = true
> {noformat}
> attempts to answer a query such as:
> {noformat}
> /jcr:root/path/element(*, some:Type)[par/@prop='bar']
> {noformat}
> but currently this query is planned by this index as
> {noformat}
> prop:bar :depth:[2 TO 2] :ancestors:/path
> {noformat}
> which won't return the intended result, because the depth constraint should've 
> been modified to {{:depth:\[3 TO 3]}}.
> -Do note that even {{:ancestors}} constraint is wrong (too lenient).-
> So, the correct plan should've looked like:
> {noformat}
> prop:bar :depth:[3 TO 3] :ancestors:/path
> {noformat}
> Similar issues exist for exact and parent path restrictions (these don't 
> require evaluatePathRestrictions either).





[jira] [Updated] (OAK-8437) direct child, exact, and parent path restrictions don't work when path transformation takes place

2019-06-25 Thread Vikas Saurabh (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-8437:
---
Description: 
An index such as:
{noformat}
+ /oak:index/ntbaseIdx
   - evaluatePathRestrictions = true
   + indexRules/nt:base/properties
  + prop
 - propertyIndex = true
{noformat}

attempts to answer a query such as:
{noformat}
/jcr:root/path/element(*, some:Type)[par/@prop='bar']
{noformat}

but currently this query is planned by this index as
{noformat}
prop:bar :depth:[2 TO 2] :ancestors:/path
{noformat}
which won't return the intended result, because the depth constraint should've 
been modified to {{:depth:\[3 TO 3]}}.
-Do note that even {{:ancestors}} constraint is wrong (too lenient).-
So, the correct plan should've looked like:
{noformat}
prop:bar :depth:[3 TO 3] :ancestors:/path
{noformat}

Similar issues exist for exact and parent path restrictions (these don't 
require evaluatePathRestrictions either).

  was:
An index such as:
{noformat}
+ /oak:index/ntbaseIdx
   - evaluatePathRestrictions = true
   + indexRules/nt:base/properties
  + prop
 - propertyIndex = true
{noformat}

attempts to answer a query such as:
{noformat}
/jcr:root/path/element(*, some:Type)[par/@prop='bar']
{noformat}

but currently this query is planned by this index as
{noformat}
prop:bar :depth:[2 TO 2] :ancestors:/path
{noformat}
which won't return the intended result, because the depth constraint should've 
been modified to {{:depth:\[3 TO 3]}}.
Do note that even {{:ancestors}} constraint is wrong (too lenient).
So, the correct plan should've looked like:
{noformat}
prop:bar :depth:[3 TO 3] :ancestors:/path/prop
{noformat}


> direct child, exact, and parent path restrictions don't work when path 
> transformation takes place
> -
>
> Key: OAK-8437
> URL: https://issues.apache.org/jira/browse/OAK-8437
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Major
>
> An index such as:
> {noformat}
> + /oak:index/ntbaseIdx
>- evaluatePathRestrictions = true
>+ indexRules/nt:base/properties
>   + prop
>  - propertyIndex = true
> {noformat}
> attempts to answer a query such as:
> {noformat}
> /jcr:root/path/element(*, some:Type)[par/@prop='bar']
> {noformat}
> but currently this query is planned by this index as
> {noformat}
> prop:bar :depth:[2 TO 2] :ancestors:/path
> {noformat}
> which won't return the intended result, because the depth constraint should've 
> been modified to {{:depth:\[3 TO 3]}}.
> -Do note that even {{:ancestors}} constraint is wrong (too lenient).-
> So, the correct plan should've looked like:
> {noformat}
> prop:bar :depth:[3 TO 3] :ancestors:/path
> {noformat}
> Similar issues exist for exact and parent path restrictions (these don't 
> require evaluatePathRestrictions either).





[jira] [Updated] (OAK-8437) direct child, exact, and parent path restrictions don't work when path transformation takes place

2019-06-25 Thread Vikas Saurabh (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-8437:
---
Summary: direct child, exact, and parent path restrictions don't work when 
path transformation takes place  (was: depth and ancestor constraints are not 
adjusted when path transformation takes place)

> direct child, exact, and parent path restrictions don't work when path 
> transformation takes place
> -
>
> Key: OAK-8437
> URL: https://issues.apache.org/jira/browse/OAK-8437
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Major
>
> An index such as:
> {noformat}
> + /oak:index/ntbaseIdx
>- evaluatePathRestrictions = true
>+ indexRules/nt:base/properties
>   + prop
>  - propertyIndex = true
> {noformat}
> attempts to answer a query such as:
> {noformat}
> /jcr:root/path/element(*, some:Type)[par/@prop='bar']
> {noformat}
> but currently this query is planned by this index as
> {noformat}
> prop:bar :depth:[2 TO 2] :ancestors:/path
> {noformat}
> which won't return the intended result, because the depth constraint should've 
> been modified to {{:depth:\[3 TO 3]}}.
> Do note that even {{:ancestors}} constraint is wrong (too lenient).
> So, the correct plan should've looked like:
> {noformat}
> prop:bar :depth:[3 TO 3] :ancestors:/path/prop
> {noformat}





[jira] [Commented] (OAK-8440) Build Jackrabbit Oak #2235 failed

2019-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872480#comment-16872480
 ] 

Hudson commented on OAK-8440:
-

Build is still failing.
Failed run: [Jackrabbit Oak 
#2236|https://builds.apache.org/job/Jackrabbit%20Oak/2236/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/2236/console]

> Build Jackrabbit Oak #2235 failed
> -
>
> Key: OAK-8440
> URL: https://issues.apache.org/jira/browse/OAK-8440
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #2235 has failed.
> First failed run: [Jackrabbit Oak 
> #2235|https://builds.apache.org/job/Jackrabbit%20Oak/2235/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/2235/console]





[jira] [Commented] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872432#comment-16872432
 ] 

Stefan Egli commented on OAK-8351:
--

* leaving ticket open for a couple of days for potential backport feedback from 
the list

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Fix For: 1.16.0
>
> Attachments: OAK-8351-1.10.patch, OAK-8351-1.8.patch, OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>     "_sdType" : { "$in" : [ 50, 60, 70 ] },
>     "$or" : [
>         { "_sdType" : 50 },
>         { "_sdType" : 60 },
>         {
>             "_sdType" : 70,
>             "$or" : [
>                 { "_id" : /.*-1\/0/ },
>                 { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>             ],
>             "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>         },
>         {
>             "_sdType" : 70,
>             "$or" : [
>                 { "_id" : /.*-2\/0/
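For reference, the {{planCacheSetFilter}} workaround mentioned in the description pins the indexes mongodb may consider for a given query shape. A sketch of the command document follows; the collection name "nodes", the simplified query shape, and the index key pattern are illustrative assumptions, not taken from the affected setup.

```python
# Index filter restricting the planner to the _sdType index for this query
# shape, so it cannot pick a plan that walks the _id_ index.
# Collection name "nodes" and the key pattern are illustrative assumptions.
plan_filter_cmd = {
    "planCacheSetFilter": "nodes",
    "query": {"_sdType": {"$in": [50, 60, 70]}},
    "indexes": [{"_sdType": 1}],
}
# With pymongo this would be submitted as: db.command(plan_filter_cmd)
```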

[jira] [Updated] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-8351:
-
Attachment: OAK-8351-1.8.patch

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Fix For: 1.16.0
>
> Attachments: OAK-8351-1.10.patch, OAK-8351-1.8.patch, OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>     "_sdType" : { "$in" : [ 50, 60, 70 ] },
>     "$or" : [
>         { "_sdType" : 50 },
>         { "_sdType" : 60 },
>         {
>             "_sdType" : 70,
>             "$or" : [
>                 { "_id" : /.*-1\/0/ },
>                 { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>             ],
>             "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>         },
>         {
>             "_sdType" : 70,
>             "$or" : [
>                 { "_id" : /.*-2\/0/ },

[jira] [Commented] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872430#comment-16872430
 ] 

Stefan Egli commented on OAK-8351:
--

* ported to 1.8 in [this 
branch|https://github.com/stefan-egli/jackrabbit-oak/tree/1.8] with the svn 
patch being attached as  [^OAK-8351-1.8.patch] 

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Fix For: 1.16.0
>
> Attachments: OAK-8351-1.10.patch, OAK-8351-1.8.patch, OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>     "_sdType" : { "$in" : [ 50, 60, 70 ] },
>     "$or" : [
>         { "_sdType" : 50 },
>         { "_sdType" : 60 },
>         {
>             "_sdType" : 70,
>             "$or" : [
>                 { "_id" : /.*-1\/0/ },
>                 { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>             ],
>             "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>         },
>         {
>             "_sdType" : 70,
>             "$or" : [

[jira] [Resolved] (OAK-8408) UserImporter must not trigger creation of rep:pwd node unless included in xml (initial-pw-change)

2019-06-25 Thread angela (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-8408.
-
   Resolution: Fixed
Fix Version/s: 1.16.0

Committed revision 1862074 and mentioned the changed behavior upon XML import 
in the corresponding section of the documentation.

> UserImporter must not trigger creation of rep:pwd node unless included in xml 
> (initial-pw-change)
> -
>
> Key: OAK-8408
> URL: https://issues.apache.org/jira/browse/OAK-8408
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, security
>Reporter: angela
>Assignee: angela
>Priority: Major
> Fix For: 1.16.0
>
> Attachments: OAK-8408-tests.patch, OAK-8408.patch
>
>
> when xml-importing an existing user (i.e. {{Tree}} doesn't have status NEW 
> upon import), calling {{UserManagerImpl.setPassword}} will force the creation 
> of the {{rep:pwd}} node and the {{rep:passwordLastModified}} property contained 
> therein _if_ the initial-password-change feature is enabled.
> imo the {{rep:pwd}} node (and any properties contained therein) must not be 
> auto-created but should only be imported if contained in the XML. 
> proposed fix: {{UserManagerImpl.setPassword}} already contains special 
> treatment for the password hashing triggered upon xml import -> rename that 
> flag and respect it for the handling of the pw last modified as well.
> [~stillalex], wdyt?





[jira] [Created] (OAK-8440) Build Jackrabbit Oak #2235 failed

2019-06-25 Thread Hudson (JIRA)
Hudson created OAK-8440:
---

 Summary: Build Jackrabbit Oak #2235 failed
 Key: OAK-8440
 URL: https://issues.apache.org/jira/browse/OAK-8440
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: continuous integration
Reporter: Hudson


No description is provided

The build Jackrabbit Oak #2235 has failed.
First failed run: [Jackrabbit Oak 
#2235|https://builds.apache.org/job/Jackrabbit%20Oak/2235/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/2235/console]





[jira] [Updated] (OAK-7286) DocumentNodeStoreBranch handling of non-recoverable DocumentStoreExceptions

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7286:

Description: 
In {{DocumentNodeStoreBranch.merge()}}, any {{DocumentStoreException}} is 
mapped to a {{CommitFailedException}} of type "MERGE", which leads to the 
operation being retried and a non-helpful exception being generated.

The effect can be observed by enabling a test in {{ValidNamesTest}}:

{noformat}
--- oak-jcr/src/test/java/org/apache/jackrabbit/oak/jcr/ValidNamesTest.java 
(Revision 1825371)
+++ oak-jcr/src/test/java/org/apache/jackrabbit/oak/jcr/ValidNamesTest.java 
(Arbeitskopie)
@@ -300,7 +300,6 @@
 public void testUnpairedHighSurrogateEnd() {
 // see OAK-5506
 
org.junit.Assume.assumeFalse(super.fixture.toString().toLowerCase().contains("segment"));
-
org.junit.Assume.assumeFalse(super.fixture.toString().toLowerCase().contains("rdb"));
 nameTest("foo" + SURROGATE_PAIR[0]);
 }

@@ -336,6 +335,7 @@
 assertEquals("paths should be equal", p.getPath(), n.getPath());
 return p;
 } catch (RepositoryException ex) {
+ex.printStackTrace();
 fail(ex.getMessage());
 return null;
 }

{noformat}

The underlying issue is that {{RDBDocumentStore}} is throwing a 
{{DocumentStoreException}} due to the invalid ID, and repeating the call will 
not help.

We probably should have a way to distinguish between different types of 
problems.

I hacked {{DocumentNodeStoreBranch}} like this:

{noformat}
--- 
oak-store-document/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentNodeStoreBranch.java
(Revision 1825371)
+++ 
oak-store-document/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentNodeStoreBranch.java
(Arbeitskopie)
@@ -520,8 +520,12 @@
 } catch (ConflictException e) {
 throw e.asCommitFailedException();
 } catch(DocumentStoreException e) {
-throw new CommitFailedException(MERGE, 1,
-"Failed to merge changes to the underlying store", 
e);
+if (e.getMessage().contains("Invalid ID")) {
+throw new CommitFailedException(OAK, 123,
+"Failed to store changes in the underlying 
store: " + e.getMessage(), e);
+} else {
+throw new CommitFailedException(MERGE, 1, "Failed to 
merge changes to the underlying store", e);
+}
 } catch (Exception e) {
 throw new CommitFailedException(OAK, 1,
 "Failed to merge changes to the underlying store", 
e);

{noformat}

...which causes the exception to surface immediately (see 
https://issues.apache.org/jira/secure/attachment/12912117/OAK-7286.diff).

(cc  [~mreutegg])



  was:
In {{DocumentNodeStoreBranch.merge()}}, any {{DocumentStoreException}} is 
mapped to a {{CommitFailedException}} of type "MERGE", which leads to the 
operation being retried and a non-helpful exception being generated.

The effect can be observed by enabling a test in {{ValidNamesTest}}:

{noformat}
--- oak-jcr/src/test/java/org/apache/jackrabbit/oak/jcr/ValidNamesTest.java 
(Revision 1825371)
+++ oak-jcr/src/test/java/org/apache/jackrabbit/oak/jcr/ValidNamesTest.java 
(Arbeitskopie)
@@ -300,7 +300,6 @@
 public void testUnpairedHighSurrogateEnd() {
 // see OAK-5506
 
org.junit.Assume.assumeFalse(super.fixture.toString().toLowerCase().contains("segment"));
-
org.junit.Assume.assumeFalse(super.fixture.toString().toLowerCase().contains("rdb"));
 nameTest("foo" + SURROGATE_PAIR[0]);
 }

@@ -336,6 +335,7 @@
 assertEquals("paths should be equal", p.getPath(), n.getPath());
 return p;
 } catch (RepositoryException ex) {
+ex.printStackTrace();
 fail(ex.getMessage());
 return null;
 }

{noformat}

The underlying issue is that {{RDBDocumentStore}} is throwing a 
{{DocumentStoreException}} due to the invalid ID, and repeating the call will 
not help.

We probably should have a way to distinguish between different types of 
problems.

I hacked {{DocumentNodeStoreBranch}} like this:

{noformat}
--- 
oak-store-document/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentNodeStoreBranch.java
(Revision 1825371)
+++ 
oak-store-document/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentNodeStoreBranch.java
(Arbeitskopie)
@@ -520,8 +520,12 @@
 } catch (ConflictException e) {
 throw e.asCommitFailedException();
 } catch(DocumentStoreException e) {
-throw new 

[jira] [Updated] (OAK-8226) Documentation for principal-based authorization and AggregationFilter

2019-06-25 Thread angela (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-8226:

Summary: Documentation for principal-based authorization and 
AggregationFilter  (was: Documentation)

> Documentation for principal-based authorization and AggregationFilter
> -
>
> Key: OAK-8226
> URL: https://issues.apache.org/jira/browse/OAK-8226
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: doc
>Reporter: angela
>Assignee: angela
>Priority: Major
>






[jira] [Commented] (OAK-8226) Documentation

2019-06-25 Thread angela (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872356#comment-16872356
 ] 

angela commented on OAK-8226:
-

revision 1862063.


> Documentation
> -
>
> Key: OAK-8226
> URL: https://issues.apache.org/jira/browse/OAK-8226
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: doc
>Reporter: angela
>Assignee: angela
>Priority: Major
>






[jira] [Commented] (OAK-8436) Build Jackrabbit Oak #2229 failed

2019-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872355#comment-16872355
 ] 

Hudson commented on OAK-8436:
-

Previously failing build is now OK.
 Passed run: [Jackrabbit Oak 
#2234|https://builds.apache.org/job/Jackrabbit%20Oak/2234/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/2234/console]

> Build Jackrabbit Oak #2229 failed
> -
>
> Key: OAK-8436
> URL: https://issues.apache.org/jira/browse/OAK-8436
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #2229 has failed.
> First failed run: [Jackrabbit Oak 
> #2229|https://builds.apache.org/job/Jackrabbit%20Oak/2229/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/2229/console]





[jira] [Comment Edited] (OAK-7213) Avoid call for child node when bundle contains all children

2019-06-25 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872293#comment-16872293
 ] 

Julian Reschke edited comment on OAK-7213 at 6/25/19 1:06 PM:
--

trunk: (1.9.0) [r1822802|http://svn.apache.org/r1822802] 
[r1822638|http://svn.apache.org/r1822638] 
[r1822496|http://svn.apache.org/r1822496]
1.8: [r1862061|http://svn.apache.org/r1862061]



was (Author: reschke):
trunk: (1.9.0) [r1822802|http://svn.apache.org/r1822802] 
[r1822638|http://svn.apache.org/r1822638] 
[r1822496|http://svn.apache.org/r1822496]

> Avoid call for child node when bundle contains all children
> ---
>
> Key: OAK-7213
> URL: https://issues.apache.org/jira/browse/OAK-7213
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: bundling, candidate_oak_1_6
> Fix For: 1.9.0, 1.10.0, 1.8.14
>
> Attachments: OAK-7213.patch
>
>
> When nodes are bundled in a document, the DocumentNodeStore keeps track of 
> whether all children are included in a document. The presence of the hidden 
> {{:doc-has-child-non-bundled}} property indicates there are non-bundled child 
> nodes. For the case when a document contains all children in the bundle, the 
> DocumentNodeStore still does a find call on the DocumentStore when asked for 
> an unknown child node.
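The improvement can be sketched like this (the document layout and names here are hypothetical, purely to illustrate the decision; the real DocumentNodeStore logic differs):

```python
def must_query_document_store(doc, child_name):
    """Decide whether resolving child_name requires a DocumentStore
    find() call (illustrative sketch, not Oak's implementation)."""
    if child_name in doc["bundled_children"]:
        # child is served directly from the bundled document
        return False
    if ":doc-has-child-non-bundled" in doc:
        # some children were not bundled: the unknown name may still
        # exist as a separate document, so a lookup is required
        return True
    # no marker: the bundle holds all children, so an unknown child
    # cannot exist and the find() call can be skipped
    return False

doc = {"bundled_children": {"jcr:content"}}
assert must_query_document_store(doc, "jcr:content") is False
assert must_query_document_store(doc, "anything-else") is False
```

The absence of the hidden marker is what makes the shortcut safe: it guarantees the bundle is a complete view of the node's children.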





[jira] [Created] (OAK-8439) Broken links to JSR Specification/Javadoc

2019-06-25 Thread angela (JIRA)
angela created OAK-8439:
---

 Summary: Broken links to JSR Specification/Javadoc
 Key: OAK-8439
 URL: https://issues.apache.org/jira/browse/OAK-8439
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: doc
Reporter: angela


It seems that all links to JSR 170/283 specification and javadoc are broken. 
E.g. reference to UUIDImportBehavior at 
http://jackrabbit.apache.org/oak/docs/differences.html or to property types at 
http://jackrabbit.apache.org/oak/docs/query/lucene.html or to security-related 
items like e.g. JCR API references at 
http://jackrabbit.apache.org/oak/docs/security/authentication.html

[~mreutegg] fyi





[jira] [Updated] (OAK-7213) Avoid call for child node when bundle contains all children

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7213:

Labels: bundling  (was: bundling candidate_oak_1_8)

> Avoid call for child node when bundle contains all children
> ---
>
> Key: OAK-7213
> URL: https://issues.apache.org/jira/browse/OAK-7213
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: bundling
> Fix For: 1.9.0, 1.10.0, 1.8.14
>
> Attachments: OAK-7213.patch
>
>
> When nodes are bundled in a document, the DocumentNodeStore keeps track of 
> whether all children are included in a document. The presence of the hidden 
> {{:doc-has-child-non-bundled}} property indicates there are non-bundled child 
> nodes. For the case when a document contains all children in the bundle, the 
> DocumentNodeStore still does a find call on the DocumentStore when asked for 
> an unknown child node.





[jira] [Updated] (OAK-7213) Avoid call for child node when bundle contains all children

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7213:

Labels: bundling candidate_oak_1_6  (was: bundling)

> Avoid call for child node when bundle contains all children
> ---
>
> Key: OAK-7213
> URL: https://issues.apache.org/jira/browse/OAK-7213
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: bundling, candidate_oak_1_6
> Fix For: 1.9.0, 1.10.0, 1.8.14
>
> Attachments: OAK-7213.patch
>
>
> When nodes are bundled in a document, the DocumentNodeStore keeps track of 
> whether all children are included in a document. The presence of the hidden 
> {{:doc-has-child-non-bundled}} property indicates there are non-bundled child 
> nodes. For the case when a document contains all children in the bundle, the 
> DocumentNodeStore still does a find call on the DocumentStore when asked for 
> an unknown child node.





[jira] [Updated] (OAK-7213) Avoid call for child node when bundle contains all children

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7213:

Fix Version/s: 1.8.14

> Avoid call for child node when bundle contains all children
> ---
>
> Key: OAK-7213
> URL: https://issues.apache.org/jira/browse/OAK-7213
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: bundling, candidate_oak_1_8
> Fix For: 1.9.0, 1.10.0, 1.8.14
>
> Attachments: OAK-7213.patch
>
>
> When nodes are bundled in a document, the DocumentNodeStore keeps track of 
> whether all children are included in a document. The presence of the hidden 
> {{:doc-has-child-non-bundled}} property indicates there are non-bundled child 
> nodes. For the case when a document contains all children in the bundle, the 
> DocumentNodeStore still does a find call on the DocumentStore when asked for 
> an unknown child node.





[jira] [Updated] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-8351:
-
Fix Version/s: 1.16.0

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Fix For: 1.16.0
>
> Attachments: OAK-8351-1.10.patch, OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better, 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>   "_sdType" : { "$in" : [ 50, 60, 70 ] },
>   "$or" : [
>     { "_sdType" : 50 },
>     { "_sdType" : 60 },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-1\/0/ },
>         { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>       ],
>       "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>     },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-2\/0/ },
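For readers unfamiliar with the id shapes the quoted filter targets, here is a small Python sketch of the {{_id}} regex in the first {{"_sdType" : 70}} branch (the sample document ids below are invented for illustration and are not taken from the issue):

```python
import re

# Invented sample ids: split documents end in "-<height>/0", while ordinary
# documents do not carry that suffix.
split_doc_id = "3:p/content/foo/r16a7c-1/0"
regular_doc_id = "2:/content/foo"

# Python equivalent of the query's /.*-1\/0/ regex on _id.
height_1 = re.compile(r".*-1/0")

assert height_1.search(split_doc_id) is not None   # split doc of height 1 matches
assert height_1.search(regular_doc_id) is None     # ordinary doc does not
```

The second branch ({{_id : /[^-]*/}} combined with a {{_path}} regex) appears to cover documents whose path is too long to be stored directly in {{_id}}.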

[jira] [Commented] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872308#comment-16872308
 ] 

Stefan Egli commented on OAK-8351:
--

* [announced|https://oak.markmail.org/thread/hp3ekgz3cdr54sad] intent to 
backport on the list

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Attachments: OAK-8351-1.10.patch, OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better, 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>   "_sdType" : { "$in" : [ 50, 60, 70 ] },
>   "$or" : [
>     { "_sdType" : 50 },
>     { "_sdType" : 60 },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-1\/0/ },
>         { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>       ],
>       "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>     },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-2\/0/

[jira] [Comment Edited] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872298#comment-16872298
 ] 

Stefan Egli edited comment on OAK-8351 at 6/25/19 12:18 PM:


* ported to 1.10 in [this 
branch|https://github.com/stefan-egli/jackrabbit-oak/tree/1.10] with the svn 
patch being attached as  [^OAK-8351-1.10.patch] 


was (Author: egli):
* ported to 1.10 in [this 
branch|https://github.com/stefan-egli/jackrabbit-oak/tree/1.10]

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Attachments: OAK-8351-1.10.patch, OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better, 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>   "_sdType" : { "$in" : [ 50, 60, 70 ] },
>   "$or" : [
>     { "_sdType" : 50 },
>     { "_sdType" : 60 },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-1\/0/ },
>         { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>       ],
>       "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>     },
>     {
>       "_sdType" : 70,
>       "$or" : [

[jira] [Updated] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-8351:
-
Attachment: OAK-8351-1.10.patch

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Attachments: OAK-8351-1.10.patch, OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better, 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>   "_sdType" : { "$in" : [ 50, 60, 70 ] },
>   "$or" : [
>     { "_sdType" : 50 },
>     { "_sdType" : 60 },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-1\/0/ },
>         { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>       ],
>       "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>     },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-2\/0/ },

[jira] [Commented] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872298#comment-16872298
 ] 

Stefan Egli commented on OAK-8351:
--

* ported to 1.10 in [this 
branch|https://github.com/stefan-egli/jackrabbit-oak/tree/1.10]

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Attachments: OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better, 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>   "_sdType" : { "$in" : [ 50, 60, 70 ] },
>   "$or" : [
>     { "_sdType" : 50 },
>     { "_sdType" : 60 },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-1\/0/ },
>         { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>       ],
>       "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>     },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-2\/0/

[jira] [Updated] (OAK-7213) Avoid call for child node when bundle contains all children

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7213:

Labels: bundling candidate_oak_1_8  (was: bundling)

> Avoid call for child node when bundle contains all children
> ---
>
> Key: OAK-7213
> URL: https://issues.apache.org/jira/browse/OAK-7213
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: bundling, candidate_oak_1_8
> Fix For: 1.9.0, 1.10.0
>
> Attachments: OAK-7213.patch
>
>
> When nodes are bundled in a document, the DocumentNodeStore keeps track of 
> whether all children are included in a document. The presence of the hidden 
> {{:doc-has-child-non-bundled}} property indicates there are non bundled child 
> nodes. For the case when a document contains all children in the bundle, the 
> DocumentNodeStore still does a find call on the DocumentStore when asked for 
> an unknown child node.





[jira] [Commented] (OAK-7213) Avoid call for child node when bundle contains all children

2019-06-25 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872293#comment-16872293
 ] 

Julian Reschke commented on OAK-7213:
-

trunk: (1.9.0) [r1822802|http://svn.apache.org/r1822802] 
[r1822638|http://svn.apache.org/r1822638] 
[r1822496|http://svn.apache.org/r1822496]

> Avoid call for child node when bundle contains all children
> ---
>
> Key: OAK-7213
> URL: https://issues.apache.org/jira/browse/OAK-7213
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Minor
>  Labels: bundling
> Fix For: 1.9.0, 1.10.0
>
> Attachments: OAK-7213.patch
>
>
> When nodes are bundled in a document, the DocumentNodeStore keeps track of 
> whether all children are included in a document. The presence of the hidden 
> {{:doc-has-child-non-bundled}} property indicates there are non bundled child 
> nodes. For the case when a document contains all children in the bundle, the 
> DocumentNodeStore still does a find call on the DocumentStore when asked for 
> an unknown child node.





[jira] [Commented] (OAK-8436) Build Jackrabbit Oak #2229 failed

2019-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872288#comment-16872288
 ] 

Hudson commented on OAK-8436:
-

Previously failing build now is OK.
 Passed run: [Jackrabbit Oak 
#2233|https://builds.apache.org/job/Jackrabbit%20Oak/2233/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/2233/console]

> Build Jackrabbit Oak #2229 failed
> -
>
> Key: OAK-8436
> URL: https://issues.apache.org/jira/browse/OAK-8436
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #2229 has failed.
> First failed run: [Jackrabbit Oak 
> #2229|https://builds.apache.org/job/Jackrabbit%20Oak/2229/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/2229/console]





[jira] [Updated] (OAK-8213) BlobStore instantiated from ReadOnly DocumentNodeStore should never modify persistence

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8213:

Fix Version/s: 1.16.0

> BlobStore instantiated from ReadOnly DocumentNodeStore should never modify 
> persistence
> --
>
> Key: OAK-8213
> URL: https://issues.apache.org/jira/browse/OAK-8213
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: blob, documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.16.0
>
> Attachments: OAK-8213.diff
>
>
> Currently the "readOnly" setting is only passed to the underlying 
> DocumentStore, but not a BlobStore.





[jira] [Commented] (OAK-8252) MongoBlobStore instantiated from ReadOnly DocumentNodeStore should never modify persistence

2019-06-25 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872251#comment-16872251
 ] 

Julian Reschke commented on OAK-8252:
-

trunk: [r1862050|http://svn.apache.org/r1862050]


> MongoBlobStore instantiated from ReadOnly DocumentNodeStore should never 
> modify persistence
> ---
>
> Key: OAK-8252
> URL: https://issues.apache.org/jira/browse/OAK-8252
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob, mongomk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_10
> Fix For: 1.16.0
>
> Attachments: OAK-8252.diff
>
>






[jira] [Resolved] (OAK-8213) BlobStore instantiated from ReadOnly DocumentNodeStore should never modify persistence

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-8213.
-
Resolution: Fixed

> BlobStore instantiated from ReadOnly DocumentNodeStore should never modify 
> persistence
> --
>
> Key: OAK-8213
> URL: https://issues.apache.org/jira/browse/OAK-8213
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: blob, documentmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.16.0
>
> Attachments: OAK-8213.diff
>
>
> Currently the "readOnly" setting is only passed to the underlying 
> DocumentStore, but not a BlobStore.





[jira] [Resolved] (OAK-8252) MongoBlobStore instantiated from ReadOnly DocumentNodeStore should never modify persistence

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-8252.
-
   Resolution: Fixed
Fix Version/s: 1.16.0

> MongoBlobStore instantiated from ReadOnly DocumentNodeStore should never 
> modify persistence
> ---
>
> Key: OAK-8252
> URL: https://issues.apache.org/jira/browse/OAK-8252
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob, mongomk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.16.0
>
> Attachments: OAK-8252.diff
>
>






[jira] [Updated] (OAK-8252) MongoBlobStore instantiated from ReadOnly DocumentNodeStore should never modify persistence

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8252:

Labels: candidate_oak_1_10  (was: )

> MongoBlobStore instantiated from ReadOnly DocumentNodeStore should never 
> modify persistence
> ---
>
> Key: OAK-8252
> URL: https://issues.apache.org/jira/browse/OAK-8252
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob, mongomk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_10
> Fix For: 1.16.0
>
> Attachments: OAK-8252.diff
>
>






[jira] [Assigned] (OAK-8252) MongoBlobStore instantiated from ReadOnly DocumentNodeStore should never modify persistence

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke reassigned OAK-8252:
---

Assignee: Julian Reschke

> MongoBlobStore instantiated from ReadOnly DocumentNodeStore should never 
> modify persistence
> ---
>
> Key: OAK-8252
> URL: https://issues.apache.org/jira/browse/OAK-8252
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob, mongomk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Attachments: OAK-8252.diff
>
>






[jira] [Updated] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-8351:
-
Labels: cand candidate_oak_1_10 candidate_oak_1_8  (was: candidate_oak_1_10 
candidate_oak_1_8)

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: cand, candidate_oak_1_10, candidate_oak_1_8
> Attachments: OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better, 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>   "_sdType" : { "$in" : [ 50, 60, 70 ] },
>   "$or" : [
>     { "_sdType" : 50 },
>     { "_sdType" : 60 },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-1\/0/ },
>         { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>       ],
>       "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>     },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-2\/0/ },

[jira] [Updated] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-8351:
-
Labels: candidate_oak_1_10 candidate_oak_1_8  (was: cand candidate_oak_1_10 
candidate_oak_1_8)

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Attachments: OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better, 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>   "_sdType" : { "$in" : [ 50, 60, 70 ] },
>   "$or" : [
>     { "_sdType" : 50 },
>     { "_sdType" : 60 },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-1\/0/ },
>         { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>       ],
>       "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>     },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-2\/0/ },

[jira] [Comment Edited] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872197#comment-16872197
 ] 

Stefan Egli edited comment on OAK-8351 at 6/25/19 9:37 AM:
---

thx [~mreutegg]
bq. There's a Thread.sleep(100) before the garbage is collected. Is this really 
necessary?
good point, that's indeed a left-over from earlier copy-paste work, removed it 
now.
* committed to trunk 
[here|http://svn.apache.org/viewvc?view=revision=1862044]
* pending: backports


was (Author: egli):
thx [~mreutegg]
* committed to trunk 
[here|http://svn.apache.org/viewvc?view=revision=1862044]
* pending: backports

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Attachments: OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better, 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>   "_sdType" : { "$in" : [ 50, 60, 70 ] },
>   "$or" : [
>     { "_sdType" : 50 },
>     { "_sdType" : 60 },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-1\/0/ },
>         { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>       ],
>       "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>     },
>     {
>       "_sdType" : 70,

[jira] [Commented] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872197#comment-16872197
 ] 

Stefan Egli commented on OAK-8351:
--

thx [~mreutegg]
* committed to trunk 
[here|http://svn.apache.org/viewvc?view=revision=1862044]
* pending: backports

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Attachments: OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better, 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>   "_sdType" : { "$in" : [ 50, 60, 70 ] },
>   "$or" : [
>     { "_sdType" : 50 },
>     { "_sdType" : 60 },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-1\/0/ },
>         { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>       ],
>       "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>     },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-2\/0/

[jira] [Updated] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Stefan Egli (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-8351:
-
Labels: candidate_oak_1_10 candidate_oak_1_8  (was: )

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
>  Labels: candidate_oak_1_10, candidate_oak_1_8
> Attachments: OAK-8351.patch
>
>
> On a mongodb setup a long running revision garbage collection operation has 
> been witnessed. The query was running for hours. Doing a 
> {{planCacheSetFilter}}, which hinted mongodb to use a specific index, 
> together with killing the running command resolved the situation.
> The problem was that mongodb generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better, 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> that mongodb would choose this one.
> The reason why this problematic query plan resulted in a high score seems 
> to be that it does indeed find 101 documents after entering the first "or" - 
> but during query execution it would also enter the other "or" parts where it 
> has chosen to do a {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>   "_sdType" : { "$in" : [ 50, 60, 70 ] },
>   "$or" : [
>     { "_sdType" : 50 },
>     { "_sdType" : 60 },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-1\/0/ },
>         { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>       ],
>       "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>     },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-2\/0/ },

[jira] [Commented] (OAK-7401) Changes kept in memory when update limit is hit in commit hook

2019-06-25 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872118#comment-16872118
 ] 

Julian Reschke commented on OAK-7401:
-

trunk: (1.9.0) [r1829826|http://svn.apache.org/r1829826] 
[r1829824|http://svn.apache.org/r1829824] 
[r1828868|http://svn.apache.org/r1828868]
1.8: (1.8.13) [r1855327|http://svn.apache.org/r1855327]
1.6: (1.6.17) [r1855331|http://svn.apache.org/r1855331]

> Changes kept in memory when update limit is hit in commit hook
> --
>
> Key: OAK-7401
> URL: https://issues.apache.org/jira/browse/OAK-7401
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Affects Versions: 1.0, 1.2, 1.4, 1.6.0, 1.8.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Major
>  Labels: candidate_oak_1_6, candidate_oak_1_8
> Fix For: 1.9.0, 1.10.0
>
>
> In some cases no persisted branch is created by the DocumentNodeStore when 
> the number of changes hits the update limit. This happens when the current 
> branch state is in-memory and the commit hook contributes the changes that 
> reach the update limit. The implementation keeps those changes in memory, 
> which may lead to a commit far bigger than specified by the update limit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (OAK-7886) Re-registering node type may corrupt registry

2019-06-25 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872114#comment-16872114
 ] 

Julian Reschke commented on OAK-7886:
-

trunk: (1.9.11) [r1846162|http://svn.apache.org/r1846162] 
[r1846057|http://svn.apache.org/r1846057]
1.8: [r1862036|http://svn.apache.org/r1862036] (1.8.10) 
[r1846170|http://svn.apache.org/r1846170]
1.6: (1.6.15) [r1846171|http://svn.apache.org/r1846171]
1.4: (1.4.24) [r1846175|http://svn.apache.org/r1846175]
1.2: (1.2.31) [r1846176|http://svn.apache.org/r1846176]
1.0: [r1846177|http://svn.apache.org/r1846177]

> Re-registering node type may corrupt registry
> -
>
> Key: OAK-7886
> URL: https://issues.apache.org/jira/browse/OAK-7886
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0, 1.2, 1.4.0, 1.6.0, 1.8.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Major
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> candidate_oak_1_6
> Fix For: 1.10.0, 1.9.11, 1.8.14
>
> Attachments: OAK-7886.patch
>
>
> Re-registering an existing node type may corrupt the registry. This happens 
> for node types that are not mixins and do not extend from other primary types 
> (except for the implicit {{nt:base}}). After re-registering such a node type, 
> the {{jcr:supertypes}} list no longer contains {{nt:base}}.





[jira] [Updated] (OAK-7886) Re-registering node type may corrupt registry

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7886:

Labels: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4 
candidate_oak_1_6  (was: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4 
candidate_oak_1_6 candidate_oak_1_8)

> Re-registering node type may corrupt registry
> -
>
> Key: OAK-7886
> URL: https://issues.apache.org/jira/browse/OAK-7886
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0, 1.2, 1.4.0, 1.6.0, 1.8.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Major
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> candidate_oak_1_6
> Fix For: 1.10.0, 1.9.11, 1.8.14
>
> Attachments: OAK-7886.patch
>
>
> Re-registering an existing node type may corrupt the registry. This happens 
> for node types that are not mixins and do not extend from other primary types 
> (except for the implicit {{nt:base}}). After re-registering such a node type, 
> the {{jcr:supertypes}} list no longer contains {{nt:base}}.





[jira] [Updated] (OAK-7886) Re-registering node type may corrupt registry

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7886:

Fix Version/s: 1.8.14

> Re-registering node type may corrupt registry
> -
>
> Key: OAK-7886
> URL: https://issues.apache.org/jira/browse/OAK-7886
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0, 1.2, 1.4.0, 1.6.0, 1.8.0
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>Priority: Major
>  Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, 
> candidate_oak_1_6, candidate_oak_1_8
> Fix For: 1.10.0, 1.9.11, 1.8.14
>
> Attachments: OAK-7886.patch
>
>
> Re-registering an existing node type may corrupt the registry. This happens 
> for node types that are not mixins and do not extend from other primary types 
> (except for the implicit {{nt:base}}). After re-registering such a node type, 
> the {{jcr:supertypes}} list no longer contains {{nt:base}}.





[jira] [Commented] (OAK-7837) oak-run check crashes with SNFE

2019-06-25 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872067#comment-16872067
 ] 

Julian Reschke commented on OAK-7837:
-

Removed candidate_oak_1_8 label as already backported.

> oak-run check crashes with SNFE
> ---
>
> Key: OAK-7837
> URL: https://issues.apache.org/jira/browse/OAK-7837
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: run, segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Major
> Fix For: 1.10.0, 1.8.9
>
> Attachments: OAK-7837.patch
>
>
> I experienced a crash of {{oak-run check}} with a {{SNFE}}:
> {noformat}
> org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Segment 
> 48973c89-9e61-4757-a93d-384da83ec170 not found
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readSegmentUncached(AbstractFileStore.java:281)
> at 
> org.apache.jackrabbit.oak.segment.file.ReadOnlyFileStore$1.call(ReadOnlyFileStore.java:124)
> at 
> org.apache.jackrabbit.oak.segment.file.ReadOnlyFileStore$1.call(ReadOnlyFileStore.java:121)
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.lambda$getSegment$0(SegmentCache.java:163)
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
> at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
> at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
> at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.getSegment(SegmentCache.java:160)
> at 
> org.apache.jackrabbit.oak.segment.file.ReadOnlyFileStore.readSegment(ReadOnlyFileStore.java:121)
> at org.apache.jackrabbit.oak.segment.SegmentId.getSegment(SegmentId.java:153)
> at org.apache.jackrabbit.oak.segment.Record.getSegment(Record.java:70)
> at org.apache.jackrabbit.oak.segment.MapRecord.getEntry(MapRecord.java:160)
> at org.apache.jackrabbit.oak.segment.MapRecord.getEntry(MapRecord.java:172)
> at 
> org.apache.jackrabbit.oak.segment.SegmentNodeState.hasChildNode(SegmentNodeState.java:441)
> at 
> org.apache.jackrabbit.oak.segment.file.tooling.ConsistencyChecker$NodeWrapper.deriveTraversableNodeOnPath(ConsistencyChecker.java:498)
> at 
> org.apache.jackrabbit.oak.segment.file.tooling.ConsistencyChecker.checkPathAtRoot(ConsistencyChecker.java:383)
> at 
> org.apache.jackrabbit.oak.segment.file.tooling.ConsistencyChecker.checkPathsAtRoot(ConsistencyChecker.java:353)
> at 
> org.apache.jackrabbit.oak.segment.file.tooling.ConsistencyChecker.checkConsistency(ConsistencyChecker.java:200)
> at org.apache.jackrabbit.oak.segment.tool.Check.run(Check.java:243)
> at org.apache.jackrabbit.oak.run.CheckCommand.execute(CheckCommand.java:88)
> at org.apache.jackrabbit.oak.run.Main.main(Main.java:49)
> {noformat}
> AFAICS the problem is that the path check is not resilient against {{SNFE}}.
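The resilience the comment asks for can be sketched as follows: catch the missing-segment exception per path, report it, and continue checking the remaining paths instead of letting the tool crash. The exception type and check logic below are stand-ins for Oak's {{SegmentNotFoundException}} and {{ConsistencyChecker}}.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of an SNFE-resilient path check; names are stand-ins
// for Oak's SegmentNotFoundException and ConsistencyChecker.
public class ResilientCheckSketch {
    static class SegmentNotFound extends RuntimeException {
        SegmentNotFound(String msg) { super(msg); }
    }

    // Pretend check: paths containing "broken" hit a missing segment.
    static void checkPath(String path) {
        if (path.contains("broken")) {
            throw new SegmentNotFound("segment missing under " + path);
        }
    }

    // Resilient variant: an SNFE on one path must not abort the whole run.
    static int checkPaths(List<String> paths) {
        int broken = 0;
        for (String path : paths) {
            try {
                checkPath(path);
            } catch (SegmentNotFound e) {
                broken++;                           // record the failure...
                System.err.println(e.getMessage()); // ...and keep going
            }
        }
        return broken;
    }

    public static void main(String[] args) {
        int broken = checkPaths(Arrays.asList("/a", "/broken/b", "/c"));
        System.out.println(broken);
    }
}
```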





[jira] [Updated] (OAK-7837) oak-run check crashes with SNFE

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7837:

Labels:   (was: candidate_oak_1_8)

> oak-run check crashes with SNFE
> ---
>
> Key: OAK-7837
> URL: https://issues.apache.org/jira/browse/OAK-7837
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: run, segment-tar
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Major
> Fix For: 1.10.0, 1.8.9
>
> Attachments: OAK-7837.patch
>
>
> I experienced a crash of {{oak-run check}} with a {{SNFE}}:
> {noformat}
> org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Segment 
> 48973c89-9e61-4757-a93d-384da83ec170 not found
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readSegmentUncached(AbstractFileStore.java:281)
> at 
> org.apache.jackrabbit.oak.segment.file.ReadOnlyFileStore$1.call(ReadOnlyFileStore.java:124)
> at 
> org.apache.jackrabbit.oak.segment.file.ReadOnlyFileStore$1.call(ReadOnlyFileStore.java:121)
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.lambda$getSegment$0(SegmentCache.java:163)
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
> at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
> at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
> at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache$NonEmptyCache.getSegment(SegmentCache.java:160)
> at 
> org.apache.jackrabbit.oak.segment.file.ReadOnlyFileStore.readSegment(ReadOnlyFileStore.java:121)
> at org.apache.jackrabbit.oak.segment.SegmentId.getSegment(SegmentId.java:153)
> at org.apache.jackrabbit.oak.segment.Record.getSegment(Record.java:70)
> at org.apache.jackrabbit.oak.segment.MapRecord.getEntry(MapRecord.java:160)
> at org.apache.jackrabbit.oak.segment.MapRecord.getEntry(MapRecord.java:172)
> at 
> org.apache.jackrabbit.oak.segment.SegmentNodeState.hasChildNode(SegmentNodeState.java:441)
> at 
> org.apache.jackrabbit.oak.segment.file.tooling.ConsistencyChecker$NodeWrapper.deriveTraversableNodeOnPath(ConsistencyChecker.java:498)
> at 
> org.apache.jackrabbit.oak.segment.file.tooling.ConsistencyChecker.checkPathAtRoot(ConsistencyChecker.java:383)
> at 
> org.apache.jackrabbit.oak.segment.file.tooling.ConsistencyChecker.checkPathsAtRoot(ConsistencyChecker.java:353)
> at 
> org.apache.jackrabbit.oak.segment.file.tooling.ConsistencyChecker.checkConsistency(ConsistencyChecker.java:200)
> at org.apache.jackrabbit.oak.segment.tool.Check.run(Check.java:243)
> at org.apache.jackrabbit.oak.run.CheckCommand.execute(CheckCommand.java:88)
> at org.apache.jackrabbit.oak.run.Main.main(Main.java:49)
> {noformat}
> AFAICS the problem is that the path check is not resilient against {{SNFE}}.





[jira] [Updated] (OAK-7132) SNFE after full compaction

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7132:

Labels: compaction  (was: candidate_oak_1_8 compaction)

> SNFE after full compaction
> --
>
> Key: OAK-7132
> URL: https://issues.apache.org/jira/browse/OAK-7132
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: 1.8.0
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Blocker
>  Labels: compaction
> Fix For: 1.9.0, 1.10.0, 1.8.1
>
> Attachments: size.png
>
>
> In some cases we observed a {{SNFE}} right after the cleanup following a 
> full compaction:
> {noformat}
> 31.12.2017 04:25:19.816 *ERROR* [pool-17-thread-22] 
> org.apache.jackrabbit.oak.segment.SegmentNotFoundExceptionListener Segment 
> not found: a82a99a3-f1e9-49b7-a1e0-55e7fec80c41. SegmentId 
> age=609487478ms,segment-generation=GCGeneration{generation=4,fullGeneration=2,isCompacted=true}
> org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Segment 
> a82a99a3-f1e9-49b7-a1e0-55e7fec80c41 not found
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readSegmentUncached(AbstractFileStore.java:276)
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.lambda$readSegment$5(FileStore.java:478)
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache.lambda$getSegment$0(SegmentCache.java:116)
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
> at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache.getSegment(SegmentCache.java:113)
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.readSegment(FileStore.java:478)
> at 
> org.apache.jackrabbit.oak.segment.SegmentId.getSegment(SegmentId.java:154)
> at 
> org.apache.jackrabbit.oak.segment.CachingSegmentReader$1.apply(CachingSegmentReader.java:94)
> at 
> org.apache.jackrabbit.oak.segment.CachingSegmentReader$1.apply(CachingSegmentReader.java:90)
> at 
> org.apache.jackrabbit.oak.segment.ReaderCache.get(ReaderCache.java:118)
> at 
> org.apache.jackrabbit.oak.segment.CachingSegmentReader.readString(CachingSegmentReader.java:90)
> at 
> org.apache.jackrabbit.oak.segment.MapRecord.getEntry(MapRecord.java:220)
> at 
> org.apache.jackrabbit.oak.segment.MapRecord.getEntry(MapRecord.java:173)
> at 
> org.apache.jackrabbit.oak.segment.SegmentNodeState.getChildNode(SegmentNodeState.java:423)
> at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.<init>(MemoryNodeBuilder.java:143)
> at 
> org.apache.jackrabbit.oak.segment.SegmentNodeBuilder.<init>(SegmentNodeBuilder.java:93)
> at 
> org.apache.jackrabbit.oak.segment.SegmentNodeBuilder.createChildBuilder(SegmentNodeBuilder.java:148)
> at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.getChildNode(MemoryNodeBuilder.java:331)
> at 
> org.apache.jackrabbit.oak.core.SecureNodeBuilder.<init>(SecureNodeBuilder.java:112)
> at 
> org.apache.jackrabbit.oak.core.SecureNodeBuilder.getChildNode(SecureNodeBuilder.java:329)
> at 
> org.apache.jackrabbit.oak.core.MutableTree.getTree(MutableTree.java:290)
> at 
> org.apache.jackrabbit.oak.core.MutableRoot.getTree(MutableRoot.java:220)
> at 
> org.apache.jackrabbit.oak.core.MutableRoot.getTree(MutableRoot.java:69)
> at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.getItem(SessionDelegate.java:442)
> at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.getItemInternal(SessionImpl.java:167)
> at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.access$400(SessionImpl.java:82)
> at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$3.performNullable(SessionImpl.java:229)
> at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$3.performNullable(SessionImpl.java:226)
> at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performNullable(SessionDelegate.java:243)
> at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.getItemOrNull(SessionImpl.java:226)
> {noformat}





[jira] [Commented] (OAK-7174) The check command returns a zero exit code on error

2019-06-25 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872065#comment-16872065
 ] 

Julian Reschke commented on OAK-7174:
-

Removed candidate_oak_1_8 label as already backported.

> The check command returns a zero exit code on error
> ---
>
> Key: OAK-7174
> URL: https://issues.apache.org/jira/browse/OAK-7174
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: run, segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0, 1.8.2
>
>
> The check command should return a non-zero exit code if it fails with an 
> exception. Moreover, every error message should be printed on the standard 
> error.
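The contract the fix asks for (errors on stderr, non-zero exit code on failure) can be sketched in a few lines. The `execute` helper below is hypothetical; it is not the actual signature of Oak's `CheckCommand`.

```java
// Hypothetical sketch of a CLI error-handling contract: any exception
// produces stderr output and a non-zero exit code. Names are illustrative.
public class ExitCodeSketch {
    // Run a command, mapping any exception to exit code 1 and a stderr message.
    static int execute(Runnable command) {
        try {
            command.run();
            return 0;
        } catch (Exception e) {
            System.err.println(e.getMessage()); // error messages belong on stderr
            return 1;
        }
    }

    public static void main(String[] args) {
        int ok = execute(() -> {});
        int failed = execute(() -> { throw new IllegalStateException("check failed"); });
        System.out.println(ok + " " + failed);
        // A real CLI would end with: System.exit(failed);
    }
}
```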





[jira] [Commented] (OAK-7132) SNFE after full compaction

2019-06-25 Thread Julian Reschke (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872066#comment-16872066
 ] 

Julian Reschke commented on OAK-7132:
-

Removed candidate_oak_1_8 label as already backported.

> SNFE after full compaction
> --
>
> Key: OAK-7132
> URL: https://issues.apache.org/jira/browse/OAK-7132
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segment-tar
>Affects Versions: 1.8.0
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>Priority: Blocker
>  Labels: compaction
> Fix For: 1.9.0, 1.10.0, 1.8.1
>
> Attachments: size.png
>
>
> In some cases we observed a {{SNFE}} right after the cleanup following a 
> full compaction:
> {noformat}
> 31.12.2017 04:25:19.816 *ERROR* [pool-17-thread-22] 
> org.apache.jackrabbit.oak.segment.SegmentNotFoundExceptionListener Segment 
> not found: a82a99a3-f1e9-49b7-a1e0-55e7fec80c41. SegmentId 
> age=609487478ms,segment-generation=GCGeneration{generation=4,fullGeneration=2,isCompacted=true}
> org.apache.jackrabbit.oak.segment.SegmentNotFoundException: Segment 
> a82a99a3-f1e9-49b7-a1e0-55e7fec80c41 not found
> at 
> org.apache.jackrabbit.oak.segment.file.AbstractFileStore.readSegmentUncached(AbstractFileStore.java:276)
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.lambda$readSegment$5(FileStore.java:478)
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache.lambda$getSegment$0(SegmentCache.java:116)
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4724)
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
> at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4721)
> at 
> org.apache.jackrabbit.oak.segment.SegmentCache.getSegment(SegmentCache.java:113)
> at 
> org.apache.jackrabbit.oak.segment.file.FileStore.readSegment(FileStore.java:478)
> at 
> org.apache.jackrabbit.oak.segment.SegmentId.getSegment(SegmentId.java:154)
> at 
> org.apache.jackrabbit.oak.segment.CachingSegmentReader$1.apply(CachingSegmentReader.java:94)
> at 
> org.apache.jackrabbit.oak.segment.CachingSegmentReader$1.apply(CachingSegmentReader.java:90)
> at 
> org.apache.jackrabbit.oak.segment.ReaderCache.get(ReaderCache.java:118)
> at 
> org.apache.jackrabbit.oak.segment.CachingSegmentReader.readString(CachingSegmentReader.java:90)
> at 
> org.apache.jackrabbit.oak.segment.MapRecord.getEntry(MapRecord.java:220)
> at 
> org.apache.jackrabbit.oak.segment.MapRecord.getEntry(MapRecord.java:173)
> at 
> org.apache.jackrabbit.oak.segment.SegmentNodeState.getChildNode(SegmentNodeState.java:423)
> at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.<init>(MemoryNodeBuilder.java:143)
> at 
> org.apache.jackrabbit.oak.segment.SegmentNodeBuilder.<init>(SegmentNodeBuilder.java:93)
> at 
> org.apache.jackrabbit.oak.segment.SegmentNodeBuilder.createChildBuilder(SegmentNodeBuilder.java:148)
> at 
> org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.getChildNode(MemoryNodeBuilder.java:331)
> at 
> org.apache.jackrabbit.oak.core.SecureNodeBuilder.<init>(SecureNodeBuilder.java:112)
> at 
> org.apache.jackrabbit.oak.core.SecureNodeBuilder.getChildNode(SecureNodeBuilder.java:329)
> at 
> org.apache.jackrabbit.oak.core.MutableTree.getTree(MutableTree.java:290)
> at 
> org.apache.jackrabbit.oak.core.MutableRoot.getTree(MutableRoot.java:220)
> at 
> org.apache.jackrabbit.oak.core.MutableRoot.getTree(MutableRoot.java:69)
> at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.getItem(SessionDelegate.java:442)
> at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.getItemInternal(SessionImpl.java:167)
> at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.access$400(SessionImpl.java:82)
> at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$3.performNullable(SessionImpl.java:229)
> at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$3.performNullable(SessionImpl.java:226)
> at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performNullable(SessionDelegate.java:243)
> at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.getItemOrNull(SessionImpl.java:226)
> {noformat}





[jira] [Updated] (OAK-7174) The check command returns a zero exit code on error

2019-06-25 Thread Julian Reschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/OAK-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-7174:

Labels:   (was: candidate_oak_1_8)

> The check command returns a zero exit code on error
> ---
>
> Key: OAK-7174
> URL: https://issues.apache.org/jira/browse/OAK-7174
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: run, segment-tar
>Reporter: Francesco Mari
>Assignee: Francesco Mari
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0, 1.8.2
>
>
> The check command should return a non-zero exit code if it fails with an 
> exception. Moreover, every error message should be printed on the standard 
> error.





[jira] [Commented] (OAK-8252) MongoBlobStore instantiated from ReadOnly DocumentNodeStore should never modify persistence

2019-06-25 Thread Marcel Reutegger (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872054#comment-16872054
 ] 

Marcel Reutegger commented on OAK-8252:
---

OK, agreed. Let's go with your changes then.

> MongoBlobStore instantiated from ReadOnly DocumentNodeStore should never 
> modify persistence
> ---
>
> Key: OAK-8252
> URL: https://issues.apache.org/jira/browse/OAK-8252
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: blob, mongomk
>Reporter: Julian Reschke
>Priority: Minor
> Attachments: OAK-8252.diff
>
>






[jira] [Commented] (OAK-8351) Long running RGC remove and getmore operations

2019-06-25 Thread Marcel Reutegger (JIRA)


[ 
https://issues.apache.org/jira/browse/OAK-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872052#comment-16872052
 ] 

Marcel Reutegger commented on OAK-8351:
---

Looks good to me. Just one minor comment about the {{gc()}} method in the new 
test. There's a {{Thread.sleep(100)}} before the garbage is collected. Is this 
really necessary? The test is using a virtual clock and the tests pass on my 
machine when I comment out the sleep.
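The point about the virtual clock can be illustrated with a minimal sketch: when time is advanced virtually, a real `Thread.sleep` adds wall-clock delay without affecting the test's notion of time. The class below is a simplified stand-in, not Oak's actual `org.apache.jackrabbit.oak.stats.Clock.Virtual`.

```java
// Simplified stand-in for a virtual test clock; no real waiting happens,
// time is advanced instantly when asked.
public class VirtualClockSketch {
    static class VirtualClock {
        private long millis;

        long getTime() { return millis; }

        // Advance virtual time instantly instead of sleeping; time never
        // moves backwards.
        void waitUntil(long target) {
            if (target > millis) {
                millis = target;
            }
        }
    }

    public static void main(String[] args) {
        VirtualClock clock = new VirtualClock();
        clock.waitUntil(24 * 60 * 60 * 1000L); // "wait" a full day, instantly
        System.out.println(clock.getTime());
    }
}
```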

> Long running RGC remove and getmore operations
> --
>
> Key: OAK-8351
> URL: https://issues.apache.org/jira/browse/OAK-8351
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk
>Affects Versions: 1.12.0
>Reporter: Stefan Egli
>Assignee: Stefan Egli
>Priority: Major
> Attachments: OAK-8351.patch
>
>
> On a MongoDB setup, a long-running revision garbage collection operation has 
> been witnessed; the query was running for hours. Applying a 
> {{planCacheSetFilter}}, which hinted MongoDB to use a specific index, 
> together with killing the running command, resolved the situation.
> The problem was that MongoDB generated a query plan which scored high 
> (2.0003) but included an index scan through the {{\_id_}} index (and the 
> collection contained millions of documents). It also generated other, better 
> plans, but they all "only" had the same high score, so it seemed legitimate 
> for MongoDB to choose this one.
> The reason why this problematic query plan still scored high seems to be 
> that it does indeed find 101 documents after entering the first "or" branch; 
> during actual query execution, however, it also enters the other "or" 
> branches, where it has chosen to do an {{\_id_}} index scan.
> The query involved was:
> {noformat}
> {
>   "_sdType" : { "$in" : [ 50, 60, 70 ] },
>   "$or" : [
>     { "_sdType" : 50 },
>     { "_sdType" : 60 },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         { "_id" : /.*-1\/0/ },
>         { "_id" : /[^-]*/, "_path" : /.*-1\/0/ }
>       ],
>       "_sdMaxRevTime" : { "$lt" : NumberLong(1551843365) }
>     },
>     {
>       "_sdType" : 70,
>       "$or" : [
>         {