[jira] [Comment Edited] (OAK-2910) oak-jcr bundle should be usable as a standalone bundle

2015-07-08 Thread Robert Munteanu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618501#comment-14618501
 ] 

Robert Munteanu edited comment on OAK-2910 at 7/8/15 12:14 PM:
---

[~olli] - I don't have a better idea, but if you do, please let us know.


was (Author: rombert):
[~olli] - I don't have a better idea, but if we do please let us know.

 oak-jcr bundle should be usable as a standalone bundle
 --

 Key: OAK-2910
 URL: https://issues.apache.org/jira/browse/OAK-2910
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: jcr
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
  Labels: modularization, osgi, technical_debt
 Fix For: 1.3.3


 Currently the oak-jcr bundle needs to be embedded within some other bundle if 
 Oak is to be properly configured in an OSGi environment. We need to revisit 
 this aspect and see what needs to be done to enable Oak to be properly 
 configured without requiring the oak-jcr bundle to be embedded in the repo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3054) IndexStatsMBean should provide some details if the async indexing is failing

2015-07-08 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618516#comment-14618516
 ] 

Davide Giannella commented on OAK-3054:
---

[~alex.parvulescu]

bq. I recently found out about the CheckpointManger mbean (OAK-2267), does that 
help?

Awesome! Perfect for my purposes :)

 IndexStatsMBean should provide some details if the async indexing is failing
 

 Key: OAK-3054
 URL: https://issues.apache.org/jira/browse/OAK-3054
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Chetan Mehrotra
Priority: Minor
 Fix For: 1.2.3, 1.3.3, 1.0.17


 If the background indexing fails for some reason, it logs the exception the 
 first time, with a message like _The index update failed ..._. After that, if 
 indexing continues to fail, no further logging is done so as to avoid 
 creating noise.
 This poses a problem on long running systems where the original exception 
 might not be noticed and the index does not show updated results. For such 
 cases we should expose the indexing health as part of {{IndexStatsMBean}}. 
 Also we can provide the last recorded exception. 
 The administrator can then check the MBean and enable debug logs for further 
 troubleshooting.
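
The proposal above (a health flag plus the last recorded exception on the MBean) can be sketched as follows. All names here are invented for illustration and are not the actual Oak API:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical MBean shape: a failing flag and the last recorded
// exception, so an administrator can inspect indexing health over JMX
// even after the periodic log messages have gone quiet.
interface AsyncIndexHealthMBean {
    boolean isFailing();
    String getLatestError();
}

final class AsyncIndexHealth implements AsyncIndexHealthMBean {
    private volatile boolean failing;
    private final AtomicReference<String> latestError = new AtomicReference<>("");

    // called by the async index update on each failed run
    void trackFailure(Exception e) {
        failing = true;
        latestError.set(String.valueOf(e));
    }

    // called on each successful run; clears the failing flag
    void trackSuccess() {
        failing = false;
    }

    @Override public boolean isFailing() { return failing; }
    @Override public String getLatestError() { return latestError.get(); }
}
```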



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3081) SplitOperations may undo committed changes

2015-07-08 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618469#comment-14618469
 ] 

Marcel Reutegger commented on OAK-3081:
---

Remarks about the problem and the fix: the problem was that the head revision 
could change between the calls to doc.isCommitted(rev) and 
SplitOperations.isGarbage(rev). The change was committed in the meantime and 
the head revision updated. This breaks an assumption in SplitOperations that 
the head revision reflects the same state about the commit as the previous 
check with isCommitted(). The fix introduces a happens-before relation between 
the head revision and the document to split and makes sure the head revision is 
not newer than any uncommitted change observed in the document.
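
The hazard boils down to a check-then-act on the head revision. A minimal sketch (revision numbers and names invented; these are not the Oak types) that makes the two head observations explicit:

```java
// Models a revision as committed when it is not newer than the head.
// The unsafe path observes the head twice; if a concurrent commit
// advances the head between the two observations, a change that is in
// fact committed is first seen as uncommitted and then judged garbage.
final class SplitRaceSketch {

    static boolean isCommitted(long rev, long head) {
        return rev <= head;
    }

    // headAtCheck1: head observed by doc.isCommitted(rev)
    // headAtCheck2: head observed by SplitOperations.isGarbage(rev)
    static boolean isGarbage(long rev, long headAtCheck1, long headAtCheck2) {
        boolean committed = isCommitted(rev, headAtCheck1);
        boolean coveredByHead = rev <= headAtCheck2;
        // garbage only if seen as uncommitted yet already covered by head
        return !committed && coveredByHead;
    }
}
```

With a single consistent snapshot (headAtCheck1 == headAtCheck2) the two checks can never disagree, which is the effect of the happens-before relation the fix introduces; when the head advances in between (e.g. from 5 to 6 for revision 6), the committed change is wrongly classified as garbage.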

 SplitOperations may undo committed changes
 --

 Key: OAK-3081
 URL: https://issues.apache.org/jira/browse/OAK-3081
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.10, 1.1.6, 1.2
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
 Fix For: 1.2.3, 1.3.3, 1.0.17


 There is a race condition when SplitOperations identifies garbage on a 
 document. The feature was introduced with OAK-2421.
 The issue occurs if the following sequence of operations happens on a 
 document:
 - The document is updated with e.g. _deleted.rX-0-1 = false within 
 Commit.applyToDocumentStore()
 - Commit.createOrUpdateNode() performs the update and calls 
 checkSplitCandidate(). The document becomes a split candidate, because it is 
 over the threshold of 8kB
 - At the same time a background update is running and starts to look at the 
 split candidates.
 - The background update looks at the document and calls 
 SplitOperations.collectLocalChanges(). At this point _deleted.rX-0-1 = false 
 is not committed and doc.isCommitted(rev) returns false
 - The commit proceeds, updates the commit root and sets the new head revision
 - The background update now calls SplitOperations.isGarbage(rX-0-1) and the 
 method will return true!
 - The background update then removes _deleted.rX-0-1 = false together with 
 the _commitRoot entry



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3081) SplitOperations may undo committed changes

2015-07-08 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-3081.
---
Resolution: Fixed

Merged into 1.2 branch: http://svn.apache.org/r1689853 and 1.0 branch: 
http://svn.apache.org/r1689860

 SplitOperations may undo committed changes
 --

 Key: OAK-3081
 URL: https://issues.apache.org/jira/browse/OAK-3081
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.10, 1.1.6, 1.2
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
 Fix For: 1.2.3, 1.3.3, 1.0.17


 There is a race condition when SplitOperations identifies garbage on a 
 document. The feature was introduced with OAK-2421.
 The issue occurs if the following sequence of operations happens on a 
 document:
 - The document is updated with e.g. _deleted.rX-0-1 = false within 
 Commit.applyToDocumentStore()
 - Commit.createOrUpdateNode() performs the update and calls 
 checkSplitCandidate(). The document becomes a split candidate, because it is 
 over the threshold of 8kB
 - At the same time a background update is running and starts to look at the 
 split candidates.
 - The background update looks at the document and calls 
 SplitOperations.collectLocalChanges(). At this point _deleted.rX-0-1 = false 
 is not committed and doc.isCommitted(rev) returns false
 - The commit proceeds, updates the commit root and sets the new head revision
 - The background update now calls SplitOperations.isGarbage(rX-0-1) and the 
 method will return true!
 - The background update then removes _deleted.rX-0-1 = false together with 
 the _commitRoot entry



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3057) Simplify debugging conflict related errors

2015-07-08 Thread Michael Dürig (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618471#comment-14618471
 ] 

Michael Dürig commented on OAK-3057:


Yes, this is along the lines I was thinking of. We should also do similar 
logging for property conflicts. Furthermore I'd also log the resolution along 
with the name of the conflict handler that resolved the conflict. 
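
For example, the extra log line could carry the path, the resolution, and the handler name. This is only a sketch of the message shape, not existing Oak code; all names are illustrative:

```java
// Builds a debug message of the kind suggested above. The real conflict
// handler and resolution types in Oak differ; plain strings are used
// here to keep the sketch self-contained.
final class ConflictLogSketch {
    static String resolved(String path, String propertyName,
                           String resolution, String handlerName) {
        return String.format(
                "Resolved property conflict at %s/%s with resolution %s (handler: %s)",
                path, propertyName, resolution, handlerName);
    }
}
```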

 Simplify debugging conflict related errors
 --

 Key: OAK-3057
 URL: https://issues.apache.org/jira/browse/OAK-3057
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
 Fix For: 1.3.3

 Attachments: OAK-3057-v0.patch


 We often see conflict exceptions in the logs like the one below. Looking at 
 the stacktrace it is hard to reason out what the conflict was, and hence 
 harder to debug such issues. Oak should provide more details as part of the 
 exception message itself to simplify debugging. 
 Note that such issues are intermittent and might happen in background 
 processing logic. So all details should be made part of the exception message 
 itself, as it would not be possible to turn on any trace logging to get more 
 details given the intermittent nature of such issues.
 {noformat}
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakState0001: 
 Unresolved conflicts in /content/dam/news/images/2015/06/27/blue-sky.jpg
   at 
 org.apache.jackrabbit.oak.plugins.commit.ConflictValidator.failOnMergeConflict(ConflictValidator.java:84)
   at 
 org.apache.jackrabbit.oak.plugins.commit.ConflictValidator.propertyAdded(ConflictValidator.java:54)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeEditor.propertyAdded(CompositeEditor.java:83)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.propertyAdded(EditorDiff.java:82)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:375)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:52)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorHook.processCommit(EditorHook.java:54)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeHook.processCommit(CompositeHook.java:60)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeHook.processCommit(CompositeHook.java:60)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$InMemory.merge(AbstractNodeStoreBranch.java:557)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge0(AbstractNodeStoreBranch.java:329)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:159)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1482)
   at 
 org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:247)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (OAK-2910) oak-jcr bundle should be usable as a standalone bundle

2015-07-08 Thread Robert Munteanu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618501#comment-14618501
 ] 

Robert Munteanu commented on OAK-2910:
--

[~olli] - I don't have a better idea, but if we do please let us know.

 oak-jcr bundle should be usable as a standalone bundle
 --

 Key: OAK-2910
 URL: https://issues.apache.org/jira/browse/OAK-2910
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: jcr
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
  Labels: modularization, osgi, technical_debt
 Fix For: 1.3.3


 Currently the oak-jcr bundle needs to be embedded within some other bundle if 
 Oak is to be properly configured in an OSGi environment. We need to revisit 
 this aspect and see what needs to be done to enable Oak to be properly 
 configured without requiring the oak-jcr bundle to be embedded in the repo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3054) IndexStatsMBean should provide some details if the async indexing is failing

2015-07-08 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618336#comment-14618336
 ] 

Alex Parvulescu commented on OAK-3054:
--

bq. the list of all available checkpoints
I recently found out about the _CheckpointManger_ mbean (OAK-2267), does that 
help?

 IndexStatsMBean should provide some details if the async indexing is failing
 

 Key: OAK-3054
 URL: https://issues.apache.org/jira/browse/OAK-3054
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Chetan Mehrotra
Priority: Minor
 Fix For: 1.2.3, 1.3.3, 1.0.17


 If the background indexing fails for some reason, it logs the exception the 
 first time, with a message like _The index update failed ..._. After that, if 
 indexing continues to fail, no further logging is done so as to avoid 
 creating noise.
 This poses a problem on long running systems where the original exception 
 might not be noticed and the index does not show updated results. For such 
 cases we should expose the indexing health as part of {{IndexStatsMBean}}. 
 Also we can provide the last recorded exception. 
 The administrator can then check the MBean and enable debug logs for further 
 troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3080) JournalTest.cleanupTest failure on CI

2015-07-08 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618208#comment-14618208
 ] 

Stefan Egli commented on OAK-3080:
--

[~chetanm], I don't have much karma wrt setting up Jenkins; would it be 
possible to dump the log output of JournalTest on the console (perhaps even 
setting org.apache.jackrabbit.oak.plugins.document to DEBUG)?

 JournalTest.cleanupTest failure on CI
 -

 Key: OAK-3080
 URL: https://issues.apache.org/jira/browse/OAK-3080
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Stefan Egli
Priority: Minor
  Labels: CI, Jenkins
 Fix For: 1.3.3


 Following test failure was seen 
 [here|https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/248/jdk=jdk1.8.0_11,label=Ubuntu,nsfixtures=SEGMENT_MK,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.plugins.document/JournalTest/cleanupTest/]
 {noformat}
 java.lang.AssertionError: expected:<0> but was:<1>
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:472)
   at org.junit.Assert.assertEquals(Assert.java:456)
   at 
 org.apache.jackrabbit.oak.plugins.document.JournalTest.cleanupTest(JournalTest.java:183)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-3082) Redundant entries in effective policies per principal-set

2015-07-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela reassigned OAK-3082:
---

Assignee: angela

 Redundant entries in effective policies per principal-set
 -

 Key: OAK-3082
 URL: https://issues.apache.org/jira/browse/OAK-3082
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: angela
Assignee: angela
Priority: Minor
 Fix For: 1.3.3


 When retrieving the effective policies for a given set of principals, the 
 resulting array of policies contains redundant entries if a given policy 
 contains multiple ACEs for the given set of principals. 
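
A fix along these lines would deduplicate while preserving order, for example by collecting into an insertion-ordered set before building the array. This is a sketch with invented names, not the actual access control code:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

final class EffectivePoliciesSketch {
    // One policy can match several ACEs for the principal set, producing
    // one candidate entry per matching ACE; collapse the candidates so
    // each policy appears at most once, in first-seen order.
    static <T> List<T> distinctPolicies(List<T> policyPerMatchingAce) {
        return new ArrayList<>(new LinkedHashSet<>(policyPerMatchingAce));
    }
}
```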



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3054) IndexStatsMBean should provide some details if the async indexing is failing

2015-07-08 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618336#comment-14618336
 ] 

Alex Parvulescu edited comment on OAK-3054 at 7/8/15 10:11 AM:
---

bq. the list of all available checkpoints
I recently found out about the _CheckpointManger_ mbean (OAK-2267), does that 
help?

ps. 'CheckpointManger': the name typo probably needs to be fixed :)


was (Author: alex.parvulescu):
bq. the list of all available checkpoints
I recently found out about the _CheckpointManger_ mbean (OAK-2267), does that 
help?

 IndexStatsMBean should provide some details if the async indexing is failing
 

 Key: OAK-3054
 URL: https://issues.apache.org/jira/browse/OAK-3054
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Chetan Mehrotra
Priority: Minor
 Fix For: 1.2.3, 1.3.3, 1.0.17


 If the background indexing fails for some reason, it logs the exception the 
 first time, with a message like _The index update failed ..._. After that, if 
 indexing continues to fail, no further logging is done so as to avoid 
 creating noise.
 This poses a problem on long running systems where the original exception 
 might not be noticed and the index does not show updated results. For such 
 cases we should expose the indexing health as part of {{IndexStatsMBean}}. 
 Also we can provide the last recorded exception. 
 The administrator can then check the MBean and enable debug logs for further 
 troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3080) JournalTest.cleanupTest failure on CI

2015-07-08 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-3080:
-
Attachment: OAK-3080.patch

[~chetanm], that's perfect!

I've extended the LogDumper idea and added a {{LogLevelModifier}} (attached in 
[^OAK-3080.patch]) which would additionally allow changing the log level to 
trace - that might be more informative than info. wdyt?



 JournalTest.cleanupTest failure on CI
 -

 Key: OAK-3080
 URL: https://issues.apache.org/jira/browse/OAK-3080
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Stefan Egli
Priority: Minor
  Labels: CI, Jenkins
 Fix For: 1.3.3

 Attachments: OAK-3080.patch


 Following test failure was seen 
 [here|https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/248/jdk=jdk1.8.0_11,label=Ubuntu,nsfixtures=SEGMENT_MK,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.plugins.document/JournalTest/cleanupTest/]
 {noformat}
 java.lang.AssertionError: expected:<0> but was:<1>
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:472)
   at org.junit.Assert.assertEquals(Assert.java:456)
   at 
 org.apache.jackrabbit.oak.plugins.document.JournalTest.cleanupTest(JournalTest.java:183)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2910) oak-jcr bundle should be usable as a standalone bundle

2015-07-08 Thread Oliver Lietz (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618355#comment-14618355
 ] 

Oliver Lietz commented on OAK-2910:
---

Please, do not use start levels for managing service dependencies or start 
order.

 oak-jcr bundle should be usable as a standalone bundle
 --

 Key: OAK-2910
 URL: https://issues.apache.org/jira/browse/OAK-2910
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: jcr
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
  Labels: modularization, osgi, technical_debt
 Fix For: 1.3.3


 Currently the oak-jcr bundle needs to be embedded within some other bundle if 
 Oak is to be properly configured in an OSGi environment. We need to revisit 
 this aspect and see what needs to be done to enable Oak to be properly 
 configured without requiring the oak-jcr bundle to be embedded in the repo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3081) SplitOperations may undo committed changes

2015-07-08 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618353#comment-14618353
 ] 

Marcel Reutegger commented on OAK-3081:
---

Committed fix and enabled test in trunk: http://svn.apache.org/r1689833

 SplitOperations may undo committed changes
 --

 Key: OAK-3081
 URL: https://issues.apache.org/jira/browse/OAK-3081
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.10, 1.1.6, 1.2
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
 Fix For: 1.2.3, 1.3.3, 1.0.17


 There is a race condition when SplitOperations identifies garbage on a 
 document. The feature was introduced with OAK-2421.
 The issue occurs if the following sequence of operations happens on a 
 document:
 - The document is updated with e.g. _deleted.rX-0-1 = false within 
 Commit.applyToDocumentStore()
 - Commit.createOrUpdateNode() performs the update and calls 
 checkSplitCandidate(). The document becomes a split candidate, because it is 
 over the threshold of 8kB
 - At the same time a background update is running and starts to look at the 
 split candidates.
 - The background update looks at the document and calls 
 SplitOperations.collectLocalChanges(). At this point _deleted.rX-0-1 = false 
 is not committed and doc.isCommitted(rev) returns false
 - The commit proceeds, updates the commit root and sets the new head revision
 - The background update now calls SplitOperations.isGarbage(rX-0-1) and the 
 method will return true!
 - The background update then removes _deleted.rX-0-1 = false together with 
 the _commitRoot entry



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3080) JournalTest.cleanupTest failure on CI

2015-07-08 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618211#comment-14618211
 ] 

Chetan Mehrotra commented on OAK-3080:
--

For that you can try out the oak-commons {{LogDumper}}. Enabling debug logs 
for all runs might be troublesome. We can enhance the {{LogDumper}} to change 
the log level for specific tests only, based on some config file.
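
The per-test log level change could look like the following try-with-resources helper. To keep the sketch dependency-free it uses java.util.logging rather than the logback backend that LogDumper builds on; all names are invented:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Raises the level of one logger for the duration of a test and
// restores the previous level afterwards.
final class LogLevelScope implements AutoCloseable {
    private final Logger logger;
    private final Level previous;

    LogLevelScope(String loggerName, Level level) {
        logger = Logger.getLogger(loggerName);
        previous = logger.getLevel();   // may be null (inherited level)
        logger.setLevel(level);
    }

    @Override public void close() {
        logger.setLevel(previous);      // restore, even if the test failed
    }
}
```

Usage would be, for example, `try (LogLevelScope s = new LogLevelScope("org.apache.jackrabbit.oak.plugins.document", Level.FINEST)) { ... }` around the failing test body.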

 JournalTest.cleanupTest failure on CI
 -

 Key: OAK-3080
 URL: https://issues.apache.org/jira/browse/OAK-3080
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Stefan Egli
Priority: Minor
  Labels: CI, Jenkins
 Fix For: 1.3.3


 Following test failure was seen 
 [here|https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/248/jdk=jdk1.8.0_11,label=Ubuntu,nsfixtures=SEGMENT_MK,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.plugins.document/JournalTest/cleanupTest/]
 {noformat}
 java.lang.AssertionError: expected:<0> but was:<1>
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:472)
   at org.junit.Assert.assertEquals(Assert.java:456)
   at 
 org.apache.jackrabbit.oak.plugins.document.JournalTest.cleanupTest(JournalTest.java:183)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3069) Provide option to eagerly copy the new index files in CopyOnRead

2015-07-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-3069.
--
Resolution: Fixed

* Ported to branches
* Updated release notes and docs

 Provide option to eagerly copy the new index files in CopyOnRead
 

 Key: OAK-3069
 URL: https://issues.apache.org/jira/browse/OAK-3069
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
  Labels: docs-impacting
 Fix For: 1.2.3, 1.3.3, 1.0.17

 Attachments: OAK-3069+logs-v2.patch, OAK-3069+logs.patch, 
 OAK-3069.patch


 Currently the CopyOnRead feature lazily starts copying the index files upon 
 first access. This at times does not work efficiently, because the index gets 
 updated frequently and hence by the time the copy is done the index would 
 have moved on. Due to this, most of the index reads turn out to be remote 
 reads and cause slowness.
 We should provide an option on a per-index basis to enable copying the files 
 eagerly, before exposing the new index to the query engine, such that any 
 read is done locally.
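
The eager variant boils down to: copy every file of the new index to the local directory first, and only then hand the local copy to the query engine. A minimal sketch with java.nio (invented names; this is not the Oak IndexCopier API):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

final class EagerIndexCopySketch {
    // Copies all files of the new index from remoteDir to localDir
    // before the index is exposed to queries, so later reads are local.
    static Path prefetch(Path remoteDir, Path localDir) throws IOException {
        Files.createDirectories(localDir);
        try (DirectoryStream<Path> files = Files.newDirectoryStream(remoteDir)) {
            for (Path file : files) {
                Files.copy(file, localDir.resolve(file.getFileName()),
                        StandardCopyOption.REPLACE_EXISTING);
            }
        }
        return localDir; // only now switch the query engine to localDir
    }
}
```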



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3080) JournalTest.cleanupTest failure on CI

2015-07-08 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618361#comment-14618361
 ] 

Chetan Mehrotra commented on OAK-3080:
--

Looks useful. Let's put this into action and debug this issue ;) Later we can 
make it configurable via some file, so as to avoid polluting the commit 
history with debug related commits.

 JournalTest.cleanupTest failure on CI
 -

 Key: OAK-3080
 URL: https://issues.apache.org/jira/browse/OAK-3080
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Stefan Egli
Priority: Minor
  Labels: CI, Jenkins
 Fix For: 1.3.3

 Attachments: OAK-3080.patch


 Following test failure was seen 
 [here|https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/248/jdk=jdk1.8.0_11,label=Ubuntu,nsfixtures=SEGMENT_MK,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.plugins.document/JournalTest/cleanupTest/]
 {noformat}
 java.lang.AssertionError: expected:<0> but was:<1>
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:472)
   at org.junit.Assert.assertEquals(Assert.java:456)
   at 
 org.apache.jackrabbit.oak.plugins.document.JournalTest.cleanupTest(JournalTest.java:183)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3081) SplitOperations may undo committed changes

2015-07-08 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-3081:
-

 Summary: SplitOperations may undo committed changes
 Key: OAK-3081
 URL: https://issues.apache.org/jira/browse/OAK-3081
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.2, 1.1.6, 1.0.10
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
 Fix For: 1.2.3, 1.3.3, 1.0.17


There is a race condition when SplitOperations identifies garbage on a 
document. The feature was introduced with OAK-2421.

The issue occurs if the following sequence of operations happens on a document:

- The document is updated with e.g. _deleted.rX-0-1 = false within 
Commit.applyToDocumentStore()
- Commit.createOrUpdateNode() performs the update and calls 
checkSplitCandidate(). The document becomes a split candidate, because it is 
over the threshold of 8kB
- At the same time a background update is running and starts to look at the 
split candidates.
- The background update looks at the document and calls 
SplitOperations.collectLocalChanges(). At this point _deleted.rX-0-1 = false is 
not committed and doc.isCommitted(rev) returns false
- The commit proceeds, updates the commit root and sets the new head revision
- The background update now calls SplitOperations.isGarbage(rX-0-1) and the 
method will return true!
- The background update then removes _deleted.rX-0-1 = false together with the 
_commitRoot entry



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3073) Make prefetch of index files as default option

2015-07-08 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618196#comment-14618196
 ] 

Chetan Mehrotra commented on OAK-3073:
--

[~alex.parvulescu] Should we enable this for trunk?

 Make prefetch of index files as default option
 ---

 Key: OAK-3073
 URL: https://issues.apache.org/jira/browse/OAK-3073
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.3.3


 As part of OAK-3069 a new feature is being provided to enable prefetching of 
 index files to a local directory. To start with, this would need to be 
 enabled explicitly. Once it is found to be stable we should enable it by 
 default.
 To start with, we just enable it by default on trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3081) SplitOperations may undo committed changes

2015-07-08 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618221#comment-14618221
 ] 

Marcel Reutegger commented on OAK-3081:
---

Added a test to trunk (currently ignored): http://svn.apache.org/r1689810

 SplitOperations may undo committed changes
 --

 Key: OAK-3081
 URL: https://issues.apache.org/jira/browse/OAK-3081
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.10, 1.1.6, 1.2
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
 Fix For: 1.2.3, 1.3.3, 1.0.17


 There is a race condition when SplitOperations identifies garbage on a 
 document. The feature was introduced with OAK-2421.
 The issue occurs if the following sequence of operations happens on a 
 document:
 - The document is updated with e.g. _deleted.rX-0-1 = false within 
 Commit.applyToDocumentStore()
 - Commit.createOrUpdateNode() performs the update and calls 
 checkSplitCandidate(). The document becomes a split candidate, because it is 
 over the threshold of 8kB
 - At the same time a background update is running and starts to look at the 
 split candidates.
 - The background update looks at the document and calls 
 SplitOperations.collectLocalChanges(). At this point _deleted.rX-0-1 = false 
 is not committed and doc.isCommitted(rev) returns false
 - The commit proceeds, updates the commit root and sets the new head revision
 - The background update now calls SplitOperations.isGarbage(rX-0-1) and the 
 method will return true!
 - The background update then removes _deleted.rX-0-1 = false together with 
 the _commitRoot entry





[jira] [Commented] (OAK-3081) SplitOperations may undo committed changes

2015-07-08 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618298#comment-14618298
 ] 

Marcel Reutegger commented on OAK-3081:
---

Updated test in trunk: http://svn.apache.org/r1689828

 SplitOperations may undo committed changes
 --

 Key: OAK-3081
 URL: https://issues.apache.org/jira/browse/OAK-3081
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.10, 1.1.6, 1.2
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
 Fix For: 1.2.3, 1.3.3, 1.0.17


 There is a race condition when SplitOperations identifies garbage on a 
 document. The feature was introduced with OAK-2421.
 The issue occurs if the following sequence of operations happens on a 
 document:
 - The document is updated with e.g. _deleted.rX-0-1 = false within 
 Commit.applyToDocumentStore()
 Commit.createOrUpdateNode() performs the update and calls 
 checkSplitCandidate(). The document becomes a split candidate, because it is 
 over the threshold of 8kB
 - At the same time a background update is running and starts to look at the 
 split candidates.
 - The background update looks at the document and calls 
 SplitOperations.collectLocalChanges(). At this point _deleted.rX-0-1 = false 
 is not committed and doc.isCommitted(rev) returns false
 - The commit proceeds, updates the commit root and sets the new head revision
 - The background update now calls SplitOperations.isGarbage(rX-0-1) and the 
 method will return true!
 - The background update then removes _deleted.rX-0-1 = false together with 
 the _commitRoot entry





[jira] [Updated] (OAK-3073) Make prefetch of index files as default option

2015-07-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3073:
-
Labels: candidate_oak_1_0  (was: )

 Make prefetch of index files as default option
 ---

 Key: OAK-3073
 URL: https://issues.apache.org/jira/browse/OAK-3073
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
  Labels: candidate_oak_1_0
 Fix For: 1.3.3


 As part of OAK-3069 a new feature is being provided to enable prefetching of 
 index files to a local directory. To start with, this would need to be enabled 
 explicitly. Once it is found to be stable, we should enable it by default.
 To start with, we just enable it by default on trunk.





[jira] [Updated] (OAK-3073) Make prefetch of index files as default option

2015-07-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3073:
-
Labels: candidate_oak_1_0 candidate_oak_1_2  (was: candidate_oak_1_0)

 Make prefetch of index files as default option
 ---

 Key: OAK-3073
 URL: https://issues.apache.org/jira/browse/OAK-3073
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
  Labels: candidate_oak_1_0, candidate_oak_1_2
 Fix For: 1.3.3


 As part of OAK-3069 a new feature is being provided to enable prefetching of 
 index files to a local directory. To start with, this would need to be enabled 
 explicitly. Once it is found to be stable, we should enable it by default.
 To start with, we just enable it by default on trunk.





[jira] [Resolved] (OAK-3073) Make prefetch of index files as default option

2015-07-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-3073.
--
Resolution: Fixed

Done with http://svn.apache.org/r1689831

 Make prefetch of index files as default option
 ---

 Key: OAK-3073
 URL: https://issues.apache.org/jira/browse/OAK-3073
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
  Labels: candidate_oak_1_0
 Fix For: 1.3.3


 As part of OAK-3069 a new feature is being provided to enable prefetching of 
 index files to a local directory. To start with, this would need to be enabled 
 explicitly. Once it is found to be stable, we should enable it by default.
 To start with, we just enable it by default on trunk.





[jira] [Commented] (OAK-3032) LDAP test failures

2015-07-08 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618227#comment-14618227
 ] 

angela commented on OAK-3032:
-

[~tripod], though I made some improvements to the authentication (OAK-3006, 
OAK-2998, OAK-2978, OAK-2690), those changes were already committed some time ago 
(before June 20th) and I am quite sure I ran all tests... but I can take a 
closer look. 

 LDAP test failures
 --

 Key: OAK-3032
 URL: https://issues.apache.org/jira/browse/OAK-3032
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Reporter: Marcel Reutegger
Assignee: angela
 Fix For: 1.3.3


 There are various test failures in the oak-auth-ldap module:
 Failed tests:   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.DefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.GuestTokenDefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.TokenDefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
 The tests also fail on travis. E.g.: 
 https://s3.amazonaws.com/archive.travis-ci.org/jobs/68124560/log.txt





[jira] [Comment Edited] (OAK-3032) LDAP test failures

2015-07-08 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618227#comment-14618227
 ] 

angela edited comment on OAK-3032 at 7/8/15 9:58 AM:
-

[~tripod], though I made some improvements to the authentication (OAK-3006, 
OAK-2998, OAK-2978, OAK-2690), those changes were already committed some time ago 
(before June 20th) and I am quite sure I ran all tests... but I could take a 
closer look...


was (Author: anchela):
[~tripod], though i made some improvements to the authentication (OAK-3006, 
OAK-2998, OAK-2978,OAK-2690) those changes were committed already some time ago 
(before june 20th) and i am quite sure i run all tests... but i can take a 
closer look. 

 LDAP test failures
 --

 Key: OAK-3032
 URL: https://issues.apache.org/jira/browse/OAK-3032
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Reporter: Marcel Reutegger
Assignee: Tobias Bocanegra
 Fix For: 1.3.3


 There are various test failures in the oak-auth-ldap module:
 Failed tests:   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.DefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.GuestTokenDefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.TokenDefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
 The tests also fail on travis. E.g.: 
 https://s3.amazonaws.com/archive.travis-ci.org/jobs/68124560/log.txt





[jira] [Updated] (OAK-3057) Simplify debugging conflict related errors

2015-07-08 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-3057:
-
Attachment: OAK-3057-v0.patch

Example plus a tiny patch for enabling some logs on the MergingNodeStateDiff.
The patch has a lot of extra stuff that needs to be removed, and the test is 
not really a test in itself; it's just an example.

I would move the provided test to a dedicated conflicts test class, and try to 
reproduce all possible conflict scenarios while capturing the logs and making 
sure the things we log make sense.

Another thing to consider is the amount of information we log, since the diff can 
become quite large. I actually prefer logging more info rather than less, if it 
helps point to the actual issue.

cc [~chetanm], [~mduerig], [~catholicon]




 Simplify debugging conflict related errors
 --

 Key: OAK-3057
 URL: https://issues.apache.org/jira/browse/OAK-3057
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
 Fix For: 1.3.3

 Attachments: OAK-3057-v0.patch


 Many times we see a conflict exception in the logs like the one below. Looking at the 
 stack trace it is hard to reason out what the conflict was, and hence harder to 
 debug such issues. Oak should provide more details as part of the exception 
 message itself to simplify debugging such issues. 
 Note that such issues are intermittent and might happen in background 
 processing logic. So it is required that all details be made part of the 
 exception message itself, as it would not be possible to turn on any trace 
 logging to get more details given the intermittent nature of such issues.
 {noformat}
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakState0001: 
 Unresolved conflicts in /content/dam/news/images/2015/06/27/blue-sky.jpg
   at 
 org.apache.jackrabbit.oak.plugins.commit.ConflictValidator.failOnMergeConflict(ConflictValidator.java:84)
   at 
 org.apache.jackrabbit.oak.plugins.commit.ConflictValidator.propertyAdded(ConflictValidator.java:54)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeEditor.propertyAdded(CompositeEditor.java:83)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.propertyAdded(EditorDiff.java:82)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:375)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:52)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorHook.processCommit(EditorHook.java:54)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeHook.processCommit(CompositeHook.java:60)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeHook.processCommit(CompositeHook.java:60)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$InMemory.merge(AbstractNodeStoreBranch.java:557)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge0(AbstractNodeStoreBranch.java:329)
   at 
 

[jira] [Resolved] (OAK-3064) Oak Run Main.java passing values in the wrong order

2015-07-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-3064.

   Resolution: Fixed
Fix Version/s: 1.3.3

Fixed at http://svn.apache.org/r1689801.
Thanks for spotting! 

 Oak Run Main.java passing values in the wrong order
 ---

 Key: OAK-3064
 URL: https://issues.apache.org/jira/browse/OAK-3064
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: upgrade
Reporter: Jason E Bailey
Assignee: Michael Dürig
Priority: Minor
 Fix For: 1.3.3


 In org.apache.jackrabbit.oak.run.Main.java on line 952 the following method 
 is called:
 RepositoryContext.create(RepositoryConfig.create(dir, xml));
 The RepositoryConfig method signature is `create(File xml, File dir)`; this 
 causes an error when running the upgrade process.
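
Since both parameters are java.io.File, the swapped call compiles cleanly and only fails at runtime. A minimal sketch (the create method here is a hypothetical stand-in, not Jackrabbit's actual implementation) of why such same-typed signatures are easy to misuse:

```java
import java.io.File;

// Hypothetical stand-in for RepositoryConfig.create(File xml, File dir):
// the real method parses the config file; here we only validate the order.
public class ArgOrderSketch {

    static String create(File xml, File dir) {
        if (!xml.getName().endsWith(".xml")) {
            throw new IllegalArgumentException("expected a config file, got: " + xml);
        }
        return "config=" + xml.getName() + ", home=" + dir.getName();
    }

    public static void main(String[] args) {
        File xml = new File("repository.xml");
        File dir = new File("repository");
        System.out.println(create(xml, dir));   // correct order
        try {
            create(dir, xml);                   // the OAK-3064 call: compiles, fails at runtime
        } catch (IllegalArgumentException e) {
            System.out.println("swapped arguments rejected: " + e.getMessage());
        }
    }
}
```

A common defense against this class of bug is distinct wrapper types (or a builder) so the compiler catches an argument swap.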





[jira] [Resolved] (OAK-3052) Make compaction gain estimate threshold configurable

2015-07-08 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-3052.
--
Resolution: Fixed

fixed on trunk with http://svn.apache.org/r1689805

 Make compaction gain estimate threshold configurable
 

 Key: OAK-3052
 URL: https://issues.apache.org/jira/browse/OAK-3052
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
Priority: Minor
  Labels: compaction, gc
 Fix For: 1.3.3

 Attachments: OAK-3052-v2.patch, OAK-3052.patch


 Currently compaction is skipped if the compaction gain estimator determines 
 that less than 10% of space could be reclaimed. Instead of relying on a hard 
 coded value of 10% we should make this configurable. 





[jira] [Created] (OAK-3082) Redundant entries in effective policies per principal-set

2015-07-08 Thread angela (JIRA)
angela created OAK-3082:
---

 Summary: Redundant entries in effective policies per principal-set
 Key: OAK-3082
 URL: https://issues.apache.org/jira/browse/OAK-3082
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: angela
Priority: Minor
 Fix For: 1.3.3


When retrieving the effective policies for a given set of principals, the 
resulting array of policies contains redundant entries if a given policy 
contains multiple ACEs for the given set of principals. 





[jira] [Resolved] (OAK-3082) Redundant entries in effective policies per principal-set

2015-07-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-3082.
-
Resolution: Fixed

Committed revision 1689820.

 Redundant entries in effective policies per principal-set
 -

 Key: OAK-3082
 URL: https://issues.apache.org/jira/browse/OAK-3082
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: angela
Assignee: angela
Priority: Minor
 Fix For: 1.3.3


 When retrieving the effective policies for a given set of principals, the 
 resulting array of policies contains redundant entries if a given policy 
 contains multiple ACEs for the given set of principals. 





[jira] [Commented] (OAK-3054) IndexStatsMBean should provide some details if the async indexing is failing

2015-07-08 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618259#comment-14618259
 ] 

Davide Giannella commented on OAK-3054:
---

In order to debug such situations, it would help if the MBean (or any other) 
could expose the status of the properties under the {{/:async}} node and the 
list of all available checkpoints.

 IndexStatsMBean should provide some details if the async indexing is failing
 

 Key: OAK-3054
 URL: https://issues.apache.org/jira/browse/OAK-3054
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Chetan Mehrotra
Priority: Minor
 Fix For: 1.2.3, 1.3.3, 1.0.17


 If the background indexing fails for some reason, it logs the exception the 
 first time as _The index update failed ..._. 
 After that, if indexing continues to fail, no further logging is done so 
 as to avoid creating noise.
 This poses a problem on long running systems, where the original exception might 
 not be noticed and the index does not show updated results. For such cases we 
 should expose the indexing health as part of {{IndexStatsMBean}}. Also we can 
 provide the last recorded exception. 
 An administrator can then check the MBean and enable debug logs for further 
 troubleshooting.





[jira] [Created] (OAK-3083) Unable to resolve org.apache.jackrabbit.oak-core (missing service org.apache.jackrabbit.core.util.db.ConnectionFactory)

2015-07-08 Thread Oliver Lietz (JIRA)
Oliver Lietz created OAK-3083:
-

 Summary: Unable to resolve org.apache.jackrabbit.oak-core (missing 
service org.apache.jackrabbit.core.util.db.ConnectionFactory)
 Key: OAK-3083
 URL: https://issues.apache.org/jira/browse/OAK-3083
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.2.2
 Environment: Apache Karaf 4.0.0
Reporter: Oliver Lietz


The bundle {{org.apache.jackrabbit.oak-core}} requires ({{Require-Capability}}) 
a service {{org.apache.jackrabbit.core.util.db.ConnectionFactory}} which is not 
provided by its (Maven) dependencies or known bundles:

{{Unable to resolve org.apache.jackrabbit.oak-core/1.2.2: missing requirement 
\[org.apache.jackrabbit.oak-core/1.2.2\] osgi.service; effective:=active; 
filter:=(objectClass=org.apache.jackrabbit.core.util.db.ConnectionFactory)}}

The bundle {{org.apache.jackrabbit.jackrabbit-data}} which contains 
{{org.apache.jackrabbit.core.util.db.ConnectionFactory}} does not provide 
({{Provide-Capability}}) a service.
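
One way to satisfy the resolver would be for the providing bundle to declare the capability. A hypothetical MANIFEST.MF fragment showing the OSGi syntax (a sketch, not the actual jackrabbit-data manifest):

```
Provide-Capability: osgi.service;
 objectClass:List<String>="org.apache.jackrabbit.core.util.db.ConnectionFactory"
```

Alternatively, the {{Require-Capability}} in oak-core could carry {{resolution:=optional}} so that resolution does not fail when no such service registration is advertised.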





[jira] [Updated] (OAK-3032) LDAP test failures

2015-07-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-3032:

Assignee: Tobias Bocanegra  (was: angela)

 LDAP test failures
 --

 Key: OAK-3032
 URL: https://issues.apache.org/jira/browse/OAK-3032
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Reporter: Marcel Reutegger
Assignee: Tobias Bocanegra
 Fix For: 1.3.3


 There are various test failures in the oak-auth-ldap module:
 Failed tests:   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.DefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.GuestTokenDefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.TokenDefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
 The tests also fail on travis. E.g.: 
 https://s3.amazonaws.com/archive.travis-ci.org/jobs/68124560/log.txt





[jira] [Comment Edited] (OAK-3071) Add a compound index for _modified + _id

2015-07-08 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618611#comment-14618611
 ] 

Thomas Mueller edited comment on OAK-3071 at 7/8/15 1:54 PM:
-

Are we sure such a compound index will help for the condition we use (benchmark 
numbers)? As far as I understand the docs at 
http://docs.mongodb.org/manual/core/multikey-index-bounds/ , a compound index 
will not help much here. Sure, it will use an index scan and not load the 
documents, but I'm not sure if this will improve performance a lot.

Maybe changing the query could help, in combination with this index (use 
modified in(...) instead of modified >= x). 


was (Author: tmueller):
Are we sure such a compound index will help for the condition we use (benchmark 
numbers)? As far as I understand the docs at 
http://docs.mongodb.org/manual/core/multikey-index-bounds/ , a compound index 
will not help much here.

 Add a compound index for _modified + _id
 

 Key: OAK-3071
 URL: https://issues.apache.org/jira/browse/OAK-3071
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Chetan Mehrotra
 Fix For: 1.2.3, 1.3.4, 1.0.17


 As explained in OAK-1966 diff logic makes a call like
 bq. db.nodes.find({ _id: { $gt: 3:/content/foo/01/, $lt: 
 3:/content/foo010 }, _modified: { $gte: 1405085300 } }).sort({_id:1})
 For better and deterministic query performance we would need to create a 
 compound index like \{_modified:1, _id:1\}. This index would ensure that 
 Mongo does not have to perform an object scan while evaluating such a query.
 Care must be taken that the index is only created by default for fresh setups. For 
 existing setups we should expose a JMX operation which can be invoked by a 
 system admin to create the required index during a maintenance window.
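
The proposed index and the diff query can be sketched in the mongo shell as follows (illustrative values; the quotes around the `_id` bounds are assumed, since the original formatting stripped them):

```javascript
// Create the proposed compound index (by default only on fresh setups,
// per the description above):
db.nodes.createIndex({ _modified: 1, _id: 1 });

// The diff query from OAK-1966 that this index is meant to serve:
db.nodes.find({
    _id: { $gt: "3:/content/foo/01/", $lt: "3:/content/foo010" },
    _modified: { $gte: 1405085300 }
}).sort({ _id: 1 });
```

Because `_modified` leads the index, the range on `_id` cannot use the index bounds tightly, which is the concern raised in the comments above.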





[jira] [Created] (OAK-3084) Commit.applyToDocumentStore(Revision) may rollback committed changes

2015-07-08 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-3084:
-

 Summary: Commit.applyToDocumentStore(Revision) may rollback 
committed changes
 Key: OAK-3084
 URL: https://issues.apache.org/jira/browse/OAK-3084
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.3.0, 1.2, 1.0
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.2.3, 1.3.3, 1.0.17


At the end of a commit Commit.applyToDocumentStore(Revision) performs a 
conflict check on the commit root and adds a collision marker to other 
conflicting uncommitted changes. The method checkConflicts() can potentially 
throw a DocumentStoreException and trigger a rollback of already committed 
changes.
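
The hazard is an auxiliary post-commit step throwing inside the same control flow that drives the rollback. A schematic Java sketch (hypothetical names, not Oak's actual code) of the pattern and one way to avoid it:

```java
// Schematic sketch (hypothetical names) of the OAK-3084 hazard: once the
// commit-root update has succeeded, auxiliary steps such as conflict marking
// must not be allowed to trigger a rollback of the committed change.
public class PostCommitSketch {

    static String apply(boolean conflictCheckThrows) {
        boolean committed = false;
        try {
            committed = true;                 // commit root updated: change is committed
            if (conflictCheckThrows) {        // a later checkConflicts() may still throw
                throw new RuntimeException("store exception during conflict check");
            }
            return "committed";
        } catch (RuntimeException e) {
            if (committed) {
                // avoidance: log and move on, never roll back a committed change
                return "committed (conflict check failed)";
            }
            return "rolled back";             // only valid before the commit landed
        }
    }

    public static void main(String[] args) {
        System.out.println(apply(false));
        System.out.println(apply(true));
    }
}
```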





[jira] [Resolved] (OAK-2956) NPE in UserImporter when importing group with modified authorizable id

2015-07-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-2956.
-
   Resolution: Fixed
Fix Version/s: 1.3.3

NPE prevented by a proactive check for valid input; a ConstraintViolation will 
be thrown immediately in this case.

 NPE in UserImporter when importing group with modified authorizable id
 --

 Key: OAK-2956
 URL: https://issues.apache.org/jira/browse/OAK-2956
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Affects Versions: 1.2.2
Reporter: Alexander Klimetschek
Priority: Minor
 Fix For: 1.3.3


 When importing a file vault package with a group that existed before (same 
 uuid in the .content.xml) but a modified authorizable id, this NPE happens:
 {noformat}
 Caused by: java.lang.NullPointerException: null
   at 
 org.apache.jackrabbit.oak.security.user.UserImporter.handlePropInfo(UserImporter.java:242)
   at 
 org.apache.jackrabbit.oak.jcr.xml.ImporterImpl.importProperties(ImporterImpl.java:287)
   at 
 org.apache.jackrabbit.oak.jcr.xml.ImporterImpl.startNode(ImporterImpl.java:470)
   at 
 org.apache.jackrabbit.oak.jcr.xml.SysViewImportHandler.processNode(SysViewImportHandler.java:80)
   at 
 org.apache.jackrabbit.oak.jcr.xml.SysViewImportHandler.startElement(SysViewImportHandler.java:117)
   at 
 org.apache.jackrabbit.oak.jcr.xml.ImportHandler.startElement(ImportHandler.java:183)
   at 
 org.apache.jackrabbit.vault.fs.impl.io.JcrSysViewTransformer.startNode(JcrSysViewTransformer.java:167)
   at 
 org.apache.jackrabbit.vault.fs.impl.io.DocViewSAXImporter.startElement(DocViewSAXImporter.java:598)
   ... 87 common frames omitted
 {noformat}
 It looks like [this 
 line|https://github.com/apache/jackrabbit-oak/blob/105f890e04ee990f0e71d88937955680670d96f7/oak-core/src/main/java/org/apache/jackrabbit/oak/security/user/UserImporter.java#L242],
  meaning {{userManager.getAuthorizable(id)}} returned null.
 While the use case might not be something to support (this happens during 
 development, changing group names/details until right), at least the NPE 
 should be guarded against and a better error message given.
 Looking closer, it seems like a difference between 
 {{UserManager.getAuthorizable(Tree)}} (called earlier in the method) (I 
 assume using UUID to look up the existing group successfully) and 
 {{UserManager.getAuthorizable(String)}} (using the rep:authorizableId to look 
 up the user, failing, because it changed).





[jira] [Commented] (OAK-3071) Add a compound index for _modified + _id

2015-07-08 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618611#comment-14618611
 ] 

Thomas Mueller commented on OAK-3071:
-

Are we sure such a compound index will help for the condition we use (benchmark 
numbers)? As far as I understand the docs at 
http://docs.mongodb.org/manual/core/multikey-index-bounds/ , a compound index 
will not help much here.

 Add a compound index for _modified + _id
 

 Key: OAK-3071
 URL: https://issues.apache.org/jira/browse/OAK-3071
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Chetan Mehrotra
 Fix For: 1.2.3, 1.3.4, 1.0.17


 As explained in OAK-1966 diff logic makes a call like
 bq. db.nodes.find({ _id: { $gt: 3:/content/foo/01/, $lt: 
 3:/content/foo010 }, _modified: { $gte: 1405085300 } }).sort({_id:1})
 For better and deterministic query performance we would need to create a 
 compound index like \{_modified:1, _id:1\}. This index would ensure that 
 Mongo does not have to perform an object scan while evaluating such a query.
 Care must be taken that the index is only created by default for fresh setups. For 
 existing setups we should expose a JMX operation which can be invoked by a 
 system admin to create the required index during a maintenance window.





[jira] [Comment Edited] (OAK-3071) Add a compound index for _modified + _id

2015-07-08 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618611#comment-14618611
 ] 

Thomas Mueller edited comment on OAK-3071 at 7/8/15 1:56 PM:
-

Are we sure such a compound index will help for the condition we use (benchmark 
numbers)? As far as I understand the docs at 
http://docs.mongodb.org/manual/core/multikey-index-bounds/ , a compound index 
will not help much here. Sure, it will use an index scan and not load the 
documents, but I'm not sure if this will improve performance a lot. Please note that 
adding a column to an index (especially a large column) will decrease insert 
and update performance.

Maybe changing the query could help, in combination with this index (use 
modified in(...) instead of modified >= x). 


was (Author: tmueller):
Are we sure such a compound index will help for the condition we use (benchmark 
numbers)? As far as I understand the docs at 
http://docs.mongodb.org/manual/core/multikey-index-bounds/ , a compound index 
will not help much here. Sure, it will use an index scan and not load the 
documents, but I'm not sure if this will improve performance a lot.

Maybe changing the query could help, in combination with this index (use 
modified in(...) instead of modified >= x). 

 Add a compound index for _modified + _id
 

 Key: OAK-3071
 URL: https://issues.apache.org/jira/browse/OAK-3071
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Chetan Mehrotra
 Fix For: 1.2.3, 1.3.4, 1.0.17


 As explained in OAK-1966 diff logic makes a call like
 bq. db.nodes.find({ _id: { $gt: 3:/content/foo/01/, $lt: 
 3:/content/foo010 }, _modified: { $gte: 1405085300 } }).sort({_id:1})
 For better and deterministic query performance we would need to create a 
 compound index like \{_modified:1, _id:1\}. This index would ensure that 
 Mongo does not have to perform object scan while evaluating such a query.
 Care must be taken that index is only created by default for fresh setup. For 
 existing setup we should expose a JMX operation which can be invoked by 
 system admin to create the required index as per maintenance window





[jira] [Created] (OAK-3085) Add timestamp property to journal entries

2015-07-08 Thread Stefan Egli (JIRA)
Stefan Egli created OAK-3085:


 Summary: Add timestamp property to journal entries
 Key: OAK-3085
 URL: https://issues.apache.org/jira/browse/OAK-3085
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Affects Versions: 1.3.2, 1.2.2
Reporter: Stefan Egli
 Fix For: 1.2.3, 1.3.3


OAK-3001 is about improving the JournalGarbageCollector by querying on a 
separated-out timestamp property (rather than the id that encapsulated the 
timestamp).

In order to remove OAK-3001 as a blocker ticket from the 1.2.3 release, this 
ticket is about adding a timestamp property to the journal entry but not making 
use of it yet. Later on, when OAK-3001 is tackled, this timestamp property 
already exists and migration is not an issue anymore (as 1.2.3 introduces the 
journal entry for the first time)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2977) Fast result size estimate: OSGi configuration

2015-07-08 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2977:

Fix Version/s: (was: 1.2.3)

 Fast result size estimate: OSGi configuration
 -

 Key: OAK-2977
 URL: https://issues.apache.org/jira/browse/OAK-2977
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Thomas Mueller
Assignee: Thomas Mueller
  Labels: doc-impacting
 Fix For: 1.3.3


 The fast result size option in OAK-2926 should be configurable, for example 
 over OSGi.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3069) Provide option to eagerly copy the new index files in CopyOnRead

2015-07-08 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618616#comment-14618616
 ] 

Alex Parvulescu commented on OAK-3069:
--

[~chetanm] have you also included r1689008 in the backport?

 Provide option to eagerly copy the new index files in CopyOnRead
 

 Key: OAK-3069
 URL: https://issues.apache.org/jira/browse/OAK-3069
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
  Labels: docs-impacting
 Fix For: 1.2.3, 1.3.3, 1.0.17

 Attachments: OAK-3069+logs-v2.patch, OAK-3069+logs.patch, 
 OAK-3069.patch


 Currently the CopyOnRead feature lazily starts copying the index files upon 
 first access. This at times does not work efficiently because the index gets 
 updated frequently and hence by the time the copy is done the index would have 
 moved on. Due to this most of the index reads turn out to be remote reads and 
 cause slowness.
 We should provide an option on a per-index basis to enable copying the files 
 eagerly before exposing the new index to the query engine, such that any read 
 done is done locally



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3001) Simplify JournalGarbageCollector using a dedicated timestamp property

2015-07-08 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-3001:
-
Fix Version/s: (was: 1.2.3)
   1.2.4

Factored out adding just the timestamp property to avoid later migration 
problems to OAK-3085. OAK-3001 itself can therefore be removed from 1.2.3 as a 
blocking issue.

 Simplify JournalGarbageCollector using a dedicated timestamp property
 -

 Key: OAK-3001
 URL: https://issues.apache.org/jira/browse/OAK-3001
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Stefan Egli
Priority: Critical
  Labels: scalability
 Fix For: 1.2.4, 1.3.3


 This subtask is about spawning out a 
 [comment|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585733page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585733]
  from [~chetanm] re JournalGC:
 {quote}
 Further looking at JournalGarbageCollector ... it would be simpler if you 
 record the journal entry timestamp as an attribute in JournalEntry document 
 and then you can delete all the entries which are older than some time by a 
 simple query. This would avoid fetching all the entries to be deleted on the 
 Oak side
 {quote}
 and a corresponding 
 [reply|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585870page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585870]
  from myself:
 {quote}
 Re querying by timestamp: that would indeed be simpler. With the current set 
 of DocumentStore API however, I believe this is not possible. But: 
 [DocumentStore.query|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentStore.java#L127]
  comes quite close: it would probably just require the opposite of that 
 method too: 
 {code}
 public <T extends Document> List<T> query(Collection<T> collection,
   String fromKey,
   String toKey,
   String indexedProperty,
   long endValue,
   int limit) {
 {code}
 .. or what about generalizing this method to have both a {{startValue}} and 
 an {{endValue}} - with {{-1}} indicating when one of them is not used?
 {quote}
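The generalized query suggested in the quote above can be sketched against an in-memory store. This is a hypothetical illustration of the proposed semantics only (bounds on an indexed long property with -1 marking an unused bound); the class, key-range inclusivity, and data are made up, not the actual DocumentStore contract.

```java
import java.util.*;
import java.util.stream.Collectors;

public class GeneralizedQuery {
    // documents keyed by id; each document is a map of property -> long value
    final NavigableMap<String, Map<String, Long>> docs = new TreeMap<>();

    // query over key range [fromKey, toKey) with optional bounds on an
    // indexed property; -1 means "this bound is not used"
    List<String> query(String fromKey, String toKey, String indexedProperty,
                       long startValue, long endValue, int limit) {
        return docs.subMap(fromKey, true, toKey, false).entrySet().stream()
                .filter(e -> {
                    Long v = e.getValue().get(indexedProperty);
                    if (v == null) return false;
                    if (startValue != -1 && v < startValue) return false;
                    if (endValue != -1 && v > endValue) return false;
                    return true;
                })
                .limit(limit)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        GeneralizedQuery store = new GeneralizedQuery();
        store.docs.put("journal-1", Map.of("_ts", 100L));
        store.docs.put("journal-2", Map.of("_ts", 200L));
        store.docs.put("journal-3", Map.of("_ts", 300L));
        // endValue only: everything with _ts <= 250
        System.out.println(store.query("journal-", "journal-z", "_ts", -1, 250, 10));
        // startValue only: everything with _ts >= 150
        System.out.println(store.query("journal-", "journal-z", "_ts", 150, -1, 10));
    }
}
```

With both bounds supported, a journal GC "delete everything older than X" becomes a single range query instead of a fetch-and-filter on the Oak side.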



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3085) Add timestamp property to journal entries

2015-07-08 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-3085:
-
Attachment: OAK-3085.patch

Attached [^OAK-3085.patch] that adds a {{_ts}} property to the journal entries 
- without making any use of it - just being ready for OAK-3001 to query and 
delete.

[~mreutegg], [~chetanm], can you pls review and apply, thx!
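Once journal entries carry a {{_ts}} property, the OAK-3001 garbage collector reduces to a single cutoff computation plus a range delete. The sketch below is a hypothetical in-memory illustration of that idea, not code from the attached patch; the map stands in for a store-side "delete where _ts < cutoff".

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class JournalGcSketch {
    // journal entries keyed by id, value = the entry's _ts in millis
    static final Map<String, Long> journal = new ConcurrentHashMap<>();

    // delete all entries older than (now - maxAge); returns how many were removed
    static int collect(long nowMillis, long maxAgeMillis) {
        long cutoff = nowMillis - maxAgeMillis;
        int before = journal.size();
        // stands in for a single store-side range delete on the _ts property
        journal.values().removeIf(ts -> ts < cutoff);
        return before - journal.size();
    }

    public static void main(String[] args) {
        journal.put("1-old", 1_000L);
        journal.put("1-recent", 90_000L);
        journal.put("2-recent", 95_000L);
        int removed = collect(100_000L, 20_000L); // cutoff = 80_000
        System.out.println("removed " + removed + ", remaining " + journal.size());
    }
}
```

Compared with parsing the timestamp out of each entry's id, this only works if _ts is indexed, which is exactly why the index question below matters.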

 Add timestamp property to journal entries
 -

 Key: OAK-3085
 URL: https://issues.apache.org/jira/browse/OAK-3085
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Affects Versions: 1.2.2, 1.3.2
Reporter: Stefan Egli
 Fix For: 1.2.3, 1.3.3

 Attachments: OAK-3085.patch


 OAK-3001 is about improving the JournalGarbageCollector by querying on a 
 separated-out timestamp property (rather than the id that encapsulated the 
 timestamp).
 In order to remove OAK-3001 as a blocker ticket from the 1.2.3 release, this 
 ticket is about adding a timestamp property to the journal entry but not 
 making use of it yet. Later on, when OAK-3001 is tackled, this timestamp 
 property already exists and migration is not an issue anymore (as 1.2.3 
 introduces the journal entry for the first time)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2885) Enable saveDirListing by default

2015-07-08 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618762#comment-14618762
 ] 

Chetan Mehrotra commented on OAK-2885:
--

Merged to branches
* 1.0 - http://svn.apache.org/r1689900
* 1.2 - http://svn.apache.org/r1689901

 Enable saveDirListing by default
 

 Key: OAK-2885
 URL: https://issues.apache.org/jira/browse/OAK-2885
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.2.3, 1.3.3, 1.0.17


 OAK-2809 introduced support for saving directory listing. Once this feature 
 is found to be stable we should enable it by default.
 As a start we can enable it by default for trunk for now



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2892) Speed up lucene indexing post migration by pre extracting the text content from binaries

2015-07-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-2892:
-
Fix Version/s: (was: 1.0.17)
   1.0.18

 Speed up lucene indexing post migration by pre extracting the text content 
 from binaries
 

 Key: OAK-2892
 URL: https://issues.apache.org/jira/browse/OAK-2892
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: lucene, run
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
  Labels: performance
 Fix For: 1.3.3, 1.0.18


 While migrating large repositories say having 3 M docs (250k PDF) Lucene 
 indexing takes a long time to complete (at times 4 days!). Currently the text 
 extraction logic is coupled with Lucene indexing and hence is performed in a 
 single threaded mode which slows down the indexing process. Further if the 
 reindexing has to be triggered it has to be done all over again.
 To speed up the Lucene indexing we can decouple the text extraction
 from actual indexing. It is partly based on discussion on OAK-2787
 # Introduce a new ExtractedTextProvider which can provide extracted text for 
 a given Blob instance
 # In oak-run introduce a new indexer mode - This would take a path in 
 repository and would then traverse the repository and look for existing 
 binaries and extract text from that
 So before or after migration is done one can run this oak-run tool to create 
 this store which has the text already extracted. Then post startup we need to 
 wire up the ExtractedTextProvider instance (which is backed by the BlobStore 
 populated before) and indexing logic can just get content from that. This 
 would avoid performing expensive text extraction in the indexing thread.
 See discussion thread http://markmail.org/thread/ndlfpkwfgpey6o66



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3007) SegmentStore cache does not take string map into account

2015-07-08 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618694#comment-14618694
 ] 

Thomas Mueller commented on OAK-3007:
-

While writing tests, I found and fixed some issues and changed the API a bit to 
simplify testing. Still working on this.

 SegmentStore cache does not take string map into account
 --

 Key: OAK-3007
 URL: https://issues.apache.org/jira/browse/OAK-3007
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Thomas Mueller
 Fix For: 1.3.3

 Attachments: OAK-3007-2.patch, OAK-3007.patch


 The SegmentStore cache size calculation ignores the size of the field 
 Segment.string (a concurrent hash map). It looks like a regular segment in a 
 memory mapped file has the size 1024, no matter how many strings are loaded 
 in memory. This can lead to out-of-memory errors. There seems to be no way to limit 
 (configure) the amount of memory used by strings. In one example, 100'000 
 segments are loaded in memory, and 5 GB are used for Strings in that map.
 We need a way to configure the amount of memory used for that. This seems to 
 be basically a cache. OAK-2688 does this, but it would be better to have one 
 cache with a configurable size limit.
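The configurable cache the issue asks for can be sketched as a weight-limited LRU map: evict least-recently-used strings once the estimated memory in use exceeds the limit. This is a minimal illustrative sketch, not the OAK-2688 implementation; the 2-bytes-per-char weight estimate and all names are assumptions.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedStringCache {
    private final long maxWeight;
    private long weight;
    // access-order LinkedHashMap iterates least-recently-used entries first
    private final LinkedHashMap<String, String> map =
            new LinkedHashMap<>(16, 0.75f, true);

    BoundedStringCache(long maxWeight) { this.maxWeight = maxWeight; }

    String get(String key) { return map.get(key); }

    void put(String key, String value) {
        String old = map.put(key, value);
        if (old != null) weight -= 2L * old.length();
        weight += 2L * value.length(); // rough estimate: ~2 bytes per char
        // evict LRU entries until we are back under the configured limit
        Iterator<Map.Entry<String, String>> it = map.entrySet().iterator();
        while (weight > maxWeight && it.hasNext()) {
            weight -= 2L * it.next().getValue().length();
            it.remove();
        }
    }

    int size() { return map.size(); }

    public static void main(String[] args) {
        BoundedStringCache cache = new BoundedStringCache(20); // ~20-byte budget
        cache.put("a", "12345");
        cache.put("b", "12345");
        cache.put("c", "12345"); // exceeds the budget, evicts "a"
        System.out.println("size=" + cache.size() + ", a=" + cache.get("a"));
    }
}
```

A real implementation would also need accurate per-segment weighing and thread safety, but the key point is the single configurable limit across all loaded strings.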



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3085) Add timestamp property to journal entries

2015-07-08 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-3085:
-
Attachment: OAK-3085.v2.patch

[~chetanm]
bq. We would need to index that field. So required index needs to be created in 
{{MongoDocumentStore}}
good point :S - added in [^OAK-3085.v2.patch]

[~reschke], Chetan pointed out that we'd have to update RDB for allowing 
queries on {{journal._ts}}. Wdyt? (This was discussed to be a blocker for 1.2.3 
release this friday)

 Add timestamp property to journal entries
 -

 Key: OAK-3085
 URL: https://issues.apache.org/jira/browse/OAK-3085
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Affects Versions: 1.2.2, 1.3.2
Reporter: Stefan Egli
 Fix For: 1.2.3, 1.3.3

 Attachments: OAK-3085.patch, OAK-3085.v2.patch


 OAK-3001 is about improving the JournalGarbageCollector by querying on a 
 separated-out timestamp property (rather than the id that encapsulated the 
 timestamp).
 In order to remove OAK-3001 as a blocker ticket from the 1.2.3 release, this 
 ticket is about adding a timestamp property to the journal entry but not 
 making use of it yet. Later on, when OAK-3001 is tackled, this timestamp 
 property already exists and migration is not an issue anymore (as 1.2.3 
 introduces the journal entry for the first time)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3085) Add timestamp property to journal entries

2015-07-08 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618728#comment-14618728
 ] 

Chetan Mehrotra commented on OAK-3085:
--

Following comments
* We would need to index that field. So required index needs to be created in 
{{MongoDocumentStore}}
* For RDB I am not sure - I think this should be added as a first class column 
rather than part of JSON. And yes an index should also be created

 Add timestamp property to journal entries
 -

 Key: OAK-3085
 URL: https://issues.apache.org/jira/browse/OAK-3085
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Affects Versions: 1.2.2, 1.3.2
Reporter: Stefan Egli
 Fix For: 1.2.3, 1.3.3

 Attachments: OAK-3085.patch


 OAK-3001 is about improving the JournalGarbageCollector by querying on a 
 separated-out timestamp property (rather than the id that encapsulated the 
 timestamp).
 In order to remove OAK-3001 as a blocker ticket from the 1.2.3 release, this 
 ticket is about adding a timestamp property to the journal entry but not 
 making use of it yet. Later on, when OAK-3001 is tackled, this timestamp 
 property already exists and migration is not an issue anymore (as 1.2.3 
 introduces the journal entry for the first time)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3069) Provide option to eagerly copy the new index files in CopyOnRead

2015-07-08 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618754#comment-14618754
 ] 

Chetan Mehrotra commented on OAK-3069:
--

Thanks for noticing that. I merged that to 1.0 but missed doing that for 1.2. 
Done now with http://svn.apache.org/r1689897. Can you do a quick check so 
that nothing gets missed? (juggling a few tasks in parallel and now the 
aftereffects :( )

 Provide option to eagerly copy the new index files in CopyOnRead
 

 Key: OAK-3069
 URL: https://issues.apache.org/jira/browse/OAK-3069
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
  Labels: docs-impacting
 Fix For: 1.2.3, 1.3.3, 1.0.17

 Attachments: OAK-3069+logs-v2.patch, OAK-3069+logs.patch, 
 OAK-3069.patch


 Currently the CopyOnRead feature lazily starts copying the index files upon 
 first access. This at times does not work efficiently because the index gets 
 updated frequently and hence by the time the copy is done the index would have 
 moved on. Due to this most of the index reads turn out to be remote reads and 
 cause slowness.
 We should provide an option on a per-index basis to enable copying the files 
 eagerly before exposing the new index to the query engine, such that any read 
 done is done locally



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2885) Enable saveDirListing by default

2015-07-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-2885.
--
Resolution: Fixed

 Enable saveDirListing by default
 

 Key: OAK-2885
 URL: https://issues.apache.org/jira/browse/OAK-2885
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.2.3, 1.3.3, 1.0.17


 OAK-2809 introduced support for saving directory listing. Once this feature 
 is found to be stable we should enable it by default.
 As a start we can enable it by default for trunk for now



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2844) Introducing a simple document-based discovery-light service (to circumvent documentMk's eventual consistency delays)

2015-07-08 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618827#comment-14618827
 ] 

Chetan Mehrotra commented on OAK-2844:
--

Looks neat! It would take some time to review. However, just some quick 
feedback: can you move the changes around Descriptors to a sub-task to be 
dealt with independently, as they touch parts outside of the Document store 
logic and might require other people to take a look? 

 Introducing a simple document-based discovery-light service (to circumvent 
 documentMk's eventual consistency delays)
 

 Key: OAK-2844
 URL: https://issues.apache.org/jira/browse/OAK-2844
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mongomk
Reporter: Stefan Egli
  Labels: resilience
 Fix For: 1.4

 Attachments: InstanceStateChangeListener.java, OAK-2844.WIP-02.patch, 
 OAK-2844.patch, OAK-2844.v3.patch


 When running discovery.impl on a mongoMk-backed jcr repository, there are 
 risks of hitting problems such as described in SLING-3432 
 pseudo-network-partitioning: this happens when a jcr-level heartbeat does 
 not reach peers within the configured heartbeat timeout - it then treats that 
 affected instance as dead, removes it from the topology, and continues with 
 the remainings, potentially electing a new leader, running the risk of 
 duplicate leaders. This happens when delays in mongoMk grow larger than the 
 (configured) heartbeat timeout. These problems ultimately are due to the 
 'eventual consistency' nature of, not only mongoDB, but more so of mongoMk. 
 The only alternative so far is to increase the heartbeat timeout to match the 
 expected or measured delays that mongoMk can produce (under say given 
 load/performance scenarios).
 Assuming that mongoMk will always carry a risk of certain delays and a 
 maximum, reasonable (for discovery.impl timeout that is) maximum cannot be 
 guaranteed, a better solution is to provide discovery with more 'real-time' 
 like information and/or privileged access to mongoDb.
 Here's a summary of alternatives that have so far been floating around as a 
 solution to circumvent eventual consistency:
  # expose existing (jmx) information about active 'clusterIds' - this has 
 been proposed in SLING-4603. The pros: reuse of existing functionality. The 
 cons: going via jmx, binding of exposed functionality as 'to be maintained 
 API'
  # expose a plain mongo db/collection (via osgi injection) such that a higher 
 (sling) level discovery could directly write heartbeats there. The pros: 
 heartbeat latency would be minimal (assuming the collection is not sharded). 
 The cons: exposes a mongo db/collection potentially also to anyone else, with 
 the risk of opening up to unwanted possibilities
  # introduce a simple 'discovery-light' API to oak which solely provides 
 information about which instances are active in a cluster. The implementation 
 of this is not exposed. The pros: no need to expose a mongoDb/collection, 
 allows any other jmx-functionality to remain unchanged. The cons: a new API 
 that must be maintained
 This ticket is about the 3rd option, about a new mongo-based discovery-light 
 service that is introduced to oak. The functionality in short:
  * it defines a 'local instance id' that is non-persisted, ie can change at 
 each bundle activation.
  * it defines a 'view id' that uniquely identifies a particular incarnation 
 of a 'cluster view/state' (which is: a list of active instance ids)
  * and it defines a list of active instance ids
  * the above attributes are passed to interested components via a listener 
 that can be registered. that listener is called whenever the discovery-light 
 notices the cluster view has changed.
 While the actual implementation could in fact be based on the existing 
 {{getActiveClusterNodes()}} {{getClusterId()}} of the 
 {{DocumentNodeStoreMBean}}, the suggestion is to not fiddle with that part, 
 as that has dependencies to other logic. But instead, the suggestion is to 
 create a dedicated, other, collection ('discovery') where heartbeats as well 
 as the currentView are stored.
 Will attach a suggestion for an initial version of this for review.
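The contract described in the bullets above can be sketched as a small listener-based service. This is a hypothetical illustration only, not the attached patch: the method names and the use of random UUIDs for the non-persisted local instance id and per-incarnation view id are assumptions (only the listener concept and the attachment name InstanceStateChangeListener come from the issue).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.UUID;

public class DiscoveryLiteSketch {
    interface InstanceStateChangeListener {
        void stateChanged(String viewId, Set<String> activeInstanceIds);
    }

    // non-persisted: a new id at each bundle activation
    private final String localInstanceId = UUID.randomUUID().toString();
    private final List<InstanceStateChangeListener> listeners = new ArrayList<>();
    private Set<String> lastView = Set.of();

    String getLocalInstanceId() { return localInstanceId; }

    void register(InstanceStateChangeListener listener) { listeners.add(listener); }

    // called after scanning heartbeats in the dedicated 'discovery' collection;
    // listeners are only notified when the cluster view actually changed
    void onHeartbeatScan(Set<String> activeIds) {
        if (!activeIds.equals(lastView)) {
            lastView = activeIds;
            String viewId = UUID.randomUUID().toString(); // unique per incarnation
            for (InstanceStateChangeListener l : listeners) {
                l.stateChanged(viewId, activeIds);
            }
        }
    }

    public static void main(String[] args) {
        DiscoveryLiteSketch discovery = new DiscoveryLiteSketch();
        discovery.register((viewId, ids) -> System.out.println("view " + viewId + ": " + ids));
        discovery.onHeartbeatScan(Set.of("a", "b")); // fires
        discovery.onHeartbeatScan(Set.of("a", "b")); // unchanged, no callback
        discovery.onHeartbeatScan(Set.of("a"));      // fires with a new view id
    }
}
```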



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3086) [oak-mongo.js] Generate mongoexport command to get a slice of oplog entries

2015-07-08 Thread Vikas Saurabh (JIRA)
Vikas Saurabh created OAK-3086:
--

 Summary: [oak-mongo.js] Generate mongoexport command to get a 
slice of oplog entries
 Key: OAK-3086
 URL: https://issues.apache.org/jira/browse/OAK-3086
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Vikas Saurabh
Priority: Minor


During investigation of OAK-3081, the biggest evidence of how the inconsistency 
occurred was in oplog entries around the time the corrupt revision was 
generated.

It'd be nice to have a utility method in oak-mongo.js which generates a 
mongoexport command to extract oplog entries around the time when a given 
revision was made.
(cc [~mreutegg], [~chetanm])



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3087) [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices

2015-07-08 Thread Vikas Saurabh (JIRA)
Vikas Saurabh created OAK-3087:
--

 Summary: [oak-mongo.js] Add utility to cleanup hidden structure 
under disabled indices
 Key: OAK-3087
 URL: https://issues.apache.org/jira/browse/OAK-3087
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Vikas Saurabh
Priority: Minor


Disabling property indices avoids their usage, but they still retain the data 
already stored under them. That data would keep on consuming storage space 
without serving any purpose. Also, it'd pile onto mongo's id index.

While one can delete the index definition node to clear these nodes up, that 
would be really slow: a JCR-based delete would first create a HUGE commit 
while marking all documents under it as deleted, and the actual deletion would 
only happen in the next revision GC after 24 hours have passed.

Hence, it might be beneficial to have a low-level API in oak-mongo.js which 
simply removes the documents from mongo altogether.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3087) [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices

2015-07-08 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-3087:
---
Component/s: mongomk

 [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices
 -

 Key: OAK-3087
 URL: https://issues.apache.org/jira/browse/OAK-3087
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Vikas Saurabh
Priority: Minor

 Disabling property indices avoids their usage, but they still retain the data 
 already stored under them. That data would keep on consuming storage space 
 without serving any purpose. Also, it'd pile onto mongo's id index.
 While one can delete the index definition node to clear these nodes up, that 
 would be really slow: a JCR-based delete would first create a HUGE commit 
 while marking all documents under it as deleted, and the actual deletion would 
 only happen in the next revision GC after 24 hours have passed.
 Hence, it might be beneficial to have a low-level API in oak-mongo.js which 
 simply removes the documents from mongo altogether.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3086) [oak-mongo.js] Generate mongoexport command to get a slice of oplog entries

2015-07-08 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-3086:
---
Attachment: oplog-slice.patch

Attaching [^oplog-slice.patch] which adds {{oak.printOplogSliceCommand}} method 
to the api.
This method can be used as follows:
{noformat}
> oak.printOplogSliceCommand("r12345678-0-1")
mongoexport --host 127.0.0.1 --port 27017 --db local --collection oplog.rs 
--out oplog.json --query '{ns: "oak.nodes", ts: {$gte: Timestamp(305409, 
1), $lte: Timestamp(305430, 1)}}'
{noformat}
OR, to provide some custom parameters as:
{noformat}
> oak.printOplogSliceCommand("r12345678-0-1", {filename: "new-oplog.json", 
db: "oak-aem", oplogTimeBuffer: 5})
mongoexport --host 127.0.0.1 --port 27017 --db local --collection oplog.rs 
--out new-oplog.json --query '{ns: "oak-aem.nodes", ts: {$gte: 
Timestamp(305414, 1), $lte: Timestamp(305425, 1)}}'
{noformat}

[~chetanm], can you please review?
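The core of such a helper is recovering the wall-clock time from the revision id: an Oak revision string looks like "r" + hex millis + "-" + counter + "-" + clusterId, so the oplog window can be derived from the hex timestamp. The sketch below illustrates that idea in Java; the command layout mirrors the sample output above, but the buffer handling and method names are assumptions, not necessarily what oplog-slice.patch does.

```java
public class OplogSlice {
    // extract the seconds-since-epoch from a revision id like "r12345678-0-1":
    // strip the leading 'r', parse the hex millis up to the first '-'
    static long revisionSeconds(String revisionId) {
        String hexMillis = revisionId.substring(1, revisionId.indexOf('-'));
        return Long.parseLong(hexMillis, 16) / 1000;
    }

    // build a mongoexport command covering +/- bufferSeconds around the revision
    static String exportCommand(String revisionId, String db, String outFile,
                                long bufferSeconds) {
        long s = revisionSeconds(revisionId);
        return "mongoexport --host 127.0.0.1 --port 27017 --db local"
                + " --collection oplog.rs --out " + outFile
                + " --query '{ns: \"" + db + ".nodes\", ts: {$gte: Timestamp("
                + (s - bufferSeconds) + ", 1), $lte: Timestamp("
                + (s + bufferSeconds) + ", 1)}}'";
    }

    public static void main(String[] args) {
        System.out.println(exportCommand("r12345678-0-1", "oak", "oplog.json", 10));
    }
}
```

For "r12345678-0-1", 0x12345678 ms is 305419 seconds, which matches the window shown in the sample output above.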

 [oak-mongo.js] Generate mongoexport command to get a slice of oplog entries
 ---

 Key: OAK-3086
 URL: https://issues.apache.org/jira/browse/OAK-3086
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Vikas Saurabh
Priority: Minor
 Attachments: oplog-slice.patch


 During investigation of OAK-3081, the biggest evidence of how the 
 inconsistency occurred was in oplog entries around the time the corrupt 
 revision was generated.
 It'd be nice to have a utility method in oak-mongo.js which generates a 
 mongoexport command to extract oplog entries around the time when a given 
 revision was made.
 (cc [~mreutegg], [~chetanm])



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3084) Commit.applyToDocumentStore(Revision) may rollback committed changes

2015-07-08 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-3084.
---
Resolution: Fixed

Fixed in trunk: http://svn.apache.org/r1689903

Merged into 1.2 branch: http://svn.apache.org/r1689912

Merged into 1.0 branch: http://svn.apache.org/r1689916

 Commit.applyToDocumentStore(Revision) may rollback committed changes
 

 Key: OAK-3084
 URL: https://issues.apache.org/jira/browse/OAK-3084
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0, 1.2, 1.3.0
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.2.3, 1.3.3, 1.0.17


 At the end of a commit Commit.applyToDocumentStore(Revision) performs a 
 conflict check on the commit root and adds a collision marker to other 
 conflicting uncommitted changes. The method checkConflicts() can potentially 
 throw a DocumentStoreException and trigger a rollback of already committed 
 changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3073) Make prefetch of index files as default option

2015-07-08 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618291#comment-14618291
 ] 

Alex Parvulescu commented on OAK-3073:
--

+1 the sooner we start gathering feedback the better.

 Make prefetch of index files as default option
 ---

 Key: OAK-3073
 URL: https://issues.apache.org/jira/browse/OAK-3073
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.3.3


 As part of OAK-3069 a new feature is being provided to enable pre fetching of 
 index files to a local directory. To start with this would need to be enabled 
 explicitly. Once it is found to be stable we should enable it by default.
 To start with we just enable it by default on trunk



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3032) LDAP test failures

2015-07-08 Thread Tobias Bocanegra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tobias Bocanegra resolved OAK-3032.
---
Resolution: Fixed

fixed in r1689932

 LDAP test failures
 --

 Key: OAK-3032
 URL: https://issues.apache.org/jira/browse/OAK-3032
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Reporter: Marcel Reutegger
Assignee: Tobias Bocanegra
 Fix For: 1.3.3


 There are various test failures in the oak-auth-ldap module:
 Failed tests:   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.DefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.GuestTokenDefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.TokenDefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
 The tests also fail on travis. E.g.: 
 https://s3.amazonaws.com/archive.travis-ci.org/jobs/68124560/log.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2874) [ldap] enable listUsers to work for more than 1000 external users

2015-07-08 Thread Tobias Bocanegra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618890#comment-14618890
 ] 

Tobias Bocanegra commented on OAK-2874:
---

looks like apache ds doesn't support the soft-limit for search results, i.e. we 
can't test the AD behaviour:

http://markmail.org/message/ggwbqxm3kiufepxo
{quote}

If you look at OpenLDAP documentation, they have two limits on the
server : a soft limit and a hard limit. The hard limit can't be
overruled, except by the admin user. The soft limit is the one that is
used when there is no limit set in the search request. In any case, you
won't be able to fetch more than the server's hard size limit :

http://www.openldap.org/doc/admin24/limits.html

In ApacheDS, we don't have any soft limit, but we have a hard limit.

AD has a different implementation, which allows you to read all the
entries, whatever the server's sizeLimit is.
{quote}


 [ldap] enable listUsers to work for more than 1000 external users
 -

 Key: OAK-2874
 URL: https://issues.apache.org/jira/browse/OAK-2874
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Affects Versions: 1.2.1
Reporter: Nicolas Peltier
Assignee: Tobias Bocanegra
 Fix For: 1.4


 LDAP servers are usually limited to return 1000 search results. Currently 
 LdapIdentityProvider.listUsers() doesn't take care of that limitation and 
 prevents the client from retrieving more. (cc [~tripod]) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3032) LDAP test failures

2015-07-08 Thread Tobias Bocanegra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619091#comment-14619091
 ] 

Tobias Bocanegra commented on OAK-3032:
---

The error was introduced by OAK-2998; the revision before still works. I'll 
have a look.

btw, git bisect is cool:
{noformat}
$ git bisect log
git bisect start
# bad: [7e5185ea3d5f6df5eda12f4917b3ab59a77d66fb] OAK-3041: Baseline plugin 
suggests version increase for unmodified class
git bisect bad 7e5185ea3d5f6df5eda12f4917b3ab59a77d66fb
# bad: [07fdfd21df4ce3e86abd22bcd9e3f0eb6076c522] OAK-2924: DocumentNodeStore 
background update thread handling of persistence exceptions
git bisect bad 07fdfd21df4ce3e86abd22bcd9e3f0eb6076c522
# good: [bce34e173aea8ece5921150f803fc349235d8e92] OAK-2995: RDB*Store: check 
transaction isolation level
git bisect good bce34e173aea8ece5921150f803fc349235d8e92
# bad: [39092552f1d733a3eacb4b8125bccab5bdeb027d] OAK-2998 : Postpone 
calculation of effective principals to LoginModule.commit
git bisect bad 39092552f1d733a3eacb4b8125bccab5bdeb027d
# first bad commit: [39092552f1d733a3eacb4b8125bccab5bdeb027d] OAK-2998 : 
Postpone calculation of effective principals to LoginModule.commit
{noformat}
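For readers unfamiliar with it, `git bisect` is a binary search over the commit range between a known-good and a known-bad revision, which is why it needed only a handful of probes above. The idea can be sketched as follows (assuming a single good-to-bad transition; `commits` and `is_bad` are illustrative stand-ins, not git's API):

```python
def first_bad(commits, is_bad):
    """Binary search for the first bad commit, as `git bisect` does.

    Assumes commits[0] is good, commits[-1] is bad, and there is a
    single transition from good to bad somewhere in between.
    """
    lo, hi = 0, len(commits) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # first bad commit is at mid or earlier
        else:
            lo = mid          # first bad commit is after mid
    return commits[hi]

history = [f"commit-{i}" for i in range(16)]
print(first_bad(history, lambda c: int(c.split("-")[1]) >= 11))  # commit-11
```

Each probe halves the range, so locating the culprit in n commits takes about log2(n) test runs.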

 LDAP test failures
 --

 Key: OAK-3032
 URL: https://issues.apache.org/jira/browse/OAK-3032
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Reporter: Marcel Reutegger
Assignee: Tobias Bocanegra
 Fix For: 1.3.3


 There are various test failures in the oak-auth-ldap module:
 Failed tests:   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.DefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.GuestTokenDefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.TokenDefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
 The tests also fail on travis. E.g.: 
 https://s3.amazonaws.com/archive.travis-ci.org/jobs/68124560/log.txt





[jira] [Updated] (OAK-3057) Simplify debugging conflict related errors

2015-07-08 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-3057:
-
Attachment: OAK-3057-v2.patch

[Further improved version of the patch|^OAK-3057-v2.patch]: basically v1 plus lazy 
path building (the path variable is only built if the log level is enabled).
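The lazy-path idea is the standard guard pattern: skip the expensive string construction unless the target log level is actually enabled. A small illustrative sketch in plain Python logging — not the Oak code itself; `build_path` is a hypothetical stand-in for the expensive path computation:

```python
import logging

log = logging.getLogger("oak.conflict")
calls = {"n": 0}  # counts how often the expensive work actually runs

def build_path(parts):
    """Stand-in for the relatively expensive path construction."""
    calls["n"] += 1
    return "/" + "/".join(parts)

def report_conflict(parts):
    # Only pay for building the path string when DEBUG is enabled.
    if log.isEnabledFor(logging.DEBUG):
        log.debug("unresolved conflict at %s", build_path(parts))

logging.basicConfig(level=logging.INFO)
report_conflict(["content", "dam", "image.jpg"])
print(calls["n"])  # 0 -- DEBUG disabled, path never built
log.setLevel(logging.DEBUG)
report_conflict(["content", "dam", "image.jpg"])
print(calls["n"])  # 1
```

On hot paths (commit hooks, diff traversal) this guard keeps the logging overhead near zero when the level is off.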

 Simplify debugging conflict related errors
 --

 Key: OAK-3057
 URL: https://issues.apache.org/jira/browse/OAK-3057
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
 Fix For: 1.3.3

 Attachments: OAK-3057-v0.patch, OAK-3057-v1.patch, OAK-3057-v2.patch


 Many times we see a conflict exception in the logs like the one below. Looking at 
 the stacktrace it is hard to reason out what the conflict was, which makes such 
 issues harder to debug. Oak should provide more details as part of the exception 
 message itself to simplify debugging. 
 Note that such issues are intermittent and might happen in background 
 processing logic. It is therefore required that all details be made part of the 
 exception message itself, as it would not be possible to turn on trace 
 logging to get more details given the intermittent nature of such issues.
 {noformat}
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakState0001: 
 Unresolved conflicts in /content/dam/news/images/2015/06/27/blue-sky.jpg
   at 
 org.apache.jackrabbit.oak.plugins.commit.ConflictValidator.failOnMergeConflict(ConflictValidator.java:84)
   at 
 org.apache.jackrabbit.oak.plugins.commit.ConflictValidator.propertyAdded(ConflictValidator.java:54)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeEditor.propertyAdded(CompositeEditor.java:83)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.propertyAdded(EditorDiff.java:82)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:375)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:52)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorHook.processCommit(EditorHook.java:54)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeHook.processCommit(CompositeHook.java:60)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeHook.processCommit(CompositeHook.java:60)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$InMemory.merge(AbstractNodeStoreBranch.java:557)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge0(AbstractNodeStoreBranch.java:329)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:159)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1482)
   at 
 org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:247)
 {noformat}





[jira] [Updated] (OAK-2714) Test failures on Jenkins

2015-07-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2714:
---
Description: 
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test 
  || Builds || Fixture  || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir  
   | 81  | DOCUMENT_RDB | 1.7   |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035  
   | 76, 128 | SEGMENT_MK , DOCUMENT_RDB  | 1.6   |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop  
   | 64  | ?| ? |
| 
org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore
 | 52, 181 |  DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar 
   | 41  | ?| ? |
| 
org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded
  | 29  | ?| ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite 
   | 35  | SEGMENT_MK   | ? |
| 
org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex
 | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter 
| 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 
121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | 
DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | 
DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | 
DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.trivialUpdates | 236, 
249, 253 | DOCUMENT_RDB | 1.6, 1.7 |
| org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | 
SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 
142, 186, 191 | DOCUMENT_RDB, SEGMENT_MK | 1.6 | 
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 
151 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | 
DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 
163, 164 | SEGMENT_MK, DOCUMENT_RDB | 1.7, 1.8 | 
| org.apache.jackrabbit.oak.jcr.query.SuggestTest | 171 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.updateNodeType | 243 | 
DOCUMENT_RDB | 1.6 |


  was:
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test 
  || Builds || Fixture  || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir  
   | 81  | DOCUMENT_RDB | 1.7   |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035  
   | 76, 128 | SEGMENT_MK , DOCUMENT_RDB  | 1.6   |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop  
   | 64  | ?| ? |
| 
org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore
 | 52, 181 |  DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar 
   | 41  | ?| ? |
| 
org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded
  | 29  | ?| ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite 
   | 35  | SEGMENT_MK   | ? |
| 
org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex
 | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter 
| 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 
121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | 
DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | 
DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | 
DOCUMENT_RDB | 1.7 |
| 

[jira] [Updated] (OAK-2878) Test failure: AutoCreatedItemsTest.autoCreatedItems

2015-07-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2878:
---
Description: 
{{org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems}} fails 
on Jenkins: No default node type available for /testdata/property

{noformat}
javax.jcr.nodetype.ConstraintViolationException: No default node type available 
for /testdata/property
at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
at 
org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
at 
org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
at 
org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
at 
org.apache.jackrabbit.oak.jcr.TestContentLoader.loadTestContent(TestContentLoader.java:58)
at 
org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems(AutoCreatedItemsTest.java:42)
{noformat}

Failure seen at builds: 130, 134, 138, 249

See 
https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/134/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.jcr/AutoCreatedItemsTest/autoCreatedItems_0_/

  was:
{{org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems}} fails 
on Jenkins: No default node type available for /testdata/property

{noformat}
javax.jcr.nodetype.ConstraintViolationException: No default node type available 
for /testdata/property
at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
at 
org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
at 
org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
at 
org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
at 
org.apache.jackrabbit.oak.jcr.TestContentLoader.loadTestContent(TestContentLoader.java:58)
at 
org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems(AutoCreatedItemsTest.java:42)
{noformat}

Failure seen at builds: 130, 134, 138

See 
https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/134/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.jcr/AutoCreatedItemsTest/autoCreatedItems_0_/


 Test failure: AutoCreatedItemsTest.autoCreatedItems
 ---

 Key: OAK-2878
 URL: https://issues.apache.org/jira/browse/OAK-2878
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
 Environment: Jenkins: https://builds.apache.org/job/Apache Jackrabbit 
 Oak matrix/
Reporter: Michael Dürig
Assignee: Julian Reschke
  Labels: CI, jenkins

 {{org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems}} fails 
 on Jenkins: No default node type available for /testdata/property
 {noformat}
 javax.jcr.nodetype.ConstraintViolationException: No default node type 
 available for /testdata/property
   at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
   at 
 org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
   at 
 org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
   at 
 

[jira] [Assigned] (OAK-3064) Oak Run Main.java passing values in the wrong order

2015-07-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-3064:
--

Assignee: Michael Dürig

 Oak Run Main.java passing values in the wrong order
 ---

 Key: OAK-3064
 URL: https://issues.apache.org/jira/browse/OAK-3064
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: upgrade
Reporter: Jason E Bailey
Assignee: Michael Dürig
Priority: Minor

 In org.apache.jackrabbit.oak.run.Main.java, line 952, the following method 
 is being called:
 RepositoryContext.create(RepositoryConfig.create(dir, xml));
 The RepositoryConfig method signature is `create(File xml, File dir)`, so the 
 arguments are swapped, which causes an error when running the upgrade process.





[jira] [Assigned] (OAK-2874) [ldap] enable listUsers to work for more than 1000 external users

2015-07-08 Thread Tobias Bocanegra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tobias Bocanegra reassigned OAK-2874:
-

Assignee: Tobias Bocanegra

 [ldap] enable listUsers to work for more than 1000 external users
 -

 Key: OAK-2874
 URL: https://issues.apache.org/jira/browse/OAK-2874
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Affects Versions: 1.2.1
Reporter: Nicolas Peltier
Assignee: Tobias Bocanegra

 LDAP servers usually limit search results to 1000 entries. Currently 
 LdapIdentityProvider.listUsers() doesn't take that limitation into account and 
 prevents the client from retrieving more. (cc [~tripod]) 





[jira] [Commented] (OAK-2874) [ldap] enable listUsers to work for more than 1000 external users

2015-07-08 Thread Tobias Bocanegra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618071#comment-14618071
 ] 

Tobias Bocanegra commented on OAK-2874:
---

BTW: asked the users@directory list for guidance: 
http://directory.markmail.org/thread/u55ye73u4lerhrya

 [ldap] enable listUsers to work for more than 1000 external users
 -

 Key: OAK-2874
 URL: https://issues.apache.org/jira/browse/OAK-2874
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Affects Versions: 1.2.1
Reporter: Nicolas Peltier
Assignee: Tobias Bocanegra
 Fix For: 1.4


 LDAP servers usually limit search results to 1000 entries. Currently 
 LdapIdentityProvider.listUsers() doesn't take that limitation into account and 
 prevents the client from retrieving more. (cc [~tripod]) 





[jira] [Resolved] (OAK-2874) [ldap] enable listUsers to work for more than 1000 external users

2015-07-08 Thread Tobias Bocanegra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tobias Bocanegra resolved OAK-2874.
---
   Resolution: Fixed
Fix Version/s: 1.4

fixed in r1689798

(note that the respective test case still has a problem, since Apache DS is not 
working as expected; the Active Directory tests work, though)

 [ldap] enable listUsers to work for more than 1000 external users
 -

 Key: OAK-2874
 URL: https://issues.apache.org/jira/browse/OAK-2874
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Affects Versions: 1.2.1
Reporter: Nicolas Peltier
Assignee: Tobias Bocanegra
 Fix For: 1.4


 LDAP servers usually limit search results to 1000 entries. Currently 
 LdapIdentityProvider.listUsers() doesn't take that limitation into account and 
 prevents the client from retrieving more. (cc [~tripod]) 





[jira] [Updated] (OAK-2329) Use LuceneQuery parser to create query from fulltext string

2015-07-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-2329:
-
Labels: candidate_oak_1_0  (was: )

 Use LuceneQuery parser to create query from fulltext string
 ---

 Key: OAK-2329
 URL: https://issues.apache.org/jira/browse/OAK-2329
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: lucene
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
  Labels: candidate_oak_1_0
 Fix For: 1.1.6

 Attachments: OAK-2329.patch


 Currently the Oak QueryEngine parses the fulltext expression and
 produces the FulltextExpression. This was required for runtime
 aggregation to work. This allowed the LuceneIndex to just convert
 the given text (in FulltextTerm) to the corresponding Lucene Query [1].
 Note that the JCR contract for the fulltext clause [2] supports some grammar
 which was so far handled by the Oak QueryEngine.
 Now, when moving to index-time aggregation, it was realized that such parsing 
 of the fulltext expression is problematic [3] in the presence of
 different analyzers. As we fix that we also need to support the
 requirements mentioned in [2]. JR2 used to do that via the Lucene
 QueryParser [4].
 As a fix, {{LucenePropertyIndex}} should use a Lucene QueryParser to construct 
 the query. 
 Now we need to determine which parser to use [5] :). 
 [SimpleQueryParser|https://issues.apache.org/jira/browse/LUCENE-5336] looks 
 useful!
 [1] 
 https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/LucenePropertyIndex.java#L913
 [2] http://www.day.com/specs/jcr/1.0/6.6.5.2_jcr_contains_Function.html
 [3] https://issues.apache.org/jira/browse/OAK-2301
 [4] 
 https://github.com/apache/jackrabbit/blob/trunk/jackrabbit-core/src/main/java/org/apache/jackrabbit/core/query/lucene/JackrabbitQueryParser.java
 [5] https://lucene.apache.org/core/4_7_0/queryparser/





[jira] [Resolved] (OAK-3026) test failures for oak-auth-ldap on Windows

2015-07-08 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-3026.

Resolution: Fixed

Additionally changes merged to 
* 1.0 branch with http://svn.apache.org/r1689986
* 1.2 branch with http://svn.apache.org/r1689938

 test failures for oak-auth-ldap on Windows
 --

 Key: OAK-3026
 URL: https://issues.apache.org/jira/browse/OAK-3026
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Reporter: Amit Jain
Assignee: Tobias Bocanegra
 Fix For: 1.2.3, 1.3.3, 1.0.17


 testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest)
   Time elapsed: 0.01 sec   ERROR!
 java.io.IOException: Unable to delete file: 
 target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33)
 etc...





[jira] [Commented] (OAK-3057) Simplify debugging conflict related errors

2015-07-08 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619914#comment-14619914
 ] 

Chetan Mehrotra commented on OAK-3057:
--

Patch looks fine!

bq. I'd love to hear some ideas on how to enable debug logs for the above just 
for this test and collect them, then check them for messages. now this would 
make for some pretty interesting unit tests

Some tests so far did it in an ad hoc way; for an example, look up 
RepositoryTest#largeMultiValueProperty. We could probably have a simple utility 
that simplifies turning on logs, collecting them, providing access to them, and 
reverting the log configuration.

 Simplify debugging conflict related errors
 --

 Key: OAK-3057
 URL: https://issues.apache.org/jira/browse/OAK-3057
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
 Fix For: 1.3.3

 Attachments: OAK-3057-v0.patch, OAK-3057-v1.patch, OAK-3057-v2.patch


 Many times we see a conflict exception in the logs like the one below. Looking at 
 the stacktrace it is hard to reason out what the conflict was, which makes such 
 issues harder to debug. Oak should provide more details as part of the exception 
 message itself to simplify debugging. 
 Note that such issues are intermittent and might happen in background 
 processing logic. It is therefore required that all details be made part of the 
 exception message itself, as it would not be possible to turn on trace 
 logging to get more details given the intermittent nature of such issues.
 {noformat}
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakState0001: 
 Unresolved conflicts in /content/dam/news/images/2015/06/27/blue-sky.jpg
   at 
 org.apache.jackrabbit.oak.plugins.commit.ConflictValidator.failOnMergeConflict(ConflictValidator.java:84)
   at 
 org.apache.jackrabbit.oak.plugins.commit.ConflictValidator.propertyAdded(ConflictValidator.java:54)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeEditor.propertyAdded(CompositeEditor.java:83)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.propertyAdded(EditorDiff.java:82)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:375)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:52)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorHook.processCommit(EditorHook.java:54)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeHook.processCommit(CompositeHook.java:60)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeHook.processCommit(CompositeHook.java:60)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$InMemory.merge(AbstractNodeStoreBranch.java:557)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge0(AbstractNodeStoreBranch.java:329)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:148)
   at 
 

[jira] [Updated] (OAK-3087) [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices

2015-07-08 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-3087:
---
Attachment: 0002-OAK-3087-Add-some-methods-to-find-disabled-indices-a.patch
0001-update-removeDescendantsAndSelf-for-upwards-removal.patch

Adding 2 patches:
# [update removeDes* 
api|^0001-update-removeDescendantsAndSelf-for-upwards-removal.patch] - update 
the API to accept a {{maxDepth}} parameter and then delete upwards from that 
depth
# [index 
cleanup|^0002-OAK-3087-Add-some-methods-to-find-disabled-indices-a.patch] - add 
some methods to assist in removal of hidden structure under disabled indices

[~chetanm], can you please review?

 [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices
 -

 Key: OAK-3087
 URL: https://issues.apache.org/jira/browse/OAK-3087
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Vikas Saurabh
Priority: Minor
 Attachments: 
 0001-update-removeDescendantsAndSelf-for-upwards-removal.patch, 
 0002-OAK-3087-Add-some-methods-to-find-disabled-indices-a.patch


 Disabling property indices avoids usage of those indices, but they 
 still retain the data already stored under them. That data would keep 
 consuming storage space without serving any purpose, and it would also pile 
 onto Mongo's _id index.
 While one can delete the index definition node to clear these nodes up, it 
 would be really slow: a JCR-based delete would first create a HUGE commit 
 while marking all documents under it as deleted, and the actual 
 deletion would only happen in the next revision GC after 24 hours have passed.
 Hence, it might be beneficial to have a low-level API in oak-mongo.js which 
 simply removes the documents from Mongo altogether.





[jira] [Commented] (OAK-3087) [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices

2015-07-08 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619938#comment-14619938
 ] 

Chetan Mehrotra commented on OAK-3087:
--

bq. update the API to accept a maxDepth parameter and then delete upwards from 
that depth

Maybe we should make that the default, i.e. upward. For that we need to 
determine the maxDepth. I think we can do that with calls to mongo with 
increasing depth, using findOne until we get a null response; that depth would 
then constitute the maxDepth.
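In oak-mongo.js terms, the probe described above could be sketched roughly as follows (a sketch only; {{findAnyAtDepth}} is a hypothetical stand-in for a Mongo query that returns one document at the given depth under {{path}}, or null if none exists):

```javascript
// Sketch of the suggested depth probe: keep descending one level at a time
// until a lookup at the next depth finds nothing, then report the last
// depth that still contained documents.
function findMaxDepth(path, findAnyAtDepth) {
    // Depth of the path itself, e.g. "/oak:index/foo" has depth 2.
    var depth = path.split("/").filter(function (s) {
        return s.length > 0;
    }).length;
    // Probe increasing depths until a null response comes back.
    while (findAnyAtDepth(path, depth + 1) !== null) {
        depth += 1;
    }
    return depth; // deepest level that still has documents under path
}
```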



 [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices
 -

 Key: OAK-3087
 URL: https://issues.apache.org/jira/browse/OAK-3087
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Vikas Saurabh
Priority: Minor
 Attachments: 
 0001-update-removeDescendantsAndSelf-for-upwards-removal.patch, 
 0002-OAK-3087-Add-some-methods-to-find-disabled-indices-a.patch


 While disabling property indices avoids usage of those indices, they 
 still maintain the data already stored under them. That data would keep on 
 consuming storage space without serving any purpose. Also, it'd pile on 
 mongo's id index.
 While one can delete the index definition node to clear these nodes up, that 
 would be really slow: a JCR based delete would first create a HUGE commit 
 while marking all documents under it as deleted, and then the actual 
 deletion would only happen in the next revision GC after 24 hours have passed.
 Hence, it might be beneficial to have a low-level API in oak-mongo.js which 
 simply removes the documents from mongo altogether.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3086) [oak-mongo.js] Generate mongoexport command to get a slice of oplog entries

2015-07-08 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-3086.
--
   Resolution: Fixed
 Assignee: Chetan Mehrotra
Fix Version/s: 1.3.3

Thanks for the patch Vikas!

Applied it in http://svn.apache.org/r1689987

 [oak-mongo.js] Generate mongoexport command to get a slice of oplog entries
 ---

 Key: OAK-3086
 URL: https://issues.apache.org/jira/browse/OAK-3086
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Vikas Saurabh
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.3.3

 Attachments: oplog-slice.patch


 During the investigation of OAK-3081, the biggest evidence of how the 
 inconsistency occurred was in the oplog entries around the time the corrupt 
 revision was generated.
 It'd be nice to have a utility method in oak-mongo.js which generates a 
 mongoexport command to extract oplog entries around the time when a given 
 revision was made.
 (cc [~mreutegg], [~chetanm])
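 Such a helper could be sketched roughly as follows (assumptions only, not the 
 actual patch: Oak revision strings have the form 
 {{r&lt;timestamp-hex&gt;-&lt;counter&gt;-&lt;clusterId&gt;}}, the oplog lives in 
 {{local.oplog.rs}}, and a fixed time window around the revision is exported):

```javascript
// Sketch: build a mongoexport command for oplog entries around a revision.
// A revision string like "r14e9...-0-1" encodes a millisecond timestamp in hex.
function oplogSliceCommand(revisionStr, windowMillis) {
    var tsHex = revisionStr.substring(1, revisionStr.indexOf("-"));
    var millis = parseInt(tsHex, 16);
    // The oplog "ts" field is a BSON Timestamp keyed by seconds since the epoch.
    var fromSec = Math.floor((millis - windowMillis) / 1000);
    var toSec = Math.ceil((millis + windowMillis) / 1000);
    var query = '{"ts": {"$gte": {"$timestamp": {"t": ' + fromSec + ', "i": 1}},' +
                ' "$lte": {"$timestamp": {"t": ' + toSec + ', "i": 1}}}}';
    return "mongoexport -d local -c oplog.rs -q '" + query + "' -o oplog-slice.json";
}
```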



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3087) [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices

2015-07-08 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619956#comment-14619956
 ] 

Chetan Mehrotra commented on OAK-3087:
--

bq. index cleanup - add some methods to assist in removal of hidden structure 
under disabled indices

This would be really useful. However, can we change the way it's invoked?
{noformat}
 oak.indexCleanup.findDisabledIndexNodes() //fill initial list of index names 
 to be cleaned up
 oak.indexCleanup.indexNames //validate index names to be cleaned up
   [disabledIndex1, notToBeCleanedDisabledIndex1, disabledIndex2]
 oak.indexCleanup.indexNames = [disabledIndex1, disabledIndex2] //update 
 initial list as required
 oak.indexCleanup.cleanupIndices() //run a dry mode to validate the operations
 oak.indexCleanup.cleanupIndices(true) //carry out actual cleanup
{noformat}

Instead of the above, could we change it to:
{noformat}
 oak.indexCleanup.cleanupIndices() //It also takes an optional list of index 
 names
 oak.indexCleanup.cleanupIndices(true) //carry out actual cleanup
{noformat}

The script would still ensure that it's run in dryMode first (as it does now), 
but would not require the user to play with {{oak.indexCleanup}}. Instead, the 
first call itself would tell which indexes it would remove.
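A minimal sketch of the proposed invocation style (hypothetical; {{findDisabledIndexNodes}} and {{removeIndexData}} stand in for the discovery and removal logic in the patch):

```javascript
// Sketch of a single entry point that defaults to a dry run:
//   cleanupIndices()             -> dry run over all disabled indices
//   cleanupIndices(true)         -> actually remove
//   cleanupIndices(false, [...]) -> dry run over an explicit list of names
function makeIndexCleanup(findDisabledIndexNodes, removeIndexData) {
    return {
        cleanupIndices: function (doDelete, indexNames) {
            var names = indexNames || findDisabledIndexNodes();
            if (doDelete) {
                names.forEach(function (name) { removeIndexData(name); });
            }
            return names; // in a dry run this is the removal candidate list
        }
    };
}
```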

 [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices
 -

 Key: OAK-3087
 URL: https://issues.apache.org/jira/browse/OAK-3087
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Vikas Saurabh
Priority: Minor
 Attachments: 
 0001-update-removeDescendantsAndSelf-for-upwards-removal.patch, 
 0002-OAK-3087-Add-some-methods-to-find-disabled-indices-a.patch


 While disabling property indices avoids usage of those indices, they 
 still maintain the data already stored under them. That data would keep on 
 consuming storage space without serving any purpose. Also, it'd pile on 
 mongo's id index.
 While one can delete the index definition node to clear these nodes up, that 
 would be really slow: a JCR based delete would first create a HUGE commit 
 while marking all documents under it as deleted, and then the actual 
 deletion would only happen in the next revision GC after 24 hours have passed.
 Hence, it might be beneficial to have a low-level API in oak-mongo.js which 
 simply removes the documents from mongo altogether.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3087) [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices

2015-07-08 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619956#comment-14619956
 ] 

Chetan Mehrotra edited comment on OAK-3087 at 7/9/15 5:58 AM:
--

bq. index cleanup - add some methods to assist in removal of hidden structure 
under disabled indices

This would be really useful. However, can we change the way it's invoked?
{noformat}
 oak.indexCleanup.findDisabledIndexNodes() //fill initial list of index names 
 to be cleaned up
 oak.indexCleanup.indexNames //validate index names to be cleaned up
   [disabledIndex1, notToBeCleanedDisabledIndex1, disabledIndex2]
 oak.indexCleanup.indexNames = [disabledIndex1, disabledIndex2] //update 
 initial list as required
 oak.indexCleanup.cleanupIndices() //run a dry mode to validate the operations
 oak.indexCleanup.cleanupIndices(true) //carry out actual cleanup
{noformat}

Instead of the above, could we change it to:
{noformat}
 oak.indexCleanup.cleanupIndices() //It also takes an optional list of index 
 names
 oak.indexCleanup.cleanupIndices(true) //carry out actual cleanup
{noformat}

The script would still ensure that it's run in dryMode first (as it does now), 
but would not require the user to play with {{oak.indexCleanup}}. Instead, the 
first call itself would tell which indexes it would remove.

[~mreutegg] Can you review the approach used to determine (more of a guess, 
really) the latest value of the {{type}} field:
{noformat}
+var isIndexDisabled = function(indexName) {
+    var indexDoc = api.findOne("/oak:index/" + indexName);
+    if (indexDoc == null) {
+        return -1;
+    }
+    var type = indexDoc.type;
+
+    if (type == null) {
+        return -2;
+    }
+
+    var maxRev = new Revision("r0-0-0");
+    for (var revStr in type) {
+        var rev = new Revision(revStr);
+        if (rev.isNewerThan(maxRev)) {
+            maxRev = rev;
+        }
+    }
+
+    var latestTypeValue = type[maxRev.toString()];
+    return ("disabled" == latestTypeValue);
+};
{noformat}
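For reference, the guess above amounts to picking the entry of the {{type}} revision map with the newest revision. A self-contained sketch of that resolution (assuming revision strings of the form {{r&lt;timestamp-hex&gt;-&lt;counter&gt;-&lt;clusterId&gt;}}, compared by timestamp, then counter):

```javascript
// Sketch of resolving the latest value from a DocumentMK-style revision map,
// e.g. {"r10-0-1": "property", "r20-0-1": "disabled"} resolves to "disabled".
function parseRevision(revStr) {
    var parts = revStr.substring(1).split("-");
    return { ts: parseInt(parts[0], 16), counter: parseInt(parts[1], 16) };
}

function isNewerThan(a, b) {
    // Compare by timestamp first, then by counter within the same timestamp.
    return a.ts !== b.ts ? a.ts > b.ts : a.counter > b.counter;
}

function latestValue(revMap) {
    var maxRev = null, value = null;
    for (var revStr in revMap) {
        var rev = parseRevision(revStr);
        if (maxRev === null || isNewerThan(rev, maxRev)) {
            maxRev = rev;
            value = revMap[revStr];
        }
    }
    return value; // value written by the newest revision, or null if empty
}
```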


was (Author: chetanm):
bq. index cleanup - add some methods to assist in removal of hidden structure 
under disabled indices

This would be really useful. However, can we change the way it's invoked?
{noformat}
 oak.indexCleanup.findDisabledIndexNodes() //fill initial list of index names 
 to be cleaned up
 oak.indexCleanup.indexNames //validate index names to be cleaned up
   [disabledIndex1, notToBeCleanedDisabledIndex1, disabledIndex2]
 oak.indexCleanup.indexNames = [disabledIndex1, disabledIndex2] //update 
 initial list as required
 oak.indexCleanup.cleanupIndices() //run a dry mode to validate the operations
 oak.indexCleanup.cleanupIndices(true) //carry out actual cleanup
{noformat}

Instead of the above, could we change it to:
{noformat}
 oak.indexCleanup.cleanupIndices() //It also takes an optional list of index 
 names
 oak.indexCleanup.cleanupIndices(true) //carry out actual cleanup
{noformat}

The script would still ensure that it's run in dryMode first (as it does now), 
but would not require the user to play with {{oak.indexCleanup}}. Instead, the 
first call itself would tell which indexes it would remove.

 [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices
 -

 Key: OAK-3087
 URL: https://issues.apache.org/jira/browse/OAK-3087
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Vikas Saurabh
Priority: Minor
 Attachments: 
 0001-update-removeDescendantsAndSelf-for-upwards-removal.patch, 
 0002-OAK-3087-Add-some-methods-to-find-disabled-indices-a.patch


 While disabling property indices avoids usage of those indices, they 
 still maintain the data already stored under them. That data would keep on 
 consuming storage space without serving any purpose. Also, it'd pile on 
 mongo's id index.
 While one can delete the index definition node to clear these nodes up, that 
 would be really slow: a JCR based delete would first create a HUGE commit 
 while marking all documents under it as deleted, and then the actual 
 deletion would only happen in the next revision GC after 24 hours have passed.
 Hence, it might be beneficial to have a low-level API in oak-mongo.js which 
 simply removes the documents from mongo altogether.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3087) [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices

2015-07-08 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619940#comment-14619940
 ] 

Vikas Saurabh commented on OAK-3087:


Ok, I'll update the patch to find the maxDepth for a given path.

 [oak-mongo.js] Add utility to cleanup hidden structure under disabled indices
 -

 Key: OAK-3087
 URL: https://issues.apache.org/jira/browse/OAK-3087
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Vikas Saurabh
Priority: Minor
 Attachments: 
 0001-update-removeDescendantsAndSelf-for-upwards-removal.patch, 
 0002-OAK-3087-Add-some-methods-to-find-disabled-indices-a.patch


 While disabling property indices avoids usage of those indices, they 
 still maintain the data already stored under them. That data would keep on 
 consuming storage space without serving any purpose. Also, it'd pile on 
 mongo's id index.
 While one can delete the index definition node to clear these nodes up, that 
 would be really slow: a JCR based delete would first create a HUGE commit 
 while marking all documents under it as deleted, and then the actual 
 deletion would only happen in the next revision GC after 24 hours have passed.
 Hence, it might be beneficial to have a low-level API in oak-mongo.js which 
 simply removes the documents from mongo altogether.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3057) Simplify debugging conflict related errors

2015-07-08 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-3057:
-
Attachment: OAK-3057-v1.patch

[new version of the patch|^OAK-3057-v1.patch]. This one is a bit more involved, 
and I'm really curious about your thoughts!

* completely refactored ConflictValidator. It seems the current version keeps 
track of all the tree states while traversing down on each commit, just in case 
it might have to log the famous commit message. That's possibly a lot of 
resources for nothing. One thing led to another, and so I ended up with this 
version :)

* MergingNodeStateDiff: added the extra logs suggested by Michael, but it also 
needed some toString methods so the logs make some sense.

* ConflictResolutionTest: moved the conflict test to its own class. I'd love to 
hear some ideas on how to enable debug logs for the above just for this test, 
collect them, and then check them for messages. That would make for some pretty 
interesting unit tests :) So far it's just the same old test moved out.

cc. [~chetanm], [~mduerig], [~catholicon]



 Simplify debugging conflict related errors
 --

 Key: OAK-3057
 URL: https://issues.apache.org/jira/browse/OAK-3057
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
 Fix For: 1.3.3

 Attachments: OAK-3057-v0.patch, OAK-3057-v1.patch


 Many times we see a conflict exception in the logs like the one below. Looking 
 at the stacktrace it's hard to reason out what the conflict was, and hence 
 harder to debug such issues. Oak should provide more details as part of the 
 exception message itself to simplify debugging such issues. 
 Note that such issues are intermittent and might happen in background 
 processing logic, so it's required that all details be made part of the 
 exception message itself, as it would not be possible to turn on trace 
 logging to get more details given the intermittent nature of such issues.
 {noformat}
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakState0001: 
 Unresolved conflicts in /content/dam/news/images/2015/06/27/blue-sky.jpg
   at 
 org.apache.jackrabbit.oak.plugins.commit.ConflictValidator.failOnMergeConflict(ConflictValidator.java:84)
   at 
 org.apache.jackrabbit.oak.plugins.commit.ConflictValidator.propertyAdded(ConflictValidator.java:54)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeEditor.propertyAdded(CompositeEditor.java:83)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.propertyAdded(EditorDiff.java:82)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:375)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
   at 
 org.apache.jackrabbit.oak.plugins.memory.ModifiedNodeState.compareAgainstBaseState(ModifiedNodeState.java:396)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:52)
   at 
 org.apache.jackrabbit.oak.spi.commit.EditorHook.processCommit(EditorHook.java:54)
   at 
 org.apache.jackrabbit.oak.spi.commit.CompositeHook.processCommit(CompositeHook.java:60)
   at 
 

[jira] [Commented] (OAK-2874) [ldap] enable listUsers to work for more than 1000 external users

2015-07-08 Thread Tobias Bocanegra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619180#comment-14619180
 ] 

Tobias Bocanegra commented on OAK-2874:
---

also fixed in 1.2 branch, r1689937

 [ldap] enable listUsers to work for more than 1000 external users
 -

 Key: OAK-2874
 URL: https://issues.apache.org/jira/browse/OAK-2874
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Affects Versions: 1.2.1
Reporter: Nicolas Peltier
Assignee: Tobias Bocanegra
 Fix For: 1.4


 LDAP servers are usually limited to returning 1000 search results. Currently 
 LdapIdentityProvider.listUsers() doesn't take care of that limitation and 
 prevents the client from retrieving more. (cc [~tripod]) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2874) [ldap] enable listUsers to work for more than 1000 external users

2015-07-08 Thread Tobias Bocanegra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tobias Bocanegra updated OAK-2874:
--
Fix Version/s: (was: 1.4)
   1.3.3
   1.2.3

 [ldap] enable listUsers to work for more than 1000 external users
 -

 Key: OAK-2874
 URL: https://issues.apache.org/jira/browse/OAK-2874
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Affects Versions: 1.2.1
Reporter: Nicolas Peltier
Assignee: Tobias Bocanegra
 Fix For: 1.2.3, 1.3.3


 LDAP servers are usually limited to returning 1000 search results. Currently 
 LdapIdentityProvider.listUsers() doesn't take care of that limitation and 
 prevents the client from retrieving more. (cc [~tripod]) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)