[jira] [Resolved] (OAK-3020) Async Update fails after IllegalArgumentException

2015-07-07 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-3020.

Resolution: Fixed

Thanks [~chetanm] for the feedback.

Changes committed with:
* trunk - http://svn.apache.org/r1687175, http://svn.apache.org/r1689577
* 1.0 -  http://svn.apache.org/r1689409, http://svn.apache.org/r1689578
* 1.2 - http://svn.apache.org/r1689579

 Async Update fails after IllegalArgumentException
 -

 Key: OAK-3020
 URL: https://issues.apache.org/jira/browse/OAK-3020
 Project: Jackrabbit Oak
  Issue Type: Bug
Affects Versions: 1.2.2
Reporter: Julian Sedding
Assignee: Amit Jain
 Fix For: 1.2.3, 1.3.3, 1.0.17

 Attachments: OAK-3020-stacktrace.txt, OAK-3020.patch, 
 OAK-3020.test.patch


 The async index update can fail due to a mismatch between an index definition 
 and the actual content. If that is the case, it seems that it can no longer 
 make any progress. Instead it re-indexes the latest changes over and over 
 again until it hits the problematic property.
 Discussion at http://markmail.org/thread/42bixzkrkwv4s6tq
 Stacktrace attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3055) Improve segment cache in SegmentTracker

2015-07-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3055:
---
Attachment: OAK-3055.patch

Tentative patch. This patch probably has the same problem as the one for 
OAK-3007 re. the {{IAE}} when specifying a cache size of 0 (i.e. no cache). 

[~tmueller], WDYT?

 Improve segment cache in SegmentTracker
 ---

 Key: OAK-3055
 URL: https://issues.apache.org/jira/browse/OAK-3055
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: resilience, scalability
 Fix For: 1.3.5

 Attachments: OAK-3055.patch


 The hand crafted segment cache in {{SegmentTracker}} is prone to lock 
 contentions in concurrent access scenarios. As {{SegmentNodeStore#merge}} 
 might also end up acquiring this lock while holding the commit semaphore the 
 situation can easily lead to many threads being blocked on the commit 
 semaphore. The {{SegmentTracker}} cache doesn't differentiate between read 
 and write access, which means that reader threads can block writer threads. 
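 A minimal sketch (not the actual {{SegmentTracker}} code) of one common way to reduce this kind of contention, assuming the lookup is currently guarded by a single monitor:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical cache: concurrent readers share the read lock instead of
// serializing behind one monitor, which reduces the contention described
// above; only writers (segment loads/evictions) take the exclusive lock.
class SegmentCacheSketch<K, V> {
    private final Map<K, V> segments = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    V get(K id) {
        lock.readLock().lock();
        try {
            return segments.get(id);
        } finally {
            lock.readLock().unlock();
        }
    }

    void put(K id, V segment) {
        lock.writeLock().lock();
        try {
            segments.put(id, segment);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
{code}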



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3007) SegmentStore cache does not take string map into account

2015-07-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616318#comment-14616318
 ] 

Michael Dürig commented on OAK-3007:


In some ad-hoc testing this patch together with the one from OAK-3055 proved 
beneficial. Pending the failing ITs and your review of OAK-3055 I think we 
should apply the patches to trunk. As soon as we have enough longevity coverage 
we should also merge them to the branches. 

 SegmentStore cache does not take string map into account
 --

 Key: OAK-3007
 URL: https://issues.apache.org/jira/browse/OAK-3007
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Thomas Mueller
 Fix For: 1.3.3

 Attachments: OAK-3007-2.patch, OAK-3007.patch


 The SegmentStore cache size calculation ignores the size of the field 
 Segment.string (a concurrent hash map). It looks like a regular segment in a 
 memory mapped file has the size 1024, no matter how many strings are loaded 
 in memory. This can lead to out of memory. There seems to be no way to limit 
 (configure) the amount of memory used by strings. In one example, 100'000 
 segments are loaded in memory, and 5 GB are used for Strings in that map.
 We need a way to configure the amount of memory used for that. This seems to 
 be basically a cache. OAK-2688 does this, but it would be better to have one 
 cache with a configurable size limit.
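 A sketch of the direction asked for above — a single cache with a configurable, memory-based limit. It uses Guava's weight-based eviction purely for illustration (the patches discussed below are built around a LIRS cache), and the per-entry overhead constant is a rough assumption:

{code:java}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

class StringCacheSketch {
    // Entries are weighed by their approximate memory footprint, so the
    // total is capped in bytes rather than by entry count.
    static Cache<String, String> create(long maxBytes) {
        return CacheBuilder.newBuilder()
                .maximumWeight(maxBytes)
                .weigher((Weigher<String, String>) (key, value) ->
                        // ~2 bytes per char plus some object overhead
                        2 * (key.length() + value.length()) + 64)
                .build();
    }
}
{code}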



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3076) Compaction should trace log the current processed path

2015-07-07 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616334#comment-14616334
 ] 

Alex Parvulescu commented on OAK-3076:
--

bq. the if clause should refer to the passed argument, not to the instance 
field.
ouch, good catch!
bq. Also I'm not sure about the '@' notation for the logged property events.
I'm open to alternatives. I've been struggling with this when adding logs 
around Oak in general; is there a more standardized way to represent nodes and 
properties?

 Compaction should trace log the current processed path
 --

 Key: OAK-3076
 URL: https://issues.apache.org/jira/browse/OAK-3076
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
  Labels: gc
 Fix For: 1.2.3, 1.3.3

 Attachments: OAK-3076.patch


 A lot of SegmentNotFoundExceptions happen during compaction (offline or 
 online) and these are repeatable exceptions. I would like to add a 'trace' 
 log to make it easy to follow up on the progress and get an indication of 
 where the SNFE is happening, without relying on offline intervention (via 
 oak-run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3076) Compaction should trace log the current processed path

2015-07-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616342#comment-14616342
 ] 

Michael Dürig commented on OAK-3076:


I'd just use a slash here, as the information re. node vs. property is already 
logged (e.g. as 'propertyAdded'). In other cases I used {{path = value}} for 
properties. But logging the property values here would be overkill. 
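A minimal SLF4J sketch of the notation discussed here — node path, a slash, then the property name, with values left out (illustrative only, not the committed patch; the {{path}} field and the diff callback follow the discussion above):

{code:java}
// Fragment of a hypothetical diff implementation used during compaction;
// 'path' is an instance field holding the currently processed node path.
private static final Logger log = LoggerFactory.getLogger(CompactDiff.class);

@Override
public boolean propertyAdded(PropertyState after) {
    // logs "<node path>/<property name>"; property values are not logged
    log.trace("propertyAdded {}/{}", path, after.getName());
    return super.propertyAdded(after);
}
{code}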

 Compaction should trace log the current processed path
 --

 Key: OAK-3076
 URL: https://issues.apache.org/jira/browse/OAK-3076
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
  Labels: gc
 Fix For: 1.2.3, 1.3.3

 Attachments: OAK-3076.patch


 A lot of SegmentNotFoundExceptions happen during compaction (offline or 
 online) and these are repeatable exceptions. I would like to add a 'trace' 
 log to make it easy to follow up on the progress and get an indication of 
 where the SNFE is happening, without relying on offline intervention (via 
 oak-run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2991) group.removeMember() on result of user.memberOf() does not work

2015-07-07 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-2991.
-
Resolution: Cannot Reproduce

I will add the test case(s) for the sake of extending our test coverage, but 
otherwise I will resolve the issue. We can still reopen it if we find it was 
really an Oak issue... not sure about that.

 group.removeMember() on result of user.memberOf() does not work
 ---

 Key: OAK-2991
 URL: https://issues.apache.org/jira/browse/OAK-2991
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.2.2
Reporter: Alexander Klimetschek
Assignee: angela
Priority: Minor
 Attachments: OAK-2991-testcase2.patch, OAK-2991_testcase.patch


 When using the group from the memberOf() iterator to remove a user from that 
 group, the change is not persisted. One has to fetch the group separately 
 from the UserManager.
 {code}
 final Iterator<Group> groups = user.memberOf();
 while (groups.hasNext()) {
     Group group = groups.next();
     group.removeMember(user); // does not work
     session.save();
 
     group = userManager.getGroup(group.getID());
     group.removeMember(user); // does work
     session.save();
 }
 {code}
 Note that {{removeMember()}} always returns true, indicating that the change 
 worked. Debugging through the code, especially MembershipWriter, shows that 
 the rep:members property is correctly updated, so probably some later 
 modification restores the original value before the save is executed. (No JCR 
 events are triggered for the group).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3007) SegmentStore cache does not take string map into account

2015-07-07 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616304#comment-14616304
 ] 

Thomas Mueller commented on OAK-3007:
-

bq. Thomas Mueller, could you also add a unit test for StringCache?

Sure.

bq. I believe that the root cause for the compaction case filling up the cache 
is OAK-3075, but this patch should be applied nonetheless.

As far as I understand, OAK-3075 is not the reason, but yes, I understand that 
some other issue caused this many strings to be loaded. It would be nice to 
find the root cause. To do that, the patch here (OAK-3007) could be applied and 
the compaction test re-run. That should no longer result in an out-of-memory in 
the SegmentStore cache, but in slow compaction, which in turn can be analyzed 
using a profiler. Or in yet another out-of-memory (but in a different place), 
which in turn can be analyzed... So the current patch of OAK-3007 should help 
to understand the _real_ problem.

bq. but this patch should be applied nonetheless.

Yes, I think applying this patch eventually to trunk (after writing more unit 
tests) would be good. Backporting it might not be all that urgent, I guess.



 SegmentStore cache does not take string map into account
 --

 Key: OAK-3007
 URL: https://issues.apache.org/jira/browse/OAK-3007
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Thomas Mueller
 Fix For: 1.3.3

 Attachments: OAK-3007-2.patch, OAK-3007.patch


 The SegmentStore cache size calculation ignores the size of the field 
 Segment.string (a concurrent hash map). It looks like a regular segment in a 
 memory mapped file has the size 1024, no matter how many strings are loaded 
 in memory. This can lead to out of memory. There seems to be no way to limit 
 (configure) the amount of memory used by strings. In one example, 100'000 
 segments are loaded in memory, and 5 GB are used for Strings in that map.
 We need a way to configure the amount of memory used for that. This seems to 
 be basically a cache. OAK-2688 does this, but it would be better to have one 
 cache with a configurable size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2859) Test failure: OrderableNodesTest

2015-07-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2859:
---
Description: 
{{org.apache.jackrabbit.oak.jcr.OrderableNodesTest}} fails on Jenkins when 
running the {{DOCUMENT_RDB}} fixture.

{noformat}
Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 12.572 sec  
FAILURE!
orderableFolder[0](org.apache.jackrabbit.oak.jcr.OrderableNodesTest)  Time 
elapsed: 3.858 sec   ERROR!
javax.jcr.nodetype.ConstraintViolationException: No default node type available 
for /testdata
at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
at 
org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
at 
org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
at 
org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
at 
org.apache.jackrabbit.oak.jcr.TestContentLoader.loadTestContent(TestContentLoader.java:57)
at 
org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder(OrderableNodesTest.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:24)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
{noformat}

Failure seen at builds: 81, 87, 92, 95, 96, 114, 120, 128, 134, 186, 243

See e.g. 
https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/128/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/console

  was:
{{org.apache.jackrabbit.oak.jcr.OrderableNodesTest}} fails on Jenkins when 
running the {{DOCUMENT_RDB}} fixture.

{noformat}
Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 12.572 sec  
FAILURE!
orderableFolder[0](org.apache.jackrabbit.oak.jcr.OrderableNodesTest)  Time 
elapsed: 3.858 sec   ERROR!
javax.jcr.nodetype.ConstraintViolationException: No default node type available 
for /testdata
at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
at 
org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
  

[jira] [Updated] (OAK-2714) Test failures on Jenkins

2015-07-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2714:
---
Description: 
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181 | DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.trivialUpdates | 236 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 142, 186, 191 | DOCUMENT_RDB, SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 151 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 163, 164 | SEGMENT_MK, DOCUMENT_RDB | 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.query.SuggestTest | 171 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.updateNodeType | 243 | DOCUMENT_RDB | 1.6 |


  was:
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181 | DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 |
| 

[jira] [Updated] (OAK-2714) Test failures on Jenkins

2015-07-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2714:
---
Description: 
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181 | DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.trivialUpdates | 236 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder | 134, 243 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 142, 186, 191 | DOCUMENT_RDB, SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 151 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 163, 164 | SEGMENT_MK, DOCUMENT_RDB | 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.query.SuggestTest | 171 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.updateNodeType | 243 | DOCUMENT_RDB | 1.6 |


  was:
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181 | DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
| 

[jira] [Commented] (OAK-2131) Reduce usage of _lastRev

2015-07-07 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616391#comment-14616391
 ] 

Stefan Egli commented on OAK-2131:
--

Just for the record: OAK-2131 results in the LastRevRecoveryAgent potentially 
updating child nodes and not the parent/root node, like so (this is from a 
manual test):
{code}
11:04:52.747 INFO  [lastRevThread[cid=1]] LastRevRecoveryAgent.java:248 Updated 
lastRev of [3] documents while performing lastRev recovery for cluster node 
[2]: {/2/stuffForRecovery=r14e67c2b522-0-2, /2/2=r14e67c2b44c-0-2, 
/2/2/57=r14e67c2b44c-0-2}
{code}
The odd thing at first glance is that you'd expect the LastRevRecoveryAgent to 
always update the root whenever it has to update a child wrt the {{_lastRev}}. 
But that's not the case: with OAK-2131 the {{Commit}} does not always update 
the {{_lastRev}}, e.g. for nodes that have content changes. Afaics the behavior 
is still correct though; the only misleading part is the log message above.
(/cc [~mreutegg]: fyi)

 Reduce usage of _lastRev
 

 Key: OAK-2131
 URL: https://issues.apache.org/jira/browse/OAK-2131
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.1.3

 Attachments: OAK-2131.patch


 As described in OAK-1768 the usage of _lastRev must be reduced to better 
 handle large transaction. The short term solution implemented for OAK-1768 
 relies on MapDB to temporarily store the pending modifications to a temp 
 file. This solution prevents an OOME when there is a large commit, but can 
 take quite a while to update the _lastRev fields after the branch is merged. 
 This has an impact on the visibility of changes in a cluster because any 
 further changes done after the merge only become visible when the large 
 update of the _lastRev fields went through.
 Delayed propagation of changes can become a problem in a cluster when some 
 components rely on a timely visibility of changes. One example is the Apache 
 Sling Topology, which sends heartbeats through the cluster by updating 
 properties in the repository. The topology implementation will consider a 
 cluster node dead if those updates are not propagated in the cluster within a 
 given timeout. Ultimately this may lead to a cluster with multiple leaders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2999) Index updation fails on updating multivalued property

2015-07-07 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-2999.

Resolution: Fixed

Thanks [~chetanm] for the review. Incorporated the suggested changes.

* trunk - http://svn.apache.org/r1689581
* 1.0 - http://svn.apache.org/r1689583
* 1.2 - http://svn.apache.org/r1689584

 Index updation fails on updating multivalued property
 -

 Key: OAK-2999
 URL: https://issues.apache.org/jira/browse/OAK-2999
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: lucene
Affects Versions: 1.2, 1.0.15
Reporter: Rishabh Maurya
Assignee: Amit Jain
 Fix For: 1.2.3, 1.3.3, 1.0.17

 Attachments: OAK-2999-v2.patch, OAK-2999.patch


 On emptying a multivalued property, the fulltext index update fails and one can 
 still search on the old values. The following test demonstrates the issue.
 The test below was added to 
 [LuceneIndexQueryTest.java|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/test/java/org/apache/jackrabbit/oak/plugins/index/lucene/LuceneIndexQueryTest.java]
 and should pass - 
 {code}
 @Test
 public void testMultiValuedPropUpdate() throws Exception {
     Tree test = root.getTree("/").addChild("test");
     String child = "child";
     String mulValuedProp = "prop";
     test.addChild(child).setProperty(mulValuedProp, of("foo", "bar"), Type.STRINGS);
     root.commit();
     assertQuery(
         "/jcr:root//*[jcr:contains(@" + mulValuedProp + ", 'foo')]",
         "xpath", ImmutableList.of("/test/" + child));
     test.getChild(child).setProperty(mulValuedProp, new ArrayList<String>(), Type.STRINGS);
     root.commit();
     assertQuery(
         "/jcr:root//*[jcr:contains(@" + mulValuedProp + ", 'foo')]",
         "xpath", new ArrayList<String>());
     test.getChild(child).setProperty(mulValuedProp, of("bar"), Type.STRINGS);
     root.commit();
     assertQuery(
         "/jcr:root//*[jcr:contains(@" + mulValuedProp + ", 'foo')]",
         "xpath", new ArrayList<String>());
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3007) SegmentStore cache does not take string map into account

2015-07-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616306#comment-14616306
 ] 

Michael Dürig edited comment on OAK-3007 at 7/7/15 7:22 AM:


A problem I encountered with the patch is {{FileStore#FileStore(File, 
NodeState, int)}} throwing an {{IAE}}. That constructor should create a 
{{FileStore}} without cache. So far it did so by specifying a cache size of 0 
to the {{SegmentTracker}}. With the LIRS cache this stopped working. 
Just run {{CompactionAndCleanupTest#propertyRetention}} to see this. 


was (Author: mduerig):
A problem I encountered with the patch is {{FileStore#FileStore(java.io.File, 
org.apache.jackrabbit.oak.spi.state.NodeState, int)}} throwing an {{ISAE}}. 
That constructor should create a {{FileStore}} without cache. So far it did so 
by specifying a cache size of 0 to the {{SegmentTracker}}. With the LIRS cache 
this stopped working. 
Just run {{CompactionAndCleanupTest#propertyRetention}} to see this. 

 SegmentStore cache does not take string map into account
 --

 Key: OAK-3007
 URL: https://issues.apache.org/jira/browse/OAK-3007
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Thomas Mueller
 Fix For: 1.3.3

 Attachments: OAK-3007-2.patch, OAK-3007.patch


 The SegmentStore cache size calculation ignores the size of the field 
 Segment.string (a concurrent hash map). It looks like a regular segment in a 
 memory mapped file has the size 1024, no matter how many strings are loaded 
 in memory. This can lead to out of memory. There seems to be no way to limit 
 (configure) the amount of memory used by strings. In one example, 100'000 
 segments are loaded in memory, and 5 GB are used for Strings in that map.
 We need a way to configure the amount of memory used for that. This seems to 
 be basically a cache. OAK-2688 does this, but it would be better to have one 
 cache with a configurable size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3007) SegmentStore cache does not take string map into account

2015-07-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616306#comment-14616306
 ] 

Michael Dürig commented on OAK-3007:


A problem I encountered with the patch is {{FileStore#FileStore(java.io.File, 
org.apache.jackrabbit.oak.spi.state.NodeState, int)}} throwing an {{ISAE}}. 
That constructor should create a {{FileStore}} without cache. So far it did so 
by specifying a cache size of 0 to the {{SegmentTracker}}. With the LIRS cache 
this stopped working. 
Just run {{CompactionAndCleanupTest#propertyRetention}} to see this. 
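A sketch of the kind of guard that would restore the "cache size 0 means no cache" behaviour (illustrative only, not the committed fix; the factory and field names are hypothetical):

{code:java}
// Sketch: treat a cache size of 0 as "no cache" up front instead of
// passing 0 on to the LIRS cache builder, which rejects it with an
// IllegalArgumentException.
if (cacheSizeMB == 0) {
    this.segmentCache = null;                      // read segments directly
} else {
    this.segmentCache = newLirsCache(cacheSizeMB); // hypothetical factory
}
{code}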

 SegmentStore cache does not take string map into account
 --

 Key: OAK-3007
 URL: https://issues.apache.org/jira/browse/OAK-3007
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Thomas Mueller
 Fix For: 1.3.3

 Attachments: OAK-3007-2.patch, OAK-3007.patch


 The SegmentStore cache size calculation ignores the size of the field 
 Segment.string (a concurrent hash map). It looks like a regular segment in a 
 memory mapped file has the size 1024, no matter how many strings are loaded 
 in memory. This can lead to out of memory. There seems to be no way to limit 
 (configure) the amount of memory used by strings. In one example, 100'000 
 segments are loaded in memory, and 5 GB are used for Strings in that map.
 We need a way to configure the amount of memory used for that. This seems to 
 be basically a cache. OAK-2688 does this, but it would be better to have one 
 cache with a configurable size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3076) Compaction should trace log the current processed path

2015-07-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616329#comment-14616329
 ] 

Michael Dürig commented on OAK-3076:


Patch looks good AFAICS, apart from an issue with the null check in the private 
constructor of {{CompactDiff}}: the path in the if clause should refer to the 
passed argument, not to the instance field. Also I'm not sure about the '@' 
notation for the logged property events, but that's a matter of taste. 

 Compaction should trace log the current processed path
 --

 Key: OAK-3076
 URL: https://issues.apache.org/jira/browse/OAK-3076
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
  Labels: gc
 Fix For: 1.2.3, 1.3.3

 Attachments: OAK-3076.patch


 A lot of SegmentNotFoundExceptions happen during compaction (offline or 
 online) and these are repeatable exceptions. I would like to add a 'trace' 
 log to make it easy to follow up on the progress and get an indication of 
 where the SNFE is happening, without relying on offline intervention (via 
 oak-run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2913) TokenLoginModule should clear state in case of a login exception

2015-07-07 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616825#comment-14616825
 ] 

Alex Parvulescu commented on OAK-2913:
--

Merged to the 1.2 branch with rev 1689695.

The 1.0 branch merge doesn't apply cleanly. [~anchela], if the merge is really 
needed please take a look.

 TokenLoginModule should clear state in case of a login exception
 

 Key: OAK-2913
 URL: https://issues.apache.org/jira/browse/OAK-2913
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, security
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
 Fix For: 1.3.0, 1.2.3






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3064) Oak Run Main.java passing values in the wrong order

2015-07-07 Thread Jason E Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616831#comment-14616831
 ] 

Jason E Bailey commented on OAK-3064:
-

When I run the command:

{noformat}
java -jar oak-run-1.2.2.jar upgrade "C:\Program 
Files\Adobe\crx-quickstart\repository" mongodb://dublin.unx.sas.com:27017
{noformat}

{code:java}
Exception in thread "main" 
org.apache.jackrabbit.core.config.ConfigurationException: Repository directory 
C:\Program Files\Adobe\crx-quickstart\repository\repository.xml does not exist
at 
org.apache.jackrabbit.core.config.RepositoryConfig.create(RepositoryConfig.java:251)
at org.apache.jackrabbit.oak.run.Main.upgrade(Main.java:950)
at org.apache.jackrabbit.oak.run.Main.main(Main.java:167)
{code}

The test case is an attempt to upgrade a repository. Notice that the error 
message says the directory does not exist, but then prints out a *file* path. 
That's because the code checks the File object that represents repository.xml 
to see if it's a directory, which it isn't.

This works if I swap around the values in the method call and recompile.


 Oak Run Main.java passing values in the wrong order
 ---

 Key: OAK-3064
 URL: https://issues.apache.org/jira/browse/OAK-3064
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: upgrade
Reporter: Jason E Bailey
Priority: Minor

 in org.apache.jackrabbit.oak.run.Main.java on line 952 the following method 
 is being called:
 RepositoryContext.create(RepositoryConfig.create(dir, xml));
 The RepositoryConfig method signature is `create(File xml, File dir)`; this 
 causes an error when running the upgrade process.
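 A minimal sketch of the fix — just the argument order swapped to match the signature quoted above:

{code:java}
// Before (Main.java line 952): arguments passed in the wrong order
RepositoryContext.create(RepositoryConfig.create(dir, xml));

// After: matches the signature create(File xml, File dir)
RepositoryContext.create(RepositoryConfig.create(xml, dir));
{code}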



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-3032) LDAP test failures

2015-07-07 Thread Tobias Bocanegra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tobias Bocanegra reassigned OAK-3032:
-

Assignee: angela  (was: Tobias Bocanegra)

Something changed in how the principal provider is updated:
the new external user is synchronized with the IDP during the login() call, but 
is then not visible to the principal provider during the commit() call when the 
auth info is generated.

[~anchela] do you know if something changed here?

 LDAP test failures
 --

 Key: OAK-3032
 URL: https://issues.apache.org/jira/browse/OAK-3032
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Reporter: Marcel Reutegger
Assignee: angela
 Fix For: 1.3.3


 There are various test failures in the oak-auth-ldap module:
 Failed tests:   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.DefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.GuestTokenDefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
   
 testPrincipalsFromAuthInfo(org.apache.jackrabbit.oak.security.authentication.ldap.TokenDefaultLdapLoginModuleTest):
  
 expected:[org.apache.jackrabbit.oak.security.user.UserPrincipalProvider$GroupPrincipal:cn=foobargroup,ou=groups,ou=system,
  org.apache.jackrabbit.oak.security.user.TreeBasedPrincipal:cn=Foo 
 Bar,ou=users,ou=system, everyone principal] but was:[]
 The tests also fail on Travis, e.g.: 
 https://s3.amazonaws.com/archive.travis-ci.org/jobs/68124560/log.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2874) [ldap] enable listUsers to work for more than 1000 external users

2015-07-07 Thread Tobias Bocanegra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14617645#comment-14617645
 ] 

Tobias Bocanegra commented on OAK-2874:
---

added test case in r1689773

 [ldap] enable listUsers to work for more than 1000 external users
 -

 Key: OAK-2874
 URL: https://issues.apache.org/jira/browse/OAK-2874
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Affects Versions: 1.2.1
Reporter: Nicolas Peltier

 LDAP servers are usually limited to returning 1000 search results. Currently 
 LdapIdentityProvider.listUsers() doesn't take care of that limitation and 
 prevents the client from retrieving more. (cc [~tripod]) 
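 Paging is usually done with the LDAP paged-results control. A generic JNDI sketch (this is not Oak's LdapIdentityProvider implementation; the base DN, filter and {{handle()}} callback are illustrative):

{code:java}
import javax.naming.NamingEnumeration;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.Control;
import javax.naming.ldap.LdapContext;
import javax.naming.ldap.PagedResultsControl;
import javax.naming.ldap.PagedResultsResponseControl;

void listAllUsers(LdapContext ctx) throws Exception {
    int pageSize = 1000; // stay at or below the server-side limit
    byte[] cookie = null;
    ctx.setRequestControls(new Control[] {
            new PagedResultsControl(pageSize, Control.CRITICAL) });
    do {
        NamingEnumeration<SearchResult> results =
                ctx.search("ou=users,ou=system", "(objectClass=person)",
                        new SearchControls());
        while (results.hasMore()) {
            handle(results.next()); // hypothetical callback
        }
        // read the paging cookie from the response; null means no more pages
        cookie = null;
        Control[] controls = ctx.getResponseControls();
        if (controls != null) {
            for (Control c : controls) {
                if (c instanceof PagedResultsResponseControl) {
                    cookie = ((PagedResultsResponseControl) c).getCookie();
                }
            }
        }
        ctx.setRequestControls(new Control[] {
                new PagedResultsControl(pageSize, cookie, Control.CRITICAL) });
    } while (cookie != null);
}
{code}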



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3026) test failures for oak-auth-ldap on Windows

2015-07-07 Thread Tobias Bocanegra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14617644#comment-14617644
 ] 

Tobias Bocanegra commented on OAK-3026:
---

separated the 2 conditions into distinct profiles: r1689774

 test failures for oak-auth-ldap on Windows
 --

 Key: OAK-3026
 URL: https://issues.apache.org/jira/browse/OAK-3026
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Reporter: Amit Jain
Assignee: Tobias Bocanegra
 Fix For: 1.2.3, 1.3.3, 1.0.17


 testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest)
   Time elapsed: 0.01 sec   ERROR!
 java.io.IOException: Unable to delete file: 
 target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33)
 etc...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2131) Reduce usage of _lastRev

2015-07-07 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616440#comment-14616440
 ] 

Stefan Egli edited comment on OAK-2131 at 7/7/15 9:50 AM:
--

bq. Or do you mean it does actually happen, but when LastRevRecoveryAgent is 
triggered, it updates _lastRevs that aren't really necessary?
yes. The root is updated at an earlier stage since {{Commit}} does not filter 
out the root (and parents that have no content changes). So as I understand the 
rationale behind OAK-2131, it tries to reduce the number of {{_lastRev}} changes 
by using the fact that if there is a content change that will be visible in the 
parent anyway via the commitRoot - so it is using that instead of explicitly 
updating {{_lastRev}}. However, the {{LastRevRecoveryAgent}} does not do this 
optimization (which I think it is correct to not do) and updates all 
{{_lastRevs}} that the {{Commit}} has optimized-away
bq.  In any case, can you please file a new issue and provide information about 
how to reproduce? 
yes, I can do that


was (Author: egli):
bq. Or do you mean it does actually happen, but when LastRevRecoveryAgent is 
triggered, it updates _lastRevs that aren't really necessary?
yes. The root is updated at an earlier stage since {{Commit}} does not filter 
out the root (and parents that have no content changes). So as I understand the 
rationale behind OAK-2131, it tries to reduce the number of {{_lastRev}} changes 
by using the fact that if there is a content change that will be visible in the 
parent anyway via the commitRoot - so it is using that instead of explicitly 
updating {{_lastRev}}. However, the {{LastRevRecoveryAgent}} does not do this 
optimization (which I think it is correct to not do) and updates all 
{{_lastRev}}s that the {{Commit}} has optimized-away
bq.  In any case, can you please file a new issue and provide information about 
how to reproduce? 
yes, I can do that

 Reduce usage of _lastRev
 

 Key: OAK-2131
 URL: https://issues.apache.org/jira/browse/OAK-2131
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.1.3

 Attachments: OAK-2131.patch


 As described in OAK-1768 the usage of _lastRev must be reduced to better 
 handle large transaction. The short term solution implemented for OAK-1768 
 relies on MapDB to temporarily store the pending modifications to a temp 
 file. This solution prevents an OOME when there is a large commit, but can 
 take quite a while to update the _lastRev fields after the branch is merged. 
 This has an impact on the visibility of changes in a cluster because any 
 further changes done after the merge only become visible when the large 
 update of the _lastRev fields went through.
 Delayed propagation of changes can become a problem in a cluster when some 
 components rely on a timely visibility of changes. One example is the Apache 
 Sling Topology, which sends heartbeats through the cluster by updating 
 properties in the repository. The topology implementation will consider a 
 cluster node dead if those updates are not propagated in the cluster within a 
 given timeout. Ultimately this may lead to a cluster with multiple leaders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3052) Make compaction gain estimate threshold configurable

2015-07-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616582#comment-14616582
 ] 

Michael Dürig commented on OAK-3052:


Patch looks good, thanks for taking this up!

Just a minor nitpick: let's remove 'public' from the new interface methods, as 
this is implied.

Regarding the compaction estimate value: this is a percentage, right? Is that 
clear enough, or do we need to state it more explicitly (e.g. in 
{{SegmentNodeStoreService}})?

 Make compaction gain estimate threshold configurable
 

 Key: OAK-3052
 URL: https://issues.apache.org/jira/browse/OAK-3052
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
Priority: Minor
  Labels: compaction, gc
 Fix For: 1.3.3

 Attachments: OAK-3052.patch


 Currently compaction is skipped if the compaction gain estimator determines 
 that less than 10% of space could be reclaimed. Instead of relying on a hard 
 coded value of 10% we should make this configurable. 
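 A sketch of what the configurable threshold could look like (names are hypothetical, not necessarily those in the attached patch; per the discussion above, the value is a percentage):

{code:java}
// Hypothetical: gain estimate threshold as a percentage, defaulting to
// the previously hard coded 10%
private volatile int gainThreshold = 10;

public void setGainThreshold(int percent) {
    this.gainThreshold = percent;
}

// compaction is skipped unless the estimator reports a gain above the threshold
boolean shouldCompact(int estimatedGainPercent) {
    return estimatedGainPercent > gainThreshold;
}
{code}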



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3077) Skip maven deployment for oak-exercies

2015-07-07 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella resolved OAK-3077.
---
Resolution: Fixed

 Skip maven deployment for oak-exercies
 --

 Key: OAK-3077
 URL: https://issues.apache.org/jira/browse/OAK-3077
 Project: Jackrabbit Oak
  Issue Type: Task
Affects Versions: 1.3.2
Reporter: Davide Giannella
Assignee: Davide Giannella
Priority: Minor
 Fix For: 1.3.3


 The module oak-exercise is not useful on Maven. Skip the deployment to save 
 space and time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3077) Skip maven deployment for oak-exercies

2015-07-07 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616424#comment-14616424
 ] 

Davide Giannella commented on OAK-3077:
---

this should do it. In trunk at http://svn.apache.org/r1689618



 Skip maven deployment for oak-exercies
 --

 Key: OAK-3077
 URL: https://issues.apache.org/jira/browse/OAK-3077
 Project: Jackrabbit Oak
  Issue Type: Task
Affects Versions: 1.3.2
Reporter: Davide Giannella
Assignee: Davide Giannella
Priority: Minor
 Fix For: 1.3.3


 The module oak-exercise is not useful on Maven. Skip the deployment to save 
 space and time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3077) Skip maven deployment for oak-exercies

2015-07-07 Thread Davide Giannella (JIRA)
Davide Giannella created OAK-3077:
-

 Summary: Skip maven deployment for oak-exercies
 Key: OAK-3077
 URL: https://issues.apache.org/jira/browse/OAK-3077
 Project: Jackrabbit Oak
  Issue Type: Task
Affects Versions: 1.3.2
Reporter: Davide Giannella
Assignee: Davide Giannella
Priority: Minor
 Fix For: 1.3.3


The module oak-exercise is not useful on Maven. Skip the deployment to save 
space and time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2131) Reduce usage of _lastRev

2015-07-07 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616440#comment-14616440
 ] 

Stefan Egli commented on OAK-2131:
--

bq. Or do you mean it does actually happen, but when LastRevRecoveryAgent is 
triggered, it updates _lastRevs that aren't really necessary?
yes. The root is updates at an earlier stage since {{Commit}} does not filter 
out the root (and parents that have no content changes). So as I understand the 
rationale behind OAK-2131, it tries to reduce the number of {{_lastRev}} changes 
by using the fact that if there is a content change that will be visible in the 
parent anyway via the commitRoot - so it is using that instead of explicitly 
updating {{_lastRev}}. However, the {{LastRevRecoveryAgent}} does not do this 
optimization (which I think it is correct to not do) and updates all 
{{_lastRev}}s that the {{Commit}} has optimized-away
bq.  In any case, can you please file a new issue and provide information about 
how to reproduce? 
yes, I can do that

 Reduce usage of _lastRev
 

 Key: OAK-2131
 URL: https://issues.apache.org/jira/browse/OAK-2131
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.1.3

 Attachments: OAK-2131.patch


 As described in OAK-1768 the usage of _lastRev must be reduced to better 
 handle large transaction. The short term solution implemented for OAK-1768 
 relies on MapDB to temporarily store the pending modifications to a temp 
 file. This solution prevents an OOME when there is a large commit, but can 
 take quite a while to update the _lastRev fields after the branch is merged. 
 This has an impact on the visibility of changes in a cluster because any 
 further changes done after the merge only become visible when the large 
 update of the _lastRev fields went through.
 Delayed propagation of changes can become a problem in a cluster when some 
 components rely on a timely visibility of changes. One example is the Apache 
 Sling Topology, which sends heartbeats through the cluster by updating 
 properties in the repository. The topology implementation will consider a 
 cluster node dead if those updates are not propagated in the cluster within a 
 given timeout. Ultimately this may lead to a cluster with multiple leaders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2131) Reduce usage of _lastRev

2015-07-07 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616440#comment-14616440
 ] 

Stefan Egli edited comment on OAK-2131 at 7/7/15 9:49 AM:
--

bq. Or do you mean it does actually happen, but when LastRevRecoveryAgent is 
triggered, it updates _lastRevs that aren't really necessary?
yes. The root is updated at an earlier stage since {{Commit}} does not filter 
out the root (and parents that have no content changes). So as I understand the 
rationale behind OAK-2131, it tries to reduce the number of {{_lastRev}} changes 
by using the fact that if there is a content change that will be visible in the 
parent anyway via the commitRoot - so it is using that instead of explicitly 
updating {{_lastRev}}. However, the {{LastRevRecoveryAgent}} does not do this 
optimization (which I think it is correct to not do) and updates all 
{{_lastRev}}s that the {{Commit}} has optimized-away
bq.  In any case, can you please file a new issue and provide information about 
how to reproduce? 
yes, I can do that


was (Author: egli):
bq. Or do you mean it does actually happen, but when LastRevRecoveryAgent is 
triggered, it updates _lastRevs that aren't really necessary?
yes. The root is updates at an earlier stage since {{Commit}} does not filter 
out the root (and parents that have no content changes). So as I understand the 
rationale behind OAK-2131, it tries to reduce the number of {{_lastRev}} changes 
by using the fact that if there is a content change that will be visible in the 
parent anyway via the commitRoot - so it is using that instead of explicitly 
updating {{_lastRev}}. However, the {{LastRevRecoveryAgent}} does not do this 
optimization (which I think it is correct to not do) and updates all 
{{_lastRev}}s that the {{Commit}} has optimized-away
bq.  In any case, can you please file a new issue and provide information about 
how to reproduce? 
yes, I can do that

 Reduce usage of _lastRev
 

 Key: OAK-2131
 URL: https://issues.apache.org/jira/browse/OAK-2131
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.1.3

 Attachments: OAK-2131.patch


 As described in OAK-1768 the usage of _lastRev must be reduced to better 
 handle large transaction. The short term solution implemented for OAK-1768 
 relies on MapDB to temporarily store the pending modifications to a temp 
 file. This solution prevents an OOME when there is a large commit, but can 
 take quite a while to update the _lastRev fields after the branch is merged. 
 This has an impact on the visibility of changes in a cluster because any 
 further changes done after the merge only become visible when the large 
 update of the _lastRev fields went through.
 Delayed propagation of changes can become a problem in a cluster when some 
 components rely on a timely visibility of changes. One example is the Apache 
 Sling Topology, which sends heartbeats through the cluster by updating 
 properties in the repository. The topology implementation will consider a 
 cluster node dead if those updates are not propagated in the cluster within a 
 given timeout. Ultimately this may lead to a cluster with multiple leaders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3076) Compaction should trace log the current processed path

2015-07-07 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-3076.
--
   Resolution: Fixed
Fix Version/s: 1.0.17

fixed
 * on trunk http://svn.apache.org/r1689623
 * on 1.2 http://svn.apache.org/r1689624
 * on 1.0 http://svn.apache.org/r1689625

 Compaction should trace log the current processed path
 --

 Key: OAK-3076
 URL: https://issues.apache.org/jira/browse/OAK-3076
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
  Labels: gc
 Fix For: 1.2.3, 1.3.3, 1.0.17

 Attachments: OAK-3076.patch


 A lot of SegmentNotFoundExceptions happen during compaction (offline or 
 online) and these are repeatable exceptions. I would like to add a 'trace' 
 log to make it easy to follow up on the progress and get an indication of 
 where the SNFE is happening, without relying on offline intervention (via 
 oak-run).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2131) Reduce usage of _lastRev

2015-07-07 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616427#comment-14616427
 ] 

Marcel Reutegger commented on OAK-2131:
---

To me it looks like the behaviour is wrong. If a modification happens on some 
non-root document (either with content changes or just a _lastRev update), then 
eventually the root document must also be updated. Or do you mean it does 
actually happen, but when LastRevRecoveryAgent is triggered, it updates 
_lastRevs that aren't really necessary? In any case, can you please file a new 
issue and provide information about how to reproduce? 

 Reduce usage of _lastRev
 

 Key: OAK-2131
 URL: https://issues.apache.org/jira/browse/OAK-2131
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.1.3

 Attachments: OAK-2131.patch


 As described in OAK-1768 the usage of _lastRev must be reduced to better 
 handle large transactions. The short-term solution implemented for OAK-1768 
 relies on MapDB to temporarily store the pending modifications to a temp 
 file. This solution prevents an OOME when there is a large commit, but can 
 take quite a while to update the _lastRev fields after the branch is merged. 
 This has an impact on the visibility of changes in a cluster because any 
 further changes done after the merge only become visible when the large 
 update of the _lastRev fields went through.
 Delayed propagation of changes can become a problem in a cluster when some 
 components rely on a timely visibility of changes. One example is the Apache 
 Sling Topology, which sends heartbeats through the cluster by updating 
 properties in the repository. The topology implementation will consider a 
 cluster node dead if those updates are not propagated in the cluster within a 
 given timeout. Ultimately this may lead to a cluster with multiple leaders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3052) Make compaction gain estimate threshold configurable

2015-07-07 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-3052:
-
Attachment: OAK-3052.patch

very useful for testing too :)

attaching [proposed patch|^OAK-3052.patch]. [~mduerig] can you take a look?
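
For illustration, a minimal sketch of what such a configurable threshold might 
look like (a sketch only, all names are assumptions; see the attached patch for 
the real change):

{code}
class CompactionGainGate {

    // threshold in percent; 10 matches the previously hard-coded value
    private final int gainThreshold;

    CompactionGainGate(int gainThreshold) {
        this.gainThreshold = gainThreshold;
    }

    boolean shouldCompact(long reclaimableSize, long totalSize) {
        // skip compaction when the estimated gain is below the threshold
        return totalSize > 0
                && reclaimableSize * 100 / totalSize >= gainThreshold;
    }
}
{code}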

 Make compaction gain estimate threshold configurable
 

 Key: OAK-3052
 URL: https://issues.apache.org/jira/browse/OAK-3052
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
Priority: Minor
  Labels: compaction, gc
 Fix For: 1.3.3

 Attachments: OAK-3052.patch


 Currently compaction is skipped if the compaction gain estimator determines 
 that less than 10% of space could be reclaimed. Instead of relying on a 
 hard-coded value of 10%, we should make this configurable. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2844) Introducing a simple document-based discovery-light service (to circumvent documentMk's eventual consistency delays)

2015-07-07 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-2844:
-
Attachment: OAK-2844.v3.patch

Attached [^OAK-2844.v3.patch], which is the latest version of the 'discovery 
lite':
* changes the API completely: instead of listeners, the polling approach 
suggested by [~tmueller] has been implemented. Upper layers can poll the new, 
suggested 'oak.discoverylite.clusterview' repository descriptor to get the 
cluster view. The cluster view is cached and only updated when something 
actually changed, so polling is cheap (and can be done every second)
* thorough junit/load tests added
* ready for 'final' review
/cc [~mreutegg], [~chetanm]
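
For illustration, a minimal polling sketch against that descriptor (the 
descriptor key comes from the comment above; everything else is an assumption, 
not part of the patch):

{code}
import javax.jcr.Repository;

class DiscoveryLitePoller {

    private static final String KEY = "oak.discoverylite.clusterview";

    private final Repository repository;
    private String lastView;

    DiscoveryLitePoller(Repository repository) {
        this.repository = repository;
    }

    // called periodically, e.g. once per second; cheap because the
    // descriptor value is cached and only changes with the cluster view
    void poll() {
        String view = repository.getDescriptor(KEY);
        if (view != null && !view.equals(lastView)) {
            lastView = view;
            // react to the changed cluster view here
        }
    }
}
{code}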

 Introducing a simple document-based discovery-light service (to circumvent 
 documentMk's eventual consistency delays)
 

 Key: OAK-2844
 URL: https://issues.apache.org/jira/browse/OAK-2844
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mongomk
Reporter: Stefan Egli
  Labels: resilience
 Fix For: 1.4

 Attachments: InstanceStateChangeListener.java, OAK-2844.WIP-02.patch, 
 OAK-2844.patch, OAK-2844.v3.patch


 When running discovery.impl on a mongoMk-backed jcr repository, there are 
 risks of hitting problems such as the pseudo-network-partitioning described 
 in SLING-3432: this happens when a jcr-level heartbeat does not reach peers 
 within the configured heartbeat timeout - discovery then treats the affected 
 instance as dead, removes it from the topology, and continues with the 
 remaining instances, potentially electing a new leader and running the risk 
 of duplicate leaders. This happens when delays in mongoMk grow larger than 
 the (configured) heartbeat timeout. These problems are ultimately due to the 
 'eventual consistency' nature of, not only mongoDB, but more so of mongoMk. 
 The only alternative so far is to increase the heartbeat timeout to match the 
 expected or measured delays that mongoMk can produce (under given 
 load/performance scenarios).
 Assuming that mongoMk will always carry a risk of certain delays, and that a 
 reasonable maximum (for the discovery.impl timeout, that is) cannot be 
 guaranteed, a better solution is to provide discovery with more 'real-time' 
 information and/or privileged access to mongoDB.
 Here's a summary of alternatives that have so far been floating around as a 
 solution to circumvent eventual consistency:
  # expose existing (jmx) information about active 'clusterIds' - this has 
 been proposed in SLING-4603. The pros: reuse of existing functionality. The 
 cons: going via jmx, binding of exposed functionality as 'to be maintained 
 API'
  # expose a plain mongo db/collection (via osgi injection) such that a higher 
 (sling) level discovery could directly write heartbeats there. The pros: 
 heartbeat latency would be minimal (assuming the collection is not sharded). 
 The cons: exposes a mongo db/collection potentially also to anyone else, with 
 the risk of opening up to unwanted possibilities
  # introduce a simple 'discovery-light' API to oak which solely provides 
 information about which instances are active in a cluster. The implementation 
 of this is not exposed. The pros: no need to expose a mongoDb/collection, 
 allows any other jmx-functionality to remain unchanged. The cons: a new API 
 that must be maintained
 This ticket is about the 3rd option, about a new mongo-based discovery-light 
 service that is introduced to oak. The functionality in short:
  * it defines a 'local instance id' that is non-persisted, ie can change at 
 each bundle activation.
  * it defines a 'view id' that uniquely identifies a particular incarnation 
 of a 'cluster view/state' (which is: a list of active instance ids)
  * and it defines a list of active instance ids
  * the above attributes are passed to interested components via a listener 
 that can be registered. that listener is called whenever the discovery-light 
 notices the cluster view has changed.
 While the actual implementation could in fact be based on the existing 
 {{getActiveClusterNodes()}} and {{getClusterId()}} of the 
 {{DocumentNodeStoreMBean}}, the suggestion is to not touch that part, as it 
 has dependencies on other logic, and instead to create a dedicated, separate 
 collection ('discovery') where the heartbeats as well as the current view are 
 stored.
 Will attach a suggestion for an initial version of this for review.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3078) AccessControlAction: Omit setup for administrative principals

2015-07-07 Thread angela (JIRA)
angela created OAK-3078:
---

 Summary: AccessControlAction: Omit setup for administrative 
principals
 Key: OAK-3078
 URL: https://issues.apache.org/jira/browse/OAK-3078
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: angela
Assignee: angela


currently the {{AccessControlAction}} omits the permission setup for service 
users. in addition we should also omit the setup for configured administrative 
principals, as it doesn't make sense to provide a dedicated setup for those 
upon creation. 
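
For illustration only, a sketch of the proposed rule (the real 
{{AccessControlAction}} API and configuration names differ; this merely shows 
the intended decision):

{code}
import java.security.Principal;
import java.util.Set;

class SetupFilterSketch {

    // names of the configured administrative principals (assumed input)
    private final Set<String> administrativePrincipals;

    SetupFilterSketch(Set<String> administrativePrincipals) {
        this.administrativePrincipals = administrativePrincipals;
    }

    boolean omitSetup(Principal principal, boolean isServiceUser) {
        // skip the default permission setup for service users (current
        // behaviour) and for administrative principals (this issue)
        return isServiceUser
                || administrativePrincipals.contains(principal.getName());
    }
}
{code}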



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3078) AccessControlAction: Omit setup for administrative principals

2015-07-07 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-3078.
-
   Resolution: Fixed
Fix Version/s: 1.3.3

Committed revision 1689688.


 AccessControlAction: Omit setup for administrative principals
 -

 Key: OAK-3078
 URL: https://issues.apache.org/jira/browse/OAK-3078
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: angela
Assignee: angela
 Fix For: 1.3.3


 currently the {{AccessControlAction}} omits the permission setup for service 
 users. in addition we should also omit the setup for configured 
 administrative principals, as it doesn't make sense to provide a dedicated 
 setup for those upon creation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3052) Make compaction gain estimate threshold configurable

2015-07-07 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616712#comment-14616712
 ] 

Alex Parvulescu commented on OAK-3052:
--

bq. Just a minor nitpick, let's remove 'public' from the new interface methods 
as this is implied.
oops, copy/paste error, will fix it

bq. Is that clear enough or do we need to state this more explicitly
hmm, good point. I'll add some clarifications then.


 Make compaction gain estimate threshold configurable
 

 Key: OAK-3052
 URL: https://issues.apache.org/jira/browse/OAK-3052
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
Priority: Minor
  Labels: compaction, gc
 Fix For: 1.3.3

 Attachments: OAK-3052.patch


 Currently compaction is skipped if the compaction gain estimator determines 
 that less than 10% of space could be reclaimed. Instead of relying on a 
 hard-coded value of 10%, we should make this configurable. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2714) Test failures on Jenkins

2015-07-07 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14617947#comment-14617947
 ] 

Chetan Mehrotra commented on OAK-2714:
--

Saw the following failure:

{noformat}
java.util.ConcurrentModificationException
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:886)
at java.util.ArrayList$Itr.next(ArrayList.java:836)
at java.util.AbstractCollection.toString(AbstractCollection.java:461)
at 
org.apache.jackrabbit.oak.jcr.RepositoryTest.largeMultiValueProperty(RepositoryTest.java:2168)
{noformat}

Did a fix in http://svn.apache.org/r1689791
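
The actual change is in the revision above; for illustration, one standard 
remedy when a collection's {{toString()}} (which iterates internally) races 
with concurrent mutation:

{code}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class SnapshotListExample {

    // CopyOnWriteArrayList iterators work on an immutable snapshot, so
    // toString() can never throw ConcurrentModificationException, even if
    // another thread appends while it runs
    private final List<String> values = new CopyOnWriteArrayList<String>();

    void add(String value) {
        values.add(value);
    }

    @Override
    public String toString() {
        return values.toString();
    }
}
{code}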

 Test failures on Jenkins
 

 Key: OAK-2714
 URL: https://issues.apache.org/jira/browse/OAK-2714
 Project: Jackrabbit Oak
  Issue Type: Bug
 Environment: Jenkins, Ubuntu: 
 https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: CI, Jenkins
 Fix For: 1.4


 This issue is for tracking test failures seen at our Jenkins instance that 
 might yet be transient. Once a failure happens too often we should remove it 
 here and create a dedicated issue for it. 
 || Test || Builds || Fixture || JVM ||
 | org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
 | org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
 | org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
 | org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181 | DOCUMENT_NS | 1.7 |
 | org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
 | org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
 | org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
 | org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
 | org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
 | org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
 | org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 |
 | org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
 | org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 |
 | org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.trivialUpdates | 236 | DOCUMENT_RDB | 1.6 |
 | org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
 | org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 142, 186, 191 | DOCUMENT_RDB, SEGMENT_MK | 1.6 |
 | org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 151 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 |
 | org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | DOCUMENT_RDB | 1.8 |
 | org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163 | SEGMENT_MK | 1.6 |
 | org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 163, 164 | SEGMENT_MK, DOCUMENT_RDB | 1.7, 1.8 |
 | org.apache.jackrabbit.oak.jcr.query.SuggestTest | 171 | SEGMENT_MK | 1.8 |
 | org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.updateNodeType | 243 | DOCUMENT_RDB | 1.6 |



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3026) test failures for oak-auth-ldap on Windows

2015-07-07 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14617988#comment-14617988
 ] 

Chetan Mehrotra commented on OAK-3026:
--

Now it looks like we have a failure due to the [JDK version 
again|https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/253/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=SEGMENT_MK,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.security.authentication.ldap/LargeLdapProviderTest/org_apache_jackrabbit_oak_security_authentication_ldap_LargeLdapProviderTest/]

{noformat}
java.lang.UnsupportedClassVersionError: 
org/apache/directory/server/core/api/DirectoryService : Unsupported major.minor 
version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at 
org.apache.jackrabbit.oak.security.authentication.ldap.LargeLdapProviderTest.<clinit>(LargeLdapProviderTest.java:46)
{noformat}

 test failures for oak-auth-ldap on Windows
 --

 Key: OAK-3026
 URL: https://issues.apache.org/jira/browse/OAK-3026
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Reporter: Amit Jain
Assignee: Tobias Bocanegra
 Fix For: 1.2.3, 1.3.3, 1.0.17


 testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest)
   Time elapsed: 0.01 sec  <<< ERROR!
 java.io.IOException: Unable to delete file: 
 target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33)
 etc...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3080) JournalTest.cleanupTest failure on CI

2015-07-07 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-3080:


 Summary: JournalTest.cleanupTest failure on CI
 Key: OAK-3080
 URL: https://issues.apache.org/jira/browse/OAK-3080
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Stefan Egli
Priority: Minor
 Fix For: 1.3.3


The following test failure was seen 
[here|https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/248/jdk=jdk1.8.0_11,label=Ubuntu,nsfixtures=SEGMENT_MK,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.plugins.document/JournalTest/cleanupTest/]:

{noformat}
java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.jackrabbit.oak.plugins.document.JournalTest.cleanupTest(JournalTest.java:183)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3026) test failures for oak-auth-ldap on Windows

2015-07-07 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14617957#comment-14617957
 ] 

Amit Jain commented on OAK-3026:


Thanks [~tripod], the relevant tests are now ignored on Windows.

 test failures for oak-auth-ldap on Windows
 --

 Key: OAK-3026
 URL: https://issues.apache.org/jira/browse/OAK-3026
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Reporter: Amit Jain
Assignee: Tobias Bocanegra
 Fix For: 1.2.3, 1.3.3, 1.0.17


 testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest)
   Time elapsed: 0.01 sec  <<< ERROR!
 java.io.IOException: Unable to delete file: 
 target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33)
 etc...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3026) test failures for oak-auth-ldap on Windows

2015-07-07 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618006#comment-14618006
 ] 

Amit Jain commented on OAK-3026:


Maybe this test class (LargeLdapProviderTest) additionally needs to be ignored 
on Java 1.6.
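
One way to express that with JUnit (a sketch only; note that the stack trace 
above shows the class already failing in its static initializer, so the 
{{DirectoryService}} reference would have to move out of static init for an 
assumption to get a chance to run - the surefire exclusion list mentioned below 
may be the simpler route):

{code}
import org.junit.Assume;
import org.junit.BeforeClass;

public class LargeLdapProviderTest {

    @BeforeClass
    public static void requireJava7() {
        // class version 51.0 is Java 7 bytecode, so skip (rather than
        // fail) the whole class when running on a 1.6 JVM
        Assume.assumeTrue(
                !"1.6".equals(System.getProperty("java.specification.version")));
    }

    // ... existing tests
}
{code}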

 test failures for oak-auth-ldap on Windows
 --

 Key: OAK-3026
 URL: https://issues.apache.org/jira/browse/OAK-3026
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Reporter: Amit Jain
Assignee: Tobias Bocanegra
 Fix For: 1.2.3, 1.3.3, 1.0.17


 testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest)
   Time elapsed: 0.01 sec  <<< ERROR!
 java.io.IOException: Unable to delete file: 
 target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33)
 etc...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3026) test failures for oak-auth-ldap on Windows

2015-07-07 Thread Tobias Bocanegra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618009#comment-14618009
 ] 

Tobias Bocanegra commented on OAK-3026:
---

fixed it in r1689793; I had forgotten to add the new test to the exclusion list.

 test failures for oak-auth-ldap on Windows
 --

 Key: OAK-3026
 URL: https://issues.apache.org/jira/browse/OAK-3026
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: auth-ldap
Reporter: Amit Jain
Assignee: Tobias Bocanegra
 Fix For: 1.2.3, 1.3.3, 1.0.17


 testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest)
   Time elapsed: 0.01 sec  <<< ERROR!
 java.io.IOException: Unable to delete file: 
 target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
 at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
 at 
 org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183)
 at 
 org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33)
 etc...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3079) LastRevRecoveryAgent can update _lastRev of children but not the root

2015-07-07 Thread Stefan Egli (JIRA)
Stefan Egli created OAK-3079:


 Summary: LastRevRecoveryAgent can update _lastRev of children but 
not the root
 Key: OAK-3079
 URL: https://issues.apache.org/jira/browse/OAK-3079
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.3.2
Reporter: Stefan Egli


As mentioned in 
[OAK-2131|https://issues.apache.org/jira/browse/OAK-2131?focusedCommentId=14616391&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14616391]
 there can be a situation wherein the LastRevRecoveryAgent updates some nodes 
in the tree but not the root. This seems to happen due to OAK-2131's change in 
Commit.applyToCache (where the paths to update are collected via 
tracker.track): in that code, paths which are non-root and for which no content 
has changed (and mind you, a content change includes adding _deleted, which 
happens by default for nodes with children) are not 'tracked', i.e. for those 
the _lastRev is not updated by subsequent backgroundUpdate operations - leaving 
them 'old/out-of-date'. This seems correct as per the description/intention of 
OAK-2131, where the last revision can be determined via the commitRoot of the 
parent. But it has the effect that the LastRevRecoveryAgent then finds those 
intermediate nodes to be updated, whereas the root has already been updated 
(which is at first glance non-intuitive).

I'll attach a test case to reproduce this.

Perhaps this is a bug, perhaps it's ok. [~mreutegg] wdyt?
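
For illustration, a sketch of the tracking rule described above (illustrative 
only; the real Commit.applyToCache logic differs in detail):

{code}
class LastRevTrackingSketch {

    interface Tracker {
        void track(String path);
    }

    private final Tracker tracker;

    LastRevTrackingSketch(Tracker tracker) {
        this.tracker = tracker;
    }

    void applyToCache(String path, boolean contentChanged) {
        if ("/".equals(path) || contentChanged) {
            // only these paths get a pending _lastRev update
            tracker.track(path);
        }
        // other paths derive their last revision from the parent's
        // commitRoot and are intentionally left with an old _lastRev
    }
}
{code}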



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3052) Make compaction gain estimate threshold configurable

2015-07-07 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-3052:
-
Attachment: OAK-3052-v2.patch

attaching an improved version. 
The issue I'm seeing now is with the OSGi package export versions:
{code}
[ERROR] org.apache.jackrabbit.oak.plugins.segment: Version increase required; 
detected 3.0.0, suggested 4.0.0
{code}
I don't want to bump up to 4.0.0 as this will probably break a lot of existing 
importers. Also, this version increase would make backporting a big problem.

[~mduerig], [~chetanm] any ideas?
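
For reference, export versions are typically declared in a package-info.java 
that bnd's baselining compares against the last release (a sketch under that 
assumption; the annotation placement and version number are illustrative):

{code}
// package-info.java in org.apache.jackrabbit.oak.plugins.segment
@Version("3.1.0") // a minor bump would suffice for a compatible addition,
                  // but here bnd judges the interface change as breaking
package org.apache.jackrabbit.oak.plugins.segment;

import aQute.bnd.annotation.Version;
{code}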


 Make compaction gain estimate threshold configurable
 

 Key: OAK-3052
 URL: https://issues.apache.org/jira/browse/OAK-3052
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
Priority: Minor
  Labels: compaction, gc
 Fix For: 1.3.3

 Attachments: OAK-3052-v2.patch, OAK-3052.patch


 Currently compaction is skipped if the compaction gain estimator determines 
 that less than 10% of space could be reclaimed. Instead of relying on a 
 hard-coded value of 10%, we should make this configurable. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3052) Make compaction gain estimate threshold configurable

2015-07-07 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616805#comment-14616805
 ] 

Chetan Mehrotra commented on OAK-3052:
--

Check target/baseline.xml. It has a detailed explanation of why bnd thinks a 
version increase is required.

 Make compaction gain estimate threshold configurable
 

 Key: OAK-3052
 URL: https://issues.apache.org/jira/browse/OAK-3052
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
Priority: Minor
  Labels: compaction, gc
 Fix For: 1.3.3

 Attachments: OAK-3052-v2.patch, OAK-3052.patch


 Currently compaction is skipped if the compaction gain estimator determines 
 that less than 10% of space could be reclaimed. Instead of relying on a 
 hard-coded value of 10%, we should make this configurable. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3058) Backport OAK-2872 to 1.0 and 1.2 branches

2015-07-07 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616808#comment-14616808
 ] 

angela commented on OAK-3058:
-

yes

 Backport OAK-2872 to 1.0 and 1.2 branches
 -

 Key: OAK-3058
 URL: https://issues.apache.org/jira/browse/OAK-3058
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: auth-external
Reporter: Michael Dürig
Assignee: Alex Parvulescu
 Fix For: 1.2.3, 1.0.17


 Our customer confirmed that OAK-2872 fixed a {{SNFE}} on their system. We 
 thus need to back port the fix to the branches. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3079) LastRevRecoveryAgent can update _lastRev of children but not the root

2015-07-07 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-3079:
-
Attachment: NonRootUpdatingLastRevRecoveryTest.java

Attached [^NonRootUpdatingLastRevRecoveryTest.java], which illustrates what was 
described in the ticket. Note that the test succeeds - which is why I conclude 
that this is not a bug. The test produces log output of the following kind:
{code}
17:03:28.848 INFO  [main] LastRevRecoveryAgent.java:248 Updated lastRev of 
[6] documents while performing lastRev recovery for cluster node [1]: 
{/foo/bar/5/6=r14e690b0947-0-1, /foo/bar/5=r14e690b0947-0-1, 
/foo/bar/1/2=r14e690b0940-0-1, /foo/bar/1=r14e690b0940-0-1, 
/foo/bar/1/2/3=r14e690b0940-0-1, /foo/bar/5/6/7=r14e690b0947-0-1}
{code}
which shows that the root was not updated. To see this, enable the console 
logger in logback-test.xml.

 LastRevRecoveryAgent can update _lastRev of children but not the root
 -

 Key: OAK-3079
 URL: https://issues.apache.org/jira/browse/OAK-3079
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.3.2
Reporter: Stefan Egli
 Attachments: NonRootUpdatingLastRevRecoveryTest.java


 As mentioned in 
 [OAK-2131|https://issues.apache.org/jira/browse/OAK-2131?focusedCommentId=14616391&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14616391]
  there can be a situation wherein the LastRevRecoveryAgent updates some nodes 
 in the tree but not the root. This seems to happen due to OAK-2131's change 
 in Commit.applyToCache (where the paths to update are collected via 
 tracker.track): in that code, paths which are non-root and for which no 
 content has changed (and mind you, a content change includes adding _deleted, 
 which happens by default for nodes with children) are not 'tracked', i.e. for 
 those the _lastRev is not updated by subsequent backgroundUpdate operations - 
 leaving them 'old/out-of-date'. This seems correct as per the 
 description/intention of OAK-2131, where the last revision can be determined 
 via the commitRoot of the parent. But it has the effect that the 
 LastRevRecoveryAgent then finds those intermediate nodes to be updated, 
 whereas the root has already been updated (which is at first glance 
 non-intuitive).
 I'll attach a test case to reproduce this.
 Perhaps this is a bug, perhaps it's ok. [~mreutegg] wdyt?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2913) TokenLoginModule should clear state in case of a login exception

2015-07-07 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu updated OAK-2913:
-
Fix Version/s: 1.2.3

 TokenLoginModule should clear state in case of a login exception
 

 Key: OAK-2913
 URL: https://issues.apache.org/jira/browse/OAK-2913
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, security
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
 Fix For: 1.3.0, 1.2.3






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2131) Reduce usage of _lastRev

2015-07-07 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616822#comment-14616822
 ] 

Stefan Egli commented on OAK-2131:
--

bq. yes I can do that
done in OAK-3079

 Reduce usage of _lastRev
 

 Key: OAK-2131
 URL: https://issues.apache.org/jira/browse/OAK-2131
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.1.3

 Attachments: OAK-2131.patch


 As described in OAK-1768 the usage of _lastRev must be reduced to better 
 handle large transactions. The short-term solution implemented for OAK-1768 
 relies on MapDB to temporarily store the pending modifications to a temp 
 file. This solution prevents an OOME when there is a large commit, but can 
 take quite a while to update the _lastRev fields after the branch is merged. 
 This has an impact on the visibility of changes in a cluster because any 
 further changes done after the merge only become visible when the large 
 update of the _lastRev fields went through.
 Delayed propagation of changes can become a problem in a cluster when some 
 components rely on a timely visibility of changes. One example is the Apache 
 Sling Topology, which sends heartbeats through the cluster by updating 
 properties in the repository. The topology implementation will consider a 
 cluster node dead if those updates are not propagated in the cluster within a 
 given timeout. Ultimately this may lead to a cluster with multiple leaders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)