[jira] [Commented] (OAK-1510) MongoDB / DocumentNodeStore DataStore GC performance

2014-03-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940301#comment-13940301
 ] 

Thomas Mueller commented on OAK-1510:
-

The second patch, OAK-1510-add-on.patch, also broke a test case, 
MongoDataStoreBlobStoreTest.delete(). The reason is the change in 
DataStoreBlobStore.deleteChunks. I have now fixed it as follows:

{noformat}
public boolean deleteChunks(List<String> chunkIds, long maxLastModifiedTime)
        throws Exception {
    if (delegate instanceof MultiDataStoreAware) {
        for (String chunkId : chunkIds) {
            DataIdentifier identifier = new DataIdentifier(chunkId);
            DataRecord dataRecord = delegate.getRecord(identifier);
            boolean success = (maxLastModifiedTime <= 0)
                    || dataRecord.getLastModified() <= maxLastModifiedTime;
            if (!success) {
                return false;
            }
            ((MultiDataStoreAware) delegate).deleteRecord(identifier);
        }
    }
    return true;
}
{noformat}

 MongoDB / DocumentNodeStore DataStore GC performance
 

 Key: OAK-1510
 URL: https://issues.apache.org/jira/browse/OAK-1510
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Thomas Mueller
Assignee: Thomas Mueller
 Fix For: 0.19

 Attachments: OAK-1510-add-on.patch, OAK-1510.patch


 We currently don't know what is the performance of the DSGC, and we might 
 need to improve performance of this task.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1510) MongoDB / DocumentNodeStore DataStore GC performance

2014-03-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940313#comment-13940313
 ] 

Thomas Mueller commented on OAK-1510:
-

Revision 1579175 (patch 2, after fixing it)

 MongoDB / DocumentNodeStore DataStore GC performance
 

 Key: OAK-1510
 URL: https://issues.apache.org/jira/browse/OAK-1510
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Thomas Mueller
Assignee: Thomas Mueller
 Fix For: 0.19

 Attachments: OAK-1510-add-on.patch, OAK-1510.patch


 We currently don't know what is the performance of the DSGC, and we might 
 need to improve performance of this task.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1510) MongoDB / DocumentNodeStore DataStore GC performance

2014-03-19 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-1510.
-

Resolution: Fixed

 MongoDB / DocumentNodeStore DataStore GC performance
 

 Key: OAK-1510
 URL: https://issues.apache.org/jira/browse/OAK-1510
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Thomas Mueller
Assignee: Thomas Mueller
 Fix For: 0.19

 Attachments: OAK-1510-add-on.patch, OAK-1510.patch


 We currently don't know what is the performance of the DSGC, and we might 
 need to improve performance of this task.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1510) MongoDB / DocumentNodeStore DataStore GC performance

2014-03-19 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940318#comment-13940318
 ] 

Amit Jain commented on OAK-1510:


Ahh! Thanks Thomas!!

 MongoDB / DocumentNodeStore DataStore GC performance
 

 Key: OAK-1510
 URL: https://issues.apache.org/jira/browse/OAK-1510
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Thomas Mueller
Assignee: Thomas Mueller
 Fix For: 0.19

 Attachments: OAK-1510-add-on.patch, OAK-1510.patch


 We currently don't know what is the performance of the DSGC, and we might 
 need to improve performance of this task.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1569) ClusterPermissionsTest#testAclPropagation occasionally fails on Windows

2014-03-19 Thread JIRA
Michael Dürig created OAK-1569:
--

 Summary: ClusterPermissionsTest#testAclPropagation occasionally 
fails on Windows
 Key: OAK-1569
 URL: https://issues.apache.org/jira/browse/OAK-1569
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
 Environment: {noformat}
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 46 Stepping 6, GenuineIntel
NUMBER_OF_PROCESSORS=2,
PROCESSOR_LEVEL=6
OS=Windows_NT
PROCESSOR_REVISION=2e06, 
JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51
PROCESSOR_ARCHITECTURE=AMD64
BUILD_ID=2014-03-19_09-30-38
{noformat}
Reporter: Michael Dürig
 Fix For: 0.20


{{ClusterPermissionsTest#testAclPropagation}} regularly fails on Windows:

{noformat}
testAclPropagation(org.apache.jackrabbit.oak.security.authorization.permission.ClusterPermissionsTest)
  Time elapsed: 11.097 sec  <<< ERROR!
java.lang.RuntimeException: 
org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0002: 
OakMerge0002: Conflicting concurrent change. Update operation failed: key: 0:/ 
update {_revisions.r144d5fa2113-2-2=SET_MAP_ENTRY c-r144d5fa49bd-4-2, 
_collisions.r144d5fa2181-2-2=CONTAINS_MAP_ENTRY false, _modified=SET 279031870, 
_revisions.r144d5fa49ae-4-2=SET_MAP_ENTRY c-r144d5fa49bd-6-2, 
_collisions.r144d5fa2113-2-2=CONTAINS_MAP_ENTRY false, 
_revisions.r144d5fa2181-2-2=SET_MAP_ENTRY c-r144d5fa49bd-5-2, 
_collisions.r144d5fa49ae-4-2=CONTAINS_MAP_ENTRY false} (retries 4, 4853 ms)
at 
org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:48)
at org.apache.jackrabbit.oak.Oak.createContentRepository(Oak.java:481)
at 
org.apache.jackrabbit.oak.security.authorization.permission.ClusterPermissionsTest.before(ClusterPermissionsTest.java:119)
Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0002: 
OakMerge0002: Conflicting concurrent change. Update operation failed: key: 0:/ 
update {_revisions.r144d5fa2113-2-2=SET_MAP_ENTRY c-r144d5fa49bd-4-2, 
_collisions.r144d5fa2181-2-2=CONTAINS_MAP_ENTRY false, _modified=SET 279031870, 
_revisions.r144d5fa49ae-4-2=SET_MAP_ENTRY c-r144d5fa49bd-6-2, 
_collisions.r144d5fa2113-2-2=CONTAINS_MAP_ENTRY false, 
_revisions.r144d5fa2181-2-2=SET_MAP_ENTRY c-r144d5fa49bd-5-2, 
_collisions.r144d5fa49ae-4-2=CONTAINS_MAP_ENTRY false} (retries 4, 4853 ms)
at 
org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge(AbstractNodeStoreBranch.java:304)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:143)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:147)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1212)
at 
org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:46)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1569) ClusterPermissionsTest occasionally fails on Windows

2014-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1569:
---

Summary: ClusterPermissionsTest occasionally fails on Windows  (was: 
ClusterPermissionsTest#testAclPropagation occasionally fails on Windows)

 ClusterPermissionsTest occasionally fails on Windows
 

 Key: OAK-1569
 URL: https://issues.apache.org/jira/browse/OAK-1569
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
 Environment: {noformat}
 PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 46 Stepping 6, GenuineIntel
 NUMBER_OF_PROCESSORS=2,
 PROCESSOR_LEVEL=6
 OS=Windows_NT
 PROCESSOR_REVISION=2e06, 
 JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51
 PROCESSOR_ARCHITECTURE=AMD64
 BUILD_ID=2014-03-19_09-30-38
 {noformat}
Reporter: Michael Dürig
 Fix For: 0.20


 {{ClusterPermissionsTest#testAclPropagation}} regularly fails on Windows:
 {noformat}
 testAclPropagation(org.apache.jackrabbit.oak.security.authorization.permission.ClusterPermissionsTest)
   Time elapsed: 11.097 sec  <<< ERROR!
 java.lang.RuntimeException: 
 org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0002: 
 OakMerge0002: Conflicting concurrent change. Update operation failed: key: 
 0:/ update {_revisions.r144d5fa2113-2-2=SET_MAP_ENTRY c-r144d5fa49bd-4-2, 
 _collisions.r144d5fa2181-2-2=CONTAINS_MAP_ENTRY false, _modified=SET 
 279031870, _revisions.r144d5fa49ae-4-2=SET_MAP_ENTRY c-r144d5fa49bd-6-2, 
 _collisions.r144d5fa2113-2-2=CONTAINS_MAP_ENTRY false, 
 _revisions.r144d5fa2181-2-2=SET_MAP_ENTRY c-r144d5fa49bd-5-2, 
 _collisions.r144d5fa49ae-4-2=CONTAINS_MAP_ENTRY false} (retries 4, 4853 ms)
   at 
 org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:48)
   at org.apache.jackrabbit.oak.Oak.createContentRepository(Oak.java:481)
   at 
 org.apache.jackrabbit.oak.security.authorization.permission.ClusterPermissionsTest.before(ClusterPermissionsTest.java:119)
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0002: 
 OakMerge0002: Conflicting concurrent change. Update operation failed: key: 
 0:/ update {_revisions.r144d5fa2113-2-2=SET_MAP_ENTRY c-r144d5fa49bd-4-2, 
 _collisions.r144d5fa2181-2-2=CONTAINS_MAP_ENTRY false, _modified=SET 
 279031870, _revisions.r144d5fa49ae-4-2=SET_MAP_ENTRY c-r144d5fa49bd-6-2, 
 _collisions.r144d5fa2113-2-2=CONTAINS_MAP_ENTRY false, 
 _revisions.r144d5fa2181-2-2=SET_MAP_ENTRY c-r144d5fa49bd-5-2, 
 _collisions.r144d5fa49ae-4-2=CONTAINS_MAP_ENTRY false} (retries 4, 4853 ms)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge(AbstractNodeStoreBranch.java:304)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:143)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:147)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1212)
   at 
 org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:46)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1569) ClusterPermissionsTest occasionally fails on Windows

2014-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1569:
---

Description: 
{{ClusterPermissionsTest}} often fails on Windows:

{noformat}
testAclPropagation(org.apache.jackrabbit.oak.security.authorization.permission.ClusterPermissionsTest)
  Time elapsed: 11.097 sec  <<< ERROR!
java.lang.RuntimeException: 
org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0002: 
OakMerge0002: Conflicting concurrent change. Update operation failed: key: 0:/ 
update {_revisions.r144d5fa2113-2-2=SET_MAP_ENTRY c-r144d5fa49bd-4-2, 
_collisions.r144d5fa2181-2-2=CONTAINS_MAP_ENTRY false, _modified=SET 279031870, 
_revisions.r144d5fa49ae-4-2=SET_MAP_ENTRY c-r144d5fa49bd-6-2, 
_collisions.r144d5fa2113-2-2=CONTAINS_MAP_ENTRY false, 
_revisions.r144d5fa2181-2-2=SET_MAP_ENTRY c-r144d5fa49bd-5-2, 
_collisions.r144d5fa49ae-4-2=CONTAINS_MAP_ENTRY false} (retries 4, 4853 ms)
at 
org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:48)
at org.apache.jackrabbit.oak.Oak.createContentRepository(Oak.java:481)
at 
org.apache.jackrabbit.oak.security.authorization.permission.ClusterPermissionsTest.before(ClusterPermissionsTest.java:119)
Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0002: 
OakMerge0002: Conflicting concurrent change. Update operation failed: key: 0:/ 
update {_revisions.r144d5fa2113-2-2=SET_MAP_ENTRY c-r144d5fa49bd-4-2, 
_collisions.r144d5fa2181-2-2=CONTAINS_MAP_ENTRY false, _modified=SET 279031870, 
_revisions.r144d5fa49ae-4-2=SET_MAP_ENTRY c-r144d5fa49bd-6-2, 
_collisions.r144d5fa2113-2-2=CONTAINS_MAP_ENTRY false, 
_revisions.r144d5fa2181-2-2=SET_MAP_ENTRY c-r144d5fa49bd-5-2, 
_collisions.r144d5fa49ae-4-2=CONTAINS_MAP_ENTRY false} (retries 4, 4853 ms)
at 
org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge(AbstractNodeStoreBranch.java:304)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:143)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:147)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1212)
at 
org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:46)
{noformat}

  was:
{{ClusterPermissionsTest#testAclPropagation}} regularly fails on Windows:

{noformat}
testAclPropagation(org.apache.jackrabbit.oak.security.authorization.permission.ClusterPermissionsTest)
  Time elapsed: 11.097 sec  <<< ERROR!
java.lang.RuntimeException: 
org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0002: 
OakMerge0002: Conflicting concurrent change. Update operation failed: key: 0:/ 
update {_revisions.r144d5fa2113-2-2=SET_MAP_ENTRY c-r144d5fa49bd-4-2, 
_collisions.r144d5fa2181-2-2=CONTAINS_MAP_ENTRY false, _modified=SET 279031870, 
_revisions.r144d5fa49ae-4-2=SET_MAP_ENTRY c-r144d5fa49bd-6-2, 
_collisions.r144d5fa2113-2-2=CONTAINS_MAP_ENTRY false, 
_revisions.r144d5fa2181-2-2=SET_MAP_ENTRY c-r144d5fa49bd-5-2, 
_collisions.r144d5fa49ae-4-2=CONTAINS_MAP_ENTRY false} (retries 4, 4853 ms)
at 
org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:48)
at org.apache.jackrabbit.oak.Oak.createContentRepository(Oak.java:481)
at 
org.apache.jackrabbit.oak.security.authorization.permission.ClusterPermissionsTest.before(ClusterPermissionsTest.java:119)
Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0002: 
OakMerge0002: Conflicting concurrent change. Update operation failed: key: 0:/ 
update {_revisions.r144d5fa2113-2-2=SET_MAP_ENTRY c-r144d5fa49bd-4-2, 
_collisions.r144d5fa2181-2-2=CONTAINS_MAP_ENTRY false, _modified=SET 279031870, 
_revisions.r144d5fa49ae-4-2=SET_MAP_ENTRY c-r144d5fa49bd-6-2, 
_collisions.r144d5fa2113-2-2=CONTAINS_MAP_ENTRY false, 
_revisions.r144d5fa2181-2-2=SET_MAP_ENTRY c-r144d5fa49bd-5-2, 
_collisions.r144d5fa49ae-4-2=CONTAINS_MAP_ENTRY false} (retries 4, 4853 ms)
at 
org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge(AbstractNodeStoreBranch.java:304)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:143)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:147)
at 
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1212)
at 
org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:46)
{noformat}


 ClusterPermissionsTest occasionally fails on Windows
 

 Key: OAK-1569
 URL: https://issues.apache.org/jira/browse/OAK-1569
 Project: Jackrabbit Oak
  Issue Type: Bug
  

[jira] [Commented] (OAK-1569) ClusterPermissionsTest occasionally fails on Windows

2014-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940326#comment-13940326
 ] 

Michael Dürig commented on OAK-1569:


It is actually the setup method that fails. Adjusted title and description 
accordingly. 

 ClusterPermissionsTest occasionally fails on Windows
 

 Key: OAK-1569
 URL: https://issues.apache.org/jira/browse/OAK-1569
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
 Environment: {noformat}
 PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 46 Stepping 6, GenuineIntel
 NUMBER_OF_PROCESSORS=2,
 PROCESSOR_LEVEL=6
 OS=Windows_NT
 PROCESSOR_REVISION=2e06, 
 JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51
 PROCESSOR_ARCHITECTURE=AMD64
 BUILD_ID=2014-03-19_09-30-38
 {noformat}
Reporter: Michael Dürig
 Fix For: 0.20


 {{ClusterPermissionsTest}} often fails on Windows:
 {noformat}
 testAclPropagation(org.apache.jackrabbit.oak.security.authorization.permission.ClusterPermissionsTest)
   Time elapsed: 11.097 sec  <<< ERROR!
 java.lang.RuntimeException: 
 org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0002: 
 OakMerge0002: Conflicting concurrent change. Update operation failed: key: 
 0:/ update {_revisions.r144d5fa2113-2-2=SET_MAP_ENTRY c-r144d5fa49bd-4-2, 
 _collisions.r144d5fa2181-2-2=CONTAINS_MAP_ENTRY false, _modified=SET 
 279031870, _revisions.r144d5fa49ae-4-2=SET_MAP_ENTRY c-r144d5fa49bd-6-2, 
 _collisions.r144d5fa2113-2-2=CONTAINS_MAP_ENTRY false, 
 _revisions.r144d5fa2181-2-2=SET_MAP_ENTRY c-r144d5fa49bd-5-2, 
 _collisions.r144d5fa49ae-4-2=CONTAINS_MAP_ENTRY false} (retries 4, 4853 ms)
   at 
 org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:48)
   at org.apache.jackrabbit.oak.Oak.createContentRepository(Oak.java:481)
   at 
 org.apache.jackrabbit.oak.security.authorization.permission.ClusterPermissionsTest.before(ClusterPermissionsTest.java:119)
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0002: 
 OakMerge0002: Conflicting concurrent change. Update operation failed: key: 
 0:/ update {_revisions.r144d5fa2113-2-2=SET_MAP_ENTRY c-r144d5fa49bd-4-2, 
 _collisions.r144d5fa2181-2-2=CONTAINS_MAP_ENTRY false, _modified=SET 
 279031870, _revisions.r144d5fa49ae-4-2=SET_MAP_ENTRY c-r144d5fa49bd-6-2, 
 _collisions.r144d5fa2113-2-2=CONTAINS_MAP_ENTRY false, 
 _revisions.r144d5fa2181-2-2=SET_MAP_ENTRY c-r144d5fa49bd-5-2, 
 _collisions.r144d5fa49ae-4-2=CONTAINS_MAP_ENTRY false} (retries 4, 4853 ms)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge(AbstractNodeStoreBranch.java:304)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:143)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:147)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1212)
   at 
 org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:46)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1569) ClusterPermissionsTest occasionally fails on Windows

2014-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940332#comment-13940332
 ] 

Michael Dürig commented on OAK-1569:


Disabled the test until this is sorted out: http://svn.apache.org/r1579178

 ClusterPermissionsTest occasionally fails on Windows
 

 Key: OAK-1569
 URL: https://issues.apache.org/jira/browse/OAK-1569
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
 Environment: {noformat}
 PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 46 Stepping 6, GenuineIntel
 NUMBER_OF_PROCESSORS=2,
 PROCESSOR_LEVEL=6
 OS=Windows_NT
 PROCESSOR_REVISION=2e06, 
 JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51
 PROCESSOR_ARCHITECTURE=AMD64
 BUILD_ID=2014-03-19_09-30-38
 {noformat}
Reporter: Michael Dürig
 Fix For: 0.20


 {{ClusterPermissionsTest}} often fails on Windows:
 {noformat}
 testAclPropagation(org.apache.jackrabbit.oak.security.authorization.permission.ClusterPermissionsTest)
   Time elapsed: 11.097 sec  <<< ERROR!
 java.lang.RuntimeException: 
 org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0002: 
 OakMerge0002: Conflicting concurrent change. Update operation failed: key: 
 0:/ update {_revisions.r144d5fa2113-2-2=SET_MAP_ENTRY c-r144d5fa49bd-4-2, 
 _collisions.r144d5fa2181-2-2=CONTAINS_MAP_ENTRY false, _modified=SET 
 279031870, _revisions.r144d5fa49ae-4-2=SET_MAP_ENTRY c-r144d5fa49bd-6-2, 
 _collisions.r144d5fa2113-2-2=CONTAINS_MAP_ENTRY false, 
 _revisions.r144d5fa2181-2-2=SET_MAP_ENTRY c-r144d5fa49bd-5-2, 
 _collisions.r144d5fa49ae-4-2=CONTAINS_MAP_ENTRY false} (retries 4, 4853 ms)
   at 
 org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:48)
   at org.apache.jackrabbit.oak.Oak.createContentRepository(Oak.java:481)
   at 
 org.apache.jackrabbit.oak.security.authorization.permission.ClusterPermissionsTest.before(ClusterPermissionsTest.java:119)
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0002: 
 OakMerge0002: Conflicting concurrent change. Update operation failed: key: 
 0:/ update {_revisions.r144d5fa2113-2-2=SET_MAP_ENTRY c-r144d5fa49bd-4-2, 
 _collisions.r144d5fa2181-2-2=CONTAINS_MAP_ENTRY false, _modified=SET 
 279031870, _revisions.r144d5fa49ae-4-2=SET_MAP_ENTRY c-r144d5fa49bd-6-2, 
 _collisions.r144d5fa2113-2-2=CONTAINS_MAP_ENTRY false, 
 _revisions.r144d5fa2181-2-2=SET_MAP_ENTRY c-r144d5fa49bd-5-2, 
 _collisions.r144d5fa49ae-4-2=CONTAINS_MAP_ENTRY false} (retries 4, 4853 ms)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge(AbstractNodeStoreBranch.java:304)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:143)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:147)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1212)
   at 
 org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:46)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-333) 1000 byte path limit in MongoMK

2014-03-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940347#comment-13940347
 ] 

Thomas Mueller commented on OAK-333:


I see. It looks like I didn't run this test... Suggested patch:

{noformat}
Index: 
src/main/java/org/apache/jackrabbit/oak/plugins/document/mongo/CacheInvalidator.java
===
--- 
src/main/java/org/apache/jackrabbit/oak/plugins/document/mongo/CacheInvalidator.java
(revision 1579181)
+++ 
src/main/java/org/apache/jackrabbit/oak/plugins/document/mongo/CacheInvalidator.java
(working copy)
@@ -41,6 +41,7 @@
 import org.apache.jackrabbit.oak.plugins.document.CachedNodeDocument;
 import org.apache.jackrabbit.oak.plugins.document.Collection;
 import org.apache.jackrabbit.oak.plugins.document.Document;
+import org.apache.jackrabbit.oak.plugins.document.NodeDocument;
 import org.apache.jackrabbit.oak.plugins.document.util.Utils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -276,8 +277,12 @@
 //check them
 //TODO Need to determine way to determine if the
 //key is referring to a split document
-String path = Utils.getPathFromId(e.getKey().toString());
 result.cacheSize++;
+CachedNodeDocument doc = e.getValue();
+if (doc == NodeDocument.NULL) {
+continue;
+}
+String path = e.getValue().getPath();
 for (String name : PathUtils.elements(path)) {
 current = current.child(name);
 }
{noformat}

 1000 byte path limit in MongoMK
 ---

 Key: OAK-333
 URL: https://issues.apache.org/jira/browse/OAK-333
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.5
Reporter: Mete Atamel
Assignee: Thomas Mueller
Priority: Minor
 Fix For: 0.19

 Attachments: NodeNameLimitsTest.java, OAK-333.patch


 In an infinite loop try to add nodes one under another to have N0/N1/N2...NN. 
 At some point, the current parent node will not be found and the current 
 commit will fail. I think this happens when the path length exceeds 1000 
 characters. Is this enough for a path? I was able to create this way only 222 
 levels in the tree (and my node names were really short N1, N2 ...)
 There's an automated test for this: NodeExistsCommandMongoTest.testTreeDepth



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1511) S3 DataStore in Oak (including GC)

2014-03-19 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940348#comment-13940348
 ] 

Amit Jain commented on OAK-1511:


I have validated that S3 datastore works with Oak. GC test case also passes for 
S3 Data Store.

 S3 DataStore in Oak (including GC)
 --

 Key: OAK-1511
 URL: https://issues.apache.org/jira/browse/OAK-1511
 Project: Jackrabbit Oak
  Issue Type: New Feature
Reporter: Thomas Mueller
Assignee: Chetan Mehrotra
Priority: Blocker
 Fix For: 0.19


 Oak can use Jackrabbit 2.x DataStores, but we need to make sure the S3 
 datastore really works with Oak (including garbage collection, a test case, 
 documentation)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-333) 1000 byte path limit in MongoMK

2014-03-19 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-333.


Resolution: Fixed

Revision 1579182

 1000 byte path limit in MongoMK
 ---

 Key: OAK-333
 URL: https://issues.apache.org/jira/browse/OAK-333
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.5
Reporter: Mete Atamel
Assignee: Thomas Mueller
Priority: Minor
 Fix For: 0.19

 Attachments: NodeNameLimitsTest.java, OAK-333.patch


 In an infinite loop try to add nodes one under another to have N0/N1/N2...NN. 
 At some point, the current parent node will not be found and the current 
 commit will fail. I think this happens when the path length exceeds 1000 
 characters. Is this enough for a path? I was able to create this way only 222 
 levels in the tree (and my node names were really short N1, N2 ...)
 There's an automated test for this: NodeExistsCommandMongoTest.testTreeDepth



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1571) OSGi Configuration for Query Limits

2014-03-19 Thread Thomas Mueller (JIRA)
Thomas Mueller created OAK-1571:
---

 Summary: OSGi Configuration for Query Limits
 Key: OAK-1571
 URL: https://issues.apache.org/jira/browse/OAK-1571
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Thomas Mueller
Priority: Minor
 Fix For: 0.20


In OAK-1395 we added limits for long-running queries. The limits can be changed 
with system properties; now we should make the settings configurable via OSGi as well.
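
For reference, a minimal sketch of how these limits are set today via system 
properties (the part an OSGi configuration would replace). The property names 
oak.queryLimitInMemory and oak.queryLimitReads are assumptions taken from 
QueryEngineSettings and should be double-checked there:

{noformat}
public class QueryLimitSetup {

    public static void main(String[] args) {
        // Sketch only: property names are assumed, see QueryEngineSettings.
        // Limit the number of nodes a query may hold in memory.
        System.setProperty("oak.queryLimitInMemory", "100000");
        // Limit the number of nodes a query may read from the store.
        System.setProperty("oak.queryLimitReads", "100000");
        // ... build the Oak repository after the properties are set
    }
}
{noformat}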



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1500) Verify restore to revision on MongoNS

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1500:
---

Fix Version/s: (was: 0.19)
   0.20

 Verify restore to revision on MongoNS
 -

 Key: OAK-1500
 URL: https://issues.apache.org/jira/browse/OAK-1500
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Alex Parvulescu
Assignee: Chetan Mehrotra
 Fix For: 0.20


 Following the discussion on OAK-1339, the backup will be controlled from 
 Mongo directly, but we still need to verify 2 things:
  - ongoing transactions will be discarded on restore, the head has to point 
 to the latest stable revision
  - as an added benefit, the restore could happen to an older revision (see 
 the sharded setup where a node can get ahead of the others between the moment 
 the backup starts and when it will finish across the board)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1295) Recovery for missing _lastRev updates

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1295:
---

Fix Version/s: (was: 0.19)
   0.20

 Recovery for missing _lastRev updates
 -

 Key: OAK-1295
 URL: https://issues.apache.org/jira/browse/OAK-1295
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Chetan Mehrotra
 Fix For: 0.20


 The _lastRev fields are periodically written to MongoDB with a background 
 thread.
 This means there may be missing _lastRev updates when an Oak instance crashes.
 There must be a recovery mechanism in place when an Oak instance starts up. 
 MongoMK needs to read the most recent _lastRev for the local cluster id and 
 get
 documents with a _modified timestamp at _lastRev or newer. These are 
 candidates
 for missing _lastRev updates. The _lastRev must only be updated for committed
 non-branch changes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1465) performance degradation with growing index size on Oak-Mongo

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1465:
---

Fix Version/s: (was: 0.19)
   0.20

 performance degradation with growing index size on Oak-Mongo
 

 Key: OAK-1465
 URL: https://issues.apache.org/jira/browse/OAK-1465
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.17.1
Reporter: Stefan Egli
Assignee: Marcel Reutegger
Priority: Blocker
 Fix For: 0.20

 Attachments: CreateManyIndexedNodesTest.java


 Tested with an oak-snapshot of Monday Feb 24, 10AM EST.
 Noticed that as the amount of nodes indexed - eg wrt a particular property - 
 grows, the adding of nodes becomes slower and slower.
 Will attach an oak-run benchmark to underline this. Basically the scenario 
 where this occurred was:
  * have a number of level 1 nodes (eg 100)
  * under those level 1 nodes, add a growing list of children, each with a 
 property that is indexed (ie that index is actually growing and is probably 
 causing the slowdown).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1342) Cascading document history

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1342:
---

Fix Version/s: (was: 0.19)
   0.20

 Cascading document history
 --

 Key: OAK-1342
 URL: https://issues.apache.org/jira/browse/OAK-1342
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
 Fix For: 0.20


 The document history is currently structured in a flat list of references from 
 the main document to the previous documents (the _prev sub document).
 Garbage collection should make sure this list is kept within bounds, but 
 there will still be documents with many valid changes that cannot be garbage 
 collected easily. One example is the root document, which acts as the commit 
 root for branches and their merge commits.
 To support many more changes on a document, the _prev list should be 
 structured more similar to a b-tree with intermediate documents that can span 
 multiple previous documents or other intermediate documents.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1561) Implement optimised range queries

2014-03-19 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-1561:
--

Attachment: OAK-1561-r1.patch

Attaching a first version of the range queries on top of the 
OrderedPropertyIndex. There is a [pull 
request|https://github.com/apache/jackrabbit-oak/pull/13] as well, just in case.

 Implement optimised range queries
 -

 Key: OAK-1561
 URL: https://issues.apache.org/jira/browse/OAK-1561
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Davide Giannella
 Attachments: OAK-1561-r1.patch


 Add in OAK the ability to perform range queries on ordered indexes so that it 
 will be possible to return results for something like {{SELECT * FROM 
 [nt:base] WHERE lastModified > cast('2014-03-05T14:36:52.215Z' as date)}}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1564) ClockTest on Windows fails

2014-03-19 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940453#comment-13940453
 ] 

Julian Reschke commented on OAK-1564:
-

Maybe different behavior between 1.6 and 1.7?

 ClockTest on Windows fails
 --

 Key: OAK-1564
 URL: https://issues.apache.org/jira/browse/OAK-1564
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
 Environment: JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51
 NUMBER_OF_PROCESSORS=2
 OS=Windows_NT
 PROCESSOR_ARCHITECTURE=AMD64
 PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 46 Stepping 6, GenuineIntel
 PROCESSOR_LEVEL=6
 PROCESSOR_REVISION=2e06
Reporter: Michael Dürig
 Fix For: 0.20


 {{ClockTest.testClockDrift}} fails on Windows:
 {noformat}
 junit.framework.AssertionFailedError: unexpected drift: 26 (limit 20)
 at junit.framework.Assert.fail(Assert.java:50)
 at junit.framework.Assert.assertTrue(Assert.java:20)
 at org.apache.jackrabbit.oak.stats.ClockTest.testClockDrift(ClockTest.java:72)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (OAK-1564) ClockTest on Windows fails

2014-03-19 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940453#comment-13940453
 ] 

Julian Reschke edited comment on OAK-1564 at 3/19/14 1:44 PM:
--

I had similar issues before Jukka's fix, but not anymore (*). Tested 
with both JDK 6 and 7.

(*) Well, once with a reported drift of 20.


was (Author: reschke):
Maybe different behavior between 1.6 and 1.7?

 ClockTest on Windows fails
 --

 Key: OAK-1564
 URL: https://issues.apache.org/jira/browse/OAK-1564
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
 Environment: JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51
 NUMBER_OF_PROCESSORS=2
 OS=Windows_NT
 PROCESSOR_ARCHITECTURE=AMD64
 PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 46 Stepping 6, GenuineIntel
 PROCESSOR_LEVEL=6
 PROCESSOR_REVISION=2e06
Reporter: Michael Dürig
 Fix For: 0.20


 {{ClockTest.testClockDrift}} fails on Windows:
 {noformat}
 junit.framework.AssertionFailedError: unexpected drift: 26 (limit 20)
 at junit.framework.Assert.fail(Assert.java:50)
 at junit.framework.Assert.assertTrue(Assert.java:20)
 at org.apache.jackrabbit.oak.stats.ClockTest.testClockDrift(ClockTest.java:72)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1317) OutOfMemoryError in org.apache.jackrabbit.mk.json.JsopBuilder.toString

2014-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1317:
---

Priority: Minor  (was: Major)

 OutOfMemoryError in org.apache.jackrabbit.mk.json.JsopBuilder.toString
 --

 Key: OAK-1317
 URL: https://issues.apache.org/jira/browse/OAK-1317
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: commons
Affects Versions: 0.14
Reporter: Tobias Bocanegra
Priority: Minor
 Fix For: 0.20


 running {{org.apache.jackrabbit.oak.jcr.security.user.UserImportTest}} 
 produces an OOME in the jsop builder:
 {noformat}
 java.lang.OutOfMemoryError: Java heap space
   at java.util.Arrays.copyOfRange(Arrays.java:3209)
   at java.lang.String.<init>(String.java:215)
   at java.lang.StringBuilder.toString(StringBuilder.java:430)
   at 
 org.apache.jackrabbit.mk.json.JsopBuilder.toString(JsopBuilder.java:220)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder$AddNode.asDiff(CommitBuilder.java:312)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.doCommit(CommitBuilder.java:157)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.doCommit(CommitBuilder.java:93)
   at 
 org.apache.jackrabbit.mk.core.MicroKernelImpl.commit(MicroKernelImpl.java:516)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStore.commit(KernelNodeStore.java:248)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.persist(KernelNodeStoreBranch.java:117)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.persist(KernelNodeStoreBranch.java:43)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$InMemory.merge(AbstractNodeStoreBranch.java:466)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge(AbstractNodeStoreBranch.java:275)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.merge(KernelNodeStoreBranch.java:142)
   at 
 org.apache.jackrabbit.oak.kernel.KernelRootBuilder.merge(KernelRootBuilder.java:148)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStore.merge(KernelNodeStore.java:163)
   at 
 org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:44)
   at org.apache.jackrabbit.oak.Oak.createContentRepository(Oak.java:470)
   at org.apache.jackrabbit.oak.jcr.Jcr.createRepository(Jcr.java:164)
   at 
 org.apache.jackrabbit.oak.jcr.security.user.AbstractImportTest.before(AbstractImportTest.java:87)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1317) OutOfMemoryError in org.apache.jackrabbit.mk.json.JsopBuilder.toString

2014-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940477#comment-13940477
 ] 

Michael Dürig commented on OAK-1317:


The test case does not fail any more and has been re-enabled in revision 1572173. 
Lowering the priority accordingly. 

 OutOfMemoryError in org.apache.jackrabbit.mk.json.JsopBuilder.toString
 --

 Key: OAK-1317
 URL: https://issues.apache.org/jira/browse/OAK-1317
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: commons
Affects Versions: 0.14
Reporter: Tobias Bocanegra
 Fix For: 0.20


 running {{org.apache.jackrabbit.oak.jcr.security.user.UserImportTest}} 
 produces an OOME in the jsop builder:
 {noformat}
 java.lang.OutOfMemoryError: Java heap space
   at java.util.Arrays.copyOfRange(Arrays.java:3209)
   at java.lang.String.<init>(String.java:215)
   at java.lang.StringBuilder.toString(StringBuilder.java:430)
   at 
 org.apache.jackrabbit.mk.json.JsopBuilder.toString(JsopBuilder.java:220)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder$AddNode.asDiff(CommitBuilder.java:312)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.doCommit(CommitBuilder.java:157)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.doCommit(CommitBuilder.java:93)
   at 
 org.apache.jackrabbit.mk.core.MicroKernelImpl.commit(MicroKernelImpl.java:516)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStore.commit(KernelNodeStore.java:248)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.persist(KernelNodeStoreBranch.java:117)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.persist(KernelNodeStoreBranch.java:43)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$InMemory.merge(AbstractNodeStoreBranch.java:466)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge(AbstractNodeStoreBranch.java:275)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.merge(KernelNodeStoreBranch.java:142)
   at 
 org.apache.jackrabbit.oak.kernel.KernelRootBuilder.merge(KernelRootBuilder.java:148)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStore.merge(KernelNodeStore.java:163)
   at 
 org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:44)
   at org.apache.jackrabbit.oak.Oak.createContentRepository(Oak.java:470)
   at org.apache.jackrabbit.oak.jcr.Jcr.createRepository(Jcr.java:164)
   at 
 org.apache.jackrabbit.oak.jcr.security.user.AbstractImportTest.before(AbstractImportTest.java:87)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1484) additional observation queue jmx attributes

2014-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-1484.


   Resolution: Fixed
Fix Version/s: (was: 0.20)
   0.19

Fixed at http://svn.apache.org/r1579234

I added a time series for the maximum number of revisions in the observation 
queue. This should give quite a good indication of how the queue develops over 
time. 
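
As an illustration of how a client could consume these attributes, here is a 
minimal JMX read sketch; the ObjectName and attribute name are hypothetical 
placeholders and the real names should be looked up on a running instance:

{noformat}
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ObservationQueueProbe {

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical name; check the actual MBean registered by Oak.
        ObjectName name = new ObjectName(
                "org.apache.jackrabbit.oak:type=ObservationQueueStats");
        // Hypothetical attribute holding the time series of the maximum
        // number of revisions in the observation queue.
        Object series = server.getAttribute(name, "MaxQueueLength");
        System.out.println(series);
    }
}
{noformat}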

 additional observation queue jmx attributes
 ---

 Key: OAK-1484
 URL: https://issues.apache.org/jira/browse/OAK-1484
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: commons
Affects Versions: 0.17.1
Reporter: Stefan Egli
Assignee: Michael Dürig
 Fix For: 0.19


 The following jmx attributes would be very useful to have, when debugging 
 problems or checking the health of observation listeners and queues at 
 runtime:
  * length of an observation listener's revision queue
  * duration of stay of oldest revision in an observation listener's revision 
 queue
 Especially the second point can be used to judge if an observation listener 
 is 'healthy' or not: eg if the duration is a few seconds, things are probably 
 fine. If the duration is over a minute, then that could raise a flag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (OAK-333) 1000 byte path limit in MongoMK

2014-03-19 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reopened OAK-333:



Cache invalidation is still broken 

 1000 byte path limit in MongoMK
 ---

 Key: OAK-333
 URL: https://issues.apache.org/jira/browse/OAK-333
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.5
Reporter: Mete Atamel
Assignee: Thomas Mueller
Priority: Minor
 Fix For: 0.19

 Attachments: NodeNameLimitsTest.java, OAK-333.patch


 In an infinite loop try to add nodes one under another to have N0/N1/N2...NN. 
 At some point, the current parent node will not be found and the current 
 commit will fail. I think this happens when the path length exceeds 1000 
 characters. Is this enough for a path? I was able to create this way only 222 
 levels in the tree (and my node names were really short N1, N2 ...)
 There's an automated test for this: NodeExistsCommandMongoTest.testTreeDepth



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1572) Add checkpoint operation to the filestore jmx api

2014-03-19 Thread Alex Parvulescu (JIRA)
Alex Parvulescu created OAK-1572:


 Summary: Add checkpoint operation to the filestore jmx api
 Key: OAK-1572
 URL: https://issues.apache.org/jira/browse/OAK-1572
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1572) Add checkpoint operation to the filestore jmx api

2014-03-19 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-1572.
--

   Resolution: Fixed
Fix Version/s: 0.19

fixed with rev http://svn.apache.org/r1579238

 Add checkpoint operation to the filestore jmx api
 -

 Key: OAK-1572
 URL: https://issues.apache.org/jira/browse/OAK-1572
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
 Fix For: 0.19






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1572) Add checkpoint operation to the filestore jmx api

2014-03-19 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940503#comment-13940503
 ] 

Jukka Zitting commented on OAK-1572:


What's the use case? I mean, without additional functionality that makes a 
checkpoint accessible to a higher level client, there isn't much point in 
creating the checkpoint.

 Add checkpoint operation to the filestore jmx api
 -

 Key: OAK-1572
 URL: https://issues.apache.org/jira/browse/OAK-1572
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
 Fix For: 0.19






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-333) 1000 byte path limit in MongoMK

2014-03-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940505#comment-13940505
 ] 

Thomas Mueller commented on OAK-333:


I think all negative cache entries with a long path need to be invalidated. 
That should be relatively simple to implement.

Unit test:

{noformat}
Index: 
src/test/java/org/apache/jackrabbit/oak/plugins/document/mongo/CacheInvalidationIT.java
===
--- 
src/test/java/org/apache/jackrabbit/oak/plugins/document/mongo/CacheInvalidationIT.java
 (revision 1579222)
+++ 
src/test/java/org/apache/jackrabbit/oak/plugins/document/mongo/CacheInvalidationIT.java
 (working copy)
@@ -41,6 +41,8 @@
 
 import static 
org.apache.jackrabbit.oak.plugins.document.mongo.CacheInvalidator.InvalidationResult;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 public class CacheInvalidationIT extends AbstractMongoConnectionTest {
 
@@ -135,6 +137,43 @@
 }
 
 @Test
+public void testCacheInvalidationHierarchicalNotExist()
+throws CommitFailedException {
+
+NodeBuilder b2 = getRoot(c2).builder();
+// we create x/other, so that x is known to have a child node
+b2.child("x").child("other");
+b2.child("y");
+c2.merge(b2, EmptyHook.INSTANCE, CommitInfo.EMPTY);
+c2.runBackgroundOperations();
+c1.runBackgroundOperations();
+
+// we check for the existence of x/futureX, which
+// should create a negative entry in the cache
+NodeState x = getRoot(c1).getChildNode("x");
+assertTrue(x.exists());
+assertFalse(x.getChildNode("futureX").exists());
+// we don't check for the existence of y/futureY
+NodeState y = getRoot(c1).getChildNode("y");
+assertTrue(y.exists());
+
+// now we add both futureX and futureY
+// in the other cluster node
+b2.child("x").child("futureX").setProperty("z", 1);
+c2.merge(b2, EmptyHook.INSTANCE, CommitInfo.EMPTY);
+b2.child("y").child("futureY").setProperty("z", 2);
+c2.merge(b2, EmptyHook.INSTANCE, CommitInfo.EMPTY);
+
+c2.runBackgroundOperations();
+c1.runBackgroundOperations();
+
+// both nodes should now be visible
+assertTrue(getRoot(c1).getChildNode("y").getChildNode("futureY").exists());
+assertTrue(getRoot(c1).getChildNode("x").getChildNode("futureX").exists());
+
+}
+
+@Test
 public void testCacheInvalidationLinear() throws CommitFailedException {
 final int totalPaths = createScenario();
{noformat}

 1000 byte path limit in MongoMK
 ---

 Key: OAK-333
 URL: https://issues.apache.org/jira/browse/OAK-333
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.5
Reporter: Mete Atamel
Assignee: Thomas Mueller
Priority: Minor
 Fix For: 0.19

 Attachments: NodeNameLimitsTest.java, OAK-333.patch


 In an infinite loop try to add nodes one under another to have N0/N1/N2...NN. 
 At some point, the current parent node will not be found and the current 
 commit will fail. I think this happens when the path length exceeds 1000 
 characters. Is this enough for a path? I was able to create this way only 222 
 levels in the tree (and my node names were really short N1, N2 ...)
 There's an automated test for this: NodeExistsCommandMongoTest.testTreeDepth



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1572) Add checkpoint operation to the filestore jmx api

2014-03-19 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940509#comment-13940509
 ] 

Alex Parvulescu commented on OAK-1572:
--

The use case is a backup started from an external process (OS-level copy); 
checkpointing the repository first works around content that is potentially 
not flushed yet.
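
For illustration, a minimal sketch of the flow such a JMX operation could 
delegate to, assuming the existing NodeStore#checkpoint / #retrieve API; the 
one-hour lifetime and the class/method names are only examples:

{noformat}
import java.util.concurrent.TimeUnit;

import org.apache.jackrabbit.oak.spi.state.NodeState;
import org.apache.jackrabbit.oak.spi.state.NodeStore;

public class CheckpointBeforeBackup {

    public static String prepareBackup(NodeStore store) {
        // Pin the current head for one hour so the external OS-level copy
        // sees a consistent, fully flushed revision.
        return store.checkpoint(TimeUnit.HOURS.toMillis(1));
    }

    public static NodeState verifyBackup(NodeStore store, String checkpoint) {
        // #retrieve returns the pinned root (or null if the checkpoint
        // expired), e.g. to verify the copy afterwards.
        return store.retrieve(checkpoint);
    }
}
{noformat}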

 Add checkpoint operation to the filestore jmx api
 -

 Key: OAK-1572
 URL: https://issues.apache.org/jira/browse/OAK-1572
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
 Fix For: 0.19






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1572) Add checkpoint operation to the filestore jmx api

2014-03-19 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940511#comment-13940511
 ] 

Alex Parvulescu commented on OAK-1572:
--

It would probably make sense to make #retrieve available too, not sure.

 Add checkpoint operation to the filestore jmx api
 -

 Key: OAK-1572
 URL: https://issues.apache.org/jira/browse/OAK-1572
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, segmentmk
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
 Fix For: 0.19






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-333) 1000 byte path limit in MongoMK

2014-03-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940524#comment-13940524
 ] 

Thomas Mueller commented on OAK-333:


Proposed patch (excluding the test above):

{noformat}
Index: 
src/main/java/org/apache/jackrabbit/oak/plugins/document/mongo/CacheInvalidator.java
===
--- 
src/main/java/org/apache/jackrabbit/oak/plugins/document/mongo/CacheInvalidator.java
(revision 1579182)
+++ 
src/main/java/org/apache/jackrabbit/oak/plugins/document/mongo/CacheInvalidator.java
(working copy)
@@ -279,13 +279,24 @@
 //key is referring to a split document
 result.cacheSize++;
 CachedNodeDocument doc = e.getValue();
+String path;
 if (doc == NodeDocument.NULL) {
-// we only need to process documents that exist
-continue;
+String id = e.getKey().toString();
+if (Utils.isIdFromLongPath(id)) {
+LOG.debug("Negative cache entry with long path {}. Invalidating", id);
+documentStore.invalidateCache(Collection.NODES, id);
+path = null;
+} else {
+path = Utils.getPathFromId(id);
+}
+} else {
+path = doc.getPath();
 }
-String path = doc.getPath();
-for (String name : PathUtils.elements(path)) {
-current = current.child(name);
+if (path != null) {
+for (String name : PathUtils.elements(path)) {
+current = current.child(name);
+}
 }
 }
 return root;
Index: src/main/java/org/apache/jackrabbit/oak/plugins/document/util/Utils.java
===
--- src/main/java/org/apache/jackrabbit/oak/plugins/document/util/Utils.java
(revision 1579181)
+++ src/main/java/org/apache/jackrabbit/oak/plugins/document/util/Utils.java
(working copy)
@@ -260,12 +260,17 @@
 }
 return true;
 }
+
+public static boolean isIdFromLongPath(String id) {
+int index = id.indexOf(':');
+return id.charAt(index + 1) == 'h';
+}
 
 public static String getPathFromId(String id) {
-int index = id.indexOf(':');
-if (id.charAt(index + 1) == 'h') {
+if (isIdFromLongPath(id)) {
 throw new IllegalArgumentException("Id is hashed: " + id);
 }
+int index = id.indexOf(':');
 return id.substring(index + 1);
 }
{noformat}

 1000 byte path limit in MongoMK
 ---

 Key: OAK-333
 URL: https://issues.apache.org/jira/browse/OAK-333
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.5
Reporter: Mete Atamel
Assignee: Thomas Mueller
Priority: Minor
 Fix For: 0.19

 Attachments: NodeNameLimitsTest.java, OAK-333.patch


 In an infinite loop try to add nodes one under another to have N0/N1/N2...NN. 
 At some point, the current parent node will not be found and the current 
 commit will fail. I think this happens when the path length exceeds 1000 
 characters. Is this enough for a path? I was able to create this way only 222 
 levels in the tree (and my node names were really short N1, N2 ...)
 There's an automated test for this: NodeExistsCommandMongoTest.testTreeDepth



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1153) Write test to measure throughput on Mongo with multiple reader and writer

2014-03-19 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated OAK-1153:
---

Fix Version/s: (was: 0.20)
   0.19

 Write test to measure throughput on Mongo with multiple reader and writer
 -

 Key: OAK-1153
 URL: https://issues.apache.org/jira/browse/OAK-1153
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 0.19


 To check how raw access to Mongo performs with multiple readers and writers we 
 need to have some throughput test.
 It would also serve to decide whether writing blobs in a separate database 
 provides any benefit or not. Refer to [1] for the discussion around this concern.
 [1] http://markmail.org/thread/aukqziaq3hryvtaz



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-333) 1000 byte path limit in MongoMK

2014-03-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940542#comment-13940542
 ] 

Thomas Mueller commented on OAK-333:


Revision 1579250

 1000 byte path limit in MongoMK
 ---

 Key: OAK-333
 URL: https://issues.apache.org/jira/browse/OAK-333
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.5
Reporter: Mete Atamel
Assignee: Thomas Mueller
Priority: Minor
 Fix For: 0.19

 Attachments: NodeNameLimitsTest.java, OAK-333.patch


 In an infinite loop try to add nodes one under another to have N0/N1/N2...NN. 
 At some point, the current parent node will not be found and the current 
 commit will fail. I think this happens when the path length exceeds 1000 
 characters. Is this enough for a path? I was able to create this way only 222 
 levels in the tree (and my node names were really short N1, N2 ...)
 There's an automated test for this: NodeExistsCommandMongoTest.testTreeDepth



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-333) 1000 byte path limit in MongoMK

2014-03-19 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940541#comment-13940541
 ] 

Chetan Mehrotra commented on OAK-333:
-

+1 Patch looks fine 

 1000 byte path limit in MongoMK
 ---

 Key: OAK-333
 URL: https://issues.apache.org/jira/browse/OAK-333
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.5
Reporter: Mete Atamel
Assignee: Thomas Mueller
Priority: Minor
 Fix For: 0.19

 Attachments: NodeNameLimitsTest.java, OAK-333.patch


 In an infinite loop, try to add nodes one under another to have N0/N1/N2...NN. 
 At some point, the current parent node will not be found and the current 
 commit will fail. I think this happens when the path length exceeds 1000 
 characters. Is this enough for a path? I was only able to create 222 levels 
 in the tree this way (and my node names were really short: N1, N2 ...).
 There's an automated test for this: NodeExistsCommandMongoTest.testTreeDepth



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1356) Expose the preferred transient space size as repository descriptor

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1356:
---

Fix Version/s: (was: 0.20)
   1.1

 Expose the preferred transient space size as repository descriptor 
 ---

 Key: OAK-1356
 URL: https://issues.apache.org/jira/browse/OAK-1356
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, jcr
Reporter: Tobias Bocanegra
Assignee: Chetan Mehrotra
 Fix For: 1.1


 The problem is that the different stores have different transient space 
 characteristics. For example the MongoMK is very slow when handling large 
 saves.
 Suggest exposing a repository descriptor that can be used to estimate the 
 preferred transient space, for example when importing content.
 So either a boolean like: 
   {{option.infinite.transientspace}}
 or a number like:
   {{option.transientspace.preferred.size}}
 The latter would denote the average number of modified node states that should 
 be put in the transient space before the persistence starts to degrade.
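 For illustration, a client could read such a (proposed, not yet existing) descriptor 
 through the standard JCR API; the descriptor key and the fallback handling here are 
 assumptions based on this issue:
 {code:java}
 import javax.jcr.Repository;
 
 public final class TransientSpaceHint {
 
     // Proposed descriptor key from this issue; not part of the current API.
     private static final String PREFERRED_SIZE = "option.transientspace.preferred.size";
 
     // Returns the preferred number of transient modifications before saving,
     // falling back to a caller-supplied default when the descriptor is absent.
     public static long preferredTransientSize(Repository repository, long defaultValue) {
         String value = repository.getDescriptor(PREFERRED_SIZE);
         return value == null ? defaultValue : Long.parseLong(value);
     }
 }
 {code}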



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1573) Document External Login and LDAP specifically

2014-03-19 Thread angela (JIRA)
angela created OAK-1573:
---

 Summary: Document External Login and LDAP specifically
 Key: OAK-1573
 URL: https://issues.apache.org/jira/browse/OAK-1573
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: core
Reporter: angela
Assignee: Tobias Bocanegra
 Fix For: 0.20


now that the LDAP integration issue is resolved we should also make sure it's 
documented and that the existing documentation wrt external login modules reflects 
the current state of the code. afaik this is really outdated.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1557) Mark documents as deleted

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1557:
---

Priority: Blocker  (was: Critical)

 Mark documents as deleted
 -

 Key: OAK-1557
 URL: https://issues.apache.org/jira/browse/OAK-1557
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Marcel Reutegger
Assignee: Chetan Mehrotra
Priority: Blocker
 Fix For: 0.20


 This is an improvement to make a certain use case more efficient. When there 
 is a parent node with frequently added and removed child nodes, the reading 
 of the current list of child nodes becomes inefficient because the decision 
 whether a node exists at a certain revision is done in the DocumentNodeStore 
 and no filtering is done on the MongoDB side.
 So far we figured this would be solved automatically by the MVCC garbage 
 collection, when documents for deleted nodes are removed. However for 
 locations in the repository where nodes are added and deleted again 
 frequently (think of a temp folder), the issue pops up before the GC had a 
 chance to clean up.
 The Document should have an additional field, which is set when the node is 
 deleted in the most recent revision. Based on this field the 
 DocumentNodeStore can limit the query to MongoDB to documents that are not 
 deleted.
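 A rough sketch of such a filtered child lookup with the MongoDB Java driver; the 
 "_deleted" field name and the way children are addressed by an "_id" prefix are 
 assumptions, not the actual document layout:
 {code:java}
 import com.mongodb.BasicDBObject;
 import com.mongodb.DBCollection;
 import com.mongodb.DBCursor;
 import com.mongodb.DBObject;
 
 public class ChildQuerySketch {
 
     // Fetches candidate child documents of a parent path, skipping documents
     // whose (hypothetical) "_deleted" flag marks them as removed in the head revision.
     public static DBCursor findLiveChildren(DBCollection nodes, String parentPathPrefix) {
         DBObject query = new BasicDBObject("_id",
                 new BasicDBObject("$regex", "^" + parentPathPrefix))
                 .append("_deleted", new BasicDBObject("$ne", Boolean.TRUE));
         return nodes.find(query);
     }
 }
 {code}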



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1500) Verify restore to revision on MongoNS

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1500:
---

Priority: Critical  (was: Major)

 Verify restore to revision on MongoNS
 -

 Key: OAK-1500
 URL: https://issues.apache.org/jira/browse/OAK-1500
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Alex Parvulescu
Assignee: Chetan Mehrotra
Priority: Critical
 Fix For: 0.20


 Following the discussion on OAK-1339, the backup will be controlled from 
 Mongo directly, but we still need to verify 2 things:
  - ongoing transactions will be discarded on restore, the head has to point 
 to the latest stable revision
  - as an added benefit, the restore could happen to an older revision (see 
 the sharded setup where a node can get ahead of the others between the moment 
 the backup starts and when it will finish across the board)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1295) Recovery for missing _lastRev updates

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1295:
---

Priority: Critical  (was: Major)

 Recovery for missing _lastRev updates
 -

 Key: OAK-1295
 URL: https://issues.apache.org/jira/browse/OAK-1295
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Chetan Mehrotra
Priority: Critical
 Fix For: 0.20


 The _lastRev fields are periodically written to MongoDB with a background 
 thread.
 This means there may be missing _lastRev updates when an Oak instance crashes.
 There must be a recovery mechanism in place when an Oak instance starts up. 
 MongoMK needs to read the most recent _lastRev for the local cluster id and 
 get
 documents with a _modified timestamp at _lastRev or newer. These are 
 candidates
 for missing _lastRev updates. The _lastRev must only be updated for committed
 non-branch changes.
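 A sketch of how the candidate documents could be selected with the MongoDB Java 
 driver; the field names follow the description above, while the collection handle 
 and the timestamp encoding are assumptions:
 {code:java}
 import com.mongodb.BasicDBObject;
 import com.mongodb.DBCollection;
 import com.mongodb.DBCursor;
 
 public class LastRevRecoverySketch {
 
     // Selects documents modified at or after the last persisted _lastRev of the
     // crashed cluster node; these are the candidates whose _lastRev may be missing.
     public static DBCursor findRecoveryCandidates(DBCollection nodes, long lastRevTimestamp) {
         BasicDBObject query = new BasicDBObject("_modified",
                 new BasicDBObject("$gte", lastRevTimestamp));
         return nodes.find(query);
     }
 }
 {code}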



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1200) occasional test failure in org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1200:
---

Fix Version/s: (was: 0.20)

 occasional test failure in 
 org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest
 --

 Key: OAK-1200
 URL: https://issues.apache.org/jira/browse/OAK-1200
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Affects Versions: 0.12
Reporter: Julian Reschke
Assignee: Dominique Pfister

 Occasionally seen on W7 64bit:
 Tests in error:
   
 testConcurrentMergeGC(org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest)
 : java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 org.apache.jackrabbit.mk.store.NotFoundException: 
 0a9706e6ec0320c779cd6f775910826e79c97f58
 ---
 Test set: org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest
 ---
 Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.377 sec 
  FAILURE!
 testConcurrentMergeGC(org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest)
   Time elapsed: 0.151 sec   ERROR!
 org.apache.jackrabbit.mk.api.MicroKernelException: 
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 org.apache.jackrabbit.mk.store.NotFoundException: 
 0a9706e6ec0320c779cd6f775910826e79c97f58
   at 
 org.apache.jackrabbit.mk.core.MicroKernelImpl.merge(MicroKernelImpl.java:557)
   at 
 org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest.testConcurrentMergeGC(DefaultRevisionStoreTest.java:193)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
   at 
 org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:172)
   at 
 org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:104)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:70)
 Caused by: java.lang.RuntimeException: 
 java.util.concurrent.ExecutionException: 
 org.apache.jackrabbit.mk.store.NotFoundException: 
 0a9706e6ec0320c779cd6f775910826e79c97f58
   at 
 org.apache.jackrabbit.mk.model.TraversingNodeDiffHandler.childNodeChanged(TraversingNodeDiffHandler.java:53)
   at 
 org.apache.jackrabbit.mk.model.DiffBuilder$1.childNodeChanged(DiffBuilder.java:182)
   at 
 

[jira] [Updated] (OAK-1200) occasional test failure in org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest

2014-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1200:
---

Priority: Minor  (was: Major)

 occasional test failure in 
 org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest
 --

 Key: OAK-1200
 URL: https://issues.apache.org/jira/browse/OAK-1200
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Affects Versions: 0.12
Reporter: Julian Reschke
Assignee: Dominique Pfister
Priority: Minor

 Occasionally seen on W7 64bit:
 Tests in error:
   
 testConcurrentMergeGC(org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest)
 : java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 org.apache.jackrabbit.mk.store.NotFoundException: 
 0a9706e6ec0320c779cd6f775910826e79c97f58
 ---
 Test set: org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest
 ---
 Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.377 sec 
  FAILURE!
 testConcurrentMergeGC(org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest)
   Time elapsed: 0.151 sec   ERROR!
 org.apache.jackrabbit.mk.api.MicroKernelException: 
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 org.apache.jackrabbit.mk.store.NotFoundException: 
 0a9706e6ec0320c779cd6f775910826e79c97f58
   at 
 org.apache.jackrabbit.mk.core.MicroKernelImpl.merge(MicroKernelImpl.java:557)
   at 
 org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest.testConcurrentMergeGC(DefaultRevisionStoreTest.java:193)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
   at 
 org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:172)
   at 
 org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:104)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:70)
 Caused by: java.lang.RuntimeException: 
 java.util.concurrent.ExecutionException: 
 org.apache.jackrabbit.mk.store.NotFoundException: 
 0a9706e6ec0320c779cd6f775910826e79c97f58
   at 
 org.apache.jackrabbit.mk.model.TraversingNodeDiffHandler.childNodeChanged(TraversingNodeDiffHandler.java:53)
   at 
 org.apache.jackrabbit.mk.model.DiffBuilder$1.childNodeChanged(DiffBuilder.java:182)
   at 
 

[jira] [Updated] (OAK-1174) Inconsistent handling of invalid names

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1174:
---

Priority: Minor  (was: Major)

 Inconsistent handling of invalid names
 --

 Key: OAK-1174
 URL: https://issues.apache.org/jira/browse/OAK-1174
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: jcr
Reporter: Michael Dürig
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 0.20


 Passing an invalid name to a JCR method might or might not throw a 
 {{RepositoryException}} depending on whether name re-mappings exist or not:
 {code}
 session.itemExists(/jcr:cont]ent);
 {code}
 returns {{false}} if no name re-mappings exist but throws a 
 {{RepositoryException}} otherwise. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1414) Copying a large subtrees does not scale as expected in the number of copied nodes on document node stores

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1414:
---

Assignee: Michael Dürig  (was: Marcel Reutegger)

 Copying a large subtrees does not scale as expected in the number of copied 
 nodes on document node stores
 -

 Key: OAK-1414
 URL: https://issues.apache.org/jira/browse/OAK-1414
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Michael Dürig
Assignee: Michael Dürig
Priority: Critical
 Fix For: 0.20


 {{org.apache.jackrabbit.oak.jcr.LargeOperationIT#largeCopy}} does not scale 
 linearly with the number of copied nodes in the case of the document node store:
 {code}
 quotients: 1.1803765754650501, 0.33168970806199227, 0.8160753821577706, 
 1.4619226777374157, 1.9342440796702938
 {code}
 Note how copying trees with 32768 and with 131072 nodes each took super-linearly 
 longer than copying the respective previous number of nodes. See 
 also OAK-1413 on how to read these numbers. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-168) Basic JCR VersionManager support

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-168:
--

Fix Version/s: (was: 0.20)
   0.19

 Basic JCR VersionManager support
 

 Key: OAK-168
 URL: https://issues.apache.org/jira/browse/OAK-168
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: jcr
Reporter: Jukka Zitting
Assignee: Marcel Reutegger
 Fix For: 0.19


 Versioning is a highly useful feature for many applications, so we definitely 
 should support that in Oak.
 We could start by adding a basic JCR VersionManager implementation that 
 simply implements checkin operations by copying content from a node to the 
 respective version history under {{/jcr:system/jcr:versionStorage}}.
 The next step would then be figuring out whether we want to expose such an 
 operation directly in the Oak API, or if a separate versioning plugin and an 
 associated validator for changes in the {{/jcr:system/jcr:versionStorage}} 
 subtree works better.
 Based on that we can then proceed to implement more of the JCR versioning 
 features.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-650) Move SegmentNodeStoreBranch.RebaseDiff to utility package

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-650:
--

Fix Version/s: (was: 0.20)
   0.19

 Move SegmentNodeStoreBranch.RebaseDiff to utility package
 -

 Key: OAK-650
 URL: https://issues.apache.org/jira/browse/OAK-650
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 0.19


 Moving the class to a utility package would allow KernelNodeStore to reuse it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-650) Move SegmentNodeStoreBranch.RebaseDiff to utility package

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth resolved OAK-650.
---

Resolution: Fixed

 Move SegmentNodeStoreBranch.RebaseDiff to utility package
 -

 Key: OAK-650
 URL: https://issues.apache.org/jira/browse/OAK-650
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 0.19


 Moving the class to a utility package would allow KernelNodeStore to reuse it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1550) Incorrect handling of addExistingNode conflict in NodeStore

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1550:
---

Assignee: Marcel Reutegger

 Incorrect handling of addExistingNode conflict in NodeStore
 ---

 Key: OAK-1550
 URL: https://issues.apache.org/jira/browse/OAK-1550
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Michael Dürig
Assignee: Marcel Reutegger
 Fix For: 0.20


 {{MicroKernel.rebase}} says: addExistingNode: a node has been added that is 
 different from a node of the same name that has been added to the trunk.
 However, the {{NodeStore}} implementation
 # throws a {{CommitFailedException}} itself instead of annotating the 
 conflict,
 # also treats equal children with the same name as a conflict. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-322) XPath to SQL-2 transformation error for //*[0] (same name sibling index)

2014-03-19 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-322.
-

   Resolution: Won't Fix
Fix Version/s: (was: 0.20)

 XPath to SQL-2 transformation error for //*[0] (same name sibling index)
 --

 Key: OAK-322
 URL: https://issues.apache.org/jira/browse/OAK-322
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core, query
Reporter: Alex Parvulescu
Priority: Minor

 Test class XPathAxisTest: tests #testIndex0Descendant to 
 #testIndex3Descendant.
 The used xpath query is 
 {noformat}
 /jcr:root/testroot//*[0]
 {noformat}
 which apparently gets transformed into a bad SQL2 query, that yields the 
 following error at execute time:
 {noformat}
 Caused by: java.text.ParseException: Query: select [jcr:path], [jcr:score], * 
 from [nt:base] as a where 0(*)is not null and isdescendantnode(a, 
 '/testroot'); expected: NOT, (
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1560) Expose RevisionGCMBean for supported NodeStores

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1560:
---

Assignee: Michael Dürig

 Expose RevisionGCMBean for supported NodeStores 
 

 Key: OAK-1560
 URL: https://issues.apache.org/jira/browse/OAK-1560
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk, segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 0.20


 {{NodeStore}} implementations should expose the {{RevisionGCMBean}} in order 
 to be interoperable with {{RepositoryManagementMBean}}. See OAK-1160.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-319) Similar (rep:similar) support

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-319:
--

Priority: Critical  (was: Major)

 Similar (rep:similar) support
 -

 Key: OAK-319
 URL: https://issues.apache.org/jira/browse/OAK-319
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: jcr, query
Reporter: Alex Parvulescu
Assignee: Thomas Mueller
Priority: Critical
 Fix For: 0.20


 Test class is: SimilarQueryTest
 Trace:
 {noformat}
 Caused by: java.text.ParseException: Query:
 //*[rep:similar(.(*), '/testroot')]; expected: rep:similar is not supported
   at 
 org.apache.jackrabbit.oak.query.XPathToSQL2Converter.getSyntaxError(XPathToSQL2Converter.java:963)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1458) Out-of-process text extraction for better protection agains JVM/memory/CPU problems

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1458:
---

Priority: Major  (was: Critical)

 Out-of-process text extraction for better protection agains JVM/memory/CPU 
 problems
 ---

 Key: OAK-1458
 URL: https://issues.apache.org/jira/browse/OAK-1458
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Michael Marth
Assignee: Jukka Zitting
  Labels: production, resilience
 Fix For: 1.1


 This is a tracking / collection bug for solving problems with text extraction 
 of
 documents (very large, broken, malicious, etc), causing JVM crashes, memory
 problems, excessive CPU usage.
 The basic TIKA feature to enable this fix is TIKA-416 [1]
 [1] https://issues.apache.org/jira/browse/TIKA-416



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1458) Out-of-process text extraction for better protection agains JVM/memory/CPU problems

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1458:
---

Assignee: Jukka Zitting

 Out-of-process text extraction for better protection agains JVM/memory/CPU 
 problems
 ---

 Key: OAK-1458
 URL: https://issues.apache.org/jira/browse/OAK-1458
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Michael Marth
Assignee: Jukka Zitting
Priority: Critical
  Labels: production, resilience
 Fix For: 1.1


 This is a tracking / collection bug for solving problems with text extraction 
 of
 documents (very large, broken, malicious, etc), causing JVM crashes, memory
 problems, excessive CPU usage.
 The basic TIKA feature to enable this fix is TIKA-416 [1]
 [1] https://issues.apache.org/jira/browse/TIKA-416



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1571) OSGi Configuration for Query Limits

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1571:
---

Assignee: Thomas Mueller

 OSGi Configuration for Query Limits
 ---

 Key: OAK-1571
 URL: https://issues.apache.org/jira/browse/OAK-1571
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Thomas Mueller
Assignee: Thomas Mueller
Priority: Minor
 Fix For: 0.20


 In OAK-1395 we added limits for long running queries. The limits can be 
 changed with system properties; now we should make the settings changeable 
 using OSGi configuration.
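 A hedged sketch of an OSGi activate method that could read such limits from the 
 component configuration and fall back to the existing system properties; the OSGi 
 property keys and system property names used here are assumptions, not the final keys:
 {code:java}
 import java.util.Map;
 
 public class QueryLimitConfigSketch {
 
     private long limitInMemory;
     private long limitReads;
 
     // Called by the OSGi framework with the component configuration map.
     // Falls back to the system properties used so far when no OSGi value is set.
     protected void activate(Map<String, Object> config) {
         limitInMemory = readLong(config, "query.limitInMemory",
                 "oak.queryLimitInMemory", Long.MAX_VALUE);
         limitReads = readLong(config, "query.limitReads",
                 "oak.queryLimitReads", Long.MAX_VALUE);
     }
 
     private static long readLong(Map<String, Object> config, String osgiKey,
             String systemProperty, long defaultValue) {
         Object value = config.get(osgiKey);
         if (value != null) {
             return Long.parseLong(value.toString());
         }
         return Long.getLong(systemProperty, defaultValue);
     }
 }
 {code}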



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1554) Clarify behaviour for BlobStore api for invalid arguments

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1554:
---

Assignee: Chetan Mehrotra

 Clarify behaviour for BlobStore api for invalid arguments
 -

 Key: OAK-1554
 URL: https://issues.apache.org/jira/browse/OAK-1554
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 0.20


 Currently a testcase in the AbstractBlobStore test has the following check
 {code:java}
 @Test
 public void testEmptyIdentifier() throws Exception {
     byte[] data = new byte[1];
     assertEquals(-1, store.readBlob("", 0, data, 0, 1));
     assertEquals(0, store.getBlobLength(""));
 }
 {code}
 This fails for a DataStore based BlobStore as the blobIds are invalid.
 So we need to clarify the behaviour around the handling of invalid blobIds: should 
 they return a result or throw an exception?
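 For illustration only, one possible contract would be to reject syntactically 
 invalid ids up front rather than returning partial results; whether the real API 
 should throw or return -1/0 is exactly what this issue asks, so this is just an 
 assumption:
 {code:java}
 public class BlobIdValidationSketch {
 
     // Rejects ids that cannot possibly resolve to a stored blob.
     public static void checkBlobId(String blobId) {
         if (blobId == null || blobId.isEmpty()) {
             throw new IllegalArgumentException("Invalid blob id: " + blobId);
         }
     }
 }
 {code}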



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-262) Query: support pseudo properties like jcr:score() and rep:excerpt()

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-262:
--

Priority: Critical  (was: Major)

 Query: support pseudo properties like jcr:score() and rep:excerpt()
 ---

 Key: OAK-262
 URL: https://issues.apache.org/jira/browse/OAK-262
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Thomas Mueller
Assignee: Thomas Mueller
Priority: Critical
 Fix For: 0.20


 The query engine currently only supports properties that are stored within a 
 node. It doesn't currently support pseudo-properties that are provided by 
 an index, for example jcr:score() and rep:excerpt(). 
 To support such properties, I suggest changing the Cursor interface to 
 return an IndexRow (a new class that can return such pseudo-properties as 
 well as the path) instead of just the path.
 This may also speed up queries that don't require loading the node itself (if 
 access rights can be checked efficiently or don't need to be checked for a 
 given query).
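 A minimal sketch of what such an {{IndexRow}} could look like; the method names 
 and return types are illustrative only, not the final API:
 {code:java}
 public interface IndexRow {
 
     // Path of the matching node, as the Cursor returns today.
     String getPath();
 
     // Pseudo-property computed by the index, e.g. "jcr:score" or "rep:excerpt",
     // or null if the index does not provide it.
     String getValue(String pseudoPropertyName);
 }
 {code}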



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (OAK-1553) More sophisticated conflict resolution when concurrently adding nodes

2014-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-1553:
--

Assignee: Michael Dürig

 More sophisticated conflict resolution when concurrently adding nodes
 -

 Key: OAK-1553
 URL: https://issues.apache.org/jira/browse/OAK-1553
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mk, mongomk, segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig

 {{MicroKernel.rebase}} currently specifies: addExistingNode: A node has been 
 added that is different from a node of the same name that has been added to 
 the trunk.
 This is somewhat troublesome in the case where the same node with different 
 but non-conflicting child items is added concurrently:
 {code}
 f.add("fo").add("u1"); commit();
 f.add("fo").add("u2"); commit();
 {code}
 currently fails with a conflict because {{fo}} is not the same node in 
 both cases. See discussion http://markmail.org/message/flst4eiqvbp4gi3z
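 A sketch of the refined rule, under the assumption that node states can be 
 compared by value (the Map here only stands in for whatever node-state 
 representation the store actually uses):
 {code:java}
 import java.util.Map;
 
 public class AddExistingNodeSketch {
 
     // Only genuinely different nodes of the same name need conflict handling;
     // identical subtrees added on both sides can be merged silently.
     public static boolean isConflict(Map<String, Object> addedHere,
             Map<String, Object> addedOnTrunk) {
         return !addedHere.equals(addedOnTrunk);
     }
 }
 {code}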



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1454) Warn on huge multi-valued properties

2014-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-1454.


   Resolution: Fixed
Fix Version/s: (was: 0.20)
   0.19

Fixed at http://svn.apache.org/r1579286

 Warn on huge multi-valued properties
 

 Key: OAK-1454
 URL: https://issues.apache.org/jira/browse/OAK-1454
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: jcr
Reporter: Michael Marth
Assignee: Michael Dürig
Priority: Trivial
  Labels: production, resilience
 Fix For: 0.19


 It is an explicit design non-goal of Oak to support huge amounts of values in 
 multi-valued properties. If a user still tries to create these we should at 
 least log a WARN to indicate that this usage of MVPs is wrong.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1517) Creating an property index definition with an illegal namespace should fail

2014-03-19 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-1517:


Component/s: core

 Creating an property index definition with an illegal namespace should fail
 ---

 Key: OAK-1517
 URL: https://issues.apache.org/jira/browse/OAK-1517
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Tobias Bocanegra





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1317) OutOfMemoryError in org.apache.jackrabbit.mk.json.JsopBuilder.toString

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1317:
---

Assignee: Michael Dürig

 OutOfMemoryError in org.apache.jackrabbit.mk.json.JsopBuilder.toString
 --

 Key: OAK-1317
 URL: https://issues.apache.org/jira/browse/OAK-1317
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: commons
Affects Versions: 0.14
Reporter: Tobias Bocanegra
Assignee: Michael Dürig
Priority: Minor
 Fix For: 0.20


 running {{org.apache.jackrabbit.oak.jcr.security.user.UserImportTest}} 
 produces a OOME in the jsop builder:
 {noformat}
 java.lang.OutOfMemoryError: Java heap space
   at java.util.Arrays.copyOfRange(Arrays.java:3209)
   at java.lang.String.<init>(String.java:215)
   at java.lang.StringBuilder.toString(StringBuilder.java:430)
   at 
 org.apache.jackrabbit.mk.json.JsopBuilder.toString(JsopBuilder.java:220)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder$AddNode.asDiff(CommitBuilder.java:312)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.doCommit(CommitBuilder.java:157)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.doCommit(CommitBuilder.java:93)
   at 
 org.apache.jackrabbit.mk.core.MicroKernelImpl.commit(MicroKernelImpl.java:516)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStore.commit(KernelNodeStore.java:248)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.persist(KernelNodeStoreBranch.java:117)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.persist(KernelNodeStoreBranch.java:43)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$InMemory.merge(AbstractNodeStoreBranch.java:466)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge(AbstractNodeStoreBranch.java:275)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.merge(KernelNodeStoreBranch.java:142)
   at 
 org.apache.jackrabbit.oak.kernel.KernelRootBuilder.merge(KernelRootBuilder.java:148)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStore.merge(KernelNodeStore.java:163)
   at 
 org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:44)
   at org.apache.jackrabbit.oak.Oak.createContentRepository(Oak.java:470)
   at org.apache.jackrabbit.oak.jcr.Jcr.createRepository(Jcr.java:164)
   at 
 org.apache.jackrabbit.oak.jcr.security.user.AbstractImportTest.before(AbstractImportTest.java:87)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1268) Add support for composite authorization setup

2014-03-19 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-1268:


Fix Version/s: 1.1

 Add support for composite authorization setup
 -

 Key: OAK-1268
 URL: https://issues.apache.org/jira/browse/OAK-1268
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Reporter: angela
Assignee: angela
 Fix For: 1.1


 it would be desirable to not only be able to replace the default 
 authorization setup but also to be able to plug in custom implementations 
 such that they can be combined with existing implementation(s).
 while this is more or less straightforward for the access control management 
 part, we need some further refinement that allows defining how different 
 permission providers will interact.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1317) OutOfMemoryError in org.apache.jackrabbit.mk.json.JsopBuilder.toString

2014-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940675#comment-13940675
 ] 

Michael Dürig commented on OAK-1317:


[~tripod], do you have more information, can you still reproduce it, do you 
have a heap dump?

Otherwise I assume this is a known limitation of MicroKernel based NodeStores 
where a large commit needs to be serialised into a string. The native NodeStore 
implementations do not suffer from this. If this is the case I suggest we 
resolve this as won't fix.



 OutOfMemoryError in org.apache.jackrabbit.mk.json.JsopBuilder.toString
 --

 Key: OAK-1317
 URL: https://issues.apache.org/jira/browse/OAK-1317
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: commons
Affects Versions: 0.14
Reporter: Tobias Bocanegra
Assignee: Michael Dürig
Priority: Minor
 Fix For: 0.20


 running {{org.apache.jackrabbit.oak.jcr.security.user.UserImportTest}} 
 produces a OOME in the jsop builder:
 {noformat}
 java.lang.OutOfMemoryError: Java heap space
   at java.util.Arrays.copyOfRange(Arrays.java:3209)
   at java.lang.String.<init>(String.java:215)
   at java.lang.StringBuilder.toString(StringBuilder.java:430)
   at 
 org.apache.jackrabbit.mk.json.JsopBuilder.toString(JsopBuilder.java:220)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder$AddNode.asDiff(CommitBuilder.java:312)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.doCommit(CommitBuilder.java:157)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.doCommit(CommitBuilder.java:93)
   at 
 org.apache.jackrabbit.mk.core.MicroKernelImpl.commit(MicroKernelImpl.java:516)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStore.commit(KernelNodeStore.java:248)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.persist(KernelNodeStoreBranch.java:117)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.persist(KernelNodeStoreBranch.java:43)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch$InMemory.merge(AbstractNodeStoreBranch.java:466)
   at 
 org.apache.jackrabbit.oak.spi.state.AbstractNodeStoreBranch.merge(AbstractNodeStoreBranch.java:275)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.merge(KernelNodeStoreBranch.java:142)
   at 
 org.apache.jackrabbit.oak.kernel.KernelRootBuilder.merge(KernelRootBuilder.java:148)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStore.merge(KernelNodeStore.java:163)
   at 
 org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:44)
   at org.apache.jackrabbit.oak.Oak.createContentRepository(Oak.java:470)
   at org.apache.jackrabbit.oak.jcr.Jcr.createRepository(Jcr.java:164)
   at 
 org.apache.jackrabbit.oak.jcr.security.user.AbstractImportTest.before(AbstractImportTest.java:87)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1559) Expose BlobGCMBean for supported NodeStores

2014-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940682#comment-13940682
 ] 

Michael Dürig commented on OAK-1559:


[~chetanm] as you might have a better overview of the whole DI story, could you 
make a suggestion how implementations could best expose such an MBean? Note 
that {{BlobGC}} is a default implementation that could be used.

 Expose BlobGCMBean for supported NodeStores
 ---

 Key: OAK-1559
 URL: https://issues.apache.org/jira/browse/OAK-1559
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk, segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 0.20


 {{NodeStore}} implementations should expose the {{BlobGCMBean}} in order to 
 be interoperable with {{RepositoryManagementMBean}}. See OAK-1160.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1564) ClockTest on Windows fails

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1564:
---

Assignee: Julian Reschke

 ClockTest on Windows fails
 --

 Key: OAK-1564
 URL: https://issues.apache.org/jira/browse/OAK-1564
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
 Environment: JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51
 NUMBER_OF_PROCESSORS=2
 OS=Windows_NT
 PROCESSOR_ARCHITECTURE=AMD64
 PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 46 Stepping 6, GenuineIntel
 PROCESSOR_LEVEL=6
 PROCESSOR_REVISION=2e06
Reporter: Michael Dürig
Assignee: Julian Reschke
 Fix For: 0.20


 {{ClockTest.testClockDrift}} fails on Windows:
 {noformat}
 junit.framework.AssertionFailedError: unexpected drift: 26 (limit 20)
 at junit.framework.Assert.fail(Assert.java:50)
 at junit.framework.Assert.assertTrue(Assert.java:20)
 at org.apache.jackrabbit.oak.stats.ClockTest.testClockDrift(ClockTest.java:72)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1511) S3 DataStore in Oak (including GC)

2014-03-19 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-1511:


Component/s: core

 S3 DataStore in Oak (including GC)
 --

 Key: OAK-1511
 URL: https://issues.apache.org/jira/browse/OAK-1511
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Reporter: Thomas Mueller
Assignee: Chetan Mehrotra
Priority: Blocker
 Fix For: 0.19


 Oak can use Jackrabbit 2.x DataStores, but we need to make sure the S3 
 datastore really works with Oak (including garbage collection, a test case, 
 documentation)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1503) Improve implementation of Node.rename

2014-03-19 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-1503:


Fix Version/s: 1.1

 Improve implementation of Node.rename
 -

 Key: OAK-1503
 URL: https://issues.apache.org/jira/browse/OAK-1503
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: jcr
Reporter: Michael Dürig
Priority: Minor
 Fix For: 1.1


 OAK-770 introduced a preliminary implementation for {{Node.rename}} on top of 
 the Oak API. I.e. by moving the node and conditionally reordering again. 
 We should think about whether we want to have deeper support for renaming 
 nodes also from the Oak API.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1575) DocumentNS: Implement refined conflict resolution for addExistingNode conflicts

2014-03-19 Thread JIRA
Michael Dürig created OAK-1575:
--

 Summary: DocumentNS: Implement refined conflict resolution for 
addExistingNode conflicts
 Key: OAK-1575
 URL: https://issues.apache.org/jira/browse/OAK-1575
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: mongomk
Reporter: Michael Dürig
Assignee: Marcel Reutegger
 Fix For: 0.20


Implement refined conflict resolution for addExistingNode conflicts as defined 
in the parent issue for the document NS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1576) SegementMK: Implement refined conflict resolution for addExistingNode conflicts

2014-03-19 Thread JIRA
Michael Dürig created OAK-1576:
--

 Summary: SegementMK: Implement refined conflict resolution for 
addExistingNode conflicts
 Key: OAK-1576
 URL: https://issues.apache.org/jira/browse/OAK-1576
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Jukka Zitting
 Fix For: 0.20


Implement refined conflict resolution for addExistingNode conflicts as defined 
in the parent issue for the SegementMK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1503) Improve implementation of Node.rename

2014-03-19 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-1503:


Fix Version/s: (was: 1.1)

 Improve implementation of Node.rename
 -

 Key: OAK-1503
 URL: https://issues.apache.org/jira/browse/OAK-1503
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: jcr
Reporter: Michael Dürig
Priority: Minor

 OAK-770 introduced a preliminary implementation for {{Node.rename}} on top of 
 the Oak API. I.e. by moving the node and conditionally reordering again. 
 We should think about whether we want to have deeper support for renaming 
 nodes also from the Oak API.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1577) H2MK: Implement refined conflict resolution for addExistingNode conflicts

2014-03-19 Thread JIRA
Michael Dürig created OAK-1577:
--

 Summary: H2MK: Implement refined conflict resolution for 
addExistingNode conflicts
 Key: OAK-1577
 URL: https://issues.apache.org/jira/browse/OAK-1577
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: mk
Reporter: Michael Dürig
Assignee: Stefan Guggisberg


Implement refined conflict resolution for addExistingNode conflicts as defined 
in the parent issue for the H2 MK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1553) More sophisticated conflict resolution when concurrently adding nodes

2014-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1553:
---

Fix Version/s: 0.20

 More sophisticated conflict resolution when concurrently adding nodes
 -

 Key: OAK-1553
 URL: https://issues.apache.org/jira/browse/OAK-1553
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mk, mongomk, segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 0.20


 {{MicroKernel.rebase}} currently specifies: addExistingNode: A node has been 
 added that is different from a node of the same name that has been added to 
 the trunk.
 This is somewhat troublesome in the case where the same node with different 
 but non-conflicting child items is added concurrently:
 {code}
 f.add("fo").add("u1"); commit();
 f.add("fo").add("u2"); commit();
 {code}
 currently fails with a conflict because {{fo}} is not the same node in 
 both cases. See discussion http://markmail.org/message/flst4eiqvbp4gi3z



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1200) occasional test failure in org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest

2014-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940555#comment-13940555
 ] 

Michael Dürig commented on OAK-1200:


Minor as H2 is not the default back-end any more

 occasional test failure in 
 org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest
 --

 Key: OAK-1200
 URL: https://issues.apache.org/jira/browse/OAK-1200
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Affects Versions: 0.12
Reporter: Julian Reschke
Assignee: Dominique Pfister
Priority: Minor

 Occasionally seen on W7 64bit:
 Tests in error:
   
 testConcurrentMergeGC(org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest)
 : java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 org.apache.jackrabbit.mk.store.NotFoundException: 
 0a9706e6ec0320c779cd6f775910826e79c97f58
 ---
 Test set: org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest
 ---
 Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.377 sec 
  FAILURE!
 testConcurrentMergeGC(org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest)
   Time elapsed: 0.151 sec   ERROR!
 org.apache.jackrabbit.mk.api.MicroKernelException: 
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 org.apache.jackrabbit.mk.store.NotFoundException: 
 0a9706e6ec0320c779cd6f775910826e79c97f58
   at 
 org.apache.jackrabbit.mk.core.MicroKernelImpl.merge(MicroKernelImpl.java:557)
   at 
 org.apache.jackrabbit.mk.store.DefaultRevisionStoreTest.testConcurrentMergeGC(DefaultRevisionStoreTest.java:193)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
   at 
 org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:172)
   at 
 org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:104)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:70)
 Caused by: java.lang.RuntimeException: 
 java.util.concurrent.ExecutionException: 
 org.apache.jackrabbit.mk.store.NotFoundException: 
 0a9706e6ec0320c779cd6f775910826e79c97f58
   at 
 org.apache.jackrabbit.mk.model.TraversingNodeDiffHandler.childNodeChanged(TraversingNodeDiffHandler.java:53)
   at 
 

[jira] [Updated] (OAK-1541) Concurrent creation of users chokes on intermediate paths

2014-03-19 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-1541:


Fix Version/s: 1.0

 Concurrent creation of users chokes on intermediate paths
 -

 Key: OAK-1541
 URL: https://issues.apache.org/jira/browse/OAK-1541
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Tobias Bocanegra
Assignee: Michael Dürig
 Fix For: 1.0


 The tests in 
 {{org.apache.jackrabbit.oak.security.authentication.ldap.LdapLoginTestBase#testConcurrentLogin}}
  fail because of a conflict when creating the intermediate nodes, although 
 they are identical for all concurrent updates.
 strangely enough, the error is completely different:
 Cannot create user/group: Intermediate folders must be of type
 rep:AuthorizableFolder.
 this is reported on the addExistingNode parent, because the User validator 
 traverses up the following tree:
 /rep:security/rep:authorizables/rep:users/:conflict/addExistingNode/u/us/user-5



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1458) Out-of-process text extraction for better protection agains JVM/memory/CPU problems

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1458:
---

Fix Version/s: (was: 0.20)
   1.1

 Out-of-process text extraction for better protection agains JVM/memory/CPU 
 problems
 ---

 Key: OAK-1458
 URL: https://issues.apache.org/jira/browse/OAK-1458
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Michael Marth
Assignee: Jukka Zitting
Priority: Critical
  Labels: production, resilience
 Fix For: 1.1


 This is a tracking / collection bug for solving problems with text extraction 
 of
 documents (very large, broken, malicious, etc), causing JVM crashes, memory
 problems, excessive CPU usage.
 The basic TIKA feature to enable this fix is TIKA-416 [1]
 [1] https://issues.apache.org/jira/browse/TIKA-416



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-850) Degrade gracefully when :childOrder is out of sync

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-850:
--

Assignee: Jukka Zitting

 Degrade gracefully when :childOrder is out of sync
 --

 Key: OAK-850
 URL: https://issues.apache.org/jira/browse/OAK-850
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Marcel Reutegger
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 0.20


 One of the possible side effects of OAK-846 (before the fix) was a 
 :childOrder which is out of sync with the actual child nodes. This usually 
 cannot happen, because the modifications to the :childOrder and to the child 
 nodes are atomic. Oak should degrade gracefully if it happens anyway and not 
 fail with an unspecified exception.
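 A sketch of the graceful-degradation idea: when exposing the child order, keep 
 only names that still exist and append any children missing from :childOrder, 
 instead of failing (this illustrates the intent only, not the actual Oak code):
 {code:java}
 import java.util.ArrayList;
 import java.util.LinkedHashSet;
 import java.util.List;
 import java.util.Set;
 
 public class ChildOrderSketch {
 
     // Reconciles a possibly stale :childOrder value with the actual child names:
     // drops ordered names that no longer exist and appends children the order misses.
     public static List<String> effectiveOrder(List<String> childOrder,
             Set<String> actualChildren) {
         List<String> result = new ArrayList<>();
         Set<String> seen = new LinkedHashSet<>();
         for (String name : childOrder) {
             if (actualChildren.contains(name) && seen.add(name)) {
                 result.add(name);
             }
         }
         for (String name : actualChildren) {
             if (seen.add(name)) {
                 result.add(name);
             }
         }
         return result;
     }
 }
 {code}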



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1512) Jackrabbit 2.x DataStore GC for SegmentNodeStore

2014-03-19 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-1512:


Component/s: core

 Jackrabbit 2.x DataStore GC for SegmentNodeStore
 

 Key: OAK-1512
 URL: https://issues.apache.org/jira/browse/OAK-1512
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Reporter: Thomas Mueller
Assignee: Jukka Zitting
 Fix For: 0.19


 The SegmentNodeStore (TarNodeStore) needs to be able to work with legacy data 
 store, and needs to be able to run garbage collection.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1349) No warning on unclosed sessions in Oak

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1349:
---

Fix Version/s: (was: 0.20)

 No warning on unclosed sessions in Oak
 --

 Key: OAK-1349
 URL: https://issues.apache.org/jira/browse/OAK-1349
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: jcr
Affects Versions: 0.14
Reporter: Konrad Windszus
Priority: Minor

 In contrast to Jackrabbit 2 Oak does not log any warning on unclosed 
 sessions. Its SessionImpl class does not implement the finalize method (which 
 is good), but also not any other method of detecting unclosed sessions (e.g. 
 PhantomReferences). Please also compare with JCR-2768.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1349) No warning on unclosed sessions in Oak

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1349:
---

Priority: Minor  (was: Major)

 No warning on unclosed sessions in Oak
 --

 Key: OAK-1349
 URL: https://issues.apache.org/jira/browse/OAK-1349
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: jcr
Affects Versions: 0.14
Reporter: Konrad Windszus
Priority: Minor
 Fix For: 0.20


 In contrast to Jackrabbit 2 Oak does not log any warning on unclosed 
 sessions. Its SessionImpl class does not implement the finalize method (which 
 is good), but also not any other method of detecting unclosed sessions (e.g. 
 PhantomReferences). Please also compare with JCR-2768.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1541) Concurrent creation of users chokes on intermediate paths

2014-03-19 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940639#comment-13940639
 ] 

angela commented on OAK-1541:
-

i tentatively added 1.0 as this is something that will for sure cause troubles.

 Concurrent creation of users chokes on intermediate paths
 -

 Key: OAK-1541
 URL: https://issues.apache.org/jira/browse/OAK-1541
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Tobias Bocanegra
Assignee: Michael Dürig
 Fix For: 1.0


 The tests in 
 {{org.apache.jackrabbit.oak.security.authentication.ldap.LdapLoginTestBase#testConcurrentLogin}}
  fail because of a conflict when creating the intermediate nodes, although 
 they are identical for all concurrent updates.
 strange enough, the error is completely different:
 Cannot create user/group: Intermediate folders must be of type
 rep:AuthorizableFolder.
 this is reported on the addExistingNode parent, because the User validator 
 traverses up the following tree:
 /rep:security/rep:authorizables/rep:users/:conflict/addExistingNode/u/us/user-5



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-850) Degrade gracefully when :childOrder is out of sync

2014-03-19 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved OAK-850.
---

   Resolution: Fixed
Fix Version/s: (was: 0.20)
   0.19

Done in http://svn.apache.org/r1579349.

When reading, the child order list is automatically synced with the actual set 
of child nodes by removing all names that are no longer present and adding (to 
the end of the list in undefined order) all new names that are not yet ordered.

When writing (i.e. adding, removing or ordering child nodes), the child order 
list is updated to match the above order to prevent the list from diverging 
further over time.

 Degrade gracefully when :childOrder is out of sync
 --

 Key: OAK-850
 URL: https://issues.apache.org/jira/browse/OAK-850
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Marcel Reutegger
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 0.19


 One of the possible side effects of OAK-846 (before the fix) was a 
 :childOrder, which is out of sync with the actual child nodes. This usually 
 cannot happen, because both modification to the :childOrder and the child 
 nodes are atomic. Oak should degrade gracefully if it happens anyway and not 
 fail with an unspecified exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1559) Expose BlobGCMBean for supported NodeStores

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1559:
---

Assignee: Michael Dürig

 Expose BlobGCMBean for supported NodeStores
 ---

 Key: OAK-1559
 URL: https://issues.apache.org/jira/browse/OAK-1559
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk, segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 0.20


 {{NodeStore}} implementations should expose the {{BlobGCMBean}} in order to 
 be interoperable with {{RepositoryManagementMBean}}. See OAK-1160.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1555) Inefficient node state diff with old revisions

2014-03-19 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-1555.
---

Resolution: Fixed

Implemented a journal like diff cache on MongoDB with a capped collection: 
http://svn.apache.org/r1579378

 Inefficient node state diff with old revisions
 --

 Key: OAK-1555
 URL: https://issues.apache.org/jira/browse/OAK-1555
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 0.18
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
 Fix For: 0.19


 As part of OAK-1429 a number of improvements were implemented but one issue 
 remains when a node state diff is done with older revisions.
 The DocumentNodeStore keeps a modified timestamp on each document and updates 
 it whenever the document is explicitly modified or implicitly when a 
 descendant document is updated. With this timestamp the store is able to tell 
 when a subtree was last modified. The diff implementation gets inefficient 
 when the two revisions to compare are older than the modified timestamp of a 
 document tree. In this case the implementation tends to read many more nodes 
 than were actually modified because it cannot exactly tell when a subtree was 
 modified.
 Improvements from OAK-1394 and OAK-1429 helped quite a bit because the diff 
 cache in the DocumentNodeStore is pro-actively filled by the commits. 
 However, in addition to the observation listeners that perform diffs there is 
 also the async index update, which periodically performs a diff. Those diff 
 usually go further back in time and are the ones that are inefficient and 
 also have a negative impact on the diff cache.
 A solution to this problem was already discussed in a recent oak conf call. 
 The DocumentNodeStore keeps a journal of commits and uses it to answer node 
 state diff calls. With this journal the store should also be able to 
 efficiently diff across multiple commits. A number of options were discusses, 
 whether to implemented the journal with a local file or a capped MongoDB 
 collection.
 Ideas for alternative solutions are welcome...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1579) ConcurrentAddNodesClusterIT.addNodes2() fails on travis

2014-03-19 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-1579:
-

 Summary: ConcurrentAddNodesClusterIT.addNodes2() fails on travis
 Key: OAK-1579
 URL: https://issues.apache.org/jira/browse/OAK-1579
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.19
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
 Fix For: 0.20


The test fails frequently on travis with:

{noformat}
javax.jcr.RepositoryException: OakMerge0002: OakMerge0002: Conflicting 
concurrent change. Update operation failed: key: 0:/ update 
{_collisions.r144dbf0beb7-0-1=CONTAINS_MAP_ENTRY false, _modified=SET 
279051881, _collisions.r144dbf0ba51-0-1=CONTAINS_MAP_ENTRY false, 
_collisions.r144dbf0ba9c-0-1=CONTAINS_MAP_ENTRY false, 
_collisions.r144dbf0ba7b-0-1=CONTAINS_MAP_ENTRY false, 
_collisions.r144dbf0bebd-0-1=CONTAINS_MAP_ENTRY false, 
_revisions.r144dbf0fbba-0-1=SET_MAP_ENTRY c-r144dbf0fcb4-5-1, 
_revisions.r144dbf0ba7b-0-1=SET_MAP_ENTRY c-r144dbf0fcb4-1-1, 
_collisions.r144dbf0fbba-0-1=CONTAINS_MAP_ENTRY false, 
_revisions.r144dbf0bebd-0-1=SET_MAP_ENTRY c-r144dbf0fcb4-4-1, 
_revisions.r144dbf0ba51-0-1=SET_MAP_ENTRY c-r144dbf0fcb4-0-1, 
_revisions.r144dbf0ba9c-0-1=SET_MAP_ENTRY c-r144dbf0fcb4-2-1, 
_revisions.r144dbf0beb7-0-1=SET_MAP_ENTRY c-r144dbf0fcb4-3-1} (retries 4, 7365 
ms)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (OAK-1578) Configurable size of capped collection used by MongoDiffCache

2014-03-19 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-1578:
-

 Summary: Configurable size of capped collection used by 
MongoDiffCache
 Key: OAK-1578
 URL: https://issues.apache.org/jira/browse/OAK-1578
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor


The size of the capped collection used by the MongoDiffCache introduced in 
OAK-1555 is currently not configurable but hard-coded to 256 MB. This should be 
configurable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1579) ConcurrentAddNodesClusterIT.addNodes2() fails on travis

2014-03-19 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940933#comment-13940933
 ] 

Marcel Reutegger commented on OAK-1579:
---

So far I'm not able to reproduce it, but will retry later on a different 
machine...

I disabled the test for now: http://svn.apache.org/r1579385

 ConcurrentAddNodesClusterIT.addNodes2() fails on travis
 ---

 Key: OAK-1579
 URL: https://issues.apache.org/jira/browse/OAK-1579
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Affects Versions: 0.19
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
 Fix For: 0.20


 The test fails frequently on travis with:
 {noformat}
 javax.jcr.RepositoryException: OakMerge0002: OakMerge0002: Conflicting 
 concurrent change. Update operation failed: key: 0:/ update 
 {_collisions.r144dbf0beb7-0-1=CONTAINS_MAP_ENTRY false, _modified=SET 
 279051881, _collisions.r144dbf0ba51-0-1=CONTAINS_MAP_ENTRY false, 
 _collisions.r144dbf0ba9c-0-1=CONTAINS_MAP_ENTRY false, 
 _collisions.r144dbf0ba7b-0-1=CONTAINS_MAP_ENTRY false, 
 _collisions.r144dbf0bebd-0-1=CONTAINS_MAP_ENTRY false, 
 _revisions.r144dbf0fbba-0-1=SET_MAP_ENTRY c-r144dbf0fcb4-5-1, 
 _revisions.r144dbf0ba7b-0-1=SET_MAP_ENTRY c-r144dbf0fcb4-1-1, 
 _collisions.r144dbf0fbba-0-1=CONTAINS_MAP_ENTRY false, 
 _revisions.r144dbf0bebd-0-1=SET_MAP_ENTRY c-r144dbf0fcb4-4-1, 
 _revisions.r144dbf0ba51-0-1=SET_MAP_ENTRY c-r144dbf0fcb4-0-1, 
 _revisions.r144dbf0ba9c-0-1=SET_MAP_ENTRY c-r144dbf0fcb4-2-1, 
 _revisions.r144dbf0beb7-0-1=SET_MAP_ENTRY c-r144dbf0fcb4-3-1} (retries 4, 
 7365 ms)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1550) Incorrect handling of addExistingNode conflict in NodeStore

2014-03-19 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940947#comment-13940947
 ] 

Marcel Reutegger commented on OAK-1550:
---

The missing conflict annotation part is already tracked with OAK-1185.

 Incorrect handling of addExistingNode conflict in NodeStore
 ---

 Key: OAK-1550
 URL: https://issues.apache.org/jira/browse/OAK-1550
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Michael Dürig
Assignee: Marcel Reutegger
 Fix For: 0.20


 {{MicroKernel.rebase}} says: addExistingNode: node has been added that is 
 different from a node of them same name that has been added to the trunk.
 However, the {{NodeStore}} implementation
 # throws a {{CommitFailedException}} itself instead of annotating the 
 conflict,
 # also treats the equal childs with the same name as a conflict. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (OAK-484) QueryResultTest failing because JCR 1.0 6.6.4.2 Document Order is not supported

2014-03-19 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth reassigned OAK-484:
-

Assignee: Alex Parvulescu

[~alex.parvulescu], docu issue?

 QueryResultTest failing because JCR 1.0 6.6.4.2 Document Order is not 
 supported
 -

 Key: OAK-484
 URL: https://issues.apache.org/jira/browse/OAK-484
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: jcr
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
 Fix For: 0.20


 The {{QueryResultTest}} has some tests that are failing:
   {{testGetSize}} Wrong size of NodeIterator in result expected:55 but 
 was:3
   {{testGetSizeOrderByScore}} Wrong size of NodeIterator in result 
 expected:55 but was:3
   {{testIteratorNext}} Wrong size of NodeIterator in result expected:55 but 
 was:3
   {{testIteratorNextOrderByScore}} Wrong size of NodeIterator in result 
 expected:55 but was:3
   {{testSkip}} Wrong node after skip() expected:0 but was:1
   {{testSkipOrderByProperty}} Wrong node after skip() expected:2 but 
 was:10
   
 Full trace here 
 {code}
 org.apache.jackrabbit.core.query.QueryResultTest
 testGetSize(org.apache.jackrabbit.core.query.QueryResultTest)
 junit.framework.AssertionFailedError: Wrong size of NodeIterator in result 
 expected:55 but was:3
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:134)
   at 
 org.apache.jackrabbit.core.query.QueryResultTest.testGetSize(QueryResultTest.java:47)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at 
 org.apache.jackrabbit.test.AbstractJCRTest.run(AbstractJCRTest.java:456)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at 
 org.apache.jackrabbit.test.ConcurrentTestSuite.access$001(ConcurrentTestSuite.java:29)
   at 
 org.apache.jackrabbit.test.ConcurrentTestSuite$2.run(ConcurrentTestSuite.java:67)
   at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(Unknown 
 Source)
   at java.lang.Thread.run(Thread.java:662)
 testGetSizeOrderByScore(org.apache.jackrabbit.core.query.QueryResultTest)
 junit.framework.AssertionFailedError: Wrong size of NodeIterator in result 
 expected:55 but was:3
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:134)
   at 
 org.apache.jackrabbit.core.query.QueryResultTest.testGetSizeOrderByScore(QueryResultTest.java:60)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at 
 org.apache.jackrabbit.test.AbstractJCRTest.run(AbstractJCRTest.java:456)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at 
 org.apache.jackrabbit.test.ConcurrentTestSuite.access$001(ConcurrentTestSuite.java:29)
   at 
 org.apache.jackrabbit.test.ConcurrentTestSuite$2.run(ConcurrentTestSuite.java:67)
   at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(Unknown 
 Source)
   at java.lang.Thread.run(Thread.java:662)
 

[jira] [Commented] (OAK-1553) More sophisticated conflict resolution when concurrently adding nodes

2014-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13941059#comment-13941059
 ] 

Michael Dürig commented on OAK-1553:


Maybe a better approach is not to specify the exact merging behaviour in the 
contract but leave the details to the implementations:

{noformat}
addExistingNode: A node has been added that can't be merged with a node of them 
same name 
that has been added to the trunk. How and whether merging takes place is up to 
the implementation. 
Merging must not cause data to be lost however.
{noformat}

At least document and segment nodes stores and {{AbstractRebaseDiff}} should 
still implement the algorithm sketch in my previous comment.

 More sophisticated conflict resolution when concurrently adding nodes
 -

 Key: OAK-1553
 URL: https://issues.apache.org/jira/browse/OAK-1553
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mk, mongomk, segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 0.20


 {{MicroKernel.rebase}} currently specifies: addExistingNode: A node has been 
 added that is different from a node of them same name that has been added to 
 the trunk.
 This is somewhat troublesome in the case where the same node with different 
 but non conflicting child items is added concurrently:
 {code}
 f.add(fo).add(u1); commit();
 f.add(fo).add(u2); commit();
 {code}
 currently fails with a conflict because {{fo}} is not the same node for the 
 both cases. See discussion http://markmail.org/message/flst4eiqvbp4gi3z



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1553) More sophisticated conflict resolution when concurrently adding nodes

2014-03-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1553:
---

Attachment: OAK-1553.patch

Attached patch implements the more sophisticated merging for 
{{AbstractRebaseDiff}} (in memory implementation) as sketched above. All test 
cases still pass.

 More sophisticated conflict resolution when concurrently adding nodes
 -

 Key: OAK-1553
 URL: https://issues.apache.org/jira/browse/OAK-1553
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mk, mongomk, segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 0.20

 Attachments: OAK-1553.patch


 {{MicroKernel.rebase}} currently specifies: addExistingNode: A node has been 
 added that is different from a node of them same name that has been added to 
 the trunk.
 This is somewhat troublesome in the case where the same node with different 
 but non conflicting child items is added concurrently:
 {code}
 f.add(fo).add(u1); commit();
 f.add(fo).add(u2); commit();
 {code}
 currently fails with a conflict because {{fo}} is not the same node for the 
 both cases. See discussion http://markmail.org/message/flst4eiqvbp4gi3z



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1574) AbstractRebaseDiff: Implement refined conflict resolution for addExistingNode conflicts

2014-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13941068#comment-13941068
 ] 

Michael Dürig commented on OAK-1574:


Parent issue has a patch implementing this

 AbstractRebaseDiff: Implement refined conflict resolution for addExistingNode 
 conflicts
 ---

 Key: OAK-1574
 URL: https://issues.apache.org/jira/browse/OAK-1574
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 0.20


 Implement refined conflict resolution for addExistingNode conflicts as 
 defined in the parent issue for the {{AbstractRebaseDiff}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (OAK-1553) More sophisticated conflict resolution when concurrently adding nodes

2014-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13941067#comment-13941067
 ] 

Michael Dürig edited comment on OAK-1553 at 3/19/14 10:10 PM:
--

Attached patch implements the more sophisticated merging for 
{{AbstractRebaseDiff}} (in memory implementation) as sketched above. All test 
cases still pass and {{UserManagerImplTest#testConcurrentCreateUser}} from 
OAK-1541 also passes.


was (Author: mduerig):
Attached patch implements the more sophisticated merging for 
{{AbstractRebaseDiff}} (in memory implementation) as sketched above. All test 
cases still pass.

 More sophisticated conflict resolution when concurrently adding nodes
 -

 Key: OAK-1553
 URL: https://issues.apache.org/jira/browse/OAK-1553
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mk, mongomk, segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 0.20

 Attachments: OAK-1553.patch


 {{MicroKernel.rebase}} currently specifies: addExistingNode: A node has been 
 added that is different from a node of them same name that has been added to 
 the trunk.
 This is somewhat troublesome in the case where the same node with different 
 but non conflicting child items is added concurrently:
 {code}
 f.add(fo).add(u1); commit();
 f.add(fo).add(u2); commit();
 {code}
 currently fails with a conflict because {{fo}} is not the same node for the 
 both cases. See discussion http://markmail.org/message/flst4eiqvbp4gi3z



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (OAK-1553) More sophisticated conflict resolution when concurrently adding nodes

2014-03-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13941067#comment-13941067
 ] 

Michael Dürig edited comment on OAK-1553 at 3/19/14 10:14 PM:
--

Attached patch implements the more sophisticated merging for 
{{AbstractRebaseDiff}} (in memory implementation) as sketched above. All test 
cases still pass and {{UserManagerImplTest#testConcurrentCreateUser}} from 
OAK-1541 also passes.

[~jukkaz], [~mreutegg] do you think this can also be implemented for segment, 
document node store?


was (Author: mduerig):
Attached patch implements the more sophisticated merging for 
{{AbstractRebaseDiff}} (in memory implementation) as sketched above. All test 
cases still pass and {{UserManagerImplTest#testConcurrentCreateUser}} from 
OAK-1541 also passes.

 More sophisticated conflict resolution when concurrently adding nodes
 -

 Key: OAK-1553
 URL: https://issues.apache.org/jira/browse/OAK-1553
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mk, mongomk, segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 0.20

 Attachments: OAK-1553.patch


 {{MicroKernel.rebase}} currently specifies: addExistingNode: A node has been 
 added that is different from a node of them same name that has been added to 
 the trunk.
 This is somewhat troublesome in the case where the same node with different 
 but non conflicting child items is added concurrently:
 {code}
 f.add(fo).add(u1); commit();
 f.add(fo).add(u2); commit();
 {code}
 currently fails with a conflict because {{fo}} is not the same node for the 
 both cases. See discussion http://markmail.org/message/flst4eiqvbp4gi3z



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1512) Jackrabbit 2.x DataStore GC for SegmentNodeStore

2014-03-19 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved OAK-1512.


Resolution: Fixed

Done in revision 1579537 by adding a {{SegmentTracker.collectBlobReferences()}} 
method that will find and report all external blob references within all 
accessible segments.

 Jackrabbit 2.x DataStore GC for SegmentNodeStore
 

 Key: OAK-1512
 URL: https://issues.apache.org/jira/browse/OAK-1512
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Reporter: Thomas Mueller
Assignee: Jukka Zitting
 Fix For: 0.19


 The SegmentNodeStore (TarNodeStore) needs to be able to work with legacy data 
 store, and needs to be able to run garbage collection.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (OAK-1511) S3 DataStore in Oak (including GC)

2014-03-19 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13941397#comment-13941397
 ] 

Chetan Mehrotra commented on OAK-1511:
--

Documentation would be covered by OAK-1543

 S3 DataStore in Oak (including GC)
 --

 Key: OAK-1511
 URL: https://issues.apache.org/jira/browse/OAK-1511
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Reporter: Thomas Mueller
Assignee: Chetan Mehrotra
Priority: Blocker
 Fix For: 0.19


 Oak can use Jackrabbit 2.x DataStores, but we need to make sure the S3 
 datastore really works with Oak (including garbage collection, a test case, 
 documentation)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-1566) ArrayIndexOutOfBoundsException in Segment.getRefId()

2014-03-19 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated OAK-1566:
---

Priority: Critical  (was: Major)

 ArrayIndexOutOfBoundsException in Segment.getRefId()
 

 Key: OAK-1566
 URL: https://issues.apache.org/jira/browse/OAK-1566
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Critical
 Fix For: 0.20


 It looks like there is some SegmentMK bug that causes the 
 {{Segment.getRefId()}} to throw an {{ArrayIndexOutOfBoundsException}} in some 
 fairly rare corner cases.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (OAK-1511) S3 DataStore in Oak (including GC)

2014-03-19 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-1511.
--

Resolution: Fixed

 S3 DataStore in Oak (including GC)
 --

 Key: OAK-1511
 URL: https://issues.apache.org/jira/browse/OAK-1511
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Reporter: Thomas Mueller
Assignee: Chetan Mehrotra
Priority: Blocker
 Fix For: 0.19


 Oak can use Jackrabbit 2.x DataStores, but we need to make sure the S3 
 datastore really works with Oak (including garbage collection, a test case, 
 documentation)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (OAK-631) SegmentMK: Implement garbage collection

2014-03-19 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated OAK-631:
--

Fix Version/s: (was: 0.19)
   0.20

 SegmentMK: Implement garbage collection
 ---

 Key: OAK-631
 URL: https://issues.apache.org/jira/browse/OAK-631
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 0.20


 The SegmentMK architecture allows segment-level garbage collection by looking 
 at the cross-segment reference graph. A simple garbage collector would list 
 all segments, remove those that are reachable through the graph starting from 
 one of the journals, and finally remove all remaining segments.
 Some don't collect flag or minimum lifetime information should be added to 
 segments that are currently being written so they don't get removed before 
 they get committed and thus referenced by a journal.



--
This message was sent by Atlassian JIRA
(v6.2#6252)