[jira] [Resolved] (OAK-2827) [oak-blob-cloud] Test Failures: Add joda-time dependency explicitly with definite version range

2015-06-01 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain resolved OAK-2827.

Resolution: Fixed

Resolving, since with the latest fix the error has not been seen for close to 2 weeks.

> [oak-blob-cloud] Test Failures: Add joda-time dependency explicitly with 
> definite version range
> ---
>
> Key: OAK-2827
> URL: https://issues.apache.org/jira/browse/OAK-2827
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Reporter: Amit Jain
>Assignee: Amit Jain
>  Labels: CI, Jenkins
> Fix For: 1.3.0
>
>
> The AWS SDK jar {{com.amazonaws:aws-java-sdk-core}} has an open-range dependency 
> on joda-time, [2.2,), which causes the build to fail.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.4:process (default) 
> on project oak-blob-cloud: Failed to resolve dependencies for one or more 
> projects in the reactor. Reason: No versions are present in the repository 
> for the artifact with a range [2.2,)
> [ERROR] joda-time:joda-time:jar:null
> [ERROR]
> [ERROR] from the specified remote repositories:
> [ERROR] Nexus (http://repository.apache.org/snapshots, releases=false, 
> snapshots=true),
> [ERROR] central (http://repo.maven.apache.org/maven2, releases=true, 
> snapshots=false)
> [ERROR] Path to dependency:
> [ERROR] 1) org.apache.jackrabbit:oak-blob-cloud:bundle:1.4-SNAPSHOT
> [ERROR] 2) com.amazonaws:aws-java-sdk:jar:1.9.11
> [ERROR] 3) com.amazonaws:aws-java-sdk-support:jar:1.9.11
> [ERROR] 4) com.amazonaws:aws-java-sdk-core:jar:1.9.11
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :oak-blob-cloud
> Build step 'Invoke top-level Maven targets' marked build as failure
> [FINDBUGS] Skipping publisher since build result is FAILURE
> Recording test results
> {noformat}
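The fix named in the issue title amounts to declaring joda-time explicitly with a definite version in the oak-blob-cloud POM, so Maven does not need to resolve the AWS SDK's open [2.2,) range against the repositories. A minimal sketch; the version shown is illustrative, not necessarily the one committed:

```xml
<!-- Illustrative sketch: pin joda-time explicitly so the open [2.2,)
     range inherited from aws-java-sdk-core never has to be resolved.
     The version below is an example, not necessarily the committed one. -->
<dependency>
  <groupId>joda-time</groupId>
  <artifactId>joda-time</artifactId>
  <version>2.8</version>
</dependency>
```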



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2921) RDB: Scalability tests for large read/write scenarios

2015-06-01 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14568648#comment-14568648
 ] 

Amit Jain commented on OAK-2921:


Added read/write tests with http://svn.apache.org/r1683052 on trunk.

Will report the observed numbers after running the tests.

> RDB: Scalability tests for large read/write scenarios
> -
>
> Key: OAK-2921
> URL: https://issues.apache.org/jira/browse/OAK-2921
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: run
>Reporter: Amit Jain
>Assignee: Amit Jain
>  Labels: rdb, scalability
>
> Create scalability tests to test out the performance of the read/writes for 
> RDB DocumentStore on a large repository.





[jira] [Updated] (OAK-2944) Support merge iterator for union order by queries

2015-06-01 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2944:
---
Attachment: OAK-2618.patch

[~tmueller]

Could you please review the patch? It builds on top of the patch for OAK-2943.

> Support merge iterator for union order by queries
> -
>
> Key: OAK-2944
> URL: https://issues.apache.org/jira/browse/OAK-2944
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: query
>Reporter: Amit Jain
>Assignee: Amit Jain
> Fix For: 1.3.0
>
> Attachments: OAK-2618.patch
>
>
> Currently, union queries with an order by clause (including the optimized OR 
> XPath case) scan a much larger set when returning results, even when the 
> individual queries are sorted by the index itself. 
> We should have a merge iterator, which would scan a much smaller set since the 
> individual queries are already sorted.





[jira] [Updated] (OAK-2943) Support measure for union queries

2015-06-01 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2943:
---
Attachment: OAK-2943.patch

[~tmueller] 

Could you please review the attached patch?

> Support measure for union queries
> -
>
> Key: OAK-2943
> URL: https://issues.apache.org/jira/browse/OAK-2943
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: query
>Reporter: Amit Jain
>Assignee: Amit Jain
> Fix For: 1.3.0
>
> Attachments: OAK-2943.patch
>
>
> Currently, the {{measure}} for union queries does not take into consideration 
> the optimizations done by the query engine and returns the scan count for 
> each individual query.
> It would be better if the optimizations and the actual iterations on the 
> underlying left/right queries were taken into account.





[jira] [Created] (OAK-2944) Support merge iterator for union order by queries

2015-06-01 Thread Amit Jain (JIRA)
Amit Jain created OAK-2944:
--

 Summary: Support merge iterator for union order by queries
 Key: OAK-2944
 URL: https://issues.apache.org/jira/browse/OAK-2944
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: query
Reporter: Amit Jain
Assignee: Amit Jain


Currently, union queries with an order by clause (including the optimized OR 
XPath case) scan a much larger set when returning results, even when the 
individual queries are sorted by the index itself.
We should have a merge iterator, which would scan a much smaller set since the 
individual queries are already sorted.
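The merge described above can be sketched as a lazy two-way merge of two already-sorted iterators: each branch of the union stays in index order, and the merged stream is produced element by element without materializing and re-sorting the full result set. This is a hypothetical illustration, not Oak's actual implementation:

```java
import java.util.Comparator;
import java.util.Iterator;
import java.util.NoSuchElementException;

/**
 * Illustrative merge iterator (not Oak's implementation): lazily combines
 * two iterators that are each already sorted by the given comparator,
 * emitting elements in global sorted order.
 */
class MergeSortedIterator<T> implements Iterator<T> {
    private final Iterator<T> left, right;
    private final Comparator<? super T> cmp;
    private T nextLeft, nextRight; // look-ahead element from each branch

    MergeSortedIterator(Iterator<T> left, Iterator<T> right, Comparator<? super T> cmp) {
        this.left = left;
        this.right = right;
        this.cmp = cmp;
        nextLeft = left.hasNext() ? left.next() : null;
        nextRight = right.hasNext() ? right.next() : null;
    }

    @Override
    public boolean hasNext() {
        return nextLeft != null || nextRight != null;
    }

    @Override
    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        T result;
        // Emit the smaller of the two look-ahead elements, then refill it.
        if (nextRight == null || (nextLeft != null && cmp.compare(nextLeft, nextRight) <= 0)) {
            result = nextLeft;
            nextLeft = left.hasNext() ? left.next() : null;
        } else {
            result = nextRight;
            nextRight = right.hasNext() ? right.next() : null;
        }
        return result;
    }
}
```

Since each branch is consumed on demand, the merged iterator stops reading from the underlying queries as soon as the caller stops asking for rows.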





[jira] [Created] (OAK-2943) Support measure for union queries

2015-06-01 Thread Amit Jain (JIRA)
Amit Jain created OAK-2943:
--

 Summary: Support measure for union queries
 Key: OAK-2943
 URL: https://issues.apache.org/jira/browse/OAK-2943
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: query
Reporter: Amit Jain
Assignee: Amit Jain
 Fix For: 1.3.0


Currently, the {{measure}} for union queries does not take into consideration 
the optimizations done by the query engine and returns the scan count for each 
individual query.
It would be better if the optimizations and the actual iterations on the 
underlying left/right queries were taken into account.





[jira] [Updated] (OAK-1963) Expose URL for Blob source

2015-06-01 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-1963:
-
Description: 
In certain scenarios it is desirable, for performance reasons, to have direct 
access to the Blob source. 

For example, when using a FileDataStore, direct access to the native file 
system path of the blob (if it is not stored in chunks) is more useful than the 
repository path: native tools don't understand repository paths, whereas a file 
system path can be passed to them directly for processing the binary.

Another use case is the ability to expose a signed S3 URL, which would allow 
direct access to the binary content.

  was:In some situations direct file system path is more useful than repository 
path e.g. native tools don't understand repository path, instead file system 
path can be passed directly to native tools for processing binary.


> Expose URL for Blob source 
> ---
>
> Key: OAK-1963
> URL: https://issues.apache.org/jira/browse/OAK-1963
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Pralaypati Ta
>Assignee: Chetan Mehrotra
>  Labels: datastore
> Fix For: 1.4
>
>
> In certain scenarios it is desirable, for performance reasons, to have direct 
> access to the Blob source. 
> For example, when using a FileDataStore, direct access to the native file 
> system path of the blob (if it is not stored in chunks) is more useful than 
> the repository path: native tools don't understand repository paths, whereas 
> a file system path can be passed to them directly for processing the binary.
> Another use case is the ability to expose a signed S3 URL, which would allow 
> direct access to the binary content.





[jira] [Updated] (OAK-1963) Expose URL for Blob source

2015-06-01 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-1963:
-
Summary: Expose URL for Blob source   (was: Expose file system path of Blob)

> Expose URL for Blob source 
> ---
>
> Key: OAK-1963
> URL: https://issues.apache.org/jira/browse/OAK-1963
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Pralaypati Ta
>Assignee: Chetan Mehrotra
>  Labels: datastore
> Fix For: 1.4
>
>
> In some situations a direct file system path is more useful than a repository 
> path, e.g. native tools don't understand repository paths; instead, a file 
> system path can be passed directly to native tools for processing the binary.





[jira] [Updated] (OAK-2933) AccessDenied when modifying transiently moved item with too many ACEs

2015-06-01 Thread Tobias Bocanegra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tobias Bocanegra updated OAK-2933:
--
Assignee: (was: Tobias Bocanegra)

> AccessDenied when modifying transiently moved item with too many ACEs
> -
>
> Key: OAK-2933
> URL: https://issues.apache.org/jira/browse/OAK-2933
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.13
>Reporter: Tobias Bocanegra
>
> If at least the following preconditions are fulfilled, saving a moved item 
> fails with access denied:
> 1. there are more PermissionEntries in the PermissionEntryCache than the 
> configured EagerCacheSize
> 2. a node is moved to a location where the user has write access through a 
> group membership
> 3. a property is added to the transiently moved item
> For example:
> 1. set the *eagerCacheSize* to '0'
> 2. create new group *testgroup* and user *testuser*
> 3. make *testuser* member of *testgroup*
> 4. create nodes {{/testroot/a}} and {{/testroot/a/b}} and {{/testroot/a/c}}
> 5. allow *testgroup* {{rep:write}} on {{/testroot/a}}
> 6. as *testuser* create {{/testroot/a/b/item}} (to verify that the user has 
> write access)
> 7. as *testuser* move {{/testroot/a/b/item}} to {{/testroot/a/c/item}}
> 8. {{save()}} -> works
> 9. as *testuser* move {{/testroot/a/c/item}} back to {{/testroot/a/b/item}} 
> AND add a new property to the transient {{/testroot/a/b/item}}
> 10. {{save()}} -> access denied





[jira] [Commented] (OAK-2933) AccessDenied when modifying transiently moved item with too many ACEs

2015-06-01 Thread Tobias Bocanegra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14568034#comment-14568034
 ] 

Tobias Bocanegra commented on OAK-2933:
---

The problem happens in 
{{PermissionValidator#checkPermissions(ImmutableTree, PropertyState, long)}}, 
where the permission for the added property is checked:

{code}
 isGranted = parentPermission.isGranted(toTest, property);
{code}

The parentPermission still holds the source tree 
{{/testroot/node1/node2/node3}}, which does not have add-property permissions. 
I think MoveAwarePermissionValidator needs to know the parentPermission of both 
the before and after trees.

The MoveAwarePermissionValidator is broken in that it does not use the correct 
TreePermissions.

The PermissionEntryProviderImpl with cache works, because it can look up the 
entries in the wrong tree, i.e. the validator checks add_property on 
{{/testroot/node1/node2/node3}} instead of {{/testroot/node2/destination}}. 
Coincidentally, {{/testroot/node1}} has a rep:write, but the source tree is 
only partially loaded; for example, {{/testroot/node1.hasChild(REP_POLICY)}} 
fails.

I.e. the following change works:
{code}
@@ -134,9 +136,7 @@ class PermissionEntryProviderImpl implements 
PermissionEntryProvider {
 Collection entries = 
pathEntryMap.get(accessControlledTree.getPath());
 return (entries != null) ? entries : 
Collections.emptyList();
 } else {
-return 
(accessControlledTree.hasChild(AccessControlConstants.REP_POLICY)) ?
-loadEntries(accessControlledTree.getPath()) :
-Collections.emptyList();
+return loadEntries(accessControlledTree.getPath());
 }
 }
{code}



> AccessDenied when modifying transiently moved item with too many ACEs
> -
>
> Key: OAK-2933
> URL: https://issues.apache.org/jira/browse/OAK-2933
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.13
>Reporter: Tobias Bocanegra
>Assignee: Tobias Bocanegra
>
> If at least the following preconditions are fulfilled, saving a moved item 
> fails with access denied:
> 1. there are more PermissionEntries in the PermissionEntryCache than the 
> configured EagerCacheSize
> 2. a node is moved to a location where the user has write access through a 
> group membership
> 3. a property is added to the transiently moved item
> For example:
> 1. set the *eagerCacheSize* to '0'
> 2. create new group *testgroup* and user *testuser*
> 3. make *testuser* member of *testgroup*
> 4. create nodes {{/testroot/a}} and {{/testroot/a/b}} and {{/testroot/a/c}}
> 5. allow *testgroup* {{rep:write}} on {{/testroot/a}}
> 6. as *testuser* create {{/testroot/a/b/item}} (to verify that the user has 
> write access)
> 7. as *testuser* move {{/testroot/a/b/item}} to {{/testroot/a/c/item}}
> 8. {{save()}} -> works
> 9. as *testuser* move {{/testroot/a/c/item}} back to {{/testroot/a/b/item}} 
> AND add a new property to the transient {{/testroot/a/b/item}}
> 10. {{save()}} -> access denied





[jira] [Commented] (OAK-2933) AccessDenied when modifying transiently moved item with too many ACEs

2015-06-01 Thread Tobias Bocanegra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14567918#comment-14567918
 ] 

Tobias Bocanegra commented on OAK-2933:
---

This can be reproduced in oak-master by:
- setting the {{EAGER_CACHE_SIZE_PARAM}} to 0 (or by manually setting the 
maxSize to 0 in 
{{org.apache.jackrabbit.oak.security.authorization.permission.PermissionEntryProviderImpl#PermissionEntryProviderImpl}})
- executing 
org.apache.jackrabbit.oak.jcr.security.authorization.SessionMoveTest#testMoveAndAddProperty2

(Note: what is the correct way to execute the JCR tests with different 
repository configurations?)

> AccessDenied when modifying transiently moved item with too many ACEs
> -
>
> Key: OAK-2933
> URL: https://issues.apache.org/jira/browse/OAK-2933
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.13
>Reporter: Tobias Bocanegra
>Assignee: Tobias Bocanegra
>
> If at least the following preconditions are fulfilled, saving a moved item 
> fails with access denied:
> 1. there are more PermissionEntries in the PermissionEntryCache than the 
> configured EagerCacheSize
> 2. a node is moved to a location where the user has write access through a 
> group membership
> 3. a property is added to the transiently moved item
> For example:
> 1. set the *eagerCacheSize* to '0'
> 2. create new group *testgroup* and user *testuser*
> 3. make *testuser* member of *testgroup*
> 4. create nodes {{/testroot/a}} and {{/testroot/a/b}} and {{/testroot/a/c}}
> 5. allow *testgroup* {{rep:write}} on {{/testroot/a}}
> 6. as *testuser* create {{/testroot/a/b/item}} (to verify that the user has 
> write access)
> 7. as *testuser* move {{/testroot/a/b/item}} to {{/testroot/a/c/item}}
> 8. {{save()}} -> works
> 9. as *testuser* move {{/testroot/a/c/item}} back to {{/testroot/a/b/item}} 
> AND add a new property to the transient {{/testroot/a/b/item}}
> 10. {{save()}} -> access denied





[jira] [Assigned] (OAK-2933) AccessDenied when modifying transiently moved item with too many ACEs

2015-06-01 Thread Tobias Bocanegra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tobias Bocanegra reassigned OAK-2933:
-

Assignee: Tobias Bocanegra

> AccessDenied when modifying transiently moved item with too many ACEs
> -
>
> Key: OAK-2933
> URL: https://issues.apache.org/jira/browse/OAK-2933
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.13
>Reporter: Tobias Bocanegra
>Assignee: Tobias Bocanegra
>
> If at least the following preconditions are fulfilled, saving a moved item 
> fails with access denied:
> 1. there are more PermissionEntries in the PermissionEntryCache than the 
> configured EagerCacheSize
> 2. a node is moved to a location where the user has write access through a 
> group membership
> 3. a property is added to the transiently moved item
> For example:
> 1. set the *eagerCacheSize* to '0'
> 2. create new group *testgroup* and user *testuser*
> 3. make *testuser* member of *testgroup*
> 4. create nodes {{/testroot/a}} and {{/testroot/a/b}} and {{/testroot/a/c}}
> 5. allow *testgroup* {{rep:write}} on {{/testroot/a}}
> 6. as *testuser* create {{/testroot/a/b/item}} (to verify that the user has 
> write access)
> 7. as *testuser* move {{/testroot/a/b/item}} to {{/testroot/a/c/item}}
> 8. {{save()}} -> works
> 9. as *testuser* move {{/testroot/a/c/item}} back to {{/testroot/a/b/item}} 
> AND add a new property to the transient {{/testroot/a/b/item}}
> 10. {{save()}} -> access denied





[jira] [Updated] (OAK-2942) IllegalStateException thrown in Segment.pos()

2015-06-01 Thread Michael Dürig (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2942:
---
Fix Version/s: 1.3.3

> IllegalStateException thrown in Segment.pos()
> -
>
> Key: OAK-2942
> URL: https://issues.apache.org/jira/browse/OAK-2942
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Affects Versions: 1.2.2
>Reporter: Francesco Mari
> Fix For: 1.3.3
>
> Attachments: ObservationBusyTest.java
>
>
> When I tried to put Oak under stress to reproduce OAK-2731, I experienced an 
> {{IllegalStateException}} thrown by {{Segment.pos()}}. The full stack trace 
> is the following:
> {noformat}
> java.lang.IllegalStateException
>   at com.google.common.base.Preconditions.checkState(Preconditions.java:134)
>   at org.apache.jackrabbit.oak.plugins.segment.Segment.pos(Segment.java:194)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.Segment.readRecordId(Segment.java:337)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplateId(SegmentNodeState.java:70)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplate(SegmentNodeState.java:79)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:447)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.prepare(SegmentNodeStore.java:446)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.optimisticMerge(SegmentNodeStore.java:471)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.execute(SegmentNodeStore.java:527)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:205)
>   at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:247)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.commit(SessionDelegate.java:341)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:487)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:424)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:268)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:421)
>   at 
> org.apache.jackrabbit.oak.jcr.observation.ObservationBusyTest$1.run(ObservationBusyTest.java:145)
>   ... 6 more
> {noformat}
> In addition, the TarMK flushing thread throws an {{OutOfMemoryError}}:
> {noformat}
> Exception in thread "TarMK flush thread 
> [/var/folders/zw/qns3kln16ld99frxtp263c8cgn/T/junit2925373080495354479], 
> active since Mon Jun 01 18:48:19 CEST 2015, previous max duration 302ms" 
> java.lang.OutOfMemoryError: Java heap space
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.createNewBuffer(SegmentWriter.java:91)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:240)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.flush(FileStore.java:596)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore$1.run(FileStore.java:411)
>   at java.lang.Thread.run(Thread.java:695)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.file.BackgroundThread.run(BackgroundThread.java:70)
> {noformat}
> The attached test case {{ObservationBusyTest.java}} allows me to reproduce 
> the issue consistently.





[jira] [Commented] (OAK-2942) IllegalStateException thrown in Segment.pos()

2015-06-01 Thread Michael Dürig (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14567872#comment-14567872
 ] 

Michael Dürig commented on OAK-2942:


The OOME might be a side effect of OAK-2896. The tar files containing many 
small (< 10k) segments are an indication here. 

The ISE is more concerning. I think we should try to simplify the test case 
such that we can reproduce it on the node store level.
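The bare {{java.lang.IllegalStateException}} with no message in the quoted stack trace below is characteristic of Guava's single-argument {{Preconditions.checkState(boolean)}}. A self-contained sketch of that pattern follows; the bounds check is hypothetical, not {{Segment.pos()}}'s actual logic:

```java
// Standalone re-implementation of the Guava-style precondition pattern.
// The single-argument form throws IllegalStateException without a message,
// which is why the stack trace shows a bare exception.
final class Preconditions {
    static void checkState(boolean expression) {
        if (!expression) {
            throw new IllegalStateException();
        }
    }
}

// Hypothetical segment wrapper, only to illustrate where such a check sits;
// the real Segment.pos() computes offsets into segment data differently.
final class SegmentSketch {
    private final byte[] data;

    SegmentSketch(byte[] data) {
        this.data = data;
    }

    // Reject reads that would fall outside the segment's data.
    int pos(int offset, int length) {
        Preconditions.checkState(offset >= 0 && offset + length <= data.length);
        return offset;
    }
}
```

A check failing this way points at an offset outside the segment's valid range, e.g. a record id resolved against the wrong or a truncated segment.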

> IllegalStateException thrown in Segment.pos()
> -
>
> Key: OAK-2942
> URL: https://issues.apache.org/jira/browse/OAK-2942
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Affects Versions: 1.2.2
>Reporter: Francesco Mari
> Fix For: 1.3.3
>
> Attachments: ObservationBusyTest.java
>
>
> When I tried to put Oak under stress to reproduce OAK-2731, I experienced an 
> {{IllegalStateException}} thrown by {{Segment.pos()}}. The full stack trace 
> is the following:
> {noformat}
> java.lang.IllegalStateException
>   at com.google.common.base.Preconditions.checkState(Preconditions.java:134)
>   at org.apache.jackrabbit.oak.plugins.segment.Segment.pos(Segment.java:194)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.Segment.readRecordId(Segment.java:337)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplateId(SegmentNodeState.java:70)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplate(SegmentNodeState.java:79)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:447)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.prepare(SegmentNodeStore.java:446)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.optimisticMerge(SegmentNodeStore.java:471)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.execute(SegmentNodeStore.java:527)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:205)
>   at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:247)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.commit(SessionDelegate.java:341)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:487)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:424)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:268)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:421)
>   at 
> org.apache.jackrabbit.oak.jcr.observation.ObservationBusyTest$1.run(ObservationBusyTest.java:145)
>   ... 6 more
> {noformat}
> In addition, the TarMK flushing thread throws an {{OutOfMemoryError}}:
> {noformat}
> Exception in thread "TarMK flush thread 
> [/var/folders/zw/qns3kln16ld99frxtp263c8cgn/T/junit2925373080495354479], 
> active since Mon Jun 01 18:48:19 CEST 2015, previous max duration 302ms" 
> java.lang.OutOfMemoryError: Java heap space
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.createNewBuffer(SegmentWriter.java:91)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:240)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.flush(FileStore.java:596)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore$1.run(FileStore.java:411)
>   at java.lang.Thread.run(Thread.java:695)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.file.BackgroundThread.run(BackgroundThread.java:70)
> {noformat}
> The attached test case {{ObservationBusyTest.java}} allows me to reproduce 
> the issue consistently.





[jira] [Updated] (OAK-1842) ISE: "Unexpected value record type: f2" is thrown when FileBlobStore is used

2015-06-01 Thread Michael Dürig (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1842:
---
Assignee: Francesco Mari

> ISE: "Unexpected value record type: f2" is thrown when FileBlobStore is used
> 
>
> Key: OAK-1842
> URL: https://issues.apache.org/jira/browse/OAK-1842
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.0
>Reporter: Konrad Windszus
>Assignee: Francesco Mari
> Fix For: 1.3.0
>
>
> The stacktrace of the call shows something like
> {code}
> 20.05.2014 11:13:07.428 *ERROR* [OsgiInstallerImpl] 
> com.adobe.granite.installer.factory.packages.impl.PackageTransformer Error 
> while processing install task.
> java.lang.IllegalStateException: Unexpected value record type: f2
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.length(SegmentBlob.java:101)
> at 
> org.apache.jackrabbit.oak.plugins.value.BinaryImpl.getSize(BinaryImpl.java:74)
> at 
> org.apache.jackrabbit.oak.jcr.session.PropertyImpl.getLength(PropertyImpl.java:435)
> at 
> org.apache.jackrabbit.oak.jcr.session.PropertyImpl.getLength(PropertyImpl.java:376)
> at 
> org.apache.jackrabbit.vault.packaging.impl.JcrPackageImpl.getPackage(JcrPackageImpl.java:324)
> {code}
> The blob store was configured correctly and according to the log also 
> correctly initialized
> {code}
> 20.05.2014 11:11:07.029 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService 
> Initializing SegmentNodeStore with BlobStore 
> [org.apache.jackrabbit.oak.spi.blob.FileBlobStore@7e3dec43]
> 20.05.2014 11:11:07.029 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService Component 
> still not activated. Ignoring the initialization call
> 20.05.2014 11:11:07.077 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK opened: 
> crx-quickstart/repository/segmentstore (mmap=true)
> {code}
> Under which circumstances can the length within the SegmentBlob be invalid?
> This only happens if a File Blob Store is configured 
> (http://jackrabbit.apache.org/oak/docs/osgi_config.html). If a file datastore 
> is used, there is no such exception.





[jira] [Commented] (OAK-2731) NPE when calling Event.getInfo()

2015-06-01 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14567597#comment-14567597
 ] 

Francesco Mari commented on OAK-2731:
-

The error experienced while running {{ObservationBusyTest}} is described in 
OAK-2942.

> NPE when calling Event.getInfo()
> 
>
> Key: OAK-2731
> URL: https://issues.apache.org/jira/browse/OAK-2731
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.1.6
>Reporter: Dominique Pfister
>  Labels: observation
> Fix For: 1.3.0, 1.2.3, 1.0.15
>
> Attachments: OAK-2731.txt, ObservationBusyTest.java
>
>
> On a very busy site, we're observing an NPE in the code that should gather 
> information about a JCR event for our custom event handler. 





[jira] [Updated] (OAK-2942) IllegalStateException thrown in Segment.pos()

2015-06-01 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-2942:

Attachment: ObservationBusyTest.java

> IllegalStateException thrown in Segment.pos()
> -
>
> Key: OAK-2942
> URL: https://issues.apache.org/jira/browse/OAK-2942
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Affects Versions: 1.2.2
>Reporter: Francesco Mari
> Attachments: ObservationBusyTest.java
>
>
> When I tried to put Oak under stress to reproduce OAK-2731, I experienced an 
> {{IllegalStateException}} thrown by {{Segment.pos()}}. The full stack trace 
> is the following:
> {noformat}
> java.lang.IllegalStateException
>   at com.google.common.base.Preconditions.checkState(Preconditions.java:134)
>   at org.apache.jackrabbit.oak.plugins.segment.Segment.pos(Segment.java:194)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.Segment.readRecordId(Segment.java:337)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplateId(SegmentNodeState.java:70)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplate(SegmentNodeState.java:79)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:447)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.prepare(SegmentNodeStore.java:446)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.optimisticMerge(SegmentNodeStore.java:471)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.execute(SegmentNodeStore.java:527)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:205)
>   at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:247)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.commit(SessionDelegate.java:341)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:487)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:424)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:268)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:421)
>   at 
> org.apache.jackrabbit.oak.jcr.observation.ObservationBusyTest$1.run(ObservationBusyTest.java:145)
>   ... 6 more
> {noformat}
> In addition, the TarMK flushing thread throws an {{OutOfMemoryError}}:
> {noformat}
> Exception in thread "TarMK flush thread 
> [/var/folders/zw/qns3kln16ld99frxtp263c8cgn/T/junit2925373080495354479], 
> active since Mon Jun 01 18:48:19 CEST 2015, previous max duration 302ms" 
> java.lang.OutOfMemoryError: Java heap space
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.createNewBuffer(SegmentWriter.java:91)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:240)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore.flush(FileStore.java:596)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore$1.run(FileStore.java:411)
>   at java.lang.Thread.run(Thread.java:695)
>   at 
> org.apache.jackrabbit.oak.plugins.segment.file.BackgroundThread.run(BackgroundThread.java:70)
> {noformat}
> The attached test case {{ObservationBusyTest.java}} allows me to reproduce 
> the issue consistently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-2942) IllegalStateException thrown in Segment.pos()

2015-06-01 Thread Francesco Mari (JIRA)
Francesco Mari created OAK-2942:
---

 Summary: IllegalStateException thrown in Segment.pos()
 Key: OAK-2942
 URL: https://issues.apache.org/jira/browse/OAK-2942
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Affects Versions: 1.2.2
Reporter: Francesco Mari
 Attachments: ObservationBusyTest.java

When I tried to put Oak under stress to reproduce OAK-2731, I experienced an 
{{IllegalStateException}} thrown by {{Segment.pos()}}. The full stack trace is 
the following:

{noformat}
java.lang.IllegalStateException
  at com.google.common.base.Preconditions.checkState(Preconditions.java:134)
  at org.apache.jackrabbit.oak.plugins.segment.Segment.pos(Segment.java:194)
  at 
org.apache.jackrabbit.oak.plugins.segment.Segment.readRecordId(Segment.java:337)
  at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplateId(SegmentNodeState.java:70)
  at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.getTemplate(SegmentNodeState.java:79)
  at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:447)
  at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.prepare(SegmentNodeStore.java:446)
  at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.optimisticMerge(SegmentNodeStore.java:471)
  at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.execute(SegmentNodeStore.java:527)
  at 
org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:205)
  at org.apache.jackrabbit.oak.core.MutableRoot.commit(MutableRoot.java:247)
  at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.commit(SessionDelegate.java:341)
  at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:487)
  at 
org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:424)
  at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:268)
  at 
org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:421)
  at 
org.apache.jackrabbit.oak.jcr.observation.ObservationBusyTest$1.run(ObservationBusyTest.java:145)
  ... 6 more
{noformat}

In addition, the TarMK flushing thread throws an {{OutOfMemoryError}}:

{noformat}
Exception in thread "TarMK flush thread 
[/var/folders/zw/qns3kln16ld99frxtp263c8cgn/T/junit2925373080495354479], 
active since Mon Jun 01 18:48:19 CEST 2015, previous max duration 302ms" 
java.lang.OutOfMemoryError: Java heap space
  at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.createNewBuffer(SegmentWriter.java:91)
  at 
org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.flush(SegmentWriter.java:240)
  at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore.flush(FileStore.java:596)
  at 
org.apache.jackrabbit.oak.plugins.segment.file.FileStore$1.run(FileStore.java:411)
  at java.lang.Thread.run(Thread.java:695)
  at 
org.apache.jackrabbit.oak.plugins.segment.file.BackgroundThread.run(BackgroundThread.java:70)
{noformat}

The attached test case {{ObservationBusyTest.java}} allows me to reproduce 
the issue consistently.





[jira] [Commented] (OAK-1842) ISE: "Unexpected value record type: f2" is thrown when FileBlobStore is used

2015-06-01 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14567509#comment-14567509
 ] 

Francesco Mari commented on OAK-1842:
-

[~mduerig], I can look into it. Please assign the issue to me.

> ISE: "Unexpected value record type: f2" is thrown when FileBlobStore is used
> 
>
> Key: OAK-1842
> URL: https://issues.apache.org/jira/browse/OAK-1842
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: blob
>Affects Versions: 1.0
>Reporter: Konrad Windszus
> Fix For: 1.3.0
>
>
> The stacktrace of the call shows something like
> {code}
> 20.05.2014 11:13:07.428 *ERROR* [OsgiInstallerImpl] 
> com.adobe.granite.installer.factory.packages.impl.PackageTransformer Error 
> while processing install task.
> java.lang.IllegalStateException: Unexpected value record type: f2
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.length(SegmentBlob.java:101)
> at 
> org.apache.jackrabbit.oak.plugins.value.BinaryImpl.getSize(BinaryImpl.java:74)
> at 
> org.apache.jackrabbit.oak.jcr.session.PropertyImpl.getLength(PropertyImpl.java:435)
> at 
> org.apache.jackrabbit.oak.jcr.session.PropertyImpl.getLength(PropertyImpl.java:376)
> at 
> org.apache.jackrabbit.vault.packaging.impl.JcrPackageImpl.getPackage(JcrPackageImpl.java:324)
> {code}
> The blob store was configured correctly and, according to the log, also 
> correctly initialized:
> {code}
> 20.05.2014 11:11:07.029 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService 
> Initializing SegmentNodeStore with BlobStore 
> [org.apache.jackrabbit.oak.spi.blob.FileBlobStore@7e3dec43]
> 20.05.2014 11:11:07.029 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStoreService Component 
> still not activated. Ignoring the initialization call
> 20.05.2014 11:11:07.077 *INFO* [FelixStartLevel] 
> org.apache.jackrabbit.oak.plugins.segment.file.FileStore TarMK opened: 
> crx-quickstart/repository/segmentstore (mmap=true)
> {code}
> Under which circumstances can the length within the SegmentBlob be invalid?
> This only happens if a File Blob Store is configured 
> (http://jackrabbit.apache.org/oak/docs/osgi_config.html). If a file datastore 
> is used, there is no such exception.





[jira] [Updated] (OAK-2731) NPE when calling Event.getInfo()

2015-06-01 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-2731:

Attachment: ObservationBusyTest.java

I tried to reproduce the issue with the attached {{ObservationBusyTest}}. Since 
the issue occurred during a reordering event ({{EventFactory$7.getInfo}}) on a 
very busy system, my first attempt was to generate a lot of reordering events 
while a listener is observing. I also assumed that the error was observed on a 
system using TAR persistence.

In my case, the system failed with an {{IllegalStateException}}, but I was 
unable to reproduce the reported issue on either 1.1.6 or the latest trunk. 
[~marett], [~dpfister], can you verify whether the attached test case is close 
to the real load on your system?

> NPE when calling Event.getInfo()
> 
>
> Key: OAK-2731
> URL: https://issues.apache.org/jira/browse/OAK-2731
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.1.6
>Reporter: Dominique Pfister
>  Labels: observation
> Fix For: 1.3.0, 1.2.3, 1.0.15
>
> Attachments: OAK-2731.txt, ObservationBusyTest.java
>
>
> On a very busy site, we're observing an NPE in the code that should gather 
> information about a JCR event for our custom event handler. 





[jira] [Updated] (OAK-2941) RDBDocumentStore: avoid use of "GREATEST"

2015-06-01 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2941:

Summary: RDBDocumentStore: avoid use of "GREATEST"  (was: RDBDOcumentStore: 
avoid use of "GREATEST")

> RDBDocumentStore: avoid use of "GREATEST"
> -
>
> Key: OAK-2941
> URL: https://issues.apache.org/jira/browse/OAK-2941
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: rdbmk
>Affects Versions: 1.2.2, 1.0.14, 1.3
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>
> In the RDBDocumentStore we currently use "GREATEST" for conditional updates of 
> the MODIFIED column (implementing the "max" operation). This isn't supported 
> by SQL Server, thus requiring DB-specific code.
> It appears we can use something portable instead:
> "set MODIFIED = CASE WHEN ? > MODIFIED THEN ? ELSE MODIFIED END"





[jira] [Created] (OAK-2941) RDBDOcumentStore: avoid use of "GREATEST"

2015-06-01 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-2941:
---

 Summary: RDBDOcumentStore: avoid use of "GREATEST"
 Key: OAK-2941
 URL: https://issues.apache.org/jira/browse/OAK-2941
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: rdbmk
Affects Versions: 1.0.14, 1.2.2, 1.3
Reporter: Julian Reschke
Assignee: Julian Reschke
Priority: Minor


In the RDBDocumentStore we currently use "GREATEST" for conditional updates of 
the MODIFIED column (implementing the "max" operation). This isn't supported by 
SQL Server, thus requiring DB-specific code.

It appears we can use something portable instead:

"set MODIFIED = CASE WHEN ? > MODIFIED THEN ? ELSE MODIFIED END"
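As a sketch of what the portable update buys us, the CASE expression implements a plain "max" (the SQL string below follows the NODES/MODIFIED/ID schema mentioned above; the helper class itself is hypothetical, not Oak code):

```java
public class MaxModifiedSql {

    // Portable replacement for "set MODIFIED = GREATEST(MODIFIED, ?)";
    // works on SQL Server as well, which lacks GREATEST.
    static final String UPDATE =
        "update NODES set MODIFIED = CASE WHEN ? > MODIFIED THEN ? ELSE MODIFIED END where ID = ?";

    // The CASE expression implements exactly this semantics: keep the
    // current value unless the candidate is greater.
    static long newModified(long candidate, long current) {
        return candidate > current ? candidate : current;
    }

    public static void main(String[] args) {
        System.out.println(newModified(200L, 100L)); // prints 200
        System.out.println(newModified(50L, 100L));  // prints 100
    }
}
```

With a {{PreparedStatement}}, the same candidate value would be bound to both {{?}} placeholders before the row id.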





[jira] [Updated] (OAK-2714) Test failures on Jenkins

2015-06-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2714:
---
Description: 
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder | 134 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 142 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 151 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163 | SEGMENT_MK | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT | 163 | DOCUMENT_RDB | 1.8 |

  was:
This issue is for tracking test failures seen at our Jenkins instance that 
might yet be transient. Once a failure happens too often we should remove it 
here and create a dedicated issue for it. 

|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder | 134 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.plugins.segm

[jira] [Commented] (OAK-2915) add support for Apache Derby

2015-06-01 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14567344#comment-14567344
 ] 

Julian Reschke commented on OAK-2915:
-

See 
http://stackoverflow.com/questions/30530970/equivalent-of-sql-greatest-function-for-apache-derby

> add support for Apache Derby
> 
>
> Key: OAK-2915
> URL: https://issues.apache.org/jira/browse/OAK-2915
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
> Fix For: 1.3.0
>
> Attachments: OAK-2915.diff
>
>






[jira] [Resolved] (OAK-2916) RDBDocumentStore: use of "GREATEST" in SQL apparently doesn't have test coverage in unit tests

2015-06-01 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-2916.
-
Resolution: Fixed

> RDBDocumentStore: use of "GREATEST" in SQL apparently doesn't have test 
> coverage in unit tests
> --
>
> Key: OAK-2916
> URL: https://issues.apache.org/jira/browse/OAK-2916
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: rdbmk
>Affects Versions: 1.2.2, 1.0.14, 1.3
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.3.0, 1.2.3, 1.0.15
>
>
> (discovered while looking into Apache Derby support)





[jira] [Updated] (OAK-2916) RDBDocumentStore: use of "GREATEST" in SQL apparently doesn't have test coverage in unit tests

2015-06-01 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2916:

Fix Version/s: (was: 1.23)
   1.0.15
   1.2.3

> RDBDocumentStore: use of "GREATEST" in SQL apparently doesn't have test 
> coverage in unit tests
> --
>
> Key: OAK-2916
> URL: https://issues.apache.org/jira/browse/OAK-2916
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: rdbmk
>Affects Versions: 1.2.2, 1.0.14, 1.3
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.3.0, 1.2.3, 1.0.15
>
>
> (discovered while looking into Apache Derby support)





[jira] [Updated] (OAK-2916) RDBDocumentStore: use of "GREATEST" in SQL apparently doesn't have test coverage in unit tests

2015-06-01 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2916:

Affects Version/s: 1.3
   1.2.2
   1.0.14

> RDBDocumentStore: use of "GREATEST" in SQL apparently doesn't have test 
> coverage in unit tests
> --
>
> Key: OAK-2916
> URL: https://issues.apache.org/jira/browse/OAK-2916
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: rdbmk
>Affects Versions: 1.2.2, 1.0.14, 1.3
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.3.0, 1.23
>
>
> (discovered while looking into Apache Derby support)





[jira] [Updated] (OAK-2916) RDBDocumentStore: use of "GREATEST" in SQL apparently doesn't have test coverage in unit tests

2015-06-01 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2916:

Fix Version/s: 1.23

> RDBDocumentStore: use of "GREATEST" in SQL apparently doesn't have test 
> coverage in unit tests
> --
>
> Key: OAK-2916
> URL: https://issues.apache.org/jira/browse/OAK-2916
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: rdbmk
>Affects Versions: 1.2.2, 1.0.14, 1.3
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.3.0, 1.23
>
>
> (discovered while looking into Apache Derby support)





[jira] [Created] (OAK-2940) RDBDocumentStore: "set" operation on _modified appears to be implemented as "max"

2015-06-01 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-2940:
---

 Summary: RDBDocumentStore: "set" operation on _modified appears to 
be implemented as "max"
 Key: OAK-2940
 URL: https://issues.apache.org/jira/browse/OAK-2940
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.0.14, 1.2.2, 1.3
Reporter: Julian Reschke
Assignee: Julian Reschke








[jira] [Created] (OAK-2939) Make compaction gain estimate more accurate

2015-06-01 Thread JIRA
Michael Dürig created OAK-2939:
--

 Summary: Make compaction gain estimate more accurate
 Key: OAK-2939
 URL: https://issues.apache.org/jira/browse/OAK-2939
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 1.3.1


Currently the compaction gain estimation process only takes the current head 
into account when calculating the retained size. We could make it more accurate 
by also taking in-memory references into account. This would prevent compaction 
from running when many in-memory references would later on prevent segments 
from being cleaned up. 

Also, for OAK-2862 we would need a way to include the segments used for the 
persisted compaction map in the retained size. 

While at it, we could improve logging so that the space retained by the current 
head, by in-memory references, and by the persisted compaction map is logged 
separately. 





[jira] [Updated] (OAK-2938) Estimation of required memory for compaction is off

2015-06-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2938:
---
Component/s: (was: core)
 segmentmk

> Estimation of required memory for compaction is off
> ---
>
> Key: OAK-2938
> URL: https://issues.apache.org/jira/browse/OAK-2938
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: compaction, gc
> Fix For: 1.3.0, 1.2.3, 1.0.15
>
>
> Currently compaction will be skipped if some rough estimation determines that 
> there is not enough memory to run. That estimation, however, assumes that each 
> compaction cycle requires as much space as the compaction map already takes 
> up. This is too conservative. Instead, the amount of memory taken up by the 
> last compaction cycle would be a better estimate. 





[jira] [Resolved] (OAK-2938) Estimation of required memory for compaction is off

2015-06-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-2938.

Resolution: Fixed

Fixed in trunk at http://svn.apache.org/r1682855
1.0 at http://svn.apache.org/r1682857
1.2 at http://svn.apache.org/r1682858

> Estimation of required memory for compaction is off
> ---
>
> Key: OAK-2938
> URL: https://issues.apache.org/jira/browse/OAK-2938
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: core
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: compaction, gc
> Fix For: 1.3.0, 1.2.3, 1.0.15
>
>
> Currently compaction will be skipped if some rough estimation determines that 
> there is not enough memory to run. That estimation, however, assumes that each 
> compaction cycle requires as much space as the compaction map already takes 
> up. This is too conservative. Instead, the amount of memory taken up by the 
> last compaction cycle would be a better estimate. 





[jira] [Updated] (OAK-2938) Estimation of required memory for compaction is off

2015-06-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2938:
---
Issue Type: Sub-task  (was: Bug)
Parent: OAK-2849

> Estimation of required memory for compaction is off
> ---
>
> Key: OAK-2938
> URL: https://issues.apache.org/jira/browse/OAK-2938
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: core
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: compaction, gc
> Fix For: 1.3.0, 1.2.3, 1.0.15
>
>
> Currently compaction will be skipped if some rough estimation determines that 
> there is not enough memory to run. That estimation, however, assumes that each 
> compaction cycle requires as much space as the compaction map already takes 
> up. This is too conservative. Instead, the amount of memory taken up by the 
> last compaction cycle would be a better estimate. 





[jira] [Updated] (OAK-2938) Estimation of required memory for compaction is off

2015-06-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2938:
---
Fix Version/s: 1.0.15
   1.2.3
   1.3.0

> Estimation of required memory for compaction is off
> ---
>
> Key: OAK-2938
> URL: https://issues.apache.org/jira/browse/OAK-2938
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: compaction, gc
> Fix For: 1.3.0, 1.2.3, 1.0.15
>
>
> Currently compaction will be skipped if some rough estimation determines that 
> there is not enough memory to run. That estimation, however, assumes that each 
> compaction cycle requires as much space as the compaction map already takes 
> up. This is too conservative. Instead, the amount of memory taken up by the 
> last compaction cycle would be a better estimate. 





[jira] [Created] (OAK-2938) Estimation of required memory for compaction is off

2015-06-01 Thread JIRA
Michael Dürig created OAK-2938:
--

 Summary: Estimation of required memory for compaction is off
 Key: OAK-2938
 URL: https://issues.apache.org/jira/browse/OAK-2938
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Michael Dürig
Assignee: Michael Dürig


Currently compaction will be skipped if some rough estimation determines that 
there is not enough memory to run. That estimation, however, assumes that each 
compaction cycle requires as much space as the compaction map already takes up. 
This is too conservative. Instead, the amount of memory taken up by the last 
compaction cycle would be a better estimate. 
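The proposed heuristic can be sketched as follows (purely illustrative; the class, method names, and numbers are hypothetical, not Oak's actual code):

```java
public class CompactionMemoryCheck {

    // Decide whether compaction may run. Using the last cycle's memory
    // footprint as the estimate is less conservative than assuming the
    // whole compaction map must fit in memory again.
    static boolean canRunCompaction(long availableBytes, long lastCycleBytes) {
        return availableBytes >= lastCycleBytes;
    }

    public static void main(String[] args) {
        // With the old estimate (full compaction map, say 1 GB) this run
        // would be skipped; with the last cycle's 200 MB it can proceed.
        System.out.println(canRunCompaction(512L << 20, 200L << 20));  // prints true
        System.out.println(canRunCompaction(512L << 20, 1024L << 20)); // prints false
    }
}
```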







[jira] [Commented] (OAK-2926) Fast result size estimate

2015-06-01 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14567079#comment-14567079
 ] 

Davide Giannella commented on OAK-2926:
---

Linking the related OAK-2807

> Fast result size estimate
> -
>
> Key: OAK-2926
> URL: https://issues.apache.org/jira/browse/OAK-2926
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: performance
>
> When asking for the correct result size of a query, the complete result needs 
> to be read, so that access rights checks are made and (unless the index is 
> known to be up-to-date and can process all conditions) so that the existence 
> of nodes and all query conditions are checked.
> Jackrabbit 2.x supports a fast way to get an estimate of the result size, 
> without doing access rights checks. See also JCR-3858.
> Please note that according to the JCR API, NodeIterator.getSize() may return 
> -1 (for "unknown"), and in Oak this is currently done if counting is slow. 
> This would also need to be disabled if a fast result size estimate is needed.





[jira] [Created] (OAK-2937) Remove code related to directmemory for off heap caching

2015-06-01 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-2937:


 Summary: Remove code related to directmemory for off heap caching
 Key: OAK-2937
 URL: https://issues.apache.org/jira/browse/OAK-2937
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.3.2


DocumentNodeStore has some code related to off-heap caching which makes use of 
Apache DirectMemory (OAK-891). This feature was not much used, and 
PersistentCache has made it obsolete.

Recently it was mentioned on the DirectMemory list that there is not much 
activity going on in that project [1] and that it might be moved to the Attic. 
In light of that, we should remove this feature from Oak.

[1] http://markmail.org/thread/atia2ecaa2mugmjx





[jira] [Updated] (OAK-2936) PojoSR should use Felix Connect API instead of pojosr

2015-06-01 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-2936:
-
Issue Type: Task  (was: Improvement)

> PojoSR should use Felix Connect API instead of pojosr
> -
>
> Key: OAK-2936
> URL: https://issues.apache.org/jira/browse/OAK-2936
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: pojosr
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.3.1
>
>
> Recently the first version of Apache Felix Connect was released. The Oak 
> PojoSR module is currently based on the older pojosr module, which has moved 
> to Felix as the Connect submodule. Oak PojoSR should now make use of Apache 
> Felix Connect:
> {code:xml}
> <dependency>
>   <groupId>org.apache.felix</groupId>
>   <artifactId>org.apache.felix.connect</artifactId>
>   <version>0.1.0</version>
> </dependency>
> {code}





[jira] [Updated] (OAK-2936) PojoSR should use Felix Connect API instead of pojosr

2015-06-01 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-2936:
-
Description: 
Recently the first version of Apache Felix Connect was released. The Oak PojoSR 
module is currently based on the older pojosr module, which has moved to Felix 
as the Connect submodule. Oak PojoSR should now make use of Apache Felix 
Connect:

{code:xml}
<dependency>
  <groupId>org.apache.felix</groupId>
  <artifactId>org.apache.felix.connect</artifactId>
  <version>0.1.0</version>
</dependency>
{code}

  was:Recently the first version of Apache Felix Connect was released. The Oak 
PojoSR module is currently based on the older pojosr module, which has moved to 
Felix as the Connect submodule. Oak PojoSR should now make use of Apache Felix 
Connect.


> PojoSR should use Felix Connect API instead of pojosr
> -
>
> Key: OAK-2936
> URL: https://issues.apache.org/jira/browse/OAK-2936
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: pojosr
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.3.1
>
>
> Recently the first version of Apache Felix Connect was released. The Oak 
> PojoSR module is currently based on the older pojosr module, which has moved 
> to Felix as the Connect submodule. Oak PojoSR should now make use of Apache 
> Felix Connect:
> {code:xml}
> <dependency>
>   <groupId>org.apache.felix</groupId>
>   <artifactId>org.apache.felix.connect</artifactId>
>   <version>0.1.0</version>
> </dependency>
> {code}





[jira] [Created] (OAK-2936) PojoSR should use Felix Connect API instead of pojosr

2015-06-01 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-2936:


 Summary: PojoSR should use Felix Connect API instead of pojosr
 Key: OAK-2936
 URL: https://issues.apache.org/jira/browse/OAK-2936
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: pojosr
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.3.1


Recently the first version of Apache Felix Connect was released. The Oak PojoSR 
module is currently based on the older pojosr module, which has moved to Felix 
as the Connect submodule. Oak PojoSR should now make use of Apache Felix 
Connect.





[jira] [Commented] (OAK-2844) Introducing a simple document-based discovery-light service (to circumvent documentMk's eventual consistency delays)

2015-06-01 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14567058#comment-14567058
 ] 

Stefan Egli commented on OAK-2844:
--

I guess if the advantage of not having a new listener API outweighs the 
downside of having to poll, then this is a good idea. I'll come up with this 
approach and attach a new version.
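A polling-based liveness check along these lines is straightforward (a hypothetical sketch; none of these names are part of the proposed Oak API): an instance counts as active as long as its last heartbeat is within the configured timeout.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class DiscoveryLitePoller {

    // Return the ids of instances whose last heartbeat is within the timeout.
    // A caller would invoke this periodically (polling) instead of registering
    // a listener, matching the trade-off discussed above.
    static Set<String> activeInstances(Map<String, Long> lastHeartbeatMillis,
                                       long nowMillis, long timeoutMillis) {
        Set<String> active = new TreeSet<>();
        for (Map.Entry<String, Long> e : lastHeartbeatMillis.entrySet()) {
            if (nowMillis - e.getValue() <= timeoutMillis) {
                active.add(e.getKey());
            }
        }
        return active;
    }

    public static void main(String[] args) {
        Map<String, Long> beats = new LinkedHashMap<>();
        beats.put("instance-1", 10_000L); // recent heartbeat
        beats.put("instance-2", 1_000L);  // stale heartbeat, treated as dead
        System.out.println(activeInstances(beats, 12_000L, 5_000L)); // prints [instance-1]
    }
}
```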

> Introducing a simple document-based discovery-light service (to circumvent 
> documentMk's eventual consistency delays)
> 
>
> Key: OAK-2844
> URL: https://issues.apache.org/jira/browse/OAK-2844
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: mongomk
>Reporter: Stefan Egli
> Fix For: 1.4
>
> Attachments: InstanceStateChangeListener.java, OAK-2844.WIP-02.patch, 
> OAK-2844.patch
>
>
> When running discovery.impl on a mongoMk-backed jcr repository, there are 
> risks of hitting problems such as those described in "SLING-3432 
> pseudo-network-partitioning": this happens when a jcr-level heartbeat does 
> not reach peers within the configured heartbeat timeout - the peers then 
> treat the affected instance as dead, remove it from the topology, and 
> continue with the remaining instances, potentially electing a new leader and 
> running the risk of duplicate leaders. This happens when delays in mongoMk 
> grow larger than the (configured) heartbeat timeout. These problems are 
> ultimately due to the 'eventual consistency' nature not only of mongoDB but, 
> more so, of mongoMk. The only alternative so far is to increase the 
> heartbeat timeout to match the expected or measured delays that mongoMk can 
> produce (under given load/performance scenarios, say).
> Assuming that mongoMk will always carry a risk of certain delays, and that a 
> reasonable maximum (reasonable for the discovery.impl timeout, that is) 
> cannot be guaranteed, a better solution is to provide discovery with more 
> 'real-time' information and/or privileged access to mongoDb.
> Here's a summary of alternatives that have so far been floating around as a 
> solution to circumvent eventual consistency:
>  # expose existing (jmx) information about active 'clusterIds' - this has 
> been proposed in SLING-4603. The pros: reuse of existing functionality. The 
> cons: going via jmx, binding of exposed functionality as 'to be maintained 
> API'
>  # expose a plain mongo db/collection (via osgi injection) such that a higher 
> (sling) level discovery could directly write heartbeats there. The pros: 
> heartbeat latency would be minimal (assuming the collection is not sharded). 
> The cons: exposes a mongo db/collection potentially also to anyone else, with 
> the risk of opening up to unwanted possibilities
>  # introduce a simple 'discovery-light' API to oak which solely provides 
> information about which instances are active in a cluster. The implementation 
> of this is not exposed. The pros: no need to expose a mongoDb/collection, 
> allows any other jmx-functionality to remain unchanged. The cons: a new API 
> that must be maintained
> This ticket is about the 3rd option: a new mongo-based discovery-light 
> service to be introduced to oak. The functionality in short:
>  * it defines a 'local instance id' that is non-persisted, i.e. can change 
> at each bundle activation.
>  * it defines a 'view id' that uniquely identifies a particular incarnation 
> of a 'cluster view/state' (which is: a list of active instance ids)
>  * and it defines a list of active instance ids
>  * the above attributes are passed to interested components via a listener 
> that can be registered. That listener is called whenever the discovery-light 
> service notices that the cluster view has changed.
> While the actual implementation could in fact be based on the existing 
> {{getActiveClusterNodes()}} and {{getClusterId()}} of the 
> {{DocumentNodeStoreMBean}}, the suggestion is not to fiddle with that part, 
> as it has dependencies on other logic, but instead to create a separate, 
> dedicated collection ('discovery') where the heartbeats as well as the 
> currentView are stored.
> Will attach a suggestion for an initial version of this for review.
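The dedicated-collection approach boils down to deriving the active set from 
per-instance heartbeat timestamps. A rough, MongoDB-free sketch of that core 
logic (class and method names are hypothetical, not taken from the attached 
patch; a real implementation would read the timestamps from the 'discovery' 
collection):

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.stream.Collectors;

// Hypothetical sketch: derive the set of active instance ids from the last
// heartbeat timestamp of each instance, as a mongo-backed discovery-light
// service would after reading the dedicated 'discovery' collection.
class HeartbeatView {

    private final long heartbeatTimeoutMillis;

    HeartbeatView(long heartbeatTimeoutMillis) {
        this.heartbeatTimeoutMillis = heartbeatTimeoutMillis;
    }

    /** An instance is active iff its last heartbeat is within the timeout. */
    Set<String> activeInstances(Map<String, Long> lastHeartbeat, long now) {
        return lastHeartbeat.entrySet().stream()
                .filter(e -> now - e.getValue() <= heartbeatTimeoutMillis)
                .map(Map.Entry::getKey)
                .collect(Collectors.toCollection(TreeSet::new));
    }

    /** A view id that changes whenever the set of active instances changes. */
    String viewId(Set<String> activeIds) {
        return Integer.toHexString(String.join(",", activeIds).hashCode());
    }
}
```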





[jira] [Resolved] (OAK-575) Make it possible to extend oak-run's Main

2015-06-01 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved OAK-575.
-
Resolution: Won't Fix

Does not make sense anymore, as oak-run is a totally different beast today.

> Make it possible to extend oak-run's Main
> -
>
> Key: OAK-575
> URL: https://issues.apache.org/jira/browse/OAK-575
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: run
>Affects Versions: 0.5
>Reporter: Tommaso Teofili
>Priority: Minor
> Attachments: OAK-575.patch, OAK-575.patch
>
>
> In my opinion it'd be nice if we could make some small improvements to 
> oak-run's _Main_ in order to plug in custom Servlets, CommitHooks, etc.
> That would allow creating customized oak-run packages in a very short time 
> by just extending it.


