[jira] [Updated] (OAK-2929) Parent of unseen children must not be removable

2015-06-18 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2929:
--
Fix Version/s: (was: 1.3.1)
   1.3.2

 Parent of unseen children must not be removable
 ---

 Key: OAK-2929
 URL: https://issues.apache.org/jira/browse/OAK-2929
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.13, 1.2
Reporter: Vikas Saurabh
Assignee: Marcel Reutegger
  Labels: concurrency, technical_debt
 Fix For: 1.3.2

 Attachments: IgnoredTestCase.patch


 With OAK-2673, it's now possible to have hidden intermediate nodes created 
 concurrently.
 So, a scenario like:
 {noformat}
 start - /:hidden
 N1 creates /:hidden/parent/node1
 N2 creates /:hidden/parent/node2
 {noformat}
 is allowed.
 But, if N2's creation of {{parent}} got persisted later than N1's, then 
 N2 is currently able to delete {{parent}} even though {{node1}} exists.
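 The invariant being asked for can be illustrated with a small, hypothetical check (the class and method names are illustrative, not Oak's actual API): a cluster node may only remove {{parent}} if it has seen every child that currently exists.

```java
// Hypothetical sketch (not Oak's API) of the invariant from this issue:
// a cluster node must not remove a parent unless it has seen all of the
// parent's current children.
import java.util.Set;

public class UnseenChildrenCheck {

    /** The parent is removable by a node only if that node saw every child. */
    static boolean removable(Set<String> actualChildren, Set<String> seenByNode) {
        return seenByNode.containsAll(actualChildren);
    }
}
```

 In the scenario above, N2 has only seen {{node2}}, so a check over the actual children {{node1, node2}} against N2's view {{node2}} fails and the delete must be rejected.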



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2829) Comparing node states for external changes is too slow

2015-06-18 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2829:
--
Fix Version/s: (was: 1.3.1)
   1.3.2

 Comparing node states for external changes is too slow
 --

 Key: OAK-2829
 URL: https://issues.apache.org/jira/browse/OAK-2829
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Blocker
  Labels: scalability
 Fix For: 1.2.3, 1.3.2

 Attachments: CompareAgainstBaseStateTest.java, 
 OAK-2829-JournalEntry.patch, OAK-2829-gc-bug.patch, 
 OAK-2829-improved-doc-cache-invaliation.2.patch, 
 OAK-2829-improved-doc-cache-invaliation.patch, graph-1.png, graph.png


 Comparing node states for local changes has been improved already with 
 OAK-2669. But in a clustered setup generating events for external changes 
 cannot make use of the introduced cache and is therefore slower. This can 
 result in a growing observation queue, eventually reaching the configured 
 limit. See also OAK-2683.





[jira] [Updated] (OAK-2622) dynamic cache allocation

2015-06-18 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-2622:
-
Fix Version/s: (was: 1.3.1)
   1.3.2

 dynamic cache allocation
 

 Key: OAK-2622
 URL: https://issues.apache.org/jira/browse/OAK-2622
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Affects Versions: 1.0.12
Reporter: Stefan Egli
  Labels: resilience
 Fix For: 1.3.2


 At the moment mongoMk's various caches are configurable (OAK-2546) but 
 otherwise static in size. Different use cases might require different 
 allocations of the sub-caches, though, and it might not always be possible 
 to find a good configuration upfront for all use cases. 
 We might be able to dynamically allocate the overall cache size to the 
 different sub-caches, based for example on how heavily loaded or how well 
 performing each cache is.





[jira] [Updated] (OAK-2739) take appropriate action when lease cannot be renewed (in time)

2015-06-18 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-2739:
-
Fix Version/s: (was: 1.3.1)
   1.3.2

 take appropriate action when lease cannot be renewed (in time)
 --

 Key: OAK-2739
 URL: https://issues.apache.org/jira/browse/OAK-2739
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: mongomk
Affects Versions: 1.2
Reporter: Stefan Egli
  Labels: resilience
 Fix For: 1.3.2


 Currently, in an oak-cluster, when (e.g.) one oak-client stops renewing its 
 lease (ClusterNodeInfo.renewLease()), this will eventually be noticed by the 
 others in the same oak-cluster. Those then mark this client as {{inactive}}, 
 start recovering, and subsequently exclude that node from any further merge 
 etc. operation.
 Now, whatever the reason was that the client stopped renewing the lease 
 (could be an exception, deadlock, whatever) - that client itself still 
 considers itself {{active}} and continues to take part in the cluster 
 activity.
 This will result in an unbalanced situation where that one client 'sees' 
 everybody as {{active}} while the others see this one as {{inactive}}.
 If this ClusterNodeInfo state is to be something that can be built upon, and 
 to avoid any inconsistency due to unbalanced handling, the inactive node 
 should probably retire gracefully - or any other appropriate action should be 
 taken, rather than just continuing as today.
 This ticket is to keep track of ideas and actions taken wrt this.





[jira] [Updated] (OAK-2556) do intermediate commit during async indexing

2015-06-18 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-2556:
-
Fix Version/s: (was: 1.3.1)
   1.3.2

 do intermediate commit during async indexing
 

 Key: OAK-2556
 URL: https://issues.apache.org/jira/browse/OAK-2556
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: lucene
Affects Versions: 1.0.11
Reporter: Stefan Egli
  Labels: resilience
 Fix For: 1.3.2


 A recent issue found at a customer reveals a potential problem with the 
 async indexer. Reading AsyncIndexUpdate.updateIndex, it looks like it does 
 the entire update of the async indexer *in one go*, i.e. in one commit.
 When the async indexer has to process a huge diff for some reason, however, 
 the 'one big commit' can become gigantic. There is in fact no limit to the 
 size of the commit.
 So the suggestion is to do intermediate commits while the async indexer is 
 running. This is acceptable because with async indexing the index is in any 
 case not 100% up-to-date - so it would not make much of a difference if it 
 committed after every 100 or 1000 changes.
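 The batching described above can be sketched as follows; the class and method names are hypothetical, not the actual AsyncIndexUpdate API, and the batch size of 1000 is one of the values suggested above.

```java
// Hypothetical sketch of intermediate commits during async indexing:
// instead of one potentially gigantic commit, flush every BATCH_SIZE changes.
import java.util.List;

public class BatchedIndexer {

    static final int BATCH_SIZE = 1000;

    /** Processes the changed paths and returns how many commits were made. */
    static int indexWithIntermediateCommits(List<String> changedPaths) {
        int commits = 0;
        int pending = 0;
        for (String path : changedPaths) {
            // the index update for 'path' would happen here
            pending++;
            if (pending >= BATCH_SIZE) {
                commits++;   // intermediate commit bounds the commit size
                pending = 0;
            }
        }
        if (pending > 0) {
            commits++;       // final commit for the remainder
        }
        return commits;
    }
}
```

 For 2500 changes this yields three commits instead of one unbounded one, which is exactly the trade-off argued for above.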





[jira] [Updated] (OAK-2682) Introduce time difference detection for DocumentNodeStore

2015-06-18 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-2682:
-
Fix Version/s: (was: 1.3.1)
   1.3.2

 Introduce time difference detection for DocumentNodeStore
 -

 Key: OAK-2682
 URL: https://issues.apache.org/jira/browse/OAK-2682
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Stefan Egli
  Labels: resilience
 Fix For: 1.3.2


 Currently the lease mechanism in DocumentNodeStore/mongoMk is based on the 
 assumption that the clocks are in perfect sync between all nodes of the 
 cluster. The lease is valid for 60sec with a timeout of 30sec. If clocks are 
 off by too much and background operations happen to take a couple of seconds, 
 you run the risk of timing out a lease. So introducing a check which WARNs if 
 the clocks in a cluster are off by too much (1st threshold, eg 5sec?) would 
 help increase awareness. A more drastic measure could be to prevent startup 
 of Oak altogether if the difference is higher than a 2nd threshold (optional 
 I guess, but could be 20sec?).
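 The two thresholds suggested above could look roughly like this; a sketch with the assumed 5sec/20sec values, where the class, method, and enum names are illustrative and not part of Oak:

```java
// Sketch of a clock-difference check with the two thresholds discussed
// above: WARN at ~5sec, optionally refuse startup at ~20sec. All names
// and values are illustrative assumptions.
public class ClockDifferenceCheck {

    static final long WARN_THRESHOLD_MS = 5_000;   // 1st threshold: log a WARN
    static final long FAIL_THRESHOLD_MS = 20_000;  // 2nd threshold: prevent startup

    enum Result { OK, WARN, FAIL }

    static Result check(long localTimeMs, long remoteTimeMs) {
        long diff = Math.abs(localTimeMs - remoteTimeMs);
        if (diff > FAIL_THRESHOLD_MS) {
            return Result.FAIL;   // difference too large: refuse to start Oak
        }
        if (diff > WARN_THRESHOLD_MS) {
            return Result.WARN;   // raise awareness in the logs
        }
        return Result.OK;
    }
}
```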





[jira] [Updated] (OAK-2613) Do versionGC more frequently and at adjusted speed

2015-06-18 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-2613:
-
Fix Version/s: (was: 1.3.1)
   1.3.2

 Do versionGC more frequently and at adjusted speed
 --

 Key: OAK-2613
 URL: https://issues.apache.org/jira/browse/OAK-2613
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Affects Versions: 1.0.12
Reporter: Stefan Egli
  Labels: observation, resilience
 Fix For: 1.3.2


 This is a follow-up ticket from 
 [here|https://issues.apache.org/jira/browse/OAK-2557?focusedCommentId=14355322&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14355322]
  mixed with an offline discussion with [~mreutegg]:
  * we could make the VersionGC play nicely with existing load on the system: 
 it could progress slower if the load is higher and vice versa. One simple 
 measure could be: if the observation queue is small (eg below 10) then the 
 load is low and it could progress full-speed. Otherwise it could add some 
 artificial sleeping in between.
  * we could run versionGC more regularly than once a day and instead let it 
 run kind of 'continuously' in the background. While the speed of the gc would 
 be adjusted to the load, it would also have to be ensured that it doesn't run 
 too slowly (and would never finish if the instance is under constant load)
 Note that 'adjusted speed' would also imply some intelligence about the 
 system load, as pointed out by [~chetanm] on OAK-2557:
 {quote}Version GC currently ensures that the query fired is made against the 
 Secondary (if present). However, having some throttling in such a background 
 task would be a good thing to have. But first we need to have some 
 SystemLoadIndicator notion in Oak which can provide details, say as a 
 percentage 1..100, about system load. We can then expose a configurable 
 threshold which VersionGC would listen for and adjust its working accordingly.
 It can be a JMX bean which emits notifications and we have our components 
 listen to those notifications (or use OSGi SR/Events). That can be used in 
 other places like Observation processing, Blob GC etc.
 {quote}
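 The first bullet above (slow down when the observation queue grows) can be sketched as a simple throttle; the queue threshold of 10 comes from the description, while the delay scaling, the cap, and all names are illustrative assumptions:

```java
// Sketch of load-adjusted VersionGC throttling: full speed when the
// observation queue is small, otherwise sleep between batches, capped so
// the GC still makes progress under constant load.
public class AdaptiveGcThrottle {

    static final int LOW_LOAD_QUEUE_SIZE = 10;  // "eg below 10" from above
    static final long MAX_SLEEP_MS = 1_000;     // cap: never stall completely

    /** Artificial sleep in ms to insert between GC batches. */
    static long sleepBetweenBatches(int observationQueueSize) {
        if (observationQueueSize < LOW_LOAD_QUEUE_SIZE) {
            return 0;  // low load: progress full-speed
        }
        // scale the delay with the queue length, but cap it
        return Math.min(MAX_SLEEP_MS, observationQueueSize * 10L);
    }
}
```

 The cap addresses the second bullet: even under constant load the GC keeps progressing, just more slowly.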





[jira] [Resolved] (OAK-2688) Segment.readString optimization

2015-06-18 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-2688.
-
Resolution: Duplicate

OAK-3007 is not a complete duplicate, but solves a larger problem.

 Segment.readString optimization
 ---

 Key: OAK-2688
 URL: https://issues.apache.org/jira/browse/OAK-2688
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Thomas Mueller
Assignee: Thomas Mueller
  Labels: performance
 Fix For: 1.4

 Attachments: OAK-2688.patch


 The method Segment.readString is called a lot, and even a small optimization 
 would improve performance for some use cases. Currently it uses a concurrent 
 hash map to cache strings. It might be possible to speed up this cache.





[jira] [Updated] (OAK-2986) RDB: switch to tomcat datasource implementation

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2986:

Fix Version/s: (was: 1.3.1)
   1.3.2

 RDB: switch to tomcat datasource implementation 
 

 Key: OAK-2986
 URL: https://issues.apache.org/jira/browse/OAK-2986
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.0.15, 1.3
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.3.2

 Attachments: OAK-2986.diff


 See https://people.apache.org/~fhanik/jdbc-pool/jdbc-pool.html.
 In addition, this is the datasource used in Sling's datasource service, so 
 it's closer to what people will use in practice.





[jira] [Updated] (OAK-2922) enhance LastRevRecoveryAgent diagnostics

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2922:

Fix Version/s: (was: 1.3.1)

 enhance LastRevRecoveryAgent diagnostics
 

 Key: OAK-2922
 URL: https://issues.apache.org/jira/browse/OAK-2922
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
Affects Versions: 1.2.2, 1.3.0, 1.0.14
Reporter: Julian Reschke
Assignee: Julian Reschke
Priority: Minor
  Labels: tooling

 Depending on persistence type and repo size, the LastRevRecoveryAgent might 
 run for a long time. Enhance the logging so that the admin knows what's going 
 on (say, once per minute?).





[jira] [Updated] (OAK-1266) DocumentStore implementation for relational databases

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-1266:

Fix Version/s: (was: 1.3.1)

 DocumentStore implementation for relational databases
 -

 Key: OAK-1266
 URL: https://issues.apache.org/jira/browse/OAK-1266
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mongomk, rdbmk
Reporter: Julian Reschke
Assignee: Julian Reschke

 There should be an alternative DocumentStore implementation that persists to 
 SQL databases rather than MongoDB.





[jira] [Comment Edited] (OAK-2807) Improve getSize performance for public content

2015-06-18 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14591761#comment-14591761
 ] 

angela edited comment on OAK-2807 at 6/18/15 1:19 PM:
--

hi michael

{quote}
There is a very common special case: content (a subtree) that is readable by 
everyone (anonymous). 
{quote}

unfortunately it only looks like being readable to everyone because the 
access-checks will filter out all 'meta' data that is not readable. it's only 
the 'regular' content that is readable. special things like e.g. the policy 
that opens up the read-access for everyone are not world-readable, nor is 
other special data like e.g. analytics data.

{quote}
If we mark an index on that subtree as readable by everyone on index creation 
then we could skip ACL check on the result set or precompute/cache certain 
query results.
{quote}

the problem is that you don't know if it is really readable by everyone, for 
the following reasons:
- any policy _above_ your target tree denying access for a user-principal will 
take precedence
- any policy _below_ (e.g. CUG) that locks down access again will make your 
test for 'readable by everyone' become wrong
- with OAK-1268 and having the latter covered by dedicated policies, looking 
at a given implementation of {{AccessControlPolicy}} and {{AccessControlEntry}} 
will no longer reflect the complete picture
- with OAK-2008 the permission evaluation will become a multiplexing one and 
there will be no simple shortcut to determine 
'readable-for-everyone-for-the-whole-subtree' any more (if it ever was 
possible)
- {{TreePermissions.canReadAll()}} is already intended to provide exactly the 
short-cut you are proposing, and the reason why this only returns {{true}} for 
administrative access is that i didn't see a scalable way to predict this!

{quote}
In order to avoid information leakage the index would have to be marked 
invalid as soon as one node in that sub-tree is not readable by everyone 
anymore. (could be checked through a commit hook)
{quote}

if that was _really_ feasible... why not... but so far i don't see how this 
would work reliably for _every_ combination of principals, policies and special 
node types (like e.g. access control content). 
as you can see in {{TreePermission}}, I added this shortcut because I thought 
it might be doable, but so far I didn't find a solution for this except for 
the trivial case.

{quote}
Maybe this concept could even be generalized later to work with other 
principals than everyone.
{quote}

the problem is not _everyone_ versus some other principals... if we have a 
concept that _really_ works, it doesn't matter whether it's everyone or some 
other principal.

btw: the same suggestion was made by david ages ago, because he made the same 
assumption that this is easy to achieve. but unfortunately it's not, if you 
look at it from a repository point of view and not from a demo-hack point of 
view.

kind regards
angela



[jira] [Comment Edited] (OAK-2807) Improve getSize performance for public content

2015-06-18 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14591761#comment-14591761
 ] 

angela edited comment on OAK-2807 at 6/18/15 1:24 PM:
--

hi michael

{quote}
There is a very common special case: content (a subtree) that is readable by 
everyone (anonymous). 
{quote}

unfortunately it only looks like being readable to everyone because the 
access-checks will filter out all 'meta' data that is not readable. it's only 
the 'regular' content that is readable. special things like e.g. the policy 
that opens up the read-access for everyone are not world-readable, nor is 
other special data like e.g. analytics data.

{quote}
If we mark an index on that subtree as readable by everyone on index creation 
then we could skip ACL check on the result set or precompute/cache certain 
query results.
{quote}

the problem is that you don't know if it is really readable by everyone, for 
the following reasons:
- any policy _above_ your target tree denying access for a user-principal will 
take precedence
- any policy _below_ (e.g. CUG) that locks down access again will make your 
test for 'readable by everyone' become wrong
- with OAK-1268 and having the latter covered by dedicated policies, looking 
at a given implementation of {{AccessControlPolicy}} and {{AccessControlEntry}} 
will no longer reflect the complete picture
- with OAK-2008 the permission evaluation will become a multiplexing one and 
there will be _no_ simple shortcut to determine 
'readable-for-everyone-for-the-whole-subtree' any more (if it ever was 
possible)
- {{TreePermissions.canReadAll()}} is already intended to provide exactly the 
short-cut you are proposing, and the reason why this only returns {{true}} for 
administrative access is that i didn't see a scalable way to predict this! 
in the multiplexing setup this would mean that _all_ authorization models 
plugged into the system must have {{TreePermission.canReadAll()}} return 
{{true}} (you wouldn't need to do that yourself as the multiplexer would hide 
the different implementations for you; just meant to illustrate that it might 
be tricky to ever get {{true}} for non-administrative sessions).

{quote}
In order to avoid information leakage the index would have to be marked 
invalid as soon as one node in that sub-tree is not readable by everyone 
anymore. (could be checked through a commit hook)
{quote}

if that was _really_ feasible... why not... but so far i don't see how this 
would work reliably for _every_ combination of principals, policies and special 
node types (like e.g. access control content). 
as you can see in {{TreePermission}}, I added this shortcut because I thought 
it might be doable, but so far I didn't find a solution for this except for 
the trivial case.

{quote}
Maybe this concept could even be generalized later to work with other 
principals than everyone.
{quote}

the problem is not _everyone_ versus some other principals... if we have a 
concept that _really_ works, it doesn't matter whether it's everyone or some 
other principal.

btw: the same suggestion was made by david ages ago, because he made the same 
assumption that this is easy to achieve. but unfortunately it's not, if you 
look at it from a repository point of view and not from a demo-hack point of 
view.

kind regards
angela



[jira] [Updated] (OAK-2878) Test failure: AutoCreatedItemsTest.autoCreatedItems

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2878:

Fix Version/s: (was: 1.3.1)

 Test failure: AutoCreatedItemsTest.autoCreatedItems
 ---

 Key: OAK-2878
 URL: https://issues.apache.org/jira/browse/OAK-2878
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
 Environment: Jenkins: https://builds.apache.org/job/Apache Jackrabbit 
 Oak matrix/
Reporter: Michael Dürig
Assignee: Julian Reschke
  Labels: CI, jenkins

 {{org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems}} fails 
 on Jenkins: No default node type available for /testdata/property
 {noformat}
 javax.jcr.nodetype.ConstraintViolationException: No default node type 
 available for /testdata/property
   at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
   at 
 org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
   at 
 org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
   at 
 org.apache.jackrabbit.oak.jcr.TestContentLoader.loadTestContent(TestContentLoader.java:58)
   at 
 org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems(AutoCreatedItemsTest.java:42)
 {noformat}
 Failure seen at builds: 130, 134, 138
 See 
 https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/134/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.jcr/AutoCreatedItemsTest/autoCreatedItems_0_/





[jira] [Commented] (OAK-2747) Admin cannot create versions on a locked page by itself

2015-06-18 Thread Robert Munteanu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14591665#comment-14591665
 ] 

Robert Munteanu commented on OAK-2747:
--

It seems that the Oak implementation of the {{LockManager}} prevents a 
checkout operation if the node is locked. The Jackrabbit implementation allows 
a checkout if the node is not locked or if the user is the lock owner (that is 
true for all operations as far as I can tell, not only for versioning-related 
ones).

My suggested approach would be:

* expose the logic for {{canUnlock}} from Oak's {{LockManagerImpl}}, either 
as a util class or as an Oak-internal interface
* make use of that logic in {{VersionManagerImpl.checkNotLocked}}

{code:java|title=LockManagerImpl.java}
if (sessionContext.getSessionScopedLocks().contains(path)
        || sessionContext.getOpenScopedLocks().contains(path)) {
    return true;
} else if (sessionContext.getAttributes().get(RELAXED_LOCKING) == TRUE) {
    String user = sessionContext.getSessionDelegate().getAuthInfo().getUserID();
    return node.isLockOwner(user) || isAdmin(sessionContext, user);
} else {
    return false;
}
{code}

 Admin cannot create versions on a locked page by itself
 ---

 Key: OAK-2747
 URL: https://issues.apache.org/jira/browse/OAK-2747
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core, jcr
Affects Versions: 1.2.1
Reporter: Marius Petria
 Fix For: 1.4


 Admin cannot create versions even if it is the lock owner. 
 This is a test to go into VersionManagementTest that shows the issue. 
 The general questions are:
 - should the lock owner be able to create versions?
 - should admin be able to create versions even if it is not the lock owner?
 {code}
 @Test
 public void testCheckInCheckoutLocked() throws Exception {
     Node n = createVersionableNode(superuser.getNode(path));
     n.addMixin(mixLockable);
     superuser.save();
     n = superuser.getNode(n.getPath());
     n.lock(true, false);
     testSession.save();
     n = superuser.getNode(n.getPath());
     n.checkin();
     n.checkout();
 }
 {code}
 fails with 
 {noformat}
 javax.jcr.lock.LockException: Node at /testroot/node1/node1 is locked
 {noformat}





[jira] [Updated] (OAK-2857) Run background read and write operation concurrently

2015-06-18 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2857:
--
Fix Version/s: (was: 1.3.1)
   1.3.2

 Run background read and write operation concurrently
 

 Key: OAK-2857
 URL: https://issues.apache.org/jira/browse/OAK-2857
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
  Labels: technical_debt
 Fix For: 1.3.2


 OAK-2624 decoupled the background read from the background write but the 
 methods implementing the operations are synchronized. This means they cannot 
 run at the same time and e.g. an expensive background write may unnecessarily 
 block a background read.





[jira] [Updated] (OAK-2793) Time limit for HierarchicalInvalidator

2015-06-18 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2793:
--
Fix Version/s: (was: 1.3.1)
   1.3.2

 Time limit for HierarchicalInvalidator
 --

 Key: OAK-2793
 URL: https://issues.apache.org/jira/browse/OAK-2793
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
  Labels: resilience
 Fix For: 1.3.2

 Attachments: OAK-2793-Time-limit-for-HierarchicalInvalidator.patch


 This issue is related to OAK-2646. Every now and then I see reports of 
 background reads with a cache invalidation that takes a rather long time. 
 Sometimes minutes. It would be good to give the HierarchicalInvalidator an 
 upper limit for the time it may take to perform the invalidation. When the 
 time is up, the implementation should simply invalidate the remaining 
 documents.





[jira] [Commented] (OAK-2807) Improve getSize performance for public content

2015-06-18 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14591761#comment-14591761
 ] 

angela commented on OAK-2807:
-

hi michael

{quote}
There is a very common special case: content (a subtree) that is readable by 
everyone (anonymous). 
{quote}

unfortunately it only looks like being readable to everyone because the 
access-checks will filter out all 'meta' data that is not readable. it's only 
the 'regular' content that is readable. special things like e.g. the policy 
that opens up the read-access for everyone are not world-readable, nor is 
other special data like e.g. analytics data.

{quote}
If we mark an index on that subtree as readable by everyone on index creation 
then we could skip ACL check on the result set or precompute/cache certain 
query results.
{quote}

the problem is that you don't know if it is really readable by everyone, for 
the following reasons:
- any policy _above_ your target tree denying access for a user-principal will 
take precedence
- any policy _below_ (e.g. CUG) that locks down access again will make your 
test for 'readable by everyone' become wrong
- with OAK-1268 and having the latter covered by dedicated policies, looking 
at a given implementation of {{AccessControlPolicy}} and {{AccessControlEntry}} 
will no longer reflect the complete picture
- with OAK-2008 the permission evaluation will become a multiplexing one and 
there will be no simple shortcut to determine 
'readable-for-everyone-for-the-whole-subtree' any more (if it ever was 
possible)
- {{TreePermissions.canReadAll()}} is already intended to provide exactly the 
short-cut you are proposing, and the reason why this only returns {{true}} for 
administrative access is that i didn't see a scalable way to predict this!

{quote}
In order to avoid information leakage the index would have to be marked 
invalid as soon as one node in that sub-tree is not readable by everyone 
anymore. (could be checked through a commit hook)
{quote}

if that was _really_ feasible... why not... but so far i don't see how this 
would work reliably for _every_ combination of principals. 
as you can see in {{TreePermission}}, I added this shortcut because I thought 
that it might be doable, but so far I didn't find a solution for this except 
for the trivial case.

{quote}
Maybe this concept could even be generalized later to work with other 
principals than everyone.
{quote}

the problem is not _everyone_ versus some other principals... if we have a 
concept that _really_ works, it doesn't matter whether it's everyone or some 
other principal.

btw: the same suggestion was made by david ages ago, because he made the same 
assumption that this is easy to achieve. but unfortunately it's not, if you 
look at it from a repository point of view and not from a demo-hack point of 
view.

kind regards
angela

 Improve getSize performance for public content
 

 Key: OAK-2807
 URL: https://issues.apache.org/jira/browse/OAK-2807
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query, security
Affects Versions: 1.0.13, 1.2
Reporter: Michael Marth

 Certain operations in the query engine like getting the size of a result set 
 or facets are expensive to compute due to the fact that ACLs need to be 
 computed on the entire result set. This issue is to discuss an idea how we 
 could improve this:
 There is a very common special case: content (a subtree) that is readable by 
 everyone (anonymous). If we mark an index on that subtree as readable by 
 everyone on index creation then we could skip ACL check on the result set or 
  precompute/cache certain query results.
 In order to avoid information leakage the index would have to be marked 
 invalid as soon as one node in that sub-tree is not readable by everyone 
 anymore. (could be checked through a commit hook)
 Maybe this concept could even be generalized later to work with other 
 principals than everyone.
 Just an idea - feel free to poke holes and shoot it down :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2858) Test failure: ObservationRefreshTest

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2858:

Fix Version/s: (was: 1.3.1)

 Test failure: ObservationRefreshTest
 

 Key: OAK-2858
 URL: https://issues.apache.org/jira/browse/OAK-2858
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
 Environment: https://builds.apache.org/
Reporter: Michael Dürig
Assignee: Julian Reschke
  Labels: ci, jenkins

 {{org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest}} fails on 
 Jenkins when running the {{DOCUMENT_RDB}} fixture.
 {noformat}
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 16.174 sec 
  FAILURE!
 observation[0](org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest)
   Time elapsed: 16.173 sec   ERROR!
 javax.jcr.InvalidItemStateException: OakMerge0001: OakMerge0001: Failed to 
 merge changes to the underlying store (retries 5, 5286 ms)
   at 
 org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:239)
   at 
 org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:212)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:664)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:489)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:424)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:268)
   at 
 org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:421)
   at 
 org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest.observation(ObservationRefreshTest.java:176)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at org.junit.runners.Suite.runChild(Suite.java:128)
   at org.junit.runners.Suite.runChild(Suite.java:24)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 

[jira] [Updated] (OAK-2859) Test failure: OrderableNodesTest

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2859:

Fix Version/s: (was: 1.3.1)

 Test failure: OrderableNodesTest
 

 Key: OAK-2859
 URL: https://issues.apache.org/jira/browse/OAK-2859
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
 Environment: https://builds.apache.org/
Reporter: Michael Dürig
Assignee: Julian Reschke
  Labels: ci, jenkins

 {{org.apache.jackrabbit.oak.jcr.OrderableNodesTest}} fails on Jenkins when 
 running the {{DOCUMENT_RDB}} fixture.
 {noformat}
 Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 12.572 sec 
  FAILURE!
 orderableFolder[0](org.apache.jackrabbit.oak.jcr.OrderableNodesTest)  Time 
 elapsed: 3.858 sec   ERROR!
 javax.jcr.nodetype.ConstraintViolationException: No default node type 
 available for /testdata
   at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
   at 
 org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
   at 
 org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
   at 
 org.apache.jackrabbit.oak.jcr.TestContentLoader.loadTestContent(TestContentLoader.java:57)
   at 
 org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder(OrderableNodesTest.java:47)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at org.junit.runners.Suite.runChild(Suite.java:128)
   at org.junit.runners.Suite.runChild(Suite.java:24)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:662)
 {noformat}
 Failure seen at builds: 81, 87, 92, 95, 96, 114, 120, 128, 186
 See e.g. 
 https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/128/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2816) RDB NodeStoreFixture for cluster tests broken

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2816:

Fix Version/s: (was: 1.3.1)

 RDB NodeStoreFixture for cluster tests broken
 -

 Key: OAK-2816
 URL: https://issues.apache.org/jira/browse/OAK-2816
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
Affects Versions: 1.0.13, 1.2.1
Reporter: Julian Reschke
Assignee: Julian Reschke
  Labels: CI, test

 {{public NodeStore createNodeStore(int clusterNodeId)}} will currently use 
 distinct databases per cluster node, as the nodeId is part of the H2 database 
 filename.
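
One possible direction for a fix, sketched with hypothetical names (the real fixture builds its JDBC URL differently): derive the H2 file name from the fixture alone, so every clusterNodeId connects to the same database.

```java
// Hypothetical sketch: the H2 URL must not vary with the cluster node id,
// otherwise each "cluster node" gets its own private database and cluster
// tests exercise nothing.
public class RdbFixtureUrl {

    public static String jdbcUrl(String baseDir, String fixtureName) {
        // Note: clusterNodeId is deliberately NOT part of the file name.
        return "jdbc:h2:file:" + baseDir + "/" + fixtureName;
    }
}
```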



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2704) ConcurrentAddIT occasionally fail with OakMerge0001

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2704:

Fix Version/s: (was: 1.3.1)

 ConcurrentAddIT occasionally fail with OakMerge0001
 ---

 Key: OAK-2704
 URL: https://issues.apache.org/jira/browse/OAK-2704
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: it, jcr, rdbmk
Affects Versions: 1.1.7
Reporter: Davide Giannella
Assignee: Julian Reschke
  Labels: CI, Jenkins

 When running integration tests, the test often fails with the following:
 {noformat}
 addNodes[2](org.apache.jackrabbit.oak.jcr.ConcurrentAddIT)  Time elapsed: 
 9.88 sec   ERROR!
 java.lang.RuntimeException: 
 org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0001: 
 OakMerge0001: Failed to merge changes to the underlying store (retries 5, 
 5240 ms)
   at 
 org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:64)
   at org.apache.jackrabbit.oak.Oak.createContentRepository(Oak.java:561)
   at org.apache.jackrabbit.oak.jcr.Jcr.createRepository(Jcr.java:202)
   at 
 org.apache.jackrabbit.oak.jcr.AbstractRepositoryTest.createRepository(AbstractRepositoryTest.java:125)
   at 
 org.apache.jackrabbit.oak.jcr.AbstractRepositoryTest.getRepository(AbstractRepositoryTest.java:115)
   at 
 org.apache.jackrabbit.oak.jcr.AbstractRepositoryTest.createAdminSession(AbstractRepositoryTest.java:149)
   at 
 org.apache.jackrabbit.oak.jcr.AbstractRepositoryTest.getAdminSession(AbstractRepositoryTest.java:136)
   at 
 org.apache.jackrabbit.oak.jcr.ConcurrentAddIT.addNodes(ConcurrentAddIT.java:54)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at org.junit.runners.Suite.runChild(Suite.java:128)
   at org.junit.runners.Suite.runChild(Suite.java:24)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakMerge0001: 
 OakMerge0001: Failed to merge changes to the underlying store (retries 5, 
 5240 ms)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge0(DocumentNodeStoreBranch.java:257)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch.merge(DocumentNodeStoreBranch.java:191)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentRootBuilder.merge(DocumentRootBuilder.java:159)
   at 
 org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.merge(DocumentNodeStore.java:1434)
   at 
 org.apache.jackrabbit.oak.spi.lifecycle.OakInitializer.initialize(OakInitializer.java:62)
   ... 34 more
 Caused by: org.apache.jackrabbit.oak.plugins.document.DocumentStoreException: 
 update of 1:/jcr:system failed, race condition?
   at 
 org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.internalCreateOrUpdate(RDBDocumentStore.java:914)
   at 

[jira] [Created] (OAK-3007) SegmentStore cache does not take string map into account

2015-06-18 Thread Thomas Mueller (JIRA)
Thomas Mueller created OAK-3007:
---

 Summary: SegmentStore cache does not take string map into account
 Key: OAK-3007
 URL: https://issues.apache.org/jira/browse/OAK-3007
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Thomas Mueller


The SegmentStore cache size calculation ignores the size of the field 
Segment.string (a concurrent hash map). It looks like a regular segment in a 
memory mapped file has the size 1024, no matter how many strings are loaded in 
memory. This can lead to out of memory. There seems to be no way to limit 
(configure) the amount of memory used by strings. In one example, 100'000 
segments are loaded in memory, and 5 GB are used for Strings in that map.

We need a way to configure the amount of memory used for that. This seems to be 
basically a cache. But instead of each segment having its own cache (as done in 
OAK-2688), it would be better to have one cache with a configurable size limit.
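
The "one cache with a configurable size limit" idea could be sketched as a weight-bounded LRU map shared by all segments; the class and its character-count limit are illustrative only, not the actual SegmentStore API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a single string cache shared by all segments, bounded by the total
// number of characters retained rather than by per-segment entry counts.
public class SharedStringCache {

    private final long maxChars;
    private long currentChars = 0;
    // access-order = true turns the map into an LRU structure
    private final LinkedHashMap<String, String> map =
            new LinkedHashMap<>(16, 0.75f, true);

    public SharedStringCache(long maxChars) {
        this.maxChars = maxChars;
    }

    public synchronized String get(String key) {
        return map.get(key);
    }

    public synchronized void put(String key, String value) {
        String old = map.put(key, value);
        if (old != null) {
            currentChars -= old.length();
        }
        currentChars += value.length();
        // Evict least-recently-used entries until we are under the limit.
        while (currentChars > maxChars && !map.isEmpty()) {
            Map.Entry<String, String> eldest = map.entrySet().iterator().next();
            map.remove(eldest.getKey());
            currentChars -= eldest.getValue().length();
        }
    }

    public synchronized int size() {
        return map.size();
    }
}
```

With a single weigher-style bound like this, the 5 GB worst case above would be capped regardless of how many segments are in memory.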



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3007) SegmentStore cache does not take string map into account

2015-06-18 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-3007:

Fix Version/s: 1.3.1

 SegmentStore cache does not take string map into account
 --

 Key: OAK-3007
 URL: https://issues.apache.org/jira/browse/OAK-3007
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Thomas Mueller
 Fix For: 1.3.1


 The SegmentStore cache size calculation ignores the size of the field 
 Segment.string (a concurrent hash map). It looks like a regular segment in a 
 memory mapped file has the size 1024, no matter how many strings are loaded 
 in memory. This can lead to out of memory. There seems to be no way to limit 
 (configure) the amount of memory used by strings. In one example, 100'000 
 segments are loaded in memory, and 5 GB are used for Strings in that map.
 We need a way to configure the amount of memory used for that. This seems to 
 be basically a cache. But instead of each segment having its own cache (as 
 done in OAK-2688), it would be better to have one cache with a configurable 
 size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2877) Test failure: OrderableNodesTest.setPrimaryType

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2877:

Fix Version/s: (was: 1.3.1)

 Test failure: OrderableNodesTest.setPrimaryType
 ---

 Key: OAK-2877
 URL: https://issues.apache.org/jira/browse/OAK-2877
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
 Environment: Jenkins: 
 https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/
Reporter: Michael Dürig
Assignee: Julian Reschke
  Labels: CI, Jenkins

 {{org.apache.jackrabbit.oak.jcr.OrderableNodesTest.setPrimaryType}} fails on 
 Jenkins: No default node type available for /testdata
 {noformat}
 javax.jcr.nodetype.ConstraintViolationException: No default node type 
 available for /testdata
   at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
   at 
 org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
   at 
 org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
   at 
 org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
   at 
 org.apache.jackrabbit.oak.jcr.TestContentLoader.loadTestContent(TestContentLoader.java:57)
   at 
 org.apache.jackrabbit.oak.jcr.OrderableNodesTest.setPrimaryType(OrderableNodesTest.java:62)
 {noformat}
 Failure seen at builds: 132, 152, 176
 See 
 https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/132/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.jcr/OrderableNodesTest/setPrimaryType_0_/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3001) Simplify JournalGarbageCollector using a dedicated timestamp property

2015-06-18 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-3001:
-
Fix Version/s: (was: 1.3.1)
   1.3.2

 Simplify JournalGarbageCollector using a dedicated timestamp property
 -

 Key: OAK-3001
 URL: https://issues.apache.org/jira/browse/OAK-3001
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core, mongomk
Reporter: Stefan Egli
Priority: Critical
  Labels: scalability
 Fix For: 1.2.3, 1.3.2


 This subtask is about spawning out a 
 [comment|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585733&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585733]
  from [~chetanm] re JournalGC:
 {quote}
 Further looking at JournalGarbageCollector ... it would be simpler if you 
 record the journal entry timestamp as an attribute in JournalEntry document 
 and then you can delete all the entries which are older than some time by a 
 simple query. This would avoid fetching all the entries to be deleted on the 
 Oak side
 {quote}
 and a corresponding 
 [reply|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585870&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585870]
  from myself:
 {quote}
 Re querying by timestamp: that would indeed be simpler. With the current set 
 of DocumentStore API however, I believe this is not possible. But: 
 [DocumentStore.query|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentStore.java#L127]
  comes quite close: it would probably just require the opposite of that 
 method too: 
 {code}
 public <T extends Document> List<T> query(Collection<T> collection,
   String fromKey,
   String toKey,
   String indexedProperty,
   long endValue,
   int limit) {
 {code}
 .. or what about generalizing this method to have both a {{startValue}} and 
 an {{endValue}} - with {{-1}} indicating when one of them is not used?
 {quote}
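
The generalized variant with both bounds could be sketched as follows (an in-memory stand-in only; a real DocumentStore implementation would push the range condition into the backend query, and {{-1}} marks an unused bound as suggested above):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch: range query on an indexed property with optional start/end bounds,
// -1 meaning "this bound is not used". Keys are exclusive on both ends,
// mirroring the fromKey/toKey semantics of DocumentStore.query.
public class RangeQuerySketch {

    public static List<String> query(TreeMap<String, Long> entries,
                                     String fromKey, String toKey,
                                     long startValue, long endValue,
                                     int limit) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Long> e
                : entries.subMap(fromKey, false, toKey, false).entrySet()) {
            long v = e.getValue();
            if (startValue != -1 && v < startValue) {
                continue;  // below the lower bound
            }
            if (endValue != -1 && v >= endValue) {
                continue;  // at or above the upper bound
            }
            result.add(e.getKey());
            if (result.size() >= limit) {
                break;
            }
        }
        return result;
    }
}
```

For the journal GC case, calling this with only {{endValue}} set (entries older than the cutoff timestamp) would let the store delete matches directly instead of fetching them to the Oak side first.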



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3007) SegmentStore cache does not take string map into account

2015-06-18 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-3007:

Description: 
The SegmentStore cache size calculation ignores the size of the field 
Segment.string (a concurrent hash map). It looks like a regular segment in a 
memory mapped file has the size 1024, no matter how many strings are loaded in 
memory. This can lead to out of memory. There seems to be no way to limit 
(configure) the amount of memory used by strings. In one example, 100'000 
segments are loaded in memory, and 5 GB are used for Strings in that map.

We need a way to configure the amount of memory used for that. This seems to be 
basically a cache. OAK-2688 does this, but it would be better to have one cache 
with a configurable size limit.

  was:
The SegmentStore cache size calculation ignores the size of the field 
Segment.string (a concurrent hash map). It looks like a regular segment in a 
memory mapped file has the size 1024, no matter how many strings are loaded in 
memory. This can lead to out of memory. There seems to be no way to limit 
(configure) the amount of memory used by strings. In one example, 100'000 
segments are loaded in memory, and 5 GB are used for Strings in that map.

We need a way to configure the amount of memory used for that. This seems to be 
basically a cache. But instead of each segment having its own cache (as done in 
OAK-2688), it would be better to have one cache with a configurable size limit.


 SegmentStore cache does not take string map into account
 --

 Key: OAK-3007
 URL: https://issues.apache.org/jira/browse/OAK-3007
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Thomas Mueller
 Fix For: 1.3.1


 The SegmentStore cache size calculation ignores the size of the field 
 Segment.string (a concurrent hash map). It looks like a regular segment in a 
 memory mapped file has the size 1024, no matter how many strings are loaded 
 in memory. This can lead to out of memory. There seems to be no way to limit 
 (configure) the amount of memory used by strings. In one example, 100'000 
 segments are loaded in memory, and 5 GB are used for Strings in that map.
 We need a way to configure the amount of memory used for that. This seems to 
 be basically a cache. OAK-2688 does this, but it would be better to have one 
 cache with a configurable size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2655) Test failure: OrderableNodesTest.testAddNode

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2655:

Fix Version/s: (was: 1.3.1)

 Test failure: OrderableNodesTest.testAddNode
 

 Key: OAK-2655
 URL: https://issues.apache.org/jira/browse/OAK-2655
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
Affects Versions: 1.0.12, 1.2
 Environment: Jenkins, Ubuntu: 
 https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/
Reporter: Michael Dürig
Assignee: Julian Reschke
  Labels: CI, Jenkins
 Attachments: 2655.diff


 {{org.apache.jackrabbit.oak.jcr.OrderableNodesTest.testAddNode}} fails on 
 Jenkins when running the {{DOCUMENT_RDB}} fixture. 
 Failure seen at builds: 30, 44, 57, 58, 69, 78, 115, 121, 130, 132, 142, 152
 https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/30/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.jcr/OrderableNodesTest/testAddNode_0_/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1904) testClockDrift seems to occasionally fail on Windows

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-1904:

Fix Version/s: (was: 1.3.1)

 testClockDrift seems to occasionally fail on Windows
 

 Key: OAK-1904
 URL: https://issues.apache.org/jira/browse/OAK-1904
 Project: Jackrabbit Oak
  Issue Type: Bug
Reporter: Davide Giannella
Assignee: Julian Reschke
  Labels: Windows

 On Windows the {{testClockDrift()}} test seems to be constantly failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3002) Optimize docCache and docChildrenCache invalidation by filtering using journal

2015-06-18 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-3002:
-
Fix Version/s: (was: 1.3.1)
   1.3.2

 Optimize docCache and docChildrenCache invalidation by filtering using journal
 --

 Key: OAK-3002
 URL: https://issues.apache.org/jira/browse/OAK-3002
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core, mongomk
Reporter: Stefan Egli
Assignee: Stefan Egli
  Labels: scalability
 Fix For: 1.2.3, 1.3.2

 Attachments: JournalLoadTest.java, 
 OAK-3002-improved-doc-and-docChildren-cache-invaliation.3.patch, 
 OAK-3002-improved-doc-cache-invaliation.2.patch


 This subtask is about spawning out a 
 [comment|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14588114&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14588114]
  on OAK-2829 re optimizing docCache invalidation using the newly introduced 
 external diff journal:
 {quote}
 Attached OAK-2829-improved-doc-cache-invaliation.patch which is a suggestion 
 on how to avoid invalidating the entire document cache when doing a 
 {{backgroundRead}} but instead making use of the new journal: ie only 
 invalidate from the document cache what has actually changed.
 I'd like to get an opinion ([~mreutegg], [~chetanm]?) on this first. I have a 
 load test pending locally which found invalidation of the document cache to 
 be the slowest part, thus I wanted to optimize this first.
 Open still/next:
  * also invalidate only necessary parts from the docChildrenCache
  * junits for all of these
 {quote}
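
The core of the patch's idea, dropping only what the journal says has changed instead of clearing the whole document cache, can be sketched like this (the plain map and path set are simplified stand-ins for Oak's document cache and the JournalEntry diff):

```java
import java.util.Map;
import java.util.Set;

// Sketch: selective invalidation driven by the external-change journal.
// Instead of invalidating the entire cache on backgroundRead, remove only
// the paths reported as changed by other cluster nodes.
public class JournalFilteredInvalidation {

    public static int invalidateChanged(Map<String, Object> docCache,
                                        Set<String> changedPaths) {
        int removed = 0;
        for (String path : changedPaths) {
            if (docCache.remove(path) != null) {
                removed++;
            }
        }
        return removed;
    }
}
```

The win is proportional: work scales with the size of the external change set rather than with the size of the cache, which is what the pending load test measured as the slowest part.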



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2807) Improve getSize performance for public content

2015-06-18 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14591761#comment-14591761
 ] 

angela edited comment on OAK-2807 at 6/18/15 1:32 PM:
--

hi michael

{quote}
There is a very common special case: content (a subtree) that is readable by 
everyone (anonymous). 
{quote}

unfortunately it only _looks_ readable to everyone, because the access checks 
will filter out all 'meta' data that is not readable. it's only the 'regular' 
content that is readable. special things like e.g. the policy that opens up 
read-access for everyone are not world-readable, nor are other special data 
like e.g. analytics data.

{quote}
If we mark an index on that subtree as readable by everyone on index creation 
then we could skip ACL check on the result set or precompute/cache certain 
query results.
{quote}

the problem is that you don't know if it is really readable by everyone for the 
following reasons:
- any policy _above_ your target tree denying access for a user-principal will 
take precedence
- any policy _below_ (e.g. CUG) that locks down access again will make your 
test for 'readable by everyone' become wrong
- with OAK-1268 and having the latter covered by dedicated policies, looking at 
a given implementation of {{AccessControlPolicy}} and {{AccessControlEntry}} 
will no longer reflect the complete picture
- with OAK-2008 the permission evaluation will become a multiplexing one and 
there will be _no_ simple shortcut to determine 
'readable-for-everyone-for-the-whole-subtree' any more (if it ever was 
possible).
- {{TreePermission.canReadAll()}} is already intended to provide exactly the 
short-cut you are proposing, and the reason why it only returns {{true}} for 
administrative access is that i didn't see a scalable way to predict this! 
in the multiplexing setup this would mean that _all_ authorization models 
plugged into the system must return {{true}} from 
{{TreePermission.canReadAll()}} (you wouldn't need to do that yourself as the 
multiplexer would hide the different implementations for you; just meant to 
illustrate that it might be tricky to ever get {{true}} for non-administrative 
sessions).
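The multiplexing constraint described in the last bullet can be sketched with a 
toy aggregate in plain Java (hypothetical names; Oak's real {{TreePermission}} 
API differs): a composite may only answer 'read all' when every plugged-in 
model does.

```java
import java.util.List;

public class CanReadAllSketch {
    // Hypothetical stand-in for an authorization model's tree permission;
    // only the one method relevant to this discussion is modelled.
    interface Model {
        boolean canReadAll();
    }

    // In a multiplexed setup the composite may only report true
    // if *every* plugged-in model reports true.
    static boolean compositeCanReadAll(List<Model> models) {
        return models.stream().allMatch(Model::canReadAll);
    }

    public static void main(String[] args) {
        Model permissive = () -> true;
        Model restrictive = () -> false;
        System.out.println(compositeCanReadAll(List.of(permissive)));              // true
        System.out.println(compositeCanReadAll(List.of(permissive, restrictive))); // false
    }
}
```

A single restrictive model is enough to lose the shortcut for the whole 
composite, which is why predicting {{true}} for non-administrative sessions is 
hard.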

{quote}
In order to avoid information leakage the index would have to be marked 
invalid as soon as one node in that sub-tree is not readable by everyone 
anymore. (could be checked through a commit hook)
{quote}

if that was _really_ feasible... why not... but so far i don't see how this 
would work reliably for _every_ combination of principals, policies and special 
node types (like e.g. access control content). 
as you can see in {{TreePermission}} I added this shortcut because I thought 
that it might be doable but so far I didn't find a solution for this except 
for the trivial case.

{quote}
Maybe this concept could even be generalized later to work with other 
principals than everyone.
{quote}

the problem is not _everyone_ versus some other principals... if we have a 
concept that _really_ works, it doesn't matter on whether it's everyone or some 
other principal.

btw: the same suggestion was made by david ages ago, because he made the same 
assumption that this is easy to achieve. but unfortunately it's not, if you 
look at it from a repository point of view and not from a demo-hack point of 
view.

having said that: it's definitely worth some additional investigations but 
please be assured that it's not as simple as it might look. i would suggest 
that you use {{TreePermission.canReadAll()}} for any kind of optimizations, 
where one can easily verify any possible approach instead of being misled by 
simplistic assumptions. :-)

kind regards
angela


was (Author: anchela):
hi michael

{quote}
There is a very common special case: content (a subtree) that is readable by 
everyone (anonymous). 
{quote}

unfortunately it only looks like being readable to everyone because the 
access-checks will filter out all 'meta' data that is not readable. it's only 
the 'regular' content that is readable. special things like e.g. the policy 
that opens up the read-access for everyone is not world-readable, nor are other 
special data like e.g. analytics data.

{quote}
If we mark an index on that subtree as readable by everyone on index creation 
then we could skip ACL check on the result set or precompute/cache certain 
query results.
{quote}

the problem is that you don't know if it is really readable by everyone for the 
following reasons:
- any policy _above_ your target tree denying access for a  user-principal will 
take precedence
- any policy _below_ (e.g. CUG) that looks down access again will make your 
test for 'readable by everyone' become wrong
- with OAK-1268 and having the latter covered by dedicated policies looking a 
given implementation of {{AccessControlPolicy}} and {{AccessControlEntry}} will 
no longer reflect the complete 

[jira] [Updated] (OAK-2993) RDBDocumentStore on MySQL: default transaction isolation level

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2993:

Fix Version/s: (was: 1.3.1)

 RDBDocumentStore on MySQL: default transaction isolation level
 --

 Key: OAK-2993
 URL: https://issues.apache.org/jira/browse/OAK-2993
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.3.1, 1.0.15
Reporter: Julian Reschke
Assignee: Julian Reschke

 It appears the default transaction isolation of MySQL is 4 (repeatable read) 
 rather than 2 (read committed).
 Consider allowing the implementation to override the default.
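For reference, the numeric levels quoted are the {{java.sql.Connection}} 
constants; a minimal sketch (plain JDBC, not RDBDocumentStore code, with a 
hypothetical call site) of overriding the default on a connection:

```java
import java.sql.Connection;
import java.sql.SQLException;

public class IsolationSketch {
    // The JDBC constants behind the numbers quoted above:
    // TRANSACTION_REPEATABLE_READ == 4, TRANSACTION_READ_COMMITTED == 2.

    // Hedged sketch of how a store could force read committed on a freshly
    // obtained connection (hypothetical helper, not actual Oak code).
    public static void applyReadCommitted(Connection c) throws SQLException {
        if (c.getTransactionIsolation() != Connection.TRANSACTION_READ_COMMITTED) {
            c.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        }
    }

    public static void main(String[] args) {
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ); // 4
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);  // 2
    }
}
```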



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3005) OSGI wrapper service for Jackrabbit CachingFDS

2015-06-18 Thread Shashank Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591837#comment-14591837
 ] 

Shashank Gupta edited comment on OAK-3005 at 6/18/15 2:12 PM:
--

h2. How to configure CachingFDS
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
* sample OSGI config attached. 

h3. Fresh Installation:
* fsBackendPath: path of NAS or SAN storage (required)
* path: path of local file system cache (optional). Default is 
$\{home.dir\}/repository/datastore. 

h3. Migration from FileDataStore on SAN to CachingFDS on SAN
* fsBackendPath: path of NAS or SAN storage. Set this to the value of the path 
property of the FileDataStore config. 
* path: path of local file system cache. Default is 
$\{home.dir\}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
* fsBackendPath: the SAN path
* path: same value as FileDataStore#path. Leave it unset if not set in the 
FileDataStore config file. 
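Putting the two documented properties together, a minimal config sketch for the 
PID above (the paths are placeholders; the authoritative example is the sample 
config attached to this issue):

{noformat}
# org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
fsBackendPath="/mnt/san/datastore"
path="/opt/oak/repository/datastore"
{noformat}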










was (Author: shgupta):
h2. How to configure CachingFDS
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
* sample OSGI config attached. 

h3. Fresh Installation:
* fsBackendPath path of NAS or SAN storage (required)
* path path of local file system cache( optional). Default is 
${home.dir}/repository/datastore. 

h3. Fresh Installation:
h3. Migration from FileDataStore on SAN to CachingFDS on SAN
# fsBackendPath path of NAS or SAN storage. Set this value as path property for 
FileDataStore config. 
# path path of local file system cache. Default is set at  
${home.dir}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
# fsBackendPath specify the SAN path
# path same value as that of FileDataStore#path. Leave it if not set in 
FileDataStore config file. 









 OSGI wrapper service for Jackrabbit CachingFDS
 --

 Key: OAK-3005
 URL: https://issues.apache.org/jira/browse/OAK-3005
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob
Affects Versions: 1.0.15
Reporter: Shashank Gupta
Assignee: Shashank Gupta
  Labels: features, performance
 Fix For: 1.0.16

 Attachments: OAK-2729.patch


 OSGI service wrapper for JCR-3869 which provides CachingDataStore 
 capabilities for SAN & NAS storage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3007) SegmentStore cache does not take string map into account

2015-06-18 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-3007:

Attachment: OAK-3007.patch

OAK-3007.patch (potential patch, not fully tested yet)

 SegmentStore cache does not take string map into account
 --

 Key: OAK-3007
 URL: https://issues.apache.org/jira/browse/OAK-3007
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Thomas Mueller
 Fix For: 1.3.1

 Attachments: OAK-3007.patch


 The SegmentStore cache size calculation ignores the size of the field 
 Segment.string (a concurrent hash map). It looks like a regular segment in a 
 memory mapped file has the size 1024, no matter how many strings are loaded 
 in memory. This can lead to out-of-memory errors. There seems to be no way to 
 limit (configure) the amount of memory used by strings. In one example, 
 100'000 segments are loaded in memory, and 5 GB are used for strings in that 
 map.
 We need a way to configure the amount of memory used for that. This seems to 
 be basically a cache. OAK-2688 does this, but it would be better to have one 
 cache with a configurable size limit.
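The kind of cache asked for here can be sketched in plain Java as an LRU map 
bounded by total string weight rather than entry count (a toy stand-in only; 
Oak's actual SegmentStore classes and sizes differ):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy LRU cache bounded by total string weight (characters), not entry count.
// Illustrates the "one cache with a configurable size limit" idea; it is not
// Oak's actual SegmentStore cache.
public class WeightedStringCache {
    private final long maxWeight;
    private long weight = 0;
    private final LinkedHashMap<String, String> map =
            new LinkedHashMap<>(16, 0.75f, true); // access-order for LRU

    public WeightedStringCache(long maxWeight) {
        this.maxWeight = maxWeight;
    }

    public synchronized void put(String key, String value) {
        String old = map.put(key, value);
        if (old != null) {
            weight -= old.length();
        }
        weight += value.length();
        // Evict least-recently-used entries until within the weight budget.
        Iterator<Map.Entry<String, String>> it = map.entrySet().iterator();
        while (weight > maxWeight && it.hasNext()) {
            weight -= it.next().getValue().length();
            it.remove();
        }
    }

    public synchronized String get(String key) {
        return map.get(key);
    }

    public synchronized long weight() {
        return weight;
    }
}
```

With such a weigher the memory used for strings stays bounded regardless of how 
many segments are loaded.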



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2445) Password History Support

2015-06-18 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2445:

Fix Version/s: 1.3.1

 Password History Support
 

 Key: OAK-2445
 URL: https://issues.apache.org/jira/browse/OAK-2445
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Affects Versions: 1.1.5
Reporter: Dominique Jäggi
Assignee: Dominique Jäggi
 Fix For: 1.3.1

 Attachments: OAK-2445_-_Password_History_Support.patch


 This is a patch that enhances oak security to support user password 
 history.
 details of the feature as per doc:
 {noformat}
 Password History
 
 ### General
 Oak provides functionality to remember a configurable number of
 passwords after password changes and to prevent a password to
 be set during changing a user's password if found in said history.
 ### Configuration
 An administrator may enable password history via the
 _org.apache.jackrabbit.oak.security.user.UserConfigurationImpl_
 OSGi configuration. By default the history is disabled.
 The following configuration option is supported:
 - Maximum Password History Size (_passwordHistorySize_, number of passwords): 
 When greater than 0, enables password
   history and sets the feature to remember the specified number of passwords 
 for a user.
 ### How it works
 #### Recording of Passwords
 If the feature is enabled, when a user changes her password, the old 
 password
 hash is recorded in the password history.
 The old password hash is only recorded if a password was set (non-empty).
 Therefore setting a password for a user for the first time does not result
 in a history record, as there is no old password.
 The old password hash is copied to the password history *after* the provided 
 new
 password has been validated (e.g. via the _PasswordChangeAction_) but 
 *before*
 the new password hash is written to the user's _rep:password_ property.
 The history operates as a FIFO list. A new password history record exceeding 
 the
 configured max history size results in the oldest recorded password 
 being
 removed from the history.
 Also, if the configuration parameter for the history
 size is changed to a non-zero but smaller value than before, upon the next
 password change the oldest records exceeding the new history size are removed.
 History password hashes are recorded as child nodes of a user's 
 _rep:pwd/rep:pwdHistory_
 node. The _rep:pwdHistory_ node is governed by the _rep:PasswordHistory_ node 
 type,
 its children by the _rep:PasswordHistoryEntry_ node type, allowing setting of 
 the
 _rep:password_ property and its record date via _jcr:created_.
 # Node Type rep:PasswordHistory and rep:PasswordHistoryEntry
 [rep:PasswordHistory]
 + * (rep:PasswordHistoryEntry) = rep:PasswordHistoryEntry protected
 [rep:PasswordHistoryEntry] > nt:hierarchyNode
 - rep:password (STRING) protected
 # Nodes rep:pwd and rep:pwdHistory
 [rep:Password]
 + rep:pwdHistory (rep:PasswordHistory) = rep:PasswordHistory protected
 ...
 [rep:User] > rep:Authorizable, rep:Impersonatable
 + rep:pwd (rep:Password) = rep:Password protected
 ...
 
 The _rep:pwdHistory_ and history entry nodes and the _rep:password_ property
 are defined protected in order to guard against the user modifying 
 (overcoming) her
 password history limitations. The new sub-nodes also have the advantage of 
 allowing
 repository consumers to e.g. register specific commit hooks / actions on such 
 a node.
 #### Evaluation of Password History
 Upon a user changing her password, and if the password history feature is 
 enabled
 (configured password history size > 0), the _PasswordChangeAction_ checks 
 whether
 any of the password hashes recorded in the history matches the new password.
 If any record is a match, a _ConstraintViolationException_ is thrown and the
 user's password is *NOT* changed.
 #### Oak JCR XML Import
 When users are imported via the Oak JCR XML importer, password history is
 currently not supported (ignored).
 {noformat}
 [~anchela], if you could kindly review the patch.
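The FIFO recording and history check described above can be sketched as a toy 
model in plain Java (hypothetical names; the actual patch stores hashed entries 
under _rep:pwd/rep:pwdHistory_ and throws a _ConstraintViolationException_ on a 
match):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the documented behaviour: remember up to maxSize previous
// password hashes, reject a new password whose hash is already in the history.
public class PasswordHistorySketch {
    private final int maxSize;
    private final Deque<String> history = new ArrayDeque<>();

    public PasswordHistorySketch(int maxSize) {
        this.maxSize = maxSize;
    }

    /** Returns false (recording nothing) if the new hash is in the history. */
    public boolean changePassword(String oldHash, String newHash) {
        if (history.contains(newHash)) {
            return false; // JCR would throw ConstraintViolationException here
        }
        // Only record a previously set (non-empty) password, per the doc.
        if (oldHash != null && !oldHash.isEmpty()) {
            history.addLast(oldHash);
            while (history.size() > maxSize) {
                history.removeFirst(); // FIFO: drop the oldest record
            }
        }
        return true;
    }
}
```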



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3005) OSGI wrapper service for Jackrabbit CachingFDS

2015-06-18 Thread Shashank Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashank Gupta updated OAK-3005:

Attachment: 
org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS.sample.config

 OSGI wrapper service for Jackrabbit CachingFDS
 --

 Key: OAK-3005
 URL: https://issues.apache.org/jira/browse/OAK-3005
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob
Affects Versions: 1.0.15
Reporter: Shashank Gupta
Assignee: Shashank Gupta
  Labels: features, performance
 Fix For: 1.0.16

 Attachments: OAK-2729.patch, 
 org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS.sample.config


 OSGI service wrapper for JCR-3869 which provides CachingDataStore 
 capabilities for SAN & NAS storage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2445) Password History Support

2015-06-18 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591925#comment-14591925
 ] 

angela commented on OAK-2445:
-

[~djaeggi], the patch looks great... as always a pleasure to review. there is 
just one minor thing i am not fully convinced of: why do you think it's 
necessary to store the password history entries in a separate node each? 
wouldn't it be sufficient to have a single string property for that (either 
single or multivalued)? i started wondering because i spotted that 
{{rep:PasswordHistoryEntry}} extends from {{nt:hierarchyNode}} for the sake of 
getting a {{jcr:created}} (but you additionally get {{jcr:createdBy}}). is it 
by intention that you also want to record _who_ did the change?
without having thought about it very carefully (in contrast to you), i would 
have expected this to be just stored in a single property as i wouldn't have 
expected to have thousands of entries in the pw-history. you could argue that 
updating that property would fail if there was a concurrent pw-change but on 
the other hand you may also end up with multiple nodes that have the exact same 
{{jcr:createdBy}} property, which somehow would also leave you with a conflict, 
wouldn't it? so maybe having those concurrent modifications immediately causing 
a collision might even be the expected behavior, wdyt? at least it would be 
less verbose, easier to implement and probably a bit faster to parse than 
creating a tree structure for that.

 Password History Support
 

 Key: OAK-2445
 URL: https://issues.apache.org/jira/browse/OAK-2445
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Affects Versions: 1.1.5
Reporter: Dominique Jäggi
Assignee: Dominique Jäggi
 Attachments: OAK-2445_-_Password_History_Support.patch


 This is a patch that enhances oak security to support user password 
 history.
 details of the feature as per doc:
 {noformat}
 Password History
 
 ### General
 Oak provides functionality to remember a configurable number of
 passwords after password changes and to prevent a password to
 be set during changing a user's password if found in said history.
 ### Configuration
 An administrator may enable password history via the
 _org.apache.jackrabbit.oak.security.user.UserConfigurationImpl_
 OSGi configuration. By default the history is disabled.
 The following configuration option is supported:
 - Maximum Password History Size (_passwordHistorySize_, number of passwords): 
 When greater than 0, enables password
   history and sets the feature to remember the specified number of passwords 
 for a user.
 ### How it works
  Recording of Passwords
 If the feature is enabled, during a user changing her password, the old 
 password
 hash is recorded in the password history.
 The old password hash is only recorded if a password was set (non-empty).
 Therefore setting a password for a user for the first time does not result
 in a history record, as there is no old password.
 The old password hash is copied to the password history *after* the provided 
 new
 password has been validated (e.g. via the _PasswordChangeAction_) but 
 *before*
 the new password hash is written to the user's _rep:password_ property.
 The history operates as a FIFO list. A new password history record exceeding 
 the
 configured max history size results in the oldest recorded password 
 being
 removed from the history.
 Also, if the configuration parameter for the history
 size is changed to a non-zero but smaller value than before, upon the next
 password change the oldest records exceeding the new history size are removed.
 History password hashes are recorded as child nodes of a user's 
 _rep:pwd/rep:pwdHistory_
 node. The _rep:pwdHistory_ node is governed by the _rep:PasswordHistory_ node 
 type,
 its children by the _rep:PasswordHistoryEntry_ node type, allowing setting of 
 the
 _rep:password_ property and its record date via _jcr:created_.
 # Node Type rep:PasswordHistory and rep:PasswordHistoryEntry
 [rep:PasswordHistory]
 + * (rep:PasswordHistoryEntry) = rep:PasswordHistoryEntry protected
 [rep:PasswordHistoryEntry] > nt:hierarchyNode
 - rep:password (STRING) protected
 # Nodes rep:pwd and rep:pwdHistory
 [rep:Password]
 + rep:pwdHistory (rep:PasswordHistory) = rep:PasswordHistory protected
 ...
 [rep:User] > rep:Authorizable, rep:Impersonatable
 + rep:pwd (rep:Password) = rep:Password protected
 ...
 
 The _rep:pwdHistory_ and history entry nodes and the _rep:password_ property
 are defined protected in order to guard against the user modifying 
 (overcoming) her
 password history limitations. The new sub-nodes also 

[jira] [Updated] (OAK-3008) Training material for Oak security

2015-06-18 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-3008:

Fix Version/s: 1.4

 Training material for Oak security
 --

 Key: OAK-3008
 URL: https://issues.apache.org/jira/browse/OAK-3008
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: angela
Assignee: angela
 Fix For: 1.4


 in order to help my colleagues (and anybody else interested) to become 
 familiar with oak security modules, it would be useful to not only have 
 documentation but also real examples where developers can play with the oak 
 security code and become familiar both with the JCR/Jackrabbit API and the 
 implementation details.
 the structure of those exercises should be such that it can be extended for 
 any other area in oak... for the time being only security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-3008) Training material for Oak security

2015-06-18 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela reassigned OAK-3008:
---

Assignee: angela

 Training material for Oak security
 --

 Key: OAK-3008
 URL: https://issues.apache.org/jira/browse/OAK-3008
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: angela
Assignee: angela
 Fix For: 1.4


 in order to help my colleagues (and anybody else interested) to become 
 familiar with oak security modules, it would be useful to not only have 
 documentation but also real examples where developers can play with the oak 
 security code and become familiar both with the JCR/Jackrabbit API and the 
 implementation details.
 the structure of those exercises should be such that it can be extended for 
 any other area in oak... for the time being only security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2445) Password History Support

2015-06-18 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591925#comment-14591925
 ] 

angela edited comment on OAK-2445 at 6/18/15 2:55 PM:
--

[~djaeggi], the patch looks great... as always a pleasure to review. there is 
just one minor thing i am not fully convinced of: why do you think it's 
necessary to store the password history entries in a separate node each? 
wouldn't it be sufficient to have a single string property for that (either 
single or multivalued)? i started wondering because i spotted that 
{{rep:PasswordHistoryEntry}} extends from {{nt:hierarchyNode}} for the sake of 
getting a {{jcr:created}} (but you additionally get {{jcr:createdBy}}). is it 
by intention that you also want to record _who_ did the change?
without having thought about it very carefully (in contrast to you), i would 
have expected this to be just stored in a single property as i wouldn't have 
expected to have thousands of entries in the pw-history. you could argue that 
updating that property would fail if there was a concurrent pw-change but on 
the other hand you may also end up with multiple nodes that have the exact same 
{{jcr:createdBy}} property, which somehow would also leave you with a conflict, 
wouldn't it? so maybe having those concurrent modifications immediately causing 
a collision might even be the expected behavior, wdyt? at least it would be 
less verbose, easier to implement and probably a bit faster to parse than 
creating and maintaining a tree structure for that.

and btw: sorry for the delay... it dropped off my TODO list.


was (Author: anchela):
[~djaeggi], the patch looks great... as always a pleasure to review. there is 
just one minor thing i am not fully convinced of: why do you think it's 
necessary to store the password history entries in a separate node each? 
wouldn't it be sufficient to have a single string property for that (either 
single or multivalued? i started wondering because i spotted that 
{{rep:PasswordHistoryEntry}} extends from {{nt:hierarchyNode}} (for the sake of 
getting a {{jcr:created}} (but you additionally get {{jcr:createdBy}}). is it 
by intention that you also want to record _who_ did the change?
without having thought about it very carefully (in contrast to you), i would 
have expected this to be just store in a single property as i wouldn't have 
expected to have thousands of entries in the pw-history. you could argue that 
updating that property would fail if there was a concurrent pw-change but on 
the other hand you may also end up with multiple nodes that have the exact same 
{{jcr:createdBy}} property, which somehow would also leave you with a conflict, 
wouldn't it? so maybe having those concurrent modifications immediately causing 
a collision might even be the expected behavior wdyt? at least it was less 
verbose, easier to implement and probably a bit faster to parse than creating a 
tree structure of that.

and btw: sorry for the delay... it drop off my TODO list. sorry for that.

 Password History Support
 

 Key: OAK-2445
 URL: https://issues.apache.org/jira/browse/OAK-2445
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Affects Versions: 1.1.5
Reporter: Dominique Jäggi
Assignee: Dominique Jäggi
 Fix For: 1.3.1

 Attachments: OAK-2445_-_Password_History_Support.patch


 This is a patch that enhances oak security to support user password 
 history.
 details of the feature as per doc:
 {noformat}
 Password History
 
 ### General
 Oak provides functionality to remember a configurable number of
 passwords after password changes and to prevent a password to
 be set during changing a user's password if found in said history.
 ### Configuration
 An administrator may enable password history via the
 _org.apache.jackrabbit.oak.security.user.UserConfigurationImpl_
 OSGi configuration. By default the history is disabled.
 The following configuration option is supported:
 - Maximum Password History Size (_passwordHistorySize_, number of passwords): 
 When greater than 0, enables password
   history and sets the feature to remember the specified number of passwords 
 for a user.
 ### How it works
  Recording of Passwords
 If the feature is enabled, during a user changing her password, the old 
 password
 hash is recorded in the password history.
 The old password hash is only recorded if a password was set (non-empty).
 Therefore setting a password for a user for the first time does not result
 in a history record, as there is no old password.
 The old password hash is copied to the password history *after* the provided 
 new
 password has been validated (e.g. via the 

[jira] [Comment Edited] (OAK-3005) OSGI wrapper service for Jackrabbit CachingFDS

2015-06-18 Thread Shashank Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591837#comment-14591837
 ] 

Shashank Gupta edited comment on OAK-3005 at 6/18/15 2:09 PM:
--

h2. How to configure CachingFDS
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
* sample osgi config attached. 

h3. Fresh Installation:
# fsBackendPath: path of NAS or SAN storage (required)
# path: path of local file system cache (optional; default is 
${home.dir}/repository/datastore, e.g. 
crx-quickstart/repository/repository/datastore)

h3. Migration from FileDataStore on SAN to CachingFDS on SAN
# fsBackendPath path of NAS or SAN storage. Set this value as path property for 
FileDataStore config. 
# path path of local file system cache. Default is set at  
${home.dir}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
# fsBackendPath specify the SAN path
# path same value as that of FileDataStore#path. Leave it if not set in 
FileDataStore config file. 










was (Author: shgupta):
h2. How to configure CachingFDS
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS

h3. Fresh Installation:
# fsBackendPath path of NAS or SAN storage (required)
# path path of local file system cache( optional default is 
${home.dir}/repository/datastore. for e.g 
crx-quickstart/repository/repository/datastore)

h3. Migration from FileDataStore on SAN to CachingFDS on SAN
# fsBackendPath path of NAS or SAN storage. Set this value as path property for 
FileDataStore config. 
# path path of local file system cache. Default is set at  
${home.dir}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
# fsBackendPath specify the SAN path
# path same value as that of FileDataStore#path. Leave it if not set in 
FileDataStore config file. 









 OSGI wrapper service for Jackrabbit CachingFDS
 --

 Key: OAK-3005
 URL: https://issues.apache.org/jira/browse/OAK-3005
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob
Affects Versions: 1.0.15
Reporter: Shashank Gupta
Assignee: Shashank Gupta
  Labels: features, performance
 Fix For: 1.0.16

 Attachments: OAK-2729.patch


 OSGI service wrapper for JCR-3869 which provides CachingDataStore 
 capabilities for SAN & NAS storage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3008) Training material for Oak security

2015-06-18 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591884#comment-14591884
 ] 

angela commented on OAK-3008:
-

first bunch of exercises committed revision 1686235. more coverage for special 
topics will follow.


 Training material for Oak security
 --

 Key: OAK-3008
 URL: https://issues.apache.org/jira/browse/OAK-3008
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: angela
Assignee: angela
 Fix For: 1.4


 in order to help my colleagues (and anybody else interested) to become 
 familiar with oak security modules, it would be useful to not only have 
 documentation but also real examples where developers can play with the oak 
 security code and become familiar both with the JCR/Jackrabbit API and the 
 implementation details.
 the structure of those exercises should be such that it can be extended for 
 any other area in oak... for the time being only security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2445) Password History Support

2015-06-18 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591925#comment-14591925
 ] 

angela edited comment on OAK-2445 at 6/18/15 2:54 PM:
--

[~djaeggi], the patch looks great... as always a pleasure to review. there is 
just one minor thing i am not fully convinced of: why do you think it's 
necessary to store the password history entries in a separate node each? 
wouldn't it be sufficient to have a single string property for that (either 
single or multivalued)? i started wondering because i spotted that 
{{rep:PasswordHistoryEntry}} extends from {{nt:hierarchyNode}} for the sake of 
getting a {{jcr:created}} (but you additionally get {{jcr:createdBy}}). is it 
by intention that you also want to record _who_ did the change?
without having thought about it very carefully (in contrast to you), i would 
have expected this to be just stored in a single property as i wouldn't have 
expected to have thousands of entries in the pw-history. you could argue that 
updating that property would fail if there was a concurrent pw-change but on 
the other hand you may also end up with multiple nodes that have the exact same 
{{jcr:createdBy}} property, which somehow would also leave you with a conflict, 
wouldn't it? so maybe having those concurrent modifications immediately causing 
a collision might even be the expected behavior, wdyt? at least it would be 
less verbose, easier to implement and probably a bit faster to parse than 
creating a tree structure for that.

and btw: sorry for the delay... it dropped off my TODO list.


was (Author: anchela):
[~djaeggi], the patch looks great... as always a pleasure to review. there is 
just one minor thing i am not fully convinced of: why do you think it's 
necessary to store the password history entries in a separate node each? 
wouldn't it be sufficient to have a single string property for that (either 
single or multivalued? i started wondering because i spotted that 
{{rep:PasswordHistoryEntry}} extends from {{nt:hierarchyNode}} (for the sake of 
getting a {{jcr:created}} (but you additionally get {{jcr:createdBy}}). is it 
by intention that you also want to record _who_ did the change?
without having thought about it very carefully (in contrast to you), i would 
have expected this to be just store in a single property as i wouldn't have 
expected to have thousands of entries in the pw-history. you could argue that 
updating that property would fail if there was a concurrent pw-change but on 
the other hand you may also end up with multiple nodes that have the exact same 
{{jcr:createdBy}} property, which somehow would also leave you with a conflict, 
wouldn't it? so maybe having those concurrent modifications immediately causing 
a collision might even be the expected behavior wdyt? at least it was less 
verbose, easier to implement and probably a bit faster to parse than creating a 
tree structure of that.

 Password History Support
 

 Key: OAK-2445
 URL: https://issues.apache.org/jira/browse/OAK-2445
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Affects Versions: 1.1.5
Reporter: Dominique Jäggi
Assignee: Dominique Jäggi
 Attachments: OAK-2445_-_Password_History_Support.patch


 This is a patch that enhances oak security to support user password 
 history.
 details of the feature as per doc:
 {noformat}
 Password History
 
 ### General
 Oak provides functionality to remember a configurable number of
 passwords after password changes and to prevent a password from
 being set during a password change if it is found in said history.
 ### Configuration
 An administrator may enable password history via the
 _org.apache.jackrabbit.oak.security.user.UserConfigurationImpl_
 OSGi configuration. By default the history is disabled.
 The following configuration option is supported:
 - Maximum Password History Size (_passwordHistorySize_, number of passwords): 
 When greater than 0, enables the password
   history and remembers the specified number of passwords for
 a user.
 ### How it works
  Recording of Passwords
 If the feature is enabled, when a user changes her password, the old 
 password
 hash is recorded in the password history.
 The old password hash is only recorded if a password was set (non-empty).
 Therefore setting a password for a user for the first time does not result
 in a history record, as there is no old password.
 The old password hash is copied to the password history *after* the provided 
 new
 password has been validated (e.g. via the _PasswordChangeAction_) but 
 *before*
 the new password hash is written to the user's _rep:password_ property.
 The history operates 
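The recording behavior described above (record the old hash only when non-empty, keep at most _passwordHistorySize_ entries) can be sketched roughly as follows. This is an illustrative sketch only; the class and method names are hypothetical, not Oak's actual implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch (not Oak's actual code): a bounded password
// history that records old hashes and rejects reused passwords.
public class PasswordHistorySketch {

    private final int maxSize;
    private final Deque<String> history = new ArrayDeque<>();

    public PasswordHistorySketch(int maxSize) {
        this.maxSize = maxSize;
    }

    // Returns false if the candidate hash is found in the history.
    public boolean isAllowed(String candidateHash) {
        return !history.contains(candidateHash);
    }

    // Record the old hash *before* the new one is written; evict the
    // oldest entry once the configured size is exceeded.
    public void record(String oldHash) {
        if (oldHash == null || oldHash.isEmpty()) {
            return; // first password set: there is no old password to record
        }
        history.addFirst(oldHash);
        while (history.size() > maxSize) {
            history.removeLast();
        }
    }
}
```

A real implementation would compare salted hashes via the repository's password utilities rather than by plain string equality.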

[jira] [Comment Edited] (OAK-3005) OSGI wrapper service for Jackrabbit CachingFDS

2015-06-18 Thread Shashank Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591837#comment-14591837
 ] 

Shashank Gupta edited comment on OAK-3005 at 6/18/15 2:10 PM:
--

h2. How to configure CachingFDS
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
* sample OSGI config attached. 

h3. Fresh Installation:
# fsBackendPath - path of the NAS or SAN storage (required)
# path - path of the local file system cache (optional; default is 
${home.dir}/repository/datastore, e.g. 
crx-quickstart/repository/repository/datastore)


h3. Migration from FileDataStore on SAN to CachingFDS on SAN
# fsBackendPath - path of the NAS or SAN storage. Set this to the value of the 
path property from the FileDataStore config. 
# path - path of the local file system cache. Default is 
${home.dir}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
# fsBackendPath - specify the SAN path
# path - same value as FileDataStore#path. Leave it unset if it was not set in 
the FileDataStore config file. 
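The sample OSGi config referenced above is attached to the issue and not reproduced in this thread; purely as an illustration (the paths below are hypothetical placeholders), such a config might contain:

```properties
# Hypothetical illustration for PID
# org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
# (the real sample config is in the issue attachment)
fsBackendPath="/mnt/san/datastore"
path="crx-quickstart/repository/repository/datastore"
```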










was (Author: shgupta):
h2. How to configure CachingFDS
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
* sample osgi config attached. 

h3. Fresh Installation:
# fsBackendPath path of NAS or SAN storage (required)
# path path of local file system cache( optional default is 
${home.dir}/repository/datastore. for e.g 
crx-quickstart/repository/repository/datastore )

h3. Migration from FileDataStore on SAN to CachingFDS on SAN
# fsBackendPath path of NAS or SAN storage. Set this value as path property for 
FileDataStore config. 
# path path of local file system cache. Default is set at  
${home.dir}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
# fsBackendPath specify the SAN path
# path same value as that of FileDataStore#path. Leave it if not set in 
FileDataStore config file. 









 OSGI wrapper service for Jackrabbit CachingFDS
 --

 Key: OAK-3005
 URL: https://issues.apache.org/jira/browse/OAK-3005
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob
Affects Versions: 1.0.15
Reporter: Shashank Gupta
Assignee: Shashank Gupta
  Labels: features, performance
 Fix For: 1.0.16

 Attachments: OAK-2729.patch


 OSGI service wrapper for JCR-3869 which provides CachingDataStore 
 capabilities for SAN & NAS storage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2747) Admin cannot create versions on a locked page by itself

2015-06-18 Thread Robert Munteanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Munteanu updated OAK-2747:
-
Attachment: 0003-OAK-2747-Admin-cannot-create-versions-on-a-locked-pa.patch
0002-Fix-unchecked-conversion-warnings-in-LockManagerImpl.patch
0001-Fix-unchecked-conversion-warnings-in-VersionManagerI.patch

I've attached a patch series which fixes the reported issue - I've validated it 
by using a version of the test submitted by [~mpetria] and also by testing it 
in an application which uses locking and versioning.

The first two patches don't contain any fixes; they just add type parameters to 
some operations in VersionManagerImpl and LockManagerImpl which cluttered the 
screen with warnings ...

 Admin cannot create versions on a locked page by itself
 ---

 Key: OAK-2747
 URL: https://issues.apache.org/jira/browse/OAK-2747
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core, jcr
Affects Versions: 1.2.1
Reporter: Marius Petria
 Fix For: 1.4

 Attachments: 
 0001-Fix-unchecked-conversion-warnings-in-VersionManagerI.patch, 
 0002-Fix-unchecked-conversion-warnings-in-LockManagerImpl.patch, 
 0003-OAK-2747-Admin-cannot-create-versions-on-a-locked-pa.patch


 Admin cannot create versions even if it is the lockowner. 
 This is a test to go in VersionManagementTest that shows the issue. 
 The general questions are:
 - should the lockowner be able to create versions?
 - should admin be able to create versions even if it is not the lockowner?
 {code}
  @Test
 public void testCheckInCheckoutLocked() throws Exception {
 Node n = createVersionableNode(superuser.getNode(path));
 n.addMixin(mixLockable);
 superuser.save();
 n = superuser.getNode(n.getPath());
 n.lock(true, false);
 testSession.save();
 n = superuser.getNode(n.getPath());
 n.checkin();
 n.checkout();
 }
 {code}
 fails with 
 {noformat}
 javax.jcr.lock.LockException: Node at /testroot/node1/node1 is locked
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3005) OSGI wrapper service for Jackrabbit CachingFDS

2015-06-18 Thread Shashank Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591837#comment-14591837
 ] 

Shashank Gupta commented on OAK-3005:
-

h2. How to configure CachingFDS

h3. On new installation setup:
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
# fsBackendPath - path of the NAS or SAN storage (required)
# path - path of the local file system cache (optional; default is 
${home.dir}/repository/datastore, e.g. 
crx-quickstart/repository/repository/datastore)

h3. Migration from FileDataStore on SAN to CachingFDS on SAN
# fsBackendPath - path of the NAS or SAN storage. Set this to the value of the 
path property from the FileDataStore config. 
# path - path of the local file system cache. Default is 
${home.dir}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
# fsBackendPath - specify the SAN path
# path - same value as FileDataStore#path. Leave it unset if it was not set in 
the FileDataStore config file. 









 OSGI wrapper service for Jackrabbit CachingFDS
 --

 Key: OAK-3005
 URL: https://issues.apache.org/jira/browse/OAK-3005
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob
Affects Versions: 1.0.15
Reporter: Shashank Gupta
Assignee: Shashank Gupta
  Labels: features, performance
 Fix For: 1.0.16

 Attachments: OAK-2729.patch


 OSGI service wrapper for JCR-3869 which provides CachingDataStore 
 capabilities for SAN & NAS storage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3005) OSGI wrapper service for Jackrabbit CachingFDS

2015-06-18 Thread Shashank Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591837#comment-14591837
 ] 

Shashank Gupta edited comment on OAK-3005 at 6/18/15 2:08 PM:
--

h2. How to configure CachingFDS
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS

h3. Fresh Installation:
# fsBackendPath path of NAS or SAN storage (required)
# path path of local file system cache( optional default is 
${home.dir}/repository/datastore. for e.g 
crx-quickstart/repository/repository/datastore)

h3. Migration from FileDataStore on SAN to CachingFDS on SAN
# fsBackendPath path of NAS or SAN storage. Set this value as path property for 
FileDataStore config. 
# path path of local file system cache. Default is set at  
${home.dir}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
# fsBackendPath specify the SAN path
# path same value as that of FileDataStore#path. Leave it if not set in 
FileDataStore config file. 










was (Author: shgupta):
h2. How to configure CachingFDS

h3. On new installation setup:
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
# fsBackendPath path of NAS or SAN storage (required)
# path path of local file system cache( optional default is 
${home.dir}/repository/datastore. for e.g 
crx-quickstart/repository/repository/datastore)

h3. Migration from FileDataStore on SAN to CachingFDS on SAN
# fsBackendPath path of NAS or SAN storage. Set this value as path property for 
FileDataStore config. 
# path path of local file system cache. Default is set at  
${home.dir}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
# fsBackendPath specify the SAN path
# path same value as that of FileDataStore#path. Leave it if not set in 
FileDataStore config file. 









 OSGI wrapper service for Jackrabbit CachingFDS
 --

 Key: OAK-3005
 URL: https://issues.apache.org/jira/browse/OAK-3005
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob
Affects Versions: 1.0.15
Reporter: Shashank Gupta
Assignee: Shashank Gupta
  Labels: features, performance
 Fix For: 1.0.16

 Attachments: OAK-2729.patch


 OSGI service wrapper for JCR-3869 which provides CachingDataStore 
 capabilities for SAN & NAS storage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3005) OSGI wrapper service for Jackrabbit CachingFDS

2015-06-18 Thread Shashank Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591837#comment-14591837
 ] 

Shashank Gupta edited comment on OAK-3005 at 6/18/15 2:11 PM:
--

h2. How to configure CachingFDS
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
* sample OSGI config attached. 

h3. Fresh Installation:
* fsBackendPath - path of the NAS or SAN storage (required)
* path - path of the local file system cache (optional). Default is 
${home.dir}/repository/datastore. 

h3. Migration from FileDataStore on SAN to CachingFDS on SAN
# fsBackendPath path of NAS or SAN storage. Set this value as path property for 
FileDataStore config. 
# path path of local file system cache. Default is set at  
${home.dir}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
# fsBackendPath specify the SAN path
# path same value as that of FileDataStore#path. Leave it if not set in 
FileDataStore config file. 










was (Author: shgupta):
h2. How to configure CachingFDS
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
* sample OSGI config attached. 

h3. Fresh Installation:
* fsBackendPath path of NAS or SAN storage (required)
* path path of local file system cache( optional). Default is 
${home.dir}/repository/datastore. 

h3. Migration from FileDataStore on SAN to CachingFDS on SAN
# fsBackendPath path of NAS or SAN storage. Set this value as path property for 
FileDataStore config. 
# path path of local file system cache. Default is set at  
${home.dir}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
# fsBackendPath specify the SAN path
# path same value as that of FileDataStore#path. Leave it if not set in 
FileDataStore config file. 









 OSGI wrapper service for Jackrabbit CachingFDS
 --

 Key: OAK-3005
 URL: https://issues.apache.org/jira/browse/OAK-3005
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob
Affects Versions: 1.0.15
Reporter: Shashank Gupta
Assignee: Shashank Gupta
  Labels: features, performance
 Fix For: 1.0.16

 Attachments: OAK-2729.patch


 OSGI service wrapper for JCR-3869 which provides CachingDataStore 
 capabilities for SAN & NAS storage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3005) OSGI wrapper service for Jackrabbit CachingFDS

2015-06-18 Thread Shashank Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591837#comment-14591837
 ] 

Shashank Gupta edited comment on OAK-3005 at 6/18/15 2:11 PM:
--

h2. How to configure CachingFDS
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
* sample OSGI config attached. 

h3. Fresh Installation:
* fsBackendPath - path of the NAS or SAN storage (required)
* path - path of the local file system cache (optional). Default is 
${home.dir}/repository/datastore. 

h3. Migration from FileDataStore on SAN to CachingFDS on SAN
# fsBackendPath path of NAS or SAN storage. Set this value as path property for 
FileDataStore config. 
# path path of local file system cache. Default is set at  
${home.dir}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
# fsBackendPath specify the SAN path
# path same value as that of FileDataStore#path. Leave it if not set in 
FileDataStore config file. 










was (Author: shgupta):
h2. How to configure CachingFDS
* PID org.apache.jackrabbit.oak.plugins.blob.datastore.CachingFDS
* sample OSGI config attached. 

h3. Fresh Installation:
# fsBackendPath path of NAS or SAN storage (required)
# path path of local file system cache( optional default is 
${home.dir}/repository/datastore. for e.g 
crx-quickstart/repository/repository/datastore )


h3. Migration from FileDataStore on SAN to CachingFDS on SAN
# fsBackendPath path of NAS or SAN storage. Set this value as path property for 
FileDataStore config. 
# path path of local file system cache. Default is set at  
${home.dir}/repository/datastore

h3. Migration from FileDataStore on local file system to CachingFDS on SAN
# fsBackendPath specify the SAN path
# path same value as that of FileDataStore#path. Leave it if not set in 
FileDataStore config file. 









 OSGI wrapper service for Jackrabbit CachingFDS
 --

 Key: OAK-3005
 URL: https://issues.apache.org/jira/browse/OAK-3005
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob
Affects Versions: 1.0.15
Reporter: Shashank Gupta
Assignee: Shashank Gupta
  Labels: features, performance
 Fix For: 1.0.16

 Attachments: OAK-2729.patch


 OSGI service wrapper for JCR-3869 which provides CachingDataStore 
 capabilities for SAN & NAS storage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2956) NPE in UserImporter when importing group with modified authorizable id

2015-06-18 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592065#comment-14592065
 ] 

angela commented on OAK-2956:
-

we can prevent the NPE but the xml you are providing is illegal and will fail 
upon save with {{ConstraintViolationException}}. the changes you propose cannot 
be made without violating the contract of the XML import, which expects the xml 
to be _complete_. and btw: the principal name is *not* a duplicate. it has a 
totally different semantic meaning than the ID and in fact it is (or used to 
be) different from the ID for almost every single user in real-world 
installations that had LDAP integration.

 NPE in UserImporter when importing group with modified authorizable id
 --

 Key: OAK-2956
 URL: https://issues.apache.org/jira/browse/OAK-2956
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Affects Versions: 1.2.2
Reporter: Alexander Klimetschek
Priority: Minor

 When importing a file vault package with a group that exists before (same 
 uuid in the .content.xml) but modified authorizable id, this NPE happens:
 {noformat}
 Caused by: java.lang.NullPointerException: null
   at 
 org.apache.jackrabbit.oak.security.user.UserImporter.handlePropInfo(UserImporter.java:242)
   at 
 org.apache.jackrabbit.oak.jcr.xml.ImporterImpl.importProperties(ImporterImpl.java:287)
   at 
 org.apache.jackrabbit.oak.jcr.xml.ImporterImpl.startNode(ImporterImpl.java:470)
   at 
 org.apache.jackrabbit.oak.jcr.xml.SysViewImportHandler.processNode(SysViewImportHandler.java:80)
   at 
 org.apache.jackrabbit.oak.jcr.xml.SysViewImportHandler.startElement(SysViewImportHandler.java:117)
   at 
 org.apache.jackrabbit.oak.jcr.xml.ImportHandler.startElement(ImportHandler.java:183)
   at 
 org.apache.jackrabbit.vault.fs.impl.io.JcrSysViewTransformer.startNode(JcrSysViewTransformer.java:167)
   at 
 org.apache.jackrabbit.vault.fs.impl.io.DocViewSAXImporter.startElement(DocViewSAXImporter.java:598)
   ... 87 common frames omitted
 {noformat}
 It looks like [this 
 line|https://github.com/apache/jackrabbit-oak/blob/105f890e04ee990f0e71d88937955680670d96f7/oak-core/src/main/java/org/apache/jackrabbit/oak/security/user/UserImporter.java#L242],
  meaning {{userManager.getAuthorizable(id)}} returned null.
 While the use case might not be something to support (this happens during 
 development, changing group names/details until right), at least the NPE 
 should be guarded against and a better error message given.
 Looking closer, it seems like a difference between 
 {{UserManager.getAuthorizable(Tree)}} (called earlier in the method) (I 
 assume using UUID to look up the existing group successfully) and 
 {{UserManager.getAuthorizable(String)}} (using the rep:authorizableId to look 
 up the user, failing, because it changed).
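As the description suggests, the immediate hardening would be a null check with a descriptive message wherever {{userManager.getAuthorizable(id)}} can return null. A minimal sketch follows; the class and method names are hypothetical, not the actual UserImporter code.

```java
// Illustrative null guard, not the actual Oak UserImporter code:
// fail with a descriptive message instead of an NPE when the
// authorizable id cannot be resolved.
public class ImportGuardSketch {

    public static Object requireAuthorizable(Object authorizable, String id) {
        if (authorizable == null) {
            // Surface the failing id so the user can diagnose a
            // modified rep:authorizableId instead of hitting an NPE.
            throw new IllegalStateException(
                "No authorizable found for id '" + id
                + "'; rep:authorizableId may have been modified since export");
        }
        return authorizable;
    }
}
```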



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2986) RDB: switch to tomcat datasource implementation

2015-06-18 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592114#comment-14592114
 ] 

Julian Reschke commented on OAK-2986:
-

The tricky part is to exclude the juli dependency; sling's datasource extension 
does some creative things to redirect log output to slf4j...

 RDB: switch to tomcat datasource implementation 
 

 Key: OAK-2986
 URL: https://issues.apache.org/jira/browse/OAK-2986
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.0.15, 1.3
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.3.2

 Attachments: OAK-2986.diff


 See https://people.apache.org/~fhanik/jdbc-pool/jdbc-pool.html.
 In addition, this is the datasource used in Slings datasource service, so 
 it's closer to what people will use in practice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2986) RDB: switch to tomcat datasource implementation

2015-06-18 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592980#comment-14592980
 ] 

Chetan Mehrotra commented on OAK-2986:
--

[~reschke] Looking at the code, I do not see any usage of RDBDataSourceFactory 
in production code (except oak-run). So I would prefer that we do not add the 
{{org.apache.juli.logging}} package to oak-core. Can it be moved to test scope? 
Basically, we put it at some place where it can be used in oak-run and 
oak-core/tests. Also note that you can let juli be included; logback has a 
way to route Java Util Logging to Logback logs.

Further, mark the dependency on {{tomcat-jdbc}} as optional.

 RDB: switch to tomcat datasource implementation 
 

 Key: OAK-2986
 URL: https://issues.apache.org/jira/browse/OAK-2986
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.0.15, 1.3
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.3.2

 Attachments: OAK-2986.diff


 See https://people.apache.org/~fhanik/jdbc-pool/jdbc-pool.html.
 In addition, this is the datasource used in Slings datasource service, so 
 it's closer to what people will use in practice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2986) RDB: switch to tomcat datasource implementation

2015-06-18 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593034#comment-14593034
 ] 

Chetan Mehrotra commented on OAK-2986:
--

bq. Are you saying we don't need the Juli-related workaround you did in the 
Sling datasource service?
Yup. That was just a nice-to-have feature to simplify logging. It's easy to do 
in an OSGi env, as such implementations are hidden within the bundle. 

 RDB: switch to tomcat datasource implementation 
 

 Key: OAK-2986
 URL: https://issues.apache.org/jira/browse/OAK-2986
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.0.15, 1.3
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.3.2

 Attachments: OAK-2986.diff


 See https://people.apache.org/~fhanik/jdbc-pool/jdbc-pool.html.
 In addition, this is the datasource used in Sling's datasource service, so 
 it's closer to what people will use in practice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2986) RDB: switch to tomcat datasource implementation

2015-06-18 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593033#comment-14593033
 ] 

Julian Reschke commented on OAK-2986:
-

There's code there that uses the RDBDataSourceFactory in production; maybe 
that's a bad idea, but for now we have to deal with that.

Are you saying we don't need the Juli-related workaround you did in the Sling 
datasource service?



 RDB: switch to tomcat datasource implementation 
 

 Key: OAK-2986
 URL: https://issues.apache.org/jira/browse/OAK-2986
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.0.15, 1.3
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.3.2

 Attachments: OAK-2986.diff


 See https://people.apache.org/~fhanik/jdbc-pool/jdbc-pool.html.
 In addition, this is the datasource used in Sling's datasource service, so 
 it's closer to what people will use in practice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2986) RDB: switch to tomcat datasource implementation

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2986:

Description: 
See https://people.apache.org/~fhanik/jdbc-pool/jdbc-pool.html.

In addition, this is the datasource used in Sling's datasource service, so it's 
closer to what people will use in practice.

  was:
See https://people.apache.org/~fhanik/jdbc-pool/jdbc-pool.html.

In addition, this is the datasource used in Slings datasource service, so it's 
closer to what people will use in practice.


 RDB: switch to tomcat datasource implementation 
 

 Key: OAK-2986
 URL: https://issues.apache.org/jira/browse/OAK-2986
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.0.15, 1.3
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.3.2

 Attachments: OAK-2986.diff


 See https://people.apache.org/~fhanik/jdbc-pool/jdbc-pool.html.
 In addition, this is the datasource used in Sling's datasource service, so 
 it's closer to what people will use in practice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2986) RDB: switch to tomcat datasource implementation

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2986:

Attachment: OAK-2986.diff

Revised patch that avoids the org.apache.juli dependency, borrowed from 
[~chetanm]'s sling datasource service.

 RDB: switch to tomcat datasource implementation 
 

 Key: OAK-2986
 URL: https://issues.apache.org/jira/browse/OAK-2986
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.0.15, 1.3
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.3.2

 Attachments: OAK-2986.diff, OAK-2986.diff


 See https://people.apache.org/~fhanik/jdbc-pool/jdbc-pool.html.
 In addition, this is the datasource used in Slings datasource service, so 
 it's closer to what people will use in practice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2619) Repeated upgrades

2015-06-18 Thread Manfred Baedke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592213#comment-14592213
 ] 

Manfred Baedke commented on OAK-2619:
-

Thx [~jsedding], this looks good and very useful. I'm not yet done testing, but 
will probably commit the patch tomorrow.

 Repeated upgrades
 -

 Key: OAK-2619
 URL: https://issues.apache.org/jira/browse/OAK-2619
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: upgrade
Affects Versions: 1.1.7
Reporter: Julian Sedding
Assignee: Manfred Baedke
Priority: Minor
 Attachments: OAK-2619-v2.patch, OAK-2619.patch, 
 incremental-upgrade-no-changes-mongo.png, 
 incremental-upgrade-no-changes-tar.png, initial-upgrade-mongo.png, 
 initial-upgrade-tar.png


 When upgrading from Jackrabbit 2 to Oak there are several scenarios that 
 could benefit from the ability to upgrade repeatedly into one target 
 repository.
 E.g. a migration process might look as follows:
 # upgrade a backup of a large repository a week before go-live
 # run the upgrade again every night (commit-hooks only handle delta)
 # run the upgrade one final time before go-live (commit-hooks only handle 
 delta)
 In this scenario each upgrade would require a full traversal of the source 
 repository. However, if done right, only the delta needs to be written and 
 the commit-hooks also only need to process the delta.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2679) Query engine: cache execution plans

2015-06-18 Thread Michael Marth (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592321#comment-14592321
 ] 

Michael Marth commented on OAK-2679:


re
{quote}
a simple solution might be to use a timeout, and / or a manual cache clean via 
JMX or so 
{quote}
we could also consider the approach taken by MongoDB: flush the cache after every 
1000 writes (overall)

 Query engine: cache execution plans
 ---

 Key: OAK-2679
 URL: https://issues.apache.org/jira/browse/OAK-2679
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, query
Reporter: Thomas Mueller
Assignee: Thomas Mueller
  Labels: performance
 Fix For: 1.3.1, 1.2.4

 Attachments: OAK-2679.patch, executionplancache.patch


 If there are many indexes, preparing a query can take a long time, in 
 relation to executing the query.
 The query execution plans can be cached. The cache should be invalidated if 
 there are new indexes, or indexes are changed; a simple solution might be to 
 use a timeout, and / or a manual cache clean via JMX or so.
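Combining the invalidation ideas from the comments (here, the MongoDB-style flush after every N writes), a plan cache of this shape might look like the following. This is a hypothetical sketch; the names are illustrative and not Oak's query engine API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the invalidation heuristic discussed above:
// cache prepared plans and clear the whole cache after every
// FLUSH_EVERY writes, so index changes are picked up eventually.
public class PlanCacheSketch {

    private static final long FLUSH_EVERY = 1000;

    private final Map<String, String> plans = new ConcurrentHashMap<>();
    private final AtomicLong writes = new AtomicLong();

    public String getPlan(String statement) {
        return plans.get(statement);
    }

    public void putPlan(String statement, String plan) {
        plans.put(statement, plan);
    }

    // Called on every repository write; flushes the cache each time
    // the write counter reaches a multiple of the threshold.
    public void onWrite() {
        if (writes.incrementAndGet() % FLUSH_EVERY == 0) {
            plans.clear();
        }
    }
}
```

A timeout- or JMX-based flush would simply invoke the same clear operation from a scheduler or an MBean.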



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2986) RDB: switch to tomcat datasource implementation

2015-06-18 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2986:

Attachment: (was: OAK-2986.diff)

 RDB: switch to tomcat datasource implementation 
 

 Key: OAK-2986
 URL: https://issues.apache.org/jira/browse/OAK-2986
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Affects Versions: 1.2.2, 1.0.15, 1.3
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.3.2

 Attachments: OAK-2986.diff


 See https://people.apache.org/~fhanik/jdbc-pool/jdbc-pool.html.
 In addition, this is the datasource used in Slings datasource service, so 
 it's closer to what people will use in practice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2920) RDBDocumentStore: fail init when database config seems to be inadequate

2015-06-18 Thread Michael Marth (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592348#comment-14592348
 ] 

Michael Marth commented on OAK-2920:


But for the configs that are known not to work: that would not be risky, right?
The point of this issue is to reduce users' guessing about why their instance 
is borked...
In any case we could still add a --force option to ignore this check.

 RDBDocumentStore: fail init when database config seems to be inadequate
 ---

 Key: OAK-2920
 URL: https://issues.apache.org/jira/browse/OAK-2920
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Reporter: Julian Reschke
Priority: Minor
  Labels: resilience
 Fix For: 1.3.1


 It has been suggested that the implementation should fail to start (rather 
 than warn) when it detects a DB configuration that is likely to cause 
 problems (such as wrt character encoding or collation sequences)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2956) NPE in UserImporter when importing group with modified authorizable id

2015-06-18 Thread Alexander Klimetschek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592291#comment-14592291
 ] 

Alexander Klimetschek commented on OAK-2956:


I am using this to define an application group (not to sync content between 
existing systems) and we don't have LDAP at all. Since the XML is also the 
source code, the enforced duplication means it is fragile (another dev could 
change the principal name when that should not be the case) and including the 
uuid means I am including repository implementation details in my sources. The 
package should have a declarative format that works like 
UserManager.createUser() and provides only the minimal required input (and 
where the principal name = user id). If I want a different principal name, I 
could specify it.

 NPE in UserImporter when importing group with modified authorizable id
 --

 Key: OAK-2956
 URL: https://issues.apache.org/jira/browse/OAK-2956
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Affects Versions: 1.2.2
Reporter: Alexander Klimetschek
Priority: Minor

 When importing a file vault package with a group that exists before (same 
 uuid in the .content.xml) but modified authorizable id, this NPE happens:
 {noformat}
 Caused by: java.lang.NullPointerException: null
   at 
 org.apache.jackrabbit.oak.security.user.UserImporter.handlePropInfo(UserImporter.java:242)
   at 
 org.apache.jackrabbit.oak.jcr.xml.ImporterImpl.importProperties(ImporterImpl.java:287)
   at 
 org.apache.jackrabbit.oak.jcr.xml.ImporterImpl.startNode(ImporterImpl.java:470)
   at 
 org.apache.jackrabbit.oak.jcr.xml.SysViewImportHandler.processNode(SysViewImportHandler.java:80)
   at 
 org.apache.jackrabbit.oak.jcr.xml.SysViewImportHandler.startElement(SysViewImportHandler.java:117)
   at 
 org.apache.jackrabbit.oak.jcr.xml.ImportHandler.startElement(ImportHandler.java:183)
   at 
 org.apache.jackrabbit.vault.fs.impl.io.JcrSysViewTransformer.startNode(JcrSysViewTransformer.java:167)
   at 
 org.apache.jackrabbit.vault.fs.impl.io.DocViewSAXImporter.startElement(DocViewSAXImporter.java:598)
   ... 87 common frames omitted
 {noformat}
 It looks like [this 
 line|https://github.com/apache/jackrabbit-oak/blob/105f890e04ee990f0e71d88937955680670d96f7/oak-core/src/main/java/org/apache/jackrabbit/oak/security/user/UserImporter.java#L242],
  meaning {{userManager.getAuthorizable(id)}} returned null.
 While the use case might not be something to support (this happens during 
 development, changing group names/details until they are right), at least the 
 NPE should be guarded against and a better error message given.
 Looking closer, it seems to come down to a difference between 
 {{UserManager.getAuthorizable(Tree)}} (called earlier in the method, which I 
 assume uses the UUID and looks up the existing group successfully) and 
 {{UserManager.getAuthorizable(String)}} (which uses the rep:authorizableId to 
 look up the user and fails, because it changed).
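 The suggested guard could be sketched roughly as follows. This is not Oak's 
 actual code: the class, the map standing in for the {{UserManager}} lookup, and 
 the message wording are all hypothetical; it only illustrates failing with a 
 descriptive error when the id-based lookup returns null, instead of letting an 
 NPE propagate.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch, not Oak's UserImporter: the registry map stands in for
// userManager.getAuthorizable(id). If the lookup misses (e.g. because the
// rep:authorizableId was modified), fail with a descriptive message.
public class AuthorizableGuard {

    static String resolveAuthorizableId(Map<String, String> registry, String id) {
        String authorizable = registry.get(id); // stand-in for userManager.getAuthorizable(id)
        if (authorizable == null) {
            throw new IllegalStateException("No authorizable found for id '" + id
                    + "'; the rep:authorizableId may have been modified since export");
        }
        return authorizable;
    }
}
```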



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2920) RDBDocumentStore: fail init when database config seems to be inadequate

2015-06-18 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2920:
---
Labels: resilience  (was: )

 RDBDocumentStore: fail init when database config seems to be inadequate
 ---

 Key: OAK-2920
 URL: https://issues.apache.org/jira/browse/OAK-2920
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: rdbmk
Reporter: Julian Reschke
Priority: Minor
  Labels: resilience
 Fix For: 1.3.1


 It has been suggested that the implementation should fail to start (rather 
 than warn) when it detects a DB configuration that is likely to cause 
 problems (e.g. with respect to character encoding or collation sequences).
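 The fail-fast behavior could look roughly like the following sketch. The class 
 name, method name, and the concrete checks are hypothetical (the actual 
 RDBDocumentStore validation is not shown here); the point is only that 
 inadequate configuration throws at init time instead of being logged as a 
 warning.

```java
// Hedged sketch with hypothetical names and checks: refuse to start when the
// database configuration looks inadequate, rather than warn and continue.
public class DbConfigCheck {

    static void failIfInadequate(String characterEncoding, String collation) {
        if (!"UTF-8".equalsIgnoreCase(characterEncoding)) {
            throw new IllegalStateException(
                    "Refusing to start: unsupported character encoding " + characterEncoding);
        }
        if (collation == null || collation.isEmpty()) {
            throw new IllegalStateException("Refusing to start: no collation configured");
        }
    }
}
```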



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2619) Repeated upgrades

2015-06-18 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14591384#comment-14591384
 ] 

Julian Sedding edited comment on OAK-2619 at 6/18/15 7:16 AM:
--

[~baedke] you are right. When I created the patches I tried to separate 
repeated upgrades from the include/exclude logic. {{NodeStateCopier}} only gets 
integrated into {{RepositoryUpgrade}} in a later commit regarding OAK-2586 
(Support including and excluding paths during upgrade).

I'll create a patch that includes this part. Sorry for the inconvenience.


was (Author: jsedding):
[~baedke] you are right. When I created the patches I tried to separate 
repeated upgrades from the include/exclude logic. {NodeStateCopier} only gets 
integrated into {RepositoryUpgrade} in a later commit regarding OAK-2586 
(Support including and excluding paths during upgrade).

I'll create a patch that includes this part. Sorry for the inconvenience.

 Repeated upgrades
 -

 Key: OAK-2619
 URL: https://issues.apache.org/jira/browse/OAK-2619
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: upgrade
Affects Versions: 1.1.7
Reporter: Julian Sedding
Assignee: Manfred Baedke
Priority: Minor
 Attachments: OAK-2619.patch, 
 incremental-upgrade-no-changes-mongo.png, 
 incremental-upgrade-no-changes-tar.png, initial-upgrade-mongo.png, 
 initial-upgrade-tar.png


 When upgrading from Jackrabbit 2 to Oak there are several scenarios that 
 could benefit from the ability to upgrade repeatedly into one target 
 repository.
 E.g. a migration process might look as follows:
 # upgrade a backup of a large repository a week before go-live
 # run the upgrade again every night (commit-hooks only handle delta)
 # run the upgrade one final time before go-live (commit-hooks only handle 
 delta)
 In this scenario each upgrade would require a full traversal of the source 
 repository. However, if done right, only the delta needs to be written and 
 the commit-hooks also only need to process the delta.
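 The incremental idea can be illustrated with a minimal sketch. This is 
 hypothetical and much simpler than the actual {{NodeStateCopier}}: plain maps 
 stand in for source and target repositories, and a repeated "upgrade" writes 
 only the entries that differ, so a second run with no changes writes nothing.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal illustration (hypothetical, not the NodeStateCopier API): compare
// source entries against the target and write only the delta.
public class DeltaCopy {

    static int copyDelta(Map<String, String> source, Map<String, String> target) {
        int written = 0;
        for (Map.Entry<String, String> e : source.entrySet()) {
            if (!e.getValue().equals(target.get(e.getKey()))) {
                target.put(e.getKey(), e.getValue()); // write only new/changed entries
                written++;
            }
        }
        return written;
    }
}
```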



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2619) Repeated upgrades

2015-06-18 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14591384#comment-14591384
 ] 

Julian Sedding commented on OAK-2619:
-

[~baedke] you are right. When I created the patches I tried to separate 
repeated upgrades from the include/exclude logic. {NodeStateCopier} only gets 
integrated into {RepositoryUpgrade} in a later commit regarding OAK-2586 
(Support including and excluding paths during upgrade).

I'll create a patch that includes this part. Sorry for the inconvenience.

 Repeated upgrades
 -

 Key: OAK-2619
 URL: https://issues.apache.org/jira/browse/OAK-2619
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: upgrade
Affects Versions: 1.1.7
Reporter: Julian Sedding
Assignee: Manfred Baedke
Priority: Minor
 Attachments: OAK-2619.patch, 
 incremental-upgrade-no-changes-mongo.png, 
 incremental-upgrade-no-changes-tar.png, initial-upgrade-mongo.png, 
 initial-upgrade-tar.png


 When upgrading from Jackrabbit 2 to Oak there are several scenarios that 
 could benefit from the ability to upgrade repeatedly into one target 
 repository.
 E.g. a migration process might look as follows:
 # upgrade a backup of a large repository a week before go-live
 # run the upgrade again every night (commit-hooks only handle delta)
 # run the upgrade one final time before go-live (commit-hooks only handle 
 delta)
 In this scenario each upgrade would require a full traversal of the source 
 repository. However, if done right, only the delta needs to be written and 
 the commit-hooks also only need to process the delta.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2936) PojoSR should use Felix Connect API instead of pojosr

2015-06-18 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-2936.
--
Resolution: Fixed

Done with http://svn.apache.org/r1686162

 PojoSR should use Felix Connect API instead of pojosr
 -

 Key: OAK-2936
 URL: https://issues.apache.org/jira/browse/OAK-2936
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: pojosr
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.3.1


 The first version of Apache Felix Connect has recently been released. The Oak 
 PojoSR module is currently based on the older pojosr module, which has moved to 
 Felix as the Connect submodule. Oak PojoSR should now make use of Apache Felix 
 Connect:
 {code:xml}
 <dependency>
   <groupId>org.apache.felix</groupId>
   <artifactId>org.apache.felix.connect</artifactId>
   <version>0.1.0</version>
 </dependency>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3005) OSGI wrapper service for Jackrabbit CachingFDS

2015-06-18 Thread Shashank Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashank Gupta resolved OAK-3005.
-
Resolution: Fixed

Fix: added the wrapper. 
At revision: 1686158

 OSGI wrapper service for Jackrabbit CachingFDS
 --

 Key: OAK-3005
 URL: https://issues.apache.org/jira/browse/OAK-3005
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob
Affects Versions: 1.0.15
Reporter: Shashank Gupta
Assignee: Shashank Gupta
  Labels: features, performance
 Fix For: 1.0.16

 Attachments: OAK-2729.patch


 OSGI service wrapper for JCR-3869 which provides CachingDataStore 
 capabilities for SAN & NAS storage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2991) group.removeMember() on result of user.memberOf() does not work

2015-06-18 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14591563#comment-14591563
 ] 

angela edited comment on OAK-2991 at 6/18/15 10:03 AM:
---

thanks for reporting... i will look into it asap.

btw: you have to use {{user.declaredMemberOf()}}, otherwise you will also get 
the inherited group where you obviously can't remove this very member.


was (Author: anchela):
thanks for reporting... i will look into it asap.

 group.removeMember() on result of user.memberOf() does not work
 ---

 Key: OAK-2991
 URL: https://issues.apache.org/jira/browse/OAK-2991
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.2.2
Reporter: Alexander Klimetschek
Assignee: angela
Priority: Minor

 When using the group from the memberOf() iterator to remove a user from that 
 group, the change is not persisted. One has to fetch the group separately 
 from the UserManager.
 {code}
 final Iterator<Group> groups = user.memberOf();
 while (groups.hasNext()) {
 Group group = groups.next();
 group.removeMember(user); // does not work
 session.save();
 
 group = userManager.getGroup(group.getID());
 group.removeMember(user); // does work
 session.save(); 
 }
 {code}
 Note that {{removeMember()}} always returns true, indicating that the change 
 worked. Debugging through the code, especially MembershipWriter, shows that 
 the rep:members property is correctly updated, so probably some later 
 modification restores the original value before the save is executed. (No JCR 
 events are triggered for the group).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3006) Remove workaround added for OAK-1404

2015-06-18 Thread angela (JIRA)
angela created OAK-3006:
---

 Summary: Remove workaround added for OAK-1404
 Key: OAK-3006
 URL: https://issues.apache.org/jira/browse/OAK-3006
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: angela
Assignee: angela


with the changes made in OAK-2978 the workaround added for the system subject 
in the light of OAK-1404 is no longer needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3006) Remove workaround added for OAK-1404

2015-06-18 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-3006:

Priority: Minor  (was: Major)

 Remove workaround added for OAK-1404
 

 Key: OAK-3006
 URL: https://issues.apache.org/jira/browse/OAK-3006
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: angela
Assignee: angela
Priority: Minor

 with the changes made in OAK-2978 the workaround added for the system subject 
 in the light of OAK-1404 is no longer needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2991) group.removeMember() on result of user.memberOf() does not work

2015-06-18 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2991:

Attachment: OAK-2991_testcase.patch

attached test case intended to reproduce the described behavior. it passes.
[~alexander.klimetschek], could you please take a look at the test and see if it 
does the right thing? any hint how i can reproduce the problem you are seeing?

 group.removeMember() on result of user.memberOf() does not work
 ---

 Key: OAK-2991
 URL: https://issues.apache.org/jira/browse/OAK-2991
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.2.2
Reporter: Alexander Klimetschek
Assignee: angela
Priority: Minor
 Attachments: OAK-2991_testcase.patch


 When using the group from the memberOf() iterator to remove a user from that 
 group, the change is not persisted. One has to fetch the group separately 
 from the UserManager.
 {code}
 final Iterator<Group> groups = user.memberOf();
 while (groups.hasNext()) {
 Group group = groups.next();
 group.removeMember(user); // does not work
 session.save();
 
 group = userManager.getGroup(group.getID());
 group.removeMember(user); // does work
 session.save(); 
 }
 {code}
 Note that {{removeMember()}} always returns true, indicating that the change 
 worked. Debugging through the code, especially MembershipWriter, shows that 
 the rep:members property is correctly updated, so probably some later 
 modification restores the original value before the save is executed. (No JCR 
 events are triggered for the group).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2991) group.removeMember() on result of user.memberOf() does not work

2015-06-18 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2991:

Attachment: OAK-2991_testcase.patch

 group.removeMember() on result of user.memberOf() does not work
 ---

 Key: OAK-2991
 URL: https://issues.apache.org/jira/browse/OAK-2991
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.2.2
Reporter: Alexander Klimetschek
Assignee: angela
Priority: Minor
 Attachments: OAK-2991_testcase.patch


 When using the group from the memberOf() iterator to remove a user from that 
 group, the change is not persisted. One has to fetch the group separately 
 from the UserManager.
 {code}
 final Iterator<Group> groups = user.memberOf();
 while (groups.hasNext()) {
 Group group = groups.next();
 group.removeMember(user); // does not work
 session.save();
 
 group = userManager.getGroup(group.getID());
 group.removeMember(user); // does work
 session.save(); 
 }
 {code}
 Note that {{removeMember()}} always returns true, indicating that the change 
 worked. Debugging through the code, especially MembershipWriter, shows that 
 the rep:members property is correctly updated, so probably some later 
 modification restores the original value before the save is executed. (No JCR 
 events are triggered for the group).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2991) group.removeMember() on result of user.memberOf() does not work

2015-06-18 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2991:

Attachment: (was: OAK-2991_testcase.patch)

 group.removeMember() on result of user.memberOf() does not work
 ---

 Key: OAK-2991
 URL: https://issues.apache.org/jira/browse/OAK-2991
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.2.2
Reporter: Alexander Klimetschek
Assignee: angela
Priority: Minor
 Attachments: OAK-2991_testcase.patch


 When using the group from the memberOf() iterator to remove a user from that 
 group, the change is not persisted. One has to fetch the group separately 
 from the UserManager.
 {code}
 final Iterator<Group> groups = user.memberOf();
 while (groups.hasNext()) {
 Group group = groups.next();
 group.removeMember(user); // does not work
 session.save();
 
 group = userManager.getGroup(group.getID());
 group.removeMember(user); // does work
 session.save(); 
 }
 {code}
 Note that {{removeMember()}} always returns true, indicating that the change 
 worked. Debugging through the code, especially MembershipWriter, shows that 
 the rep:members property is correctly updated, so probably some later 
 modification restores the original value before the save is executed. (No JCR 
 events are triggered for the group).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3005) OSGI wrapper service for Jackrabbit CachingFDS

2015-06-18 Thread Shashank Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14591459#comment-14591459
 ] 

Shashank Gupta commented on OAK-3005:
-

Currently JCR's CachingDS has its own internal caching mechanism, which is used 
by S3DS. The current patch is just an OSGI wrapper service for JCR's 
CachingFDS. 
Had an offline discussion with [~chetanm]. We agree that it is better to have 
caching capabilities at the {{BlobStore}} level, and the DataStore should not 
require caching. This effort would be taken up separately. 



 OSGI wrapper service for Jackrabbit CachingFDS
 --

 Key: OAK-3005
 URL: https://issues.apache.org/jira/browse/OAK-3005
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: blob
Affects Versions: 1.0.15
Reporter: Shashank Gupta
Assignee: Shashank Gupta
  Labels: features, performance
 Fix For: 1.0.16

 Attachments: OAK-2729.patch


 OSGI service wrapper for JCR-3869 which provides CachingDataStore 
 capabilities for SAN & NAS storage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2619) Repeated upgrades

2015-06-18 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding updated OAK-2619:

Attachment: OAK-2619-v2.patch

[~baedke], attached is a new patch which includes the changes to 
{{RepositoryUpgrade}}.

I have also updated my github fork at 
https://github.com/code-distillery/jackrabbit-oak in case that's more 
convenient.

 Repeated upgrades
 -

 Key: OAK-2619
 URL: https://issues.apache.org/jira/browse/OAK-2619
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: upgrade
Affects Versions: 1.1.7
Reporter: Julian Sedding
Assignee: Manfred Baedke
Priority: Minor
 Attachments: OAK-2619-v2.patch, OAK-2619.patch, 
 incremental-upgrade-no-changes-mongo.png, 
 incremental-upgrade-no-changes-tar.png, initial-upgrade-mongo.png, 
 initial-upgrade-tar.png


 When upgrading from Jackrabbit 2 to Oak there are several scenarios that 
 could benefit from the ability to upgrade repeatedly into one target 
 repository.
 E.g. a migration process might look as follows:
 # upgrade a backup of a large repository a week before go-live
 # run the upgrade again every night (commit-hooks only handle delta)
 # run the upgrade one final time before go-live (commit-hooks only handle 
 delta)
 In this scenario each upgrade would require a full traversal of the source 
 repository. However, if done right, only the delta needs to be written and 
 the commit-hooks also only need to process the delta.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2619) Repeated upgrades

2015-06-18 Thread Julian Sedding (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14591454#comment-14591454
 ] 

Julian Sedding edited comment on OAK-2619 at 6/18/15 8:16 AM:
--

[~baedke], attached is [a new patch|^OAK-2619-v2.patch] which includes the 
changes to {{RepositoryUpgrade}}.

I have also updated my github fork at 
https://github.com/code-distillery/jackrabbit-oak in case that's more 
convenient.


was (Author: jsedding):
[~baedke], attached is a new patch which includes the changes to 
{{RepositoryUpgrade}}.

I have also updated my github fork at 
https://github.com/code-distillery/jackrabbit-oak in case that's more 
convenient.

 Repeated upgrades
 -

 Key: OAK-2619
 URL: https://issues.apache.org/jira/browse/OAK-2619
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: upgrade
Affects Versions: 1.1.7
Reporter: Julian Sedding
Assignee: Manfred Baedke
Priority: Minor
 Attachments: OAK-2619-v2.patch, OAK-2619.patch, 
 incremental-upgrade-no-changes-mongo.png, 
 incremental-upgrade-no-changes-tar.png, initial-upgrade-mongo.png, 
 initial-upgrade-tar.png


 When upgrading from Jackrabbit 2 to Oak there are several scenarios that 
 could benefit from the ability to upgrade repeatedly into one target 
 repository.
 E.g. a migration process might look as follows:
 # upgrade a backup of a large repository a week before go-live
 # run the upgrade again every night (commit-hooks only handle delta)
 # run the upgrade one final time before go-live (commit-hooks only handle 
 delta)
 In this scenario each upgrade would require a full traversal of the source 
 repository. However, if done right, only the delta needs to be written and 
 the commit-hooks also only need to process the delta.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2962) SegmentNodeStoreService fails to handle empty strings in the OSGi configuration

2015-06-18 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-2962:

Attachment: OAK-2962-02.patch

In {{OAK-2962-02.patch}} I extracted the {{lookup()}} method into a utility 
class. {{SegmentNodeStoreService}} now uses this utility method. I also added 
some unit tests. [~amitjain], can you take a look at the patch?

 SegmentNodeStoreService fails to handle empty strings in the OSGi 
 configuration
 ---

 Key: OAK-2962
 URL: https://issues.apache.org/jira/browse/OAK-2962
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Francesco Mari
Assignee: Francesco Mari
 Fix For: 1.3.1

 Attachments: OAK-2962-01.patch, OAK-2962-02.patch


 When an OSGi configuration property is removed from the dictionary associated 
 with a component, the default value assigned to it is an empty string.
 When such an empty string is processed by {{SegmentNodeStoreService#lookup}}, 
 it is returned to its caller as a valid configuration value. The callers of 
 {{SegmentNodeStoreService#lookup}}, instead, expect {{null}} when such an 
 empty value is found.
 The method {{SegmentNodeStoreService#lookup}} should check for empty strings 
 in the OSGi configuration, and treat them as {{null}} values.
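 The described behavior could be sketched roughly as follows. The names are 
 illustrative, not the actual {{SegmentNodeStoreService}} code: the point is 
 only that an empty-string value is treated as absent and reported as {{null}}.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch (hypothetical names): an OSGi property whose value is an
// empty string is treated as "not configured" and reported as null.
public class PropertyLookup {

    static String lookup(Map<String, Object> config, String name) {
        Object value = config.get(name);
        if (value == null) {
            return null;
        }
        String s = value.toString();
        return s.isEmpty() ? null : s; // empty string means "not configured"
    }
}
```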



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2991) group.removeMember() on result of user.memberOf() does not work

2015-06-18 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2991:

Component/s: (was: security)
 core

 group.removeMember() on result of user.memberOf() does not work
 ---

 Key: OAK-2991
 URL: https://issues.apache.org/jira/browse/OAK-2991
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.2.2
Reporter: Alexander Klimetschek

 When using the group from the memberOf() iterator to remove a user from that 
 group, the change is not persisted. One has to fetch the group separately 
 from the UserManager.
 {code}
 final Iterator<Group> groups = user.memberOf();
 while (groups.hasNext()) {
 Group group = groups.next();
 group.removeMember(user); // does not work
 session.save();
 
 group = userManager.getGroup(group.getID());
 group.removeMember(user); // does work
 session.save(); 
 }
 {code}
 Note that {{removeMember()}} always returns true, indicating that the change 
 worked. Debugging through the code, especially MembershipWriter, shows that 
 the rep:members property is correctly updated, so probably some later 
 modification restores the original value before the save is executed. (No JCR 
 events are triggered for the group).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-2991) group.removeMember() on result of user.memberOf() does not work

2015-06-18 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela reassigned OAK-2991:
---

Assignee: angela

 group.removeMember() on result of user.memberOf() does not work
 ---

 Key: OAK-2991
 URL: https://issues.apache.org/jira/browse/OAK-2991
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.2.2
Reporter: Alexander Klimetschek
Assignee: angela

 When using the group from the memberOf() iterator to remove a user from that 
 group, the change is not persisted. One has to fetch the group separately 
 from the UserManager.
 {code}
 final Iterator<Group> groups = user.memberOf();
 while (groups.hasNext()) {
 Group group = groups.next();
 group.removeMember(user); // does not work
 session.save();
 
 group = userManager.getGroup(group.getID());
 group.removeMember(user); // does work
 session.save(); 
 }
 {code}
 Note that {{removeMember()}} always returns true, indicating that the change 
 worked. Debugging through the code, especially MembershipWriter, shows that 
 the rep:members property is correctly updated, so probably some later 
 modification restores the original value before the save is executed. (No JCR 
 events are triggered for the group).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2991) group.removeMember() on result of user.memberOf() does not work

2015-06-18 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2991:

Priority: Minor  (was: Major)

 group.removeMember() on result of user.memberOf() does not work
 ---

 Key: OAK-2991
 URL: https://issues.apache.org/jira/browse/OAK-2991
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.2.2
Reporter: Alexander Klimetschek
Assignee: angela
Priority: Minor

 When using the group from the memberOf() iterator to remove a user from that 
 group, the change is not persisted. One has to fetch the group separately 
 from the UserManager.
 {code}
 final Iterator<Group> groups = user.memberOf();
 while (groups.hasNext()) {
 Group group = groups.next();
 group.removeMember(user); // does not work
 session.save();
 
 group = userManager.getGroup(group.getID());
 group.removeMember(user); // does work
 session.save(); 
 }
 {code}
 Note that {{removeMember()}} always returns true, indicating that the change 
 worked. Debugging through the code, especially MembershipWriter, shows that 
 the rep:members property is correctly updated, so probably some later 
 modification restores the original value before the save is executed. (No JCR 
 events are triggered for the group).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2991) group.removeMember() on result of user.memberOf() does not work

2015-06-18 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14591563#comment-14591563
 ] 

angela commented on OAK-2991:
-

thanks for reporting... i will look into it asap.

 group.removeMember() on result of user.memberOf() does not work
 ---

 Key: OAK-2991
 URL: https://issues.apache.org/jira/browse/OAK-2991
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Affects Versions: 1.2.2
Reporter: Alexander Klimetschek
Assignee: angela

 When using the group from the memberOf() iterator to remove a user from that 
 group, the change is not persisted. One has to fetch the group separately 
 from the UserManager.
 {code}
 final Iterator<Group> groups = user.memberOf();
 while (groups.hasNext()) {
 Group group = groups.next();
 group.removeMember(user); // does not work
 session.save();
 
 group = userManager.getGroup(group.getID());
 group.removeMember(user); // does work
 session.save(); 
 }
 {code}
 Note that {{removeMember()}} always returns true, indicating that the change 
 worked. Debugging through the code, especially MembershipWriter, shows that 
 the rep:members property is correctly updated, so probably some later 
 modification restores the original value before the save is executed. (No JCR 
 events are triggered for the group).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3006) Remove workaround added for OAK-1404

2015-06-18 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-3006.
-
   Resolution: Fixed
Fix Version/s: 1.3.1

 Remove workaround added for OAK-1404
 

 Key: OAK-3006
 URL: https://issues.apache.org/jira/browse/OAK-3006
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: angela
Assignee: angela
Priority: Minor
 Fix For: 1.3.1


 with the changes made in OAK-2978 the workaround added for the system subject 
 in the light of OAK-1404 is no longer needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3003) Improve login performance with huge group membership

2015-06-18 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14589578#comment-14589578
 ] 

angela edited comment on OAK-3003 at 6/18/15 10:50 AM:
---

initial proposal including some preliminary figures from the 
{{LoginWithMembershipTest}}.

[~djaeggi], [~chaotic], would it be possible to look at the proposal? the goal 
is to temporarily cache group-principals as calculated upon 
{{PermissionProvider#getPrincipals(String)}}, while making sure the cache is 
only writable by the system session compiling that list upon login.

One known drawback of this proposal is potential information leakage about 
group membership for a given user: unless group members always have read access 
to the group, we may reveal information about group membership that was not 
accessible using regular {{Authorizable.memberOf}} calls. A possible way to 
prevent this would be storing the information in a hidden node (name starting 
with ':'), which in turn makes the cache inaccessible through the JCR and Oak 
APIs altogether; i.e. cleaning up or inspecting it would only be possible on 
the NodeStore level.

TODO:
- verify proper handling of potential collisions upon concurrent login
- verify there is _really_ no way to exploit this
- verify behavior in clustered environment.

however, before investing a lot of effort, i would be glad to get feedback from 
my fellows in the security team :-)


was (Author: anchela):
initial proposal including some preliminary figures from the 
{{LoginWithMembershipTest}}.

[~djaeggi], [~chaotic], would it be possible to look at the proposal? the goal 
was to tmp. cache group-principals as calculated upon 
{{PermissionProvider#getPrincipals(String)}} but making sure this is only 
writable by the system session compiling that list upon login.

TODO:
- verify proper handling of potential collisions upon concurrent login
- verify there is _really_ no way to exploit this
- verify behavior in clustered environment.

however, before investing a lot of effort, i would be glad to get feedback from 
my fellows in the security team :-)

 Improve login performance with huge group membership
 

 Key: OAK-3003
 URL: https://issues.apache.org/jira/browse/OAK-3003
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: angela
Assignee: angela
 Attachments: OAK-3003.patch, group_cache_in_userprincipalprovider.txt


 As visible when running {{LoginWithMembershipTest}}, default login performance 
 (which uses the {{o.a.j.oak.security.principal.PrincipalProviderImpl}} to 
 look up the complete set of principals) suffers when a given user is a member 
 of a huge number of groups (see also OAK-2690 for benchmark data).
 While using dynamic group membership (and thus a custom {{PrincipalProvider}}) 
 would be the preferable way to deal with this, we still want to optimize 
 {{PrincipalProvider.getPrincipals(String userId)}} for the default 
 implementation.
 With the introduction of a less generic implementation in OAK-2690, we might 
 be able to come up with an optimization that makes use of the very 
 implementation details of the user management while at the same time being 
 able to properly secure it as we won't need to extend the public API.
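 The caching idea under discussion can be illustrated with a small sketch. All 
 names here are hypothetical, not Oak's implementation: the expensive 
 group-membership resolution runs once per user and its result is reused on 
 subsequent logins; {{computeIfAbsent}} also covers concurrent logins for the 
 same user, since only one thread runs the resolver and the others reuse its 
 result.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch only (hypothetical names): cache the computed group
// principals per user id so repeated logins skip the expensive resolution.
public class PrincipalCache {

    private final Map<String, Set<String>> cache = new ConcurrentHashMap<>();

    Set<String> getGroupPrincipals(String userId, Function<String, Set<String>> resolver) {
        // One thread resolves; concurrent callers for the same user reuse the result.
        return cache.computeIfAbsent(userId, resolver);
    }
}
```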



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)