[jira] [Resolved] (OAK-3146) ExternalLoginModuleFactory should inject SyncManager and ExternalIdentityProviderManager

2015-07-27 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-3146.
--
Resolution: Fixed

Applied the patch along with a test case:
* trunk - http://svn.apache.org/r1692998
* 1.0 - http://svn.apache.org/r1692999
* 1.2 - http://svn.apache.org/r1693000

 ExternalLoginModuleFactory should inject SyncManager and 
 ExternalIdentityProviderManager
 

 Key: OAK-3146
 URL: https://issues.apache.org/jira/browse/OAK-3146
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: security
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
  Labels: performance
 Fix For: 1.2.4, 1.3.4, 1.0.18

 Attachments: OAK-3146.patch


 {{ExternalLoginModule}} currently performs a lookup of {{SyncManager}} and 
 {{ExternalIdentityProviderManager}} in the initialize call, which does a 
 service lookup using a ServiceTracker. Doing a service lookup in the critical 
 path would have an adverse performance impact. 
 Instead of performing the lookup, {{ExternalLoginModuleFactory}} should track 
 the two manager instances and pass them to {{ExternalLoginModule}}. This 
 would ensure that an expensive operation like a service lookup is not 
 performed in the critical path.
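The injection approach can be sketched as follows. This is a hypothetical, self-contained simplification (the interface and class shapes below are stand-ins, not the real Oak/OSGi types): the factory resolves the two services once and hands the same instances to every login module it creates, so no service lookup happens on the login critical path.

```java
import java.util.Objects;

interface SyncManager { }
interface ExternalIdentityProviderManager { }

class ExternalLoginModule {
    private final SyncManager syncManager;
    private final ExternalIdentityProviderManager idpManager;

    // Collaborators are passed in by the factory instead of being looked up
    // via a ServiceTracker inside initialize().
    ExternalLoginModule(SyncManager syncManager,
                        ExternalIdentityProviderManager idpManager) {
        this.syncManager = Objects.requireNonNull(syncManager);
        this.idpManager = Objects.requireNonNull(idpManager);
    }

    boolean hasInjectedServices() {
        return syncManager != null && idpManager != null;
    }
}

class ExternalLoginModuleFactory {
    // In the real OSGi component these would be bound once via @Reference;
    // the anonymous instances here are illustrative placeholders.
    private final SyncManager syncManager = new SyncManager() { };
    private final ExternalIdentityProviderManager idpManager =
            new ExternalIdentityProviderManager() { };

    ExternalLoginModule createLoginModule() {
        // Cheap: no service lookup when a login happens.
        return new ExternalLoginModule(syncManager, idpManager);
    }
}
```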



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2875) Namespaces keep references to old node states

2015-07-27 Thread Alex Parvulescu (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14642734#comment-14642734
 ] 

Alex Parvulescu commented on OAK-2875:
--

[~mduerig] what do you think, can we work on cleaning & pushing this one to 
trunk? The only thing missing from the patch is bumping up the OSGi package 
export version of _org.apache.jackrabbit.oak.namepath_ to 2.0.

 Namespaces keep references to old node states
 -

 Key: OAK-2875
 URL: https://issues.apache.org/jira/browse/OAK-2875
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core, jcr
Reporter: Alex Parvulescu
Assignee: Alex Parvulescu
 Fix For: 1.3.5

 Attachments: OAK-2875-v1.patch, OAK-2875-v2.patch


 As described on the parent issue OAK-2849, the session namespaces keep a 
 reference to a Tree instance, which will make GC inefficient.





[jira] [Resolved] (OAK-3122) Direct copy of the chunked data between blob stores

2015-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek resolved OAK-3122.

Resolution: Won't Fix

Marking as won't fix because it's currently impossible to copy the 
MongoBlobStore directly to the FileDataStore without touching the NodeStore. 
The SplitBlobStore approach may be introduced in another JIRA issue.

 Direct copy of the chunked data between blob stores
 ---

 Key: OAK-3122
 URL: https://issues.apache.org/jira/browse/OAK-3122
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core, mongomk, upgrade
Reporter: Tomek Rękawek
 Fix For: 1.4


 It could be useful to have a tool that allows copying blob chunks directly 
 between different stores, so users can quickly migrate their data without 
 needing to touch the node store, consolidate binaries, etc.
 Such a tool should have direct access to the methods operating on the binary 
 blocks, implemented in {{AbstractBlobStore}} and its subtypes:
 {code}
 void storeBlock(byte[] digest, int level, byte[] data);
 byte[] readBlockFromBackend(BlockId blockId);
 Iterator<String> getAllChunkIds(final long maxLastModifiedTime);
 {code}
 My proposal is to create a {{ChunkedBlobStore}} interface containing these 
 methods, which can be implemented by {{FileBlobStore}} and {{MongoBlobStore}}.
 Then we can enumerate all chunk ids, read the underlying blocks from source 
 blob store and save them in the destination.
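The enumerate/read/store loop can be sketched with an illustrative in-memory stand-in for the proposed {{ChunkedBlobStore}} interface (the interface shape and method names below are assumptions for illustration, not Oak code):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified version of the proposed interface.
interface ChunkedBlobStore {
    Iterator<String> getAllChunkIds(long maxLastModifiedTime);
    byte[] readBlock(String chunkId);
    void storeBlock(String chunkId, byte[] data);
}

// In-memory stand-in for a concrete store such as FileBlobStore/MongoBlobStore.
class InMemoryChunkStore implements ChunkedBlobStore {
    final Map<String, byte[]> blocks = new LinkedHashMap<>();
    public Iterator<String> getAllChunkIds(long maxLastModifiedTime) {
        return blocks.keySet().iterator();
    }
    public byte[] readBlock(String chunkId) { return blocks.get(chunkId); }
    public void storeBlock(String chunkId, byte[] data) { blocks.put(chunkId, data); }
}

class ChunkCopier {
    // Copy every chunk from source to destination without touching the node store.
    static int copyAll(ChunkedBlobStore source, ChunkedBlobStore destination) {
        int copied = 0;
        for (Iterator<String> it = source.getAllChunkIds(Long.MAX_VALUE); it.hasNext(); ) {
            String id = it.next();
            destination.storeBlock(id, source.readBlock(id));
            copied++;
        }
        return copied;
    }
}
```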





[jira] [Commented] (OAK-3144) Support multivalue user properties for Ldap users

2015-07-27 Thread Konrad Windszus (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14642682#comment-14642682
 ] 

Konrad Windszus commented on OAK-3144:
--

The PR at https://github.com/apache/jackrabbit-oak/pull/32 adds this capability 
to the 1.2 branch.

 Support multivalue user properties for Ldap users
 -

 Key: OAK-3144
 URL: https://issues.apache.org/jira/browse/OAK-3144
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: auth-ldap
Affects Versions: 1.3.3
Reporter: Konrad Windszus

 Currently {{ExternalUser.getProperties}} only exposes single-value 
 properties (or, in case of multiple values in the LDAP, only the first 
 value). The problem is the code in {{LdapIdentityProvider.createUser()}} 
 (https://github.com/apache/jackrabbit-oak/blob/trunk/oak-auth-ldap/src/main/java/org/apache/jackrabbit/oak/security/authentication/ldap/impl/LdapIdentityProvider.java#L711).
 This only uses 
 http://directory.apache.org/api/gen-docs/latest/apidocs/org/apache/directory/api/ldap/model/entry/Attribute.html#getString%28%29
 which returns the first value only. One has to leverage the iterator 
 implemented by each attribute object to get all values.
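The getString-vs-iterator distinction can be illustrated with a self-contained stand-in (the {{MultiValueAttribute}} class below is hypothetical; it only mimics the behaviour described above, where {{getString()}} yields the first value while iterating the attribute yields all of them):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

class MultiValueAttribute implements Iterable<String> {
    private final List<String> values;
    MultiValueAttribute(String... values) { this.values = Arrays.asList(values); }
    // Mirrors the described Attribute#getString(): first value only.
    String getString() { return values.get(0); }
    public Iterator<String> iterator() { return values.iterator(); }
}

class AttributeReader {
    // Iterate the attribute to collect every value, not just the first.
    static List<String> allValues(MultiValueAttribute attr) {
        List<String> all = new ArrayList<>();
        for (String v : attr) {
            all.add(v);
        }
        return all;
    }
}
```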





[jira] [Updated] (OAK-2619) Repeated upgrades

2015-07-27 Thread Manfred Baedke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manfred Baedke updated OAK-2619:

Fix Version/s: 1.2.4

 Repeated upgrades
 -

 Key: OAK-2619
 URL: https://issues.apache.org/jira/browse/OAK-2619
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: upgrade
Affects Versions: 1.1.7
Reporter: Julian Sedding
Assignee: Manfred Baedke
Priority: Minor
  Labels: doc-impacting
 Fix For: 1.2.4, 1.3.3

 Attachments: OAK-2619-branch-1.0.patch, OAK-2619-v2.patch, 
 OAK-2619.patch, incremental-upgrade-no-changes-mongo.png, 
 incremental-upgrade-no-changes-tar.png, initial-upgrade-mongo.png, 
 initial-upgrade-tar.png


 When upgrading from Jackrabbit 2 to Oak there are several scenarios that 
 could benefit from the ability to upgrade repeatedly into one target 
 repository.
 E.g. a migration process might look as follows:
 # upgrade a backup of a large repository a week before go-live
 # run the upgrade again every night (commit-hooks only handle delta)
 # run the upgrade one final time before go-live (commit-hooks only handle 
 delta)
 In this scenario each upgrade would require a full traversal of the source 
 repository. However, if done right, only the delta needs to be written and 
 the commit-hooks also only need to process the delta.
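The "only the delta needs to be written" idea can be sketched as follows. This is a hedged simplification (repositories are reduced to path-to-value maps, and deletions are ignored): on a repeat upgrade, only entries that are new or changed compared with the target get written, so commit hooks see just the delta.

```java
import java.util.Map;
import java.util.Objects;

class IncrementalUpgrade {
    // Returns the number of entries actually written to the target.
    static int upgrade(Map<String, String> source, Map<String, String> target) {
        int written = 0;
        for (Map.Entry<String, String> e : source.entrySet()) {
            if (!Objects.equals(target.get(e.getKey()), e.getValue())) {
                target.put(e.getKey(), e.getValue());  // write delta only
                written++;
            }
        }
        return written;
    }
}
```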





[jira] [Commented] (OAK-2619) Repeated upgrades

2015-07-27 Thread Manfred Baedke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14642867#comment-14642867
 ] 

Manfred Baedke commented on OAK-2619:
-

Merged into branch 1.2 with revision 1692897.

 Repeated upgrades
 -

 Key: OAK-2619
 URL: https://issues.apache.org/jira/browse/OAK-2619
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: upgrade
Affects Versions: 1.1.7
Reporter: Julian Sedding
Assignee: Manfred Baedke
Priority: Minor
  Labels: doc-impacting
 Fix For: 1.2.4, 1.3.3

 Attachments: OAK-2619-branch-1.0.patch, OAK-2619-v2.patch, 
 OAK-2619.patch, incremental-upgrade-no-changes-mongo.png, 
 incremental-upgrade-no-changes-tar.png, initial-upgrade-mongo.png, 
 initial-upgrade-tar.png


 When upgrading from Jackrabbit 2 to Oak there are several scenarios that 
 could benefit from the ability to upgrade repeatedly into one target 
 repository.
 E.g. a migration process might look as follows:
 # upgrade a backup of a large repository a week before go-live
 # run the upgrade again every night (commit-hooks only handle delta)
 # run the upgrade one final time before go-live (commit-hooks only handle 
 delta)
 In this scenario each upgrade would require a full traversal of the source 
 repository. However, if done right, only the delta needs to be written and 
 the commit-hooks also only need to process the delta.





[jira] [Updated] (OAK-3139) SNFE in persisted compaction map when using CLEAN_OLD

2015-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3139:
---
Attachment: OAK-3139.patch

Patch with a POC for a potential fix. [~alex.parvulescu], WDYT of the general 
approach? The idea is to have a new segment type (compaction map), which would 
not get gc'ed with CLEAN_OLD to prevent old entries in the compaction map from 
getting lost. The patch needs a lot of cleaning up still. Just wanted to check 
with you whether you'd agree on the general direction or whether we should look 
for another approach. 

 SNFE in persisted compaction map when using CLEAN_OLD
 -

 Key: OAK-3139
 URL: https://issues.apache.org/jira/browse/OAK-3139
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
Priority: Critical
  Labels: compaction, gc
 Fix For: 1.3.4

 Attachments: OAK-3139.patch


 When using {{CLEAN_OLD}} it might happen that segments of the persisted 
 compaction map get collected. The reason for this is that only the segment 
 containing the root of the map is pinned ({{SegmentId#pin}}), leaving other 
 segments of the compaction map eligible for collection once old enough. 
 {noformat}
 org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 
 95cbb3e2-3a8c-4976-ae5b-6322ff102731 not found
 at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:919)
 at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:134)
 at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:108)
 at 
 org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
 at 
 org.apache.jackrabbit.oak.plugins.segment.MapRecord.getEntry(MapRecord.java:154)
 at 
 org.apache.jackrabbit.oak.plugins.segment.MapRecord.getEntry(MapRecord.java:186)
 at 
 org.apache.jackrabbit.oak.plugins.segment.PersistedCompactionMap.get(PersistedCompactionMap.java:118)
 at 
 org.apache.jackrabbit.oak.plugins.segment.PersistedCompactionMap.get(PersistedCompactionMap.java:100)
 at 
 org.apache.jackrabbit.oak.plugins.segment.CompactionMap.get(CompactionMap.java:93)
 at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.uncompact(SegmentWriter.java:1023)
 at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1033)
 at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:100)
 at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.init(SegmentNodeStore.java:418)
 at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:204)
 {noformat}
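The patch's idea of a dedicated segment type that CLEAN_OLD never collects can be sketched like this (the enum and policy class below are hypothetical simplifications, not the actual SegmentMK types):

```java
// Hypothetical segment classification; the real store distinguishes more cases.
enum SegmentType { DATA, BULK, COMPACTION_MAP }

class CleanOldPolicy {
    // CLEAN_OLD normally collects segments older than maxAgeMillis, but
    // compaction-map segments are exempt so map entries cannot get lost.
    static boolean canCollect(SegmentType type, long ageMillis, long maxAgeMillis) {
        if (type == SegmentType.COMPACTION_MAP) {
            return false;
        }
        return ageMillis > maxAgeMillis;
    }
}
```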





[jira] [Updated] (OAK-3146) ExternalLoginModuleFactory should inject SyncManager and ExternalIdentityProviderManager

2015-07-27 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3146:
-
Attachment: OAK-3146.patch

[patch|^OAK-3146.patch] with the proposed approach. {{ExternalLoginModule}} would 
continue to work as is in a non-OSGi env, i.e. one which is not backed by the 
Felix JAAS support. However, if the LoginModule is created via the 
LoginModuleFactory, then the syncManager etc. would be initialized and the 
expensive service lookup would be avoided in the initialize call.

Would see if I can come up with a test case. Validating that 
{{ExternalLoginModule}} is not doing a lookup and instead using the passed 
references is tricky.

[~tripod] Can you review the patch?

 ExternalLoginModuleFactory should inject SyncManager and 
 ExternalIdentityProviderManager
 

 Key: OAK-3146
 URL: https://issues.apache.org/jira/browse/OAK-3146
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: security
Reporter: Chetan Mehrotra
Priority: Minor
 Fix For: 1.2.4, 1.3.4, 1.0.18

 Attachments: OAK-3146.patch


 {{ExternalLoginModule}} currently performs a lookup of {{SyncManager}} and 
 {{ExternalIdentityProviderManager}} in the initialize call, which does a 
 service lookup using a ServiceTracker. Doing a service lookup in the critical 
 path would have an adverse performance impact. 
 Instead of performing the lookup, {{ExternalLoginModuleFactory}} should track 
 the two manager instances and pass them to {{ExternalLoginModule}}. This 
 would ensure that an expensive operation like a service lookup is not 
 performed in the critical path.





[jira] [Created] (OAK-3146) ExternalLoginModuleFactory should inject SyncManager and ExternalIdentityProviderManager

2015-07-27 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-3146:


 Summary: ExternalLoginModuleFactory should inject SyncManager and 
ExternalIdentityProviderManager
 Key: OAK-3146
 URL: https://issues.apache.org/jira/browse/OAK-3146
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: security
Reporter: Chetan Mehrotra
Priority: Minor
 Fix For: 1.2.4, 1.3.4, 1.0.18


{{ExternalLoginModule}} currently performs a lookup of {{SyncManager}} and 
{{ExternalIdentityProviderManager}} in the initialize call, which does a service 
lookup using a ServiceTracker. Doing a service lookup in the critical path 
would have an adverse performance impact. 

Instead of performing the lookup, {{ExternalLoginModuleFactory}} should track 
the two manager instances and pass them to {{ExternalLoginModule}}. This would 
ensure that an expensive operation like a service lookup is not performed in 
the critical path.





[jira] [Commented] (OAK-3001) Simplify JournalGarbageCollector using a dedicated timestamp property

2015-07-27 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14642871#comment-14642871
 ] 

Stefan Egli commented on OAK-3001:
--

[~mreutegg], [~reschke], [~chetanm], how should we proceed here? To me the 
simplest change would be to change the current {{query}} method to look as 
follows:
{code}
public <T extends Document> List<T> query(Collection<T> collection,
                                          String fromKey,
                                          String toKey,
                                          String indexedProperty,
                                          long startValue,
                                          long endValue,
                                          int limit) {
{code}
where {{startValue==Long.MIN_VALUE}} and/or {{endValue==Long.MAX_VALUE}} could 
be used to basically disable that part of the condition.
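The sentinel idea can be made concrete with a tiny sketch. The predicate below is a hypothetical simplification of how the proposed query would evaluate the range condition (bounds are assumed inclusive here; the real semantics would be fixed in the DocumentStore API):

```java
class RangeFilter {
    // startValue == Long.MIN_VALUE disables the lower bound,
    // endValue == Long.MAX_VALUE disables the upper bound.
    static boolean inRange(long value, long startValue, long endValue) {
        boolean aboveStart = startValue == Long.MIN_VALUE || value >= startValue;
        boolean belowEnd   = endValue == Long.MAX_VALUE   || value <= endValue;
        return aboveStart && belowEnd;
    }
}
```

A journal GC "delete everything older than t" query would then pass `Long.MIN_VALUE` as the start and `t` as the end.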

 Simplify JournalGarbageCollector using a dedicated timestamp property
 -

 Key: OAK-3001
 URL: https://issues.apache.org/jira/browse/OAK-3001
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Stefan Egli
Assignee: Stefan Egli
Priority: Critical
  Labels: scalability
 Fix For: 1.2.4, 1.3.4


 This subtask is about spawning out a 
 [comment|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585733&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585733]
  from [~chetanm] re JournalGC:
 {quote}
 Further looking at JournalGarbageCollector ... it would be simpler if you 
 record the journal entry timestamp as an attribute in JournalEntry document 
 and then you can delete all the entries which are older than some time by a 
 simple query. This would avoid fetching all the entries to be deleted on the 
 Oak side
 {quote}
 and a corresponding 
 [reply|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585870&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585870]
  from myself:
 {quote}
 Re querying by timestamp: that would indeed be simpler. With the current 
 DocumentStore API, however, I believe this is not possible. But: 
 [DocumentStore.query|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentStore.java#L127]
  comes quite close: it would probably just require the opposite of that 
 method too: 
 {code}
 public <T extends Document> List<T> query(Collection<T> collection,
                                           String fromKey,
                                           String toKey,
                                           String indexedProperty,
                                           long endValue,
                                           int limit) {
 {code}
 .. or what about generalizing this method to have both a {{startValue}} and 
 an {{endValue}} - with {{-1}} indicating when one of them is not used?
 {quote}





[jira] [Created] (OAK-3147) Make it possible to collapse results under jcr:content nodes

2015-07-27 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created OAK-3147:


 Summary: Make it possible to collapse results under jcr:content 
nodes
 Key: OAK-3147
 URL: https://issues.apache.org/jira/browse/OAK-3147
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: solr
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.2.4, 1.3.4, 1.0.18


In some use cases many search results live in descendants of jcr:content 
nodes, although the caller is only interested in the jcr:content nodes 
themselves as results (with post-filtering applied using primary types or 
other constraints). The Solr query could therefore be optimized by applying 
Solr's Collapse query parser to collapse such documents, allowing huge 
performance improvements, especially because the query engine would not have 
to filter out e.g. 90% of the nodes returned by the index.
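Solr's Collapse query parser is applied as a filter query of the form `{!collapse field=...}`. The helper below just builds that filter string; the field name used in the test is a hypothetical index field, not one defined by the Oak Solr schema:

```java
class CollapseFilter {
    // Build a filter query for Solr's CollapsingQParserPlugin.
    static String forField(String field) {
        return "{!collapse field=" + field + "}";
    }
}
```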





[jira] [Assigned] (OAK-3146) ExternalLoginModuleFactory should inject SyncManager and ExternalIdentityProviderManager

2015-07-27 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra reassigned OAK-3146:


Assignee: Chetan Mehrotra

 ExternalLoginModuleFactory should inject SyncManager and 
 ExternalIdentityProviderManager
 

 Key: OAK-3146
 URL: https://issues.apache.org/jira/browse/OAK-3146
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: security
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
 Fix For: 1.2.4, 1.3.4, 1.0.18

 Attachments: OAK-3146.patch


 {{ExternalLoginModule}} currently performs a lookup of {{SyncManager}} and 
 {{ExternalIdentityProviderManager}} in the initialize call, which does a 
 service lookup using a ServiceTracker. Doing a service lookup in the critical 
 path would have an adverse performance impact. 
 Instead of performing the lookup, {{ExternalLoginModuleFactory}} should track 
 the two manager instances and pass them to {{ExternalLoginModule}}. This 
 would ensure that an expensive operation like a service lookup is not 
 performed in the critical path.





[jira] [Created] (OAK-3144) Support multivalue user properties

2015-07-27 Thread Konrad Windszus (JIRA)
Konrad Windszus created OAK-3144:


 Summary: Support multivalue user properties
 Key: OAK-3144
 URL: https://issues.apache.org/jira/browse/OAK-3144
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: auth-ldap
Affects Versions: 1.3.3
Reporter: Konrad Windszus


Currently {{ExternalUser.getProperties}} only exposes single-value 
properties (or, in case of multiple values in the LDAP, only the first 
value). The problem is the code in {{LdapIdentityProvider.createUser()}} 
(https://github.com/apache/jackrabbit-oak/blob/trunk/oak-auth-ldap/src/main/java/org/apache/jackrabbit/oak/security/authentication/ldap/impl/LdapIdentityProvider.java#L711).
This only uses 
http://directory.apache.org/api/gen-docs/latest/apidocs/org/apache/directory/api/ldap/model/entry/Attribute.html#getString%28%29
which returns the first value only. One has to leverage the iterator 
implemented by each attribute object to get all values.





[jira] [Commented] (OAK-865) Expose repository management data and statistics

2015-07-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14642422#comment-14642422
 ] 

Michael Dürig commented on OAK-865:
---

I suggest we resolve this issue, as it is mainly done. New monitoring is being 
added as required and tracked in dedicated issues nowadays. 

 Expose repository management data and statistics
 

 Key: OAK-865
 URL: https://issues.apache.org/jira/browse/OAK-865
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core, jcr
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: monitoring

 We should come up with a way to expose repository management information and 
 statistics. See JCR-2936 plus subtasks and JCR-3243 for how this is done in 
 Jackrabbit 2. See [this discussion | 
 http://apache-sling.73963.n3.nabble.com/Monitoring-and-Statistics-td4021905.html]
  for an alternative proposal.





[jira] [Updated] (OAK-2881) ConsistencyChecker#checkConsistency can't cope with inconsistent journal

2015-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2881:
---
Fix Version/s: (was: 1.3.4)
   1.3.5

 ConsistencyChecker#checkConsistency can't cope with inconsistent journal
 

 Key: OAK-2881
 URL: https://issues.apache.org/jira/browse/OAK-2881
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: run
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: resilience, tooling
 Fix For: 1.3.5


 When running the consistency checker against a repository with a corrupt 
 journal, it fails with an {{IllegalArgumentException}} instead of trying to 
 skip over invalid revision identifiers:
 {noformat}
 Exception in thread "main" java.lang.IllegalArgumentException: Bad record 
 identifier: foobar
 at 
 org.apache.jackrabbit.oak.plugins.segment.RecordId.fromString(RecordId.java:57)
 at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore.init(FileStore.java:227)
 at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore.init(FileStore.java:178)
 at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore.init(FileStore.java:156)
 at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore.init(FileStore.java:166)
 at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore$ReadOnlyStore.init(FileStore.java:805)
 at 
 org.apache.jackrabbit.oak.plugins.segment.file.tooling.ConsistencyChecker.init(ConsistencyChecker.java:108)
 at 
 org.apache.jackrabbit.oak.plugins.segment.file.tooling.ConsistencyChecker.checkConsistency(ConsistencyChecker.java:70)
 at org.apache.jackrabbit.oak.run.Main.check(Main.java:701)
 at org.apache.jackrabbit.oak.run.Main.main(Main.java:158)
 {noformat}
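The more resilient behaviour suggested above could look like the sketch below: skip journal entries whose record id does not parse instead of aborting. The record-id syntax check is a hypothetical stand-in for {{RecordId.fromString}}, kept only detailed enough to reject an id like "foobar":

```java
import java.util.ArrayList;
import java.util.List;

class JournalScanner {
    // Collect only the parseable revision entries, skipping corrupt ones.
    static List<String> validRevisions(List<String> journalLines) {
        List<String> valid = new ArrayList<>();
        for (String line : journalLines) {
            try {
                parseRecordId(line);   // throws on malformed ids
                valid.add(line);
            } catch (IllegalArgumentException ignored) {
                // skip the corrupt entry and keep scanning
            }
        }
        return valid;
    }

    // Hypothetical stand-in for RecordId.fromString(): expects "segment:offset".
    static void parseRecordId(String s) {
        int colon = s.indexOf(':');
        if (colon <= 0) {
            throw new IllegalArgumentException("Bad record identifier: " + s);
        }
    }
}
```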





[jira] [Updated] (OAK-2879) Compaction should check for required disk space before running

2015-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2879:
---
Fix Version/s: (was: 1.3.4)
   1.3.5

 Compaction should check for required disk space before running
 --

 Key: OAK-2879
 URL: https://issues.apache.org/jira/browse/OAK-2879
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: compaction, doc-impacting, gc, resilience
 Fix For: 1.3.5


 In the worst case compaction doubles the repository size while running. As 
 this is somewhat unexpected we should check whether there is enough free disk 
 space before running compaction and log a warning otherwise. This is to avoid 
 a common source of running out of disk space and ending up with a corrupted 
 repository. 
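A minimal sketch of the pre-flight check, assuming the worst case stated above (compaction may need room for a full second copy of the repository; the 1x factor is an assumption drawn from that, not a value from the Oak code):

```java
class CompactionPreCheck {
    // Require at least the current repository size in free space before
    // starting compaction; otherwise the caller should log a warning and skip.
    static boolean hasEnoughDiskSpace(long repositorySizeBytes, long usableSpaceBytes) {
        return usableSpaceBytes >= repositorySizeBytes;
    }
}
```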



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3145) Allow plugging in additional jcr-descriptors

2015-07-27 Thread Stefan Egli (JIRA)
Stefan Egli created OAK-3145:


 Summary: Allow plugging in additional jcr-descriptors
 Key: OAK-3145
 URL: https://issues.apache.org/jira/browse/OAK-3145
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Affects Versions: 1.3.3
Reporter: Stefan Egli
Assignee: Stefan Egli
Priority: Minor
 Fix For: 1.3.4


The current way Oak compiles the JCR descriptors is somewhat hard-coded and 
does not offer an easy way to extend it. Currently oak-jcr's 
{{RepositoryImpl.determineDescriptors()}} takes oak-core's 
{{ContentRepositoryImpl.createDescriptors()}} ones and overwrites a few (in 
{{JCRDescriptorsImpl}}). The result is then put into the 
{{RepositoryImpl.descriptors}} field, which is subsequently used.

While these descriptors can explicitly be overwritten via {{put}}, that all has 
to happen within overriding code of the {{RepositoryImpl}} hierarchy. There's 
no plug-in mechanism.

Using the whiteboard pattern, there should be a mechanism that allows providing 
services which implement {{Descriptors}} and which could then be woven into 
the {{GeneralDescriptors}} used by {{ContentRepositoryImpl}}, for example. The 
plugin mechanism would thus allow hooking in Descriptors as 'the base', still 
overwritten by oak-core/oak-jcr's own defaults (to ensure the final say is 
with oak-core/oak-jcr), since opening up such a Descriptors plugin mechanism 
could potentially be used by 3rd-party code too.

PS: this is a spin-off of OAK-2844
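The proposed precedence can be sketched as follows. This is a hedged simplification: descriptor values are reduced to a String map (the real Oak Descriptors interface is richer), and the aggregation order encodes "whiteboard-provided descriptors form the base, oak defaults are applied last so they always win":

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class DescriptorAggregator {
    static Map<String, String> aggregate(List<Map<String, String>> pluggedIn,
                                         Map<String, String> oakDefaults) {
        Map<String, String> result = new LinkedHashMap<>();
        for (Map<String, String> d : pluggedIn) {
            result.putAll(d);           // base: whiteboard-provided descriptors
        }
        result.putAll(oakDefaults);     // final say stays with oak-core/oak-jcr
        return result;
    }
}
```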





[jira] [Updated] (OAK-2507) Truncate journal.log after off line compaction

2015-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2507:
---
Fix Version/s: (was: 1.3.4)
   1.3.5

 Truncate journal.log after off line compaction
 --

 Key: OAK-2507
 URL: https://issues.apache.org/jira/browse/OAK-2507
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
Priority: Minor
  Labels: compaction, gc
 Fix For: 1.3.5


 After off line compaction the repository contains a single revision. However 
 the journal.log file will still contain the trail of all revisions that have 
 been removed during the compaction process. I suggest we truncate the 
 journal.log to only contain the latest revision created during compaction.
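The truncation step can be sketched like this. Assumptions are labeled: the journal is treated as a plain text file with one revision entry per line and the latest revision last, which matches the description above but simplifies the real file handling:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;
import java.util.List;

class JournalTruncator {
    // Rewrite journal.log so it contains only the latest revision entry,
    // dropping the trail of revisions removed by off-line compaction.
    static void truncateToLatest(Path journal) throws IOException {
        List<String> lines = Files.readAllLines(journal);
        if (lines.size() > 1) {
            String latest = lines.get(lines.size() - 1);
            Files.write(journal, Collections.singletonList(latest));
        }
    }
}
```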





[jira] [Updated] (OAK-2859) Test failure: OrderableNodesTest

2015-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2859:
---
Description: 
{{org.apache.jackrabbit.oak.jcr.OrderableNodesTest}} fails on Jenkins when 
running the {{DOCUMENT_RDB}} fixture.

{noformat}
Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 12.572 sec <<< 
FAILURE!
orderableFolder[0](org.apache.jackrabbit.oak.jcr.OrderableNodesTest)  Time 
elapsed: 3.858 sec  <<< ERROR!
javax.jcr.nodetype.ConstraintViolationException: No default node type available 
for /testdata
at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
at 
org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
at 
org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
at 
org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
at 
org.apache.jackrabbit.oak.jcr.TestContentLoader.loadTestContent(TestContentLoader.java:57)
at 
org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder(OrderableNodesTest.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:24)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
{noformat}

Failure seen at builds: 81, 87, 92, 95, 96, 114, 120, 128, 134, 186, 243, 272, 
292

See e.g. 
https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/128/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/console

[jira] [Updated] (OAK-2878) Test failure: AutoCreatedItemsTest.autoCreatedItems

2015-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2878:
---
Description: 
{{org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems}} fails 
on Jenkins: No default node type available for /testdata/property

{noformat}
javax.jcr.nodetype.ConstraintViolationException: No default node type available 
for /testdata/property
at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
at org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
at org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
at org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
at org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
at org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
at org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
at org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
at org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
at org.apache.jackrabbit.oak.jcr.TestContentLoader.loadTestContent(TestContentLoader.java:58)
at org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems(AutoCreatedItemsTest.java:42)
{noformat}

Failure seen at builds: 130, 134, 138, 249, 292

See 
https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/134/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.jcr/AutoCreatedItemsTest/autoCreatedItems_0_/

  was:
{{org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems}} fails 
on Jenkins: No default node type available for /testdata/property

{noformat}
javax.jcr.nodetype.ConstraintViolationException: No default node type available 
for /testdata/property
at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
at org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
at org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
at org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
at org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
at org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
at org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
at org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
at org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
at org.apache.jackrabbit.oak.jcr.TestContentLoader.loadTestContent(TestContentLoader.java:58)
at org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems(AutoCreatedItemsTest.java:42)
{noformat}

Failure seen at builds: 130, 134, 138, 249

See 
https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/134/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.jcr/AutoCreatedItemsTest/autoCreatedItems_0_/


 Test failure: AutoCreatedItemsTest.autoCreatedItems
 ---

 Key: OAK-2878
 URL: https://issues.apache.org/jira/browse/OAK-2878
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
 Environment: Jenkins: https://builds.apache.org/job/Apache Jackrabbit Oak matrix/
Reporter: Michael Dürig
Assignee: Julian Reschke
  Labels: CI, jenkins

 {{org.apache.jackrabbit.oak.jcr.AutoCreatedItemsTest.autoCreatedItems}} fails 
 on Jenkins: No default node type available for /testdata/property
 {noformat}
 javax.jcr.nodetype.ConstraintViolationException: No default node type 
 available for /testdata/property
   at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
   at org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
   at org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
   at org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
   at org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
   at org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
   at org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
   at org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
   at org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
   at 
 

[jira] [Updated] (OAK-1576) SegmentMK: Implement refined conflict resolution for addExistingNode conflicts

2015-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1576:
---
Fix Version/s: (was: 1.3.5)
   1.3.6

 SegmentMK: Implement refined conflict resolution for addExistingNode conflicts
 --

 Key: OAK-1576
 URL: https://issues.apache.org/jira/browse/OAK-1576
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: concurrency, resilience, scalability
 Fix For: 1.3.6


 Implement refined conflict resolution for addExistingNode conflicts as 
 defined in the parent issue for the SegmentMK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1553) More sophisticated conflict resolution when concurrently adding nodes

2015-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1553:
---
Fix Version/s: (was: 1.3.5)
   1.4

 More sophisticated conflict resolution when concurrently adding nodes
 -

 Key: OAK-1553
 URL: https://issues.apache.org/jira/browse/OAK-1553
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk, segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: concurrency, resilience, scalability
 Fix For: 1.4

 Attachments: OAK-1553.patch


 {{MicroKernel.rebase}} currently specifies: addExistingNode: A node has been 
 added that is different from a node of the same name that has been added to 
 the trunk.
 This is somewhat troublesome in the case where the same node with different 
 but non-conflicting child items is added concurrently:
 {code}
 f.add(fo).add(u1); commit();
 f.add(fo).add(u2); commit();
 {code}
 currently fails with a conflict because {{fo}} is not the same node in both 
 cases. See discussion http://markmail.org/message/flst4eiqvbp4gi3z



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3139) SNFE in persisted compaction map when using CLEAN_OLD

2015-07-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3139:
---
Fix Version/s: 1.3.4

 SNFE in persisted compaction map when using CLEAN_OLD
 -

 Key: OAK-3139
 URL: https://issues.apache.org/jira/browse/OAK-3139
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: compaction, gc
 Fix For: 1.3.4


 When using {{CLEAN_OLD}} it might happen that segments of the persisted 
 compaction map get collected. The reason for this is that only the segment 
 containing the root of the map is pinned ({{SegmentId#pin}}), leaving other 
 segments of the compaction map eligible for collection once old enough. 
 {noformat}
 org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 95cbb3e2-3a8c-4976-ae5b-6322ff102731 not found
 at org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:919)
 at org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:134)
 at org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:108)
 at org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
 at org.apache.jackrabbit.oak.plugins.segment.MapRecord.getEntry(MapRecord.java:154)
 at org.apache.jackrabbit.oak.plugins.segment.MapRecord.getEntry(MapRecord.java:186)
 at org.apache.jackrabbit.oak.plugins.segment.PersistedCompactionMap.get(PersistedCompactionMap.java:118)
 at org.apache.jackrabbit.oak.plugins.segment.PersistedCompactionMap.get(PersistedCompactionMap.java:100)
 at org.apache.jackrabbit.oak.plugins.segment.CompactionMap.get(CompactionMap.java:93)
 at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.uncompact(SegmentWriter.java:1023)
 at org.apache.jackrabbit.oak.plugins.segment.SegmentWriter.writeNode(SegmentWriter.java:1033)
 at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeBuilder.getNodeState(SegmentNodeBuilder.java:100)
 at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore$Commit.init(SegmentNodeStore.java:418)
 at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:204)
 {noformat}
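The fix direction described above can be sketched in a few lines of stand-alone Java (the Segment type below is a stand-in, not the real SegmentId/MapRecord API): pin every segment transitively reachable from the compaction map root, not just the root's own segment, so CLEAN_OLD cannot reclaim parts of the map.

```java
import java.util.*;

// Illustrative sketch only: transitively pin all segments reachable from
// the persisted compaction map, instead of pinning only the root segment.
class PinCompactionMap {

    static class Segment {
        final String id;
        final List<Segment> references; // segments this one points into
        boolean pinned;
        Segment(String id, Segment... refs) {
            this.id = id;
            this.references = Arrays.asList(refs);
        }
    }

    // Iterative depth-first walk; the pinned flag doubles as the visited
    // marker, so reference cycles cannot loop forever.
    static void pinReachable(Segment root) {
        Deque<Segment> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            Segment s = stack.pop();
            if (!s.pinned) {
                s.pinned = true;
                s.references.forEach(stack::push);
            }
        }
    }
}
```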



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2859) Test failure: OrderableNodesTest

2015-07-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14642444#comment-14642444
 ] 

Michael Dürig commented on OAK-2859:


Ignoring the test for now as it kept failing the build: 
http://svn.apache.org/r1692835

 Test failure: OrderableNodesTest
 

 Key: OAK-2859
 URL: https://issues.apache.org/jira/browse/OAK-2859
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
 Environment: https://builds.apache.org/
Reporter: Michael Dürig
Assignee: Julian Reschke
  Labels: ci, jenkins

 {{org.apache.jackrabbit.oak.jcr.OrderableNodesTest}} fails on Jenkins when 
 running the {{DOCUMENT_RDB}} fixture.
 {noformat}
 Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 12.572 sec  FAILURE!
 orderableFolder[0](org.apache.jackrabbit.oak.jcr.OrderableNodesTest)  Time elapsed: 3.858 sec   ERROR!
 javax.jcr.nodetype.ConstraintViolationException: No default node type available for /testdata
   at org.apache.jackrabbit.oak.util.TreeUtil.addChild(TreeUtil.java:186)
   at org.apache.jackrabbit.oak.jcr.delegate.NodeDelegate.addChild(NodeDelegate.java:692)
   at org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:296)
   at org.apache.jackrabbit.oak.jcr.session.NodeImpl$5.perform(NodeImpl.java:262)
   at org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:202)
   at org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112)
   at org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:262)
   at org.apache.jackrabbit.oak.jcr.session.NodeImpl.addNode(NodeImpl.java:247)
   at org.apache.jackrabbit.oak.jcr.TestContentLoader.getOrAddNode(TestContentLoader.java:90)
   at org.apache.jackrabbit.oak.jcr.TestContentLoader.loadTestContent(TestContentLoader.java:57)
   at org.apache.jackrabbit.oak.jcr.OrderableNodesTest.orderableFolder(OrderableNodesTest.java:47)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at org.junit.runners.Suite.runChild(Suite.java:128)
   at org.junit.runners.Suite.runChild(Suite.java:24)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:662)
 {noformat}
 Failure seen at builds: 81, 87, 92, 95, 96, 114, 120, 128, 134, 186, 243, 272, 292
 See e.g. 
 https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/128/jdk=jdk-1.6u45,label=Ubuntu,nsfixtures=DOCUMENT_RDB,profile=unittesting/console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2844) Introducing a simple document-based discovery-light service (to circumvent documentMk's eventual consistency delays)

2015-07-27 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14642443#comment-14642443
 ] 

Stefan Egli commented on OAK-2844:
--

[~chetanm], created OAK-3145 for this separated-out Descriptors plugin mechanism

 Introducing a simple document-based discovery-light service (to circumvent 
 documentMk's eventual consistency delays)
 

 Key: OAK-2844
 URL: https://issues.apache.org/jira/browse/OAK-2844
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mongomk
Reporter: Stefan Egli
Assignee: Stefan Egli
  Labels: resilience
 Fix For: 1.4

 Attachments: InstanceStateChangeListener.java, OAK-2844.WIP-02.patch, 
 OAK-2844.patch, OAK-2844.v3.patch


 When running discovery.impl on a mongoMk-backed jcr repository, there are 
 risks of hitting problems such as described in SLING-3432 
 pseudo-network-partitioning: this happens when a jcr-level heartbeat does 
 not reach peers within the configured heartbeat timeout - it then treats that 
 affected instance as dead, removes it from the topology, and continues with 
 the remaining instances, potentially electing a new leader, running the risk of 
 duplicate leaders. This happens when delays in mongoMk grow larger than the 
 (configured) heartbeat timeout. These problems ultimately are due to the 
 'eventual consistency' nature of, not only mongoDB, but more so of mongoMk. 
 The only alternative so far is to increase the heartbeat timeout to match the 
 expected or measured delays that mongoMk can produce (under say given 
 load/performance scenarios).
 Assuming that mongoMk will always carry a risk of certain delays, and that a 
 reasonable maximum (for the discovery.impl timeout, that is) cannot be 
 guaranteed, a better solution is to provide discovery with more 'real-time' 
 like information and/or privileged access to mongoDb.
 Here's a summary of alternatives that have so far been floating around as a 
 solution to circumvent eventual consistency:
  # expose existing (jmx) information about active 'clusterIds' - this has 
 been proposed in SLING-4603. The pros: reuse of existing functionality. The 
 cons: going via jmx, binding of exposed functionality as 'to be maintained 
 API'
  # expose a plain mongo db/collection (via osgi injection) such that a higher 
 (sling) level discovery could directly write heartbeats there. The pros: 
 heartbeat latency would be minimal (assuming the collection is not sharded). 
 The cons: exposes a mongo db/collection potentially also to anyone else, with 
 the risk of opening up to unwanted possibilities
  # introduce a simple 'discovery-light' API to oak which solely provides 
 information about which instances are active in a cluster. The implementation 
 of this is not exposed. The pros: no need to expose a mongoDb/collection, 
 allows any other jmx-functionality to remain unchanged. The cons: a new API 
 that must be maintained
 This ticket is about the 3rd option, about a new mongo-based discovery-light 
 service that is introduced to oak. The functionality in short:
  * it defines a 'local instance id' that is non-persisted, ie can change at 
 each bundle activation.
  * it defines a 'view id' that uniquely identifies a particular incarnation 
 of a 'cluster view/state' (which is: a list of active instance ids)
  * and it defines a list of active instance ids
  * the above attributes are passed to interested components via a listener 
 that can be registered. that listener is called whenever the discovery-light 
 notices the cluster view has changed.
 While the actual implementation could in fact be based on the existing 
 {{getActiveClusterNodes()}} {{getClusterId()}} of the 
 {{DocumentNodeStoreMBean}}, the suggestion is to not fiddle with that part, 
 as that has dependencies to other logic. But instead, the suggestion is to 
 create a dedicated, other, collection ('discovery') where heartbeats as well 
 as the currentView are stored.
 Will attach a suggestion for an initial version of this for review.
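The functionality summarized above could take roughly the following shape (all type and method names here are hypothetical illustrations, not the actual patch): a non-persisted local instance id, a view id that changes with each new incarnation of the cluster view, the set of active instance ids, and a listener notified whenever the view changes.

```java
import java.util.*;

// Hypothetical sketch of a 'discovery-light' surface; the real proposal
// lives in the attached patch and may differ in names and structure.
interface ClusterViewListener {
    void viewChanged(String viewId, Set<String> activeInstanceIds);
}

class DiscoveryLite {
    // Non-persisted: a fresh id on each bundle activation.
    private final String localInstanceId = UUID.randomUUID().toString();
    private final List<ClusterViewListener> listeners = new ArrayList<>();
    private String viewId = UUID.randomUUID().toString();
    private Set<String> active = new TreeSet<>();

    String getLocalInstanceId() { return localInstanceId; }

    void register(ClusterViewListener l) { listeners.add(l); }

    // Called when heartbeats in the dedicated 'discovery' collection show
    // a different set of live instances; unchanged views do not fire.
    void updateView(Set<String> activeIds) {
        if (!activeIds.equals(active)) {
            active = new TreeSet<>(activeIds);
            viewId = UUID.randomUUID().toString(); // new view incarnation
            for (ClusterViewListener l : listeners) {
                l.viewChanged(viewId, Collections.unmodifiableSet(active));
            }
        }
    }
}
```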



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3145) Allow plugging in additional jcr-descriptors

2015-07-27 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-3145:
-
Attachment: OAK-3145.patch

Attached [^OAK-3145.patch] as a suggested implementation. Open for review.

 Allow plugging in additional jcr-descriptors
 

 Key: OAK-3145
 URL: https://issues.apache.org/jira/browse/OAK-3145
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Affects Versions: 1.3.3
Reporter: Stefan Egli
Assignee: Stefan Egli
Priority: Minor
 Fix For: 1.3.4

 Attachments: OAK-3145.patch


 The current way oak compiles the JCR descriptors is somewhat hard-coded and 
 does not allow an easy way to extend it. Currently oak-jcr's 
 {{RepositoryImpl.determineDescriptors()}} uses oak-core's 
 {{ContentRepositoryImpl.createDescriptors()}} ones and overwrites a few (in 
 {{JCRDescriptorsImpl}}). The result of this is then put into the 
 {{RepositoryImpl.descriptors}} field - which is subsequently used.
 While these descriptors can explicitly be overwritten via {{put}}, that all 
 has to happen within overwriting code of the {{RepositoryImpl}} hierarchy. 
 There's no plug-in mechanism.
 Using the whiteboard pattern there should be a mechanism that allows to 
 provide services that implement {{Descriptors.class}} which could then be 
 woven into the {{GeneralDescriptors}} in use by {{ContentRepositoryImpl}} for 
 example. The plugin mechanism would thus allow hooking in Descriptors as 'the 
 base' which are still overwritten by oak-core/oak-jcr's own defaults (to 
 ensure the final say is with oak-core/oak-jcr) - as opening up such a 
 Descriptors-plugin mechanism could potentially be used by 3rd party code too.
 PS: this is a spin-off of OAK-2844
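The precedence described above can be sketched in a few lines of stand-alone Java (class and key names are placeholders, not the Oak API): whiteboard-provided descriptors form the base, and oak-core/oak-jcr defaults are applied last so they keep the final say on conflicts.

```java
import java.util.*;

// Illustrative sketch of the suggested descriptor precedence; the real
// implementation would collect Descriptors services via the whiteboard.
class DescriptorsMerge {

    static Map<String, String> combine(List<Map<String, String>> pluggedIn,
                                       Map<String, String> oakDefaults) {
        Map<String, String> result = new LinkedHashMap<>();
        for (Map<String, String> d : pluggedIn) {
            result.putAll(d); // 3rd-party descriptors form 'the base'
        }
        result.putAll(oakDefaults); // oak's own values win on conflict
        return result;
    }
}
```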



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3145) Allow plugging in additional jcr-descriptors

2015-07-27 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-3145:
-
Description: 
The current way oak compiles the JCR descriptors is somewhat hard-coded and 
does not allow an easy way to extend it. Currently oak-jcr's 
{{RepositoryImpl.determineDescriptors()}} uses oak-core's 
{{ContentRepositoryImpl.createDescriptors()}} ones and overwrites a few (in 
{{JCRDescriptorsImpl}}). The result of this is then put into the 
{{RepositoryImpl.descriptors}} field - which is subsequently used.

While these descriptors can explicitly be overwritten via {{put}}, that all has 
to happen within overwriting code of the {{RepositoryImpl}} hierarchy. There's 
no plug-in mechanism.

Using the whiteboard pattern there should be a mechanism that allows to provide 
services that implement {{Descriptors.class}} which could then be woven into 
the {{GeneralDescriptors}} in use by {{ContentRepositoryImpl}} for example. The 
plugin mechanism would thus allow hooking in Descriptors as 'the base' which are 
still overwritten by oak-core/oak-jcr's own defaults (to ensure the final say 
is with oak-core/oak-jcr) - as opening up such a *Descriptors-plugin mechanism 
could potentially be used by 3rd party code too*.

PS: this is a spin-off of OAK-2844

  was:
The current way oak compiles the JCR descriptors is somewhat hard-coded and 
does not allow an easy way to extend it. Currently oak-jcr's 
{{RepositoryImpl.determineDescriptors()}} uses oak-core's 
{{ContentRepositoryImpl.createDescriptors()}} ones and overwrites a few (in 
{{JCRDescriptorsImpl}}). The result of this is then put into the 
{{RepositoryImpl.descriptors}} field - which is subsequently used.

While these descriptors can explicitly be overwritten via {{put}}, that all has 
to happen within overwriting code of the {{RepositoryImpl}} hierarchy. There's 
no plug-in mechanism.

Using the whiteboard pattern there should be a mechanism that allows to provide 
services that implement {{Descriptors.class}} which could then be woven into 
the {{GeneralDescriptors}} in use by {{ContentRepositoryImpl}} for example. The 
plugin mechanism would thus allow hooking in Descriptors as 'the base' which are 
still overwritten by oak-core/oak-jcr's own defaults (to ensure the final say 
is with oak-core/oak-jcr) - as opening up such a Descriptors-plugin mechanism 
could potentially be used by 3rd party code too.

PS: this is a spin-off of OAK-2844


 Allow plugging in additional jcr-descriptors
 

 Key: OAK-3145
 URL: https://issues.apache.org/jira/browse/OAK-3145
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Affects Versions: 1.3.3
Reporter: Stefan Egli
Assignee: Stefan Egli
Priority: Minor
 Fix For: 1.3.4

 Attachments: OAK-3145.patch


 The current way oak compiles the JCR descriptors is somewhat hard-coded and 
 does not allow an easy way to extend it. Currently oak-jcr's 
 {{RepositoryImpl.determineDescriptors()}} uses oak-core's 
 {{ContentRepositoryImpl.createDescriptors()}} ones and overwrites a few (in 
 {{JCRDescriptorsImpl}}). The result of this is then put into the 
 {{RepositoryImpl.descriptors}} field - which is subsequently used.
 While these descriptors can explicitly be overwritten via {{put}}, that all 
 has to happen within overwriting code of the {{RepositoryImpl}} hierarchy. 
 There's no plug-in mechanism.
 Using the whiteboard pattern there should be a mechanism that allows to 
 provide services that implement {{Descriptors.class}} which could then be 
 woven into the {{GeneralDescriptors}} in use by {{ContentRepositoryImpl}} for 
 example. The plugin mechanism would thus allow hooking in Descriptors as 'the 
 base' which are still overwritten by oak-core/oak-jcr's own defaults (to 
 ensure the final say is with oak-core/oak-jcr) - as opening up such a 
 *Descriptors-plugin mechanism could potentially be used by 3rd party code 
 too*.
 PS: this is a spin-off of OAK-2844



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2844) Introducing a simple document-based discovery-light service (to circumvent documentMk's eventual consistency delays)

2015-07-27 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14642447#comment-14642447
 ] 

Stefan Egli commented on OAK-2844:
--

PS: I haven't modified [^OAK-2844.v3.patch] at this stage; it therefore still 
includes the OAK-3145 changes.

 Introducing a simple document-based discovery-light service (to circumvent 
 documentMk's eventual consistency delays)
 

 Key: OAK-2844
 URL: https://issues.apache.org/jira/browse/OAK-2844
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mongomk
Reporter: Stefan Egli
Assignee: Stefan Egli
  Labels: resilience
 Fix For: 1.4

 Attachments: InstanceStateChangeListener.java, OAK-2844.WIP-02.patch, 
 OAK-2844.patch, OAK-2844.v3.patch


 When running discovery.impl on a mongoMk-backed jcr repository, there are 
 risks of hitting problems such as described in SLING-3432 
 pseudo-network-partitioning: this happens when a jcr-level heartbeat does 
 not reach peers within the configured heartbeat timeout - it then treats that 
 affected instance as dead, removes it from the topology, and continues with 
 the remaining instances, potentially electing a new leader, running the risk of 
 duplicate leaders. This happens when delays in mongoMk grow larger than the 
 (configured) heartbeat timeout. These problems ultimately are due to the 
 'eventual consistency' nature of, not only mongoDB, but more so of mongoMk. 
 The only alternative so far is to increase the heartbeat timeout to match the 
 expected or measured delays that mongoMk can produce (under say given 
 load/performance scenarios).
 Assuming that mongoMk will always carry a risk of certain delays, and that a 
 reasonable maximum (for the discovery.impl timeout, that is) cannot be 
 guaranteed, a better solution is to provide discovery with more 'real-time' 
 like information and/or privileged access to mongoDb.
 Here's a summary of alternatives that have so far been floating around as a 
 solution to circumvent eventual consistency:
  # expose existing (jmx) information about active 'clusterIds' - this has 
 been proposed in SLING-4603. The pros: reuse of existing functionality. The 
 cons: going via jmx, binding of exposed functionality as 'to be maintained 
 API'
  # expose a plain mongo db/collection (via osgi injection) such that a higher 
 (sling) level discovery could directly write heartbeats there. The pros: 
 heartbeat latency would be minimal (assuming the collection is not sharded). 
 The cons: exposes a mongo db/collection potentially also to anyone else, with 
 the risk of opening up to unwanted possibilities
  # introduce a simple 'discovery-light' API to oak which solely provides 
 information about which instances are active in a cluster. The implementation 
 of this is not exposed. The pros: no need to expose a mongoDb/collection, 
 allows any other jmx-functionality to remain unchanged. The cons: a new API 
 that must be maintained
 This ticket is about the 3rd option, about a new mongo-based discovery-light 
 service that is introduced to oak. The functionality in short:
  * it defines a 'local instance id' that is non-persisted, ie can change at 
 each bundle activation.
  * it defines a 'view id' that uniquely identifies a particular incarnation 
 of a 'cluster view/state' (which is: a list of active instance ids)
  * and it defines a list of active instance ids
  * the above attributes are passed to interested components via a listener 
 that can be registered. that listener is called whenever the discovery-light 
 notices the cluster view has changed.
 While the actual implementation could in fact be based on the existing 
 {{getActiveClusterNodes()}} {{getClusterId()}} of the 
 {{DocumentNodeStoreMBean}}, the suggestion is to not fiddle with that part, 
 as that has dependencies to other logic. But instead, the suggestion is to 
 create a dedicated, other, collection ('discovery') where heartbeats as well 
 as the currentView are stored.
 Will attach a suggestion for an initial version of this for review.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)