[jira] [Updated] (OAK-3352) Expose Lucene search score explanation

2015-12-01 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh updated OAK-3352:
---
Attachment: OAK-3352.patch

Attaching [patch|^OAK-3352.patch]. The patch parses the query statement the same 
way {{rep:excerpt()}} does (i.e. a simple String.contains() in the filter) - I'm 
not sure if this is fine or whether we should put it in as a flag in the filter, 
as Chetan suggested in his proposal.
The column name used is oak:explainScore, although I think something of the form 
oak:explainScore*()* might be better (the parentheses give a better sense that 
this is not a property name).

In the absence of a better idea, the test case just checks that the results 
contain {{(MATCH)}}, as a couple of explanations that I saw had this string in 
them.

BTW, 
[IndexSearcher.explain|https://lucene.apache.org/core/4_6_0/core/org/apache/lucene/search/IndexSearcher.html#explain%28org.apache.lucene.search.Query,%20int%29]
 notes that computing the explanation can be expensive - so we should state that 
clearly in the documentation too.
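
For reference, a minimal sketch (not the attached patch) of how an explanation 
could be obtained per hit with the Lucene API and exposed as a plain string 
value; the searcher, query and document id are assumed to come from the existing 
index lookup:
{noformat}
import java.io.IOException;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

class ExplainScoreSketch {
    // Hypothetical helper, not the attached patch: IndexSearcher.explain() can be
    // expensive, so this should only run when the explain-score column is requested.
    static String explainScore(IndexSearcher searcher, Query query, int docId)
            throws IOException {
        Explanation explanation = searcher.explain(query, docId);
        return explanation.toString();
    }
}
{noformat}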

[~chetanm], [~teofili], can you please review it?

> Expose Lucene search score explanation 
> ---
>
> Key: OAK-3352
> URL: https://issues.apache.org/jira/browse/OAK-3352
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: lucene, query
>Reporter: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3352.patch
>
>
> Lucene provides {{Explanation}} [1] support to understand how the score for a 
> particular search result is determined. To enable users to reason about 
> fulltext search results in Oak we should expose this information as part of 
> the query result.
> [1] 
> https://lucene.apache.org/core/4_0_0/core/org/apache/lucene/search/Explanation.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3699) RDBDocumentStore shutdown: improve logging

2015-12-01 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034048#comment-15034048
 ] 

Julian Reschke commented on OAK-3699:
-

(also add simple stats on writes to CLUSTERNODES; this will help to find out 
which cluster node this store instance was operating on)
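
A minimal sketch of what the dispose() logging could look like, assuming SLF4J 
and a simple counter; the message format and counter name are illustrative only, 
the commits referenced later in this thread contain the actual change:
{noformat}
import java.util.concurrent.atomic.AtomicLong;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class RdbStoreShutdownLoggingSketch {
    private static final Logger LOG =
            LoggerFactory.getLogger(RdbStoreShutdownLoggingSketch.class);

    // Hypothetical counter, incremented wherever CLUSTERNODES rows are written.
    private final AtomicLong clusterNodesWrites = new AtomicLong();

    void dispose() {
        // ... existing shutdown work ...
        LOG.info("DocumentStore disposed ({} writes to CLUSTERNODES)",
                clusterNodesWrites.get());
    }
}
{noformat}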

> RDBDocumentStore shutdown: improve logging
> --
>
> Key: OAK-3699
> URL: https://issues.apache.org/jira/browse/OAK-3699
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.11, 1.2.8, 1.0.24
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>
> We currently log the startup at INFO level, but not the shutdown (dispose()). 
> This makes it harder to see in system logs whether the DocumentStore has been 
> shut down properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3699) RDBDocumentStore shutdown: improve logging

2015-12-01 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3699:

Labels: resilience  (was: )

> RDBDocumentStore shutdown: improve logging
> --
>
> Key: OAK-3699
> URL: https://issues.apache.org/jira/browse/OAK-3699
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.11, 1.2.8, 1.0.24
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: resilience
> Fix For: 1.3.12
>
>
> We currently log the startup at INFO level, but not the shutdown (dispose()). 
> This makes it harder to see in system logs whether the DocumentStore has been 
> shut down properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3699) RDBDocumentStore shutdown: improve logging

2015-12-01 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3699:

Fix Version/s: 1.0.25
   1.2.9

> RDBDocumentStore shutdown: improve logging
> --
>
> Key: OAK-3699
> URL: https://issues.apache.org/jira/browse/OAK-3699
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.11, 1.2.8, 1.0.24
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: resilience
> Fix For: 1.2.9, 1.0.25, 1.3.12
>
>
> We currently log the startup at INFO level, but not the shutdown (dispose()). 
> This makes it harder to see in system logs whether the DocumentStore has been 
> shut down properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3436) Prevent missing checkpoint due to unstable topology from causing complete reindexing

2015-12-01 Thread Manfred Baedke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034086#comment-15034086
 ] 

Manfred Baedke commented on OAK-3436:
-

[~chetanm], [~alex.parvulescu]: If each node maintained its own list of temp 
checkpoints, each node could clean up only the checkpoints it created itself 
(or ones that are very old). This should be sufficient to avoid the loss of 
checkpoints.

> Prevent missing checkpoint due to unstable topology from causing complete 
> reindexing
> 
>
> Key: OAK-3436
> URL: https://issues.apache.org/jira/browse/OAK-3436
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: resilience
> Fix For: 1.2.9, 1.0.25, 1.3.12
>
> Attachments: AsyncIndexUpdateClusterTest.java, OAK-3436-0.patch
>
>
> Async indexing logic relies on the embedding application to ensure that the 
> async indexing job runs as a singleton in a cluster. For Sling based apps it 
> depends on Sling Discovery support. At times it has been seen that, if the 
> topology is not stable, different cluster nodes can each consider themselves 
> the leader and execute the async indexing job concurrently.
> This can cause problems, as the two cluster nodes might not see the same 
> repository state (due to write skew and eventual consistency) and one might 
> remove the checkpoint which the other cluster node is still relying upon. For 
> example, consider a 2 node cluster N1 and N2 where both are performing async 
> indexing.
> # Base state - CP1 is the checkpoint for the "async" job
> # N2 starts indexing the changes from CP1 to CP2 and removes CP1. For Mongo 
> the checkpoints are saved in the {{settings}} collection
> # N1 also decides to execute indexing but has not yet seen the latest 
> repository state, so it still thinks that CP1 is the base checkpoint and 
> tries to read it. However, CP1 has already been removed from {{settings}}, 
> which makes N1 think that the checkpoint is missing, and it decides to 
> reindex everything!
> To avoid this the topology must be stable, but at the Oak level we should 
> still handle such a case and avoid doing a full reindexing. So we would need 
> a {{MissingCheckpointStrategy}} similar to the {{MissingIndexEditorStrategy}} 
> introduced in OAK-2203.
> Possible approaches
> # A1 - Fail the indexing run if the checkpoint is missing - A checkpoint can 
> go missing for both valid and invalid reasons; we need to see what the valid 
> scenarios are
> # A2 - When a checkpoint is created, also store its creation time. When a 
> checkpoint is found to be missing and it is a *recent* checkpoint, fail the 
> run. For example, we could fail the run as long as the missing checkpoint is 
> less than an hour old (for a freshly started instance, take the startup time 
> into account)
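
A minimal sketch of approach A2 above, assuming the checkpoint creation time is 
stored alongside the checkpoint; the one-hour threshold and the names are 
illustrative only:
{noformat}
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of approach A2: fail the async run instead of reindexing
// when a recently created checkpoint has gone missing. Threshold and names are
// illustrative, not part of any actual implementation.
class MissingCheckpointPolicy {
    private static final long RECENT_MILLIS = TimeUnit.HOURS.toMillis(1);

    // processStartTime covers the "just started" case: right after startup the
    // instance has not had a chance to observe recent checkpoint changes yet.
    boolean failRunInsteadOfReindex(long checkpointCreated, long processStartTime) {
        long reference = Math.max(checkpointCreated, processStartTime);
        return System.currentTimeMillis() - reference < RECENT_MILLIS;
    }
}
{noformat}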



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3699) RDBDocumentStore shutdown: improve logging

2015-12-01 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3699:

Fix Version/s: 1.3.12

> RDBDocumentStore shutdown: improve logging
> --
>
> Key: OAK-3699
> URL: https://issues.apache.org/jira/browse/OAK-3699
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.11, 1.2.8, 1.0.24
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.3.12
>
>
> We currently log the startup at INFO level, but not the shutdown (dispose()). 
> This makes it harder to see in system logs whether the DocumentStore has been 
> shut down properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3708) Update Oak to Jackrabbit 2.11.3

2015-12-01 Thread Davide Giannella (JIRA)
Davide Giannella created OAK-3708:
-

 Summary: Update Oak to Jackrabbit 2.11.3
 Key: OAK-3708
 URL: https://issues.apache.org/jira/browse/OAK-3708
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Davide Giannella
Assignee: Davide Giannella
Priority: Blocker
 Fix For: 1.3.12






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3699) RDBDocumentStore shutdown: improve logging

2015-12-01 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-3699.
-
Resolution: Fixed

trunk: http://svn.apache.org/r1717462
1.2: http://svn.apache.org/r1717466
1.0: http://svn.apache.org/r1717467


> RDBDocumentStore shutdown: improve logging
> --
>
> Key: OAK-3699
> URL: https://issues.apache.org/jira/browse/OAK-3699
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.11, 1.2.8, 1.0.24
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>  Labels: resilience
> Fix For: 1.2.9, 1.0.25, 1.3.12
>
>
> We currently log the startup at INFO level, but not the shutdown (dispose()). 
> This makes it harder to see in system logs whether the DocumentStore has been 
> shut down properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3707) Register composite commit hook with whiteboard

2015-12-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-3707:
--
Attachment: OAK-3707-1.patch

Attaching a [patch|^OAK-3707-1.patch].

[~mduerig] please review.

> Register composite commit hook with whiteboard
> --
>
> Key: OAK-3707
> URL: https://issues.apache.org/jira/browse/OAK-3707
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Affects Versions: 1.3.11
>Reporter: Davide Giannella
>Assignee: Davide Giannella
> Fix For: 1.4
>
> Attachments: OAK-3707-1.patch
>
>
> Register, during repository initialisation, the composite CommitHook 
> with the whiteboard.
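
A minimal sketch of the idea, assuming Oak's {{Whiteboard}} and {{CompositeHook}} 
APIs; the attached patch is the authoritative change and may look different:
{noformat}
import java.util.Collections;
import java.util.List;
import org.apache.jackrabbit.oak.spi.commit.CommitHook;
import org.apache.jackrabbit.oak.spi.commit.CompositeHook;
import org.apache.jackrabbit.oak.spi.whiteboard.Registration;
import org.apache.jackrabbit.oak.spi.whiteboard.Whiteboard;

class CommitHookRegistrationSketch {
    // Hypothetical sketch: compose the configured hooks and register the composite
    // with the whiteboard so other components can retrieve it later.
    static Registration register(Whiteboard whiteboard, List<CommitHook> hooks) {
        CommitHook composite = CompositeHook.compose(hooks);
        return whiteboard.register(CommitHook.class, composite, Collections.emptyMap());
    }
}
{noformat}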



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3494) MemoryDiffCache should also check parent paths before falling to Loader (or returning null)

2015-12-01 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035430#comment-15035430
 ] 

Marcel Reutegger commented on OAK-3494:
---

+1

> MemoryDiffCache should also check parent paths before falling to Loader (or 
> returning null)
> ---
>
> Key: OAK-3494
> URL: https://issues.apache.org/jira/browse/OAK-3494
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Vikas Saurabh
>Assignee: Marcel Reutegger
>  Labels: candidate_oak_1_0, candidate_oak_1_2, performance
> Fix For: 1.3.10
>
> Attachments: OAK-3494-1.patch, OAK-3494-2.patch, 
> OAK-3494-TestCase.patch, OAK-3494.patch
>
>
> Each entry in {{MemoryDiffCache}} is keyed with {{(path, fromRev, toRev)}} 
> for the list of modified children at {{path}}. A diff calculated by 
> {{DocumentNodeStore.diffImpl}} at '/' (passively via the loader) or by 
> {{JournalEntry.applyTo}} (actively) fills each path for which there are 
> modified children (including the hierarchy).
> But if an observer calls {{compareWithBaseState}} on an unmodified sub-tree, 
> the observer will still go down to {{diffImpl}}, although the cached parent 
> entry could be used to answer the query.
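
A simplified model of the idea (the real {{MemoryDiffCache}} uses Oak's cache and 
{{Revision}}-based keys; the names and the string-based diff check here are 
illustrative only, the attached patches implement the real logic):
{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical, simplified model: before falling back to the loader for a path,
// look at the cached diff of the parent; if the parent diff is present and does
// not mention this child as changed, the sub-tree is unchanged.
class ParentAwareDiffCache {
    private final Map<String, String> diffs = new ConcurrentHashMap<>();

    String key(String path, String fromRev, String toRev) {
        return path + "@" + fromRev + "/" + toRev;
    }

    boolean unchangedAccordingToParent(String path, String fromRev, String toRev) {
        int idx = path.lastIndexOf('/');
        if (idx <= 0) {
            return false;                       // root has no parent entry
        }
        String parent = path.substring(0, idx);
        String name = path.substring(idx + 1);
        String parentDiff = diffs.get(key(parent, fromRev, toRev));
        // crude check: real code parses the diff instead of substring matching
        return parentDiff != null && !parentDiff.contains(name);
    }
}
{noformat}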



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3352) Expose Lucene search score explanation

2015-12-01 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035244#comment-15035244
 ] 

Chetan Mehrotra commented on OAK-3352:
--

bq. +boolean addExplain = queryStatement != null && 
queryStatement.contains(QueryImpl.OAK_EXPLAIN_SCORE);

I would prefer to add this to the filter, as a similar construct can be used for 
other indexes as well. [~tmueller], thoughts? How the explanation requirement is 
expressed is an implementation detail of the query, and the index implementation 
need not analyze the query statement to infer it.

bq. The column name used is oak:explainScore, although I think something of the form 
oak:explainScore*()* might be better (the parentheses give a better sense that 
this is not a property name).

So instead of sounding like a function, let's change the name to 
{{oak:scoreExplanation}}?

bq. BTW, IndexSearcher.explain notes that computing the explanation can be expensive - 
so we should state that clearly in the documentation too.

As the cost is only incurred if this feature is actually used in a query, I think 
it should be fine. But yes, as part of the documentation update for this feature 
it can be highlighted.



> Expose Lucene search score explanation 
> ---
>
> Key: OAK-3352
> URL: https://issues.apache.org/jira/browse/OAK-3352
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: lucene, query
>Reporter: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3352.patch
>
>
> Lucene provides {{Explanation}} [1] support to understand how the score for a 
> particular search result is determined. To enable users to reason about 
> fulltext search results in Oak we should expose this information as part of 
> the query result.
> [1] 
> https://lucene.apache.org/core/4_0_0/core/org/apache/lucene/search/Explanation.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3703) Improve handling of IOException

2015-12-01 Thread JIRA
Michael Dürig created OAK-3703:
--

 Summary: Improve handling of IOException
 Key: OAK-3703
 URL: https://issues.apache.org/jira/browse/OAK-3703
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig


{{SegmentStore.writeSegment}} doesn't specify its behaviour in the face of IO 
errors. Moreover {{FileStore.writeSegment}} just catches any {{IOException}} 
and throws a {{RuntimeException}} with the former as its cause. 

I think we need to clarify this as an immediate cause of the current state is 
that some of the {{SegmentWriter}} write methods *do* throw an {{IOException}} 
and some *don't*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3654) Integrate with Metrics for various stats collection

2015-12-01 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033343#comment-15033343
 ] 

Chetan Mehrotra commented on OAK-3654:
--

Some confusion here: the default TimeSeries support is still there; it's just that 
Metrics support is disabled for that counter, as it had an appreciable overhead 
for this use case.

> Integrate with Metrics for various stats collection 
> 
>
> Key: OAK-3654
> URL: https://issues.apache.org/jira/browse/OAK-3654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.3.12
>
> Attachments: OAK-3654-fast-clock.patch, OAK-3654-v1.patch, 
> query-stats.png
>
>
> As suggested by [~ianeboston] in OAK-3478, the current approach of collecting 
> TimeSeries data is not easily consumable by other monitoring systems. Also, 
> just extracting the last moment's data and exposing it in a simple form would 
> not be useful. 
> Instead we should look into using the Metrics library [1] for collecting 
> metrics. To avoid having a dependency on the Metrics API all over Oak, we can 
> come up with minimal interfaces which can be used in Oak and then provide an 
> implementation backed by Metrics.
> This task is meant to explore that aspect and come up with proposed changes 
> to see if it is feasible to make this change.
> * metrics-core is ~100KB in size with no dependencies
> * ASL licensed
> [1] http://metrics.dropwizard.io
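
A minimal sketch of the "small interface in Oak, implementation backed by 
Metrics" idea; the interface and class names are illustrative and do not claim to 
match the attached patch or Oak's actual stats API:
{noformat}
import com.codahale.metrics.MetricRegistry;

// Hypothetical sketch: Oak code depends only on SimpleMeter, while the Metrics
// dependency stays inside the single backing implementation.
interface SimpleMeter {
    void mark();
}

class MetricsBackedMeter implements SimpleMeter {
    private final com.codahale.metrics.Meter meter;

    MetricsBackedMeter(MetricRegistry registry, String name) {
        this.meter = registry.meter(name);
    }

    @Override
    public void mark() {
        meter.mark();
    }
}
{noformat}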



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3654) Integrate with Metrics for various stats collection

2015-12-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033355#comment-15033355
 ] 

Michael Dürig commented on OAK-3654:


Aha ok, thanks for the clarification. 

> Integrate with Metrics for various stats collection 
> 
>
> Key: OAK-3654
> URL: https://issues.apache.org/jira/browse/OAK-3654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.3.12
>
> Attachments: OAK-3654-fast-clock.patch, OAK-3654-v1.patch, 
> query-stats.png
>
>
> As suggested by [~ianeboston] in OAK-3478, the current approach of collecting 
> TimeSeries data is not easily consumable by other monitoring systems. Also, 
> just extracting the last moment's data and exposing it in a simple form would 
> not be useful. 
> Instead we should look into using the Metrics library [1] for collecting 
> metrics. To avoid having a dependency on the Metrics API all over Oak, we can 
> come up with minimal interfaces which can be used in Oak and then provide an 
> implementation backed by Metrics.
> This task is meant to explore that aspect and come up with proposed changes 
> to see if it is feasible to make this change.
> * metrics-core is ~100KB in size with no dependencies
> * ASL licensed
> [1] http://metrics.dropwizard.io



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3702) More resilient BackgroundThread implementation

2015-12-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3702:
---
Fix Version/s: 1.4

> More resilient BackgroundThread implementation
> --
>
> Key: OAK-3702
> URL: https://issues.apache.org/jira/browse/OAK-3702
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: resilience
> Fix For: 1.4
>
>
> Currently {{BackgroundThread}} dies silently when hit by an uncaught 
> exception. We should log a warning. 
> Also, calling {{Thread#start}} from within the constructor is an anti-pattern 
> as it exposes {{this}} before it is fully initialised. This is potentially 
> causing OAK-3303. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3704) Keep track of nested CUGs

2015-12-01 Thread angela (JIRA)
angela created OAK-3704:
---

 Summary: Keep track of nested CUGs
 Key: OAK-3704
 URL: https://issues.apache.org/jira/browse/OAK-3704
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: authorization-cug
Reporter: angela
Assignee: angela


In order to ease the evaluation of CUG policies, the implementation should come 
with a post-commit validator that keeps track of nested CUGs, e.g. in a hidden 
property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3702) More resilient BackgroundThread implementation

2015-12-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-3702.

Resolution: Fixed

Fixed at http://svn.apache.org/viewvc?rev=1717394=rev and 
http://svn.apache.org/viewvc?rev=1717393=rev

> More resilient BackgroundThread implementation
> --
>
> Key: OAK-3702
> URL: https://issues.apache.org/jira/browse/OAK-3702
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: resilience
> Fix For: 1.4
>
>
> Currently {{BackgroundThread}} dies silently when hit by an uncaught 
> exception. We should log a warning. 
> Also, calling {{Thread#start}} from within the constructor is an anti-pattern 
> as it exposes {{this}} before it is fully initialised. This is potentially 
> causing OAK-3303. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3702) More resilient BackgroundThread implementation

2015-12-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-3702:
---
Fix Version/s: (was: 1.4)
   1.3.12

> More resilient BackgroundThread implementation
> --
>
> Key: OAK-3702
> URL: https://issues.apache.org/jira/browse/OAK-3702
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: resilience
> Fix For: 1.3.12
>
>
> Currently {{BackgroundThread}} dies silently when hit by an uncaught 
> exception. We should log a warning. 
> Also, calling {{Thread#start}} from within the constructor is an anti-pattern 
> as it exposes {{this}} before it is fully initialised. This is potentially 
> causing OAK-3303. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3706) Update to maven-bundle-plugin 3.0.1

2015-12-01 Thread Julian Sedding (JIRA)
Julian Sedding created OAK-3706:
---

 Summary: Update to maven-bundle-plugin 3.0.1
 Key: OAK-3706
 URL: https://issues.apache.org/jira/browse/OAK-3706
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: parent
Affects Versions: 1.3.11
Reporter: Julian Sedding


The maven-bundle-plugin version 3.0.0 has an issue where test dependencies are 
used in the import/export package calculation (FELIX-5062). We should update to 
version 3.0.1 where this issue is fixed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3654) Integrate with Metrics for various stats collection

2015-12-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033292#comment-15033292
 ] 

Michael Dürig commented on OAK-3654:


I'm not too happy about losing the session read times. Are we sufficiently sure 
no one relies on them? I wouldn't want to have to hack them back in for a 
customer case...

> Integrate with Metrics for various stats collection 
> 
>
> Key: OAK-3654
> URL: https://issues.apache.org/jira/browse/OAK-3654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.3.12
>
> Attachments: OAK-3654-fast-clock.patch, OAK-3654-v1.patch, 
> query-stats.png
>
>
> As suggested by [~ianeboston] in OAK-3478, the current approach of collecting 
> TimeSeries data is not easily consumable by other monitoring systems. Also, 
> just extracting the last moment's data and exposing it in a simple form would 
> not be useful. 
> Instead we should look into using the Metrics library [1] for collecting 
> metrics. To avoid having a dependency on the Metrics API all over Oak, we can 
> come up with minimal interfaces which can be used in Oak and then provide an 
> implementation backed by Metrics.
> This task is meant to explore that aspect and come up with proposed changes 
> to see if it is feasible to make this change.
> * metrics-core is ~100KB in size with no dependencies
> * ASL licensed
> [1] http://metrics.dropwizard.io



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3702) More resilient BackgroundThread implementation

2015-12-01 Thread JIRA
Michael Dürig created OAK-3702:
--

 Summary: More resilient BackgroundThread implementation
 Key: OAK-3702
 URL: https://issues.apache.org/jira/browse/OAK-3702
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig


Currently {{BackgroundThread}} dies silently when hit by an uncaught exception. 
We should log a warning. 

Also, calling {{Thread#start}} from within the constructor is an anti-pattern as 
it exposes {{this}} before it is fully initialised. This is potentially causing 
OAK-3303. 
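
A minimal sketch of the two suggested fixes, assuming SLF4J logging; the names 
are illustrative and this is not the actual change:
{noformat}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: log uncaught exceptions instead of dying silently, and
// start the thread from a factory method rather than from the constructor so
// 'this' is not exposed before it is fully initialised.
class SafeBackgroundThread extends Thread {
    private static final Logger LOG = LoggerFactory.getLogger(SafeBackgroundThread.class);

    private SafeBackgroundThread(String name, Runnable target) {
        super(target, name);
        setDaemon(true);
        setUncaughtExceptionHandler((thread, throwable) ->
                LOG.warn("Background thread {} terminated with an uncaught exception",
                        thread.getName(), throwable));
    }

    static SafeBackgroundThread run(String name, Runnable target) {
        SafeBackgroundThread t = new SafeBackgroundThread(name, target);
        t.start();  // started only after construction has completed
        return t;
    }
}
{noformat}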



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-3268) Improve datastore resilience

2015-12-01 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain reassigned OAK-3268:
--

Assignee: Amit Jain

> Improve datastore resilience
> 
>
> Key: OAK-3268
> URL: https://issues.apache.org/jira/browse/OAK-3268
> Project: Jackrabbit Oak
>  Issue Type: Epic
>  Components: blob
>Reporter: Michael Marth
>Assignee: Amit Jain
>Priority: Critical
>  Labels: resilience
>
> As discussed bilaterally, grouping the improvements for datastore resilience 
> in this issue for easier tracking.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-3269) Improve indexing resilience

2015-12-01 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra reassigned OAK-3269:


Assignee: Chetan Mehrotra

> Improve indexing resilience
> ---
>
> Key: OAK-3269
> URL: https://issues.apache.org/jira/browse/OAK-3269
> Project: Jackrabbit Oak
>  Issue Type: Epic
>  Components: lucene
>Reporter: Michael Marth
>Assignee: Chetan Mehrotra
>Priority: Critical
>  Labels: resilience
>
> As discussed bilaterally, grouping the improvements for indexer resilience in 
> this issue for easier tracking.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-3696) Improve SegmentMK resilience

2015-12-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-3696:
--

Assignee: Michael Dürig

> Improve SegmentMK resilience
> 
>
> Key: OAK-3696
> URL: https://issues.apache.org/jira/browse/OAK-3696
> Project: Jackrabbit Oak
>  Issue Type: Epic
>  Components: segmentmk
>Reporter: Michael Marth
>Assignee: Michael Dürig
>  Labels: resilience
>
> Epic for collecting SegmentMK resilience improvements



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3692) java.lang.NoClassDefFoundError: org/apache/lucene/index/sorter/Sorter$DocComparator

2015-12-01 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033498#comment-15033498
 ] 

Vikas Saurabh commented on OAK-3692:


Thanks for looking into this, [~chetanm]. Your patch gets the build going. BTW, 
I was wondering if we should rather just ignore the lucene packages, like:
{noformat}
<Import-Package>
    !org.apache.lucene.*,
    *
</Import-Package>
{noformat}
Wdyt?

> java.lang.NoClassDefFoundError: 
> org/apache/lucene/index/sorter/Sorter$DocComparator
> ---
>
> Key: OAK-3692
> URL: https://issues.apache.org/jira/browse/OAK-3692
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Blocker
> Fix For: 1.3.12
>
> Attachments: OAK-3692.patch
>
>
> I'm getting following exception while trying to include oak trunk build into 
> AEM:
> {noformat}
> 27.11.2015 20:41:25.946 *ERROR* [oak-lucene-2] 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexProvider Uncaught 
> exception in 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexProvider@36ba558b
> java.lang.NoClassDefFoundError: 
> org/apache/lucene/index/sorter/Sorter$DocComparator
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.util.SuggestHelper.getLookup(SuggestHelper.java:108)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexNode.(IndexNode.java:106)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexNode.open(IndexNode.java:69)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker$1.leave(IndexTracker.java:98)
> at 
> org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:153)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord$3.childNodeChanged(MapRecord.java:444)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:487)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compareBranch(MapRecord.java:565)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:470)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:436)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583)
> at 
> org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord$2.childNodeChanged(MapRecord.java:403)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord$3.childNodeChanged(MapRecord.java:444)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:487)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:436)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:394)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583)
> at 
> org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:52)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker.update(IndexTracker.java:108)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexProvider.contentChanged(LuceneIndexProvider.java:73)
> at 
> org.apache.jackrabbit.oak.spi.commit.BackgroundObserver$1$1.call(BackgroundObserver.java:131)
> at 
> org.apache.jackrabbit.oak.spi.commit.BackgroundObserver$1$1.call(BackgroundObserver.java:125)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.lucene.index.sorter.Sorter$DocComparator not found by 
> org.apache.jackrabbit.oak-lucene [95]
> at 
> org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1573)
> at 
> org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79)
> at 
> org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:2018)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   

[jira] [Updated] (OAK-1844) Verify resilience goals

2015-12-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1844:
---
Epic Name: Verify resilience

> Verify resilience goals
> ---
>
> Key: OAK-1844
> URL: https://issues.apache.org/jira/browse/OAK-1844
> Project: Jackrabbit Oak
>  Issue Type: Epic
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: resilience
> Fix For: 1.4
>
>
> This is a container issue for verifying the resilience goals set out for Oak. 
> See https://wiki.apache.org/jackrabbit/Resilience for the work in progress of 
> such goals. Once we have an agreement on that, subtasks of this issue could 
> be used to track the verification process of each of the individual goals. 
> Discussion here: http://jackrabbit.markmail.org/thread/5cndir5sjrc5dtla



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3303) FileStore flush thread can get stuck

2015-12-01 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-3303.
--
   Resolution: Duplicate
Fix Version/s: (was: 1.3.12)

Optimistically marking this as a duplicate of OAK-3702; if it comes back we can 
reopen the issue.

> FileStore flush thread can get stuck
> 
>
> Key: OAK-3303
> URL: https://issues.apache.org/jira/browse/OAK-3303
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>
> In some very rare circumstances the flush thread was seen as possibly stuck 
> for a while following a restart of the system. This results in data loss on 
> restart (the system will roll back to the latest persisted revision on 
> restart), and worse, there is no way of extracting the latest head revision 
> using the tar files, so recovery is not (yet) possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (OAK-2835) TARMK Cold Standby inefficient cleanup

2015-12-01 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu reopened OAK-2835:
--

> TARMK Cold Standby inefficient cleanup
> --
>
> Key: OAK-2835
> URL: https://issues.apache.org/jira/browse/OAK-2835
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk, tarmk-standby
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Critical
>  Labels: compaction, gc, production, resilience
> Attachments: OAK-2835.patch
>
>
> Following OAK-2817, it turns out that patching the data corruption issue 
> revealed an inefficiency in the cleanup method. Similar to the online 
> compaction situation, the standby has issues clearing some of the in-memory 
> references to old revisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2835) TARMK Cold Standby inefficient cleanup

2015-12-01 Thread Alex Parvulescu (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Parvulescu resolved OAK-2835.
--
Resolution: Duplicate

Adding the correct resolution (duplicate).

> TARMK Cold Standby inefficient cleanup
> --
>
> Key: OAK-2835
> URL: https://issues.apache.org/jira/browse/OAK-2835
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk, tarmk-standby
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Critical
>  Labels: compaction, gc, production, resilience
> Attachments: OAK-2835.patch
>
>
> Following OAK-2817, it turns out that patching the data corruption issue 
> revealed an inefficiency in the cleanup method. Similar to the online 
> compaction situation, the standby has issues clearing some of the in-memory 
> references to old revisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3686) Solr suggestion results should have 1 row per suggestion with appropriate column names

2015-12-01 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15029317#comment-15029317
 ] 

Vikas Saurabh edited comment on OAK-3686 at 12/1/15 3:34 PM:
-

[~teofili], I'm attaching a [patch|^OAK-3686.patch]. The current implementation 
uses unique=true for the path cursor, while a similar issue for lucene - OAK-2754 
(Use non unique PathCursor in LucenePropertyIndex) - changed the lucene property 
index to use a non-unique path cursor. I'm not sure whether we can do that safely 
for solr (i.e. whether the current solr queries would return unique nodes anyway 
for the relevant cases).


was (Author: catholicon):
@teofili, I'm attaching a [patch|^OAK-3686.patch]. The current implementation 
uses unique=true for the path cursor, while a similar issue for lucene - OAK-2754 
(Use non unique PathCursor in LucenePropertyIndex) - changed the lucene property 
index to use a non-unique path cursor. I'm not sure whether we can do that safely 
for solr (i.e. whether the current solr queries would return unique nodes anyway 
for the relevant cases).

> Solr suggestion results should have 1 row per suggestion with appropriate 
> column names
> --
>
> Key: OAK-3686
> URL: https://issues.apache.org/jira/browse/OAK-3686
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: solr
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
> Fix For: 1.3.12
>
> Attachments: OAK-3686.patch
>
>
> Currently the suggest query returns just one row, with a {{rep:suggest()}} 
> column containing a string that needs to be parsed.
> It'd be better if each suggestion were returned as an individual row, with 
> column names such as {{suggestion}}, {{weight}} (???), etc.
> (This is essentially the same issue as OAK-3509, but for solr)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2472) Add support for atomic counters on cluster solutions

2015-12-01 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033883#comment-15033883
 ] 

Davide Giannella commented on OAK-2472:
---

After offline discussions with [~mduerig], [~alex.parvulescu],
[~tmueller], [~tomek.rekawek], [~mreutegg], [~frm], [~teofili],
[~anchela] it has been decided to try registering the composite
commit hook with the whiteboard.

It will then be possible to retrieve the registered commit hooks from
there.



> Add support for atomic counters on cluster solutions
> 
>
> Key: OAK-2472
> URL: https://issues.apache.org/jira/browse/OAK-2472
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.3.0
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: scalability
> Fix For: 1.4
>
> Attachments: atomic-counter.md
>
>
> As of OAK-2220 we added support for atomic counters in a non-clustered 
> situation. 
> This ticket is about covering the clustered ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-3576) Allow custom extension to augment indexed lucene documents

2015-12-01 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh reassigned OAK-3576:
--

Assignee: Vikas Saurabh

> Allow custom extension to augment indexed lucene documents
> --
>
> Key: OAK-3576
> URL: https://issues.apache.org/jira/browse/OAK-3576
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
> Attachments: OAK-3576.jsedding.patch, OAK-3576.wip.patch
>
>
> Following up on http://oak.markmail.org/thread/a53ahsgb3bowtwyq, we should 
> have an extension point in Oak to allow custom code to add fields to 
> documents getting indexed in lucene. We'd also need an extension point 
> to add extra query terms to utilize such augmented fields.
> (cc [~teofili], [~chetanm])



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3637) Bulk document updates in RDBDocumentStore

2015-12-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3637:
---
Attachment: (was: OAK-3637.patch)

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Tomek Rękawek
> Fix For: 1.4
>
> Attachments: OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3637) Bulk document updates in RDBDocumentStore

2015-12-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3637:
---
Attachment: OAK-3637.patch

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Tomek Rękawek
> Fix For: 1.4
>
> Attachments: OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3372) Collapsing external events in BackgroundObserver even before queue is full leads to JournalEntry not getting used

2015-12-01 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033913#comment-15033913
 ] 

Vikas Saurabh commented on OAK-3372:


[~mduerig], I think we should backport this issue as it's useful with journal 
optimization. (cc [~mreutegg])

> Collapsing external events in BackgroundObserver even before queue is full 
> leads to JournalEntry not getting used
> -
>
> Key: OAK-3372
> URL: https://issues.apache.org/jira/browse/OAK-3372
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.3.5
>Reporter: Vikas Saurabh
>Assignee: Michael Dürig
>  Labels: candidate_oak_1_0, candidate_oak_1_2, resilience
> Fix For: 1.3.10
>
> Attachments: OAK-3372.patch
>
>
> BackgroundObserver currently merges external events if the last one in the 
> queue is also an external event. This leads to the diff being done for a 
> revision pair which is different from the ones pushed actively into the cache 
> during background read (using JournalEntry), i.e. the diff queries for 
> {{diff("/a/b", rA, rC)}} while the background read had pushed the results of 
> {{diff("/a/b", rA, rB)}} and {{diff("/a/b", rB, rC)}}.
> (cc [~mreutegg], [~egli], [~chetanm], [~mduerig])
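
A toy model of the cache-miss effect described above; the keys and diff strings 
are purely illustrative, not the real DiffCache format:
{noformat}
import java.util.HashMap;
import java.util.Map;

// Hypothetical demo: JournalEntry populates the diff cache for the individual
// revision pairs, but a collapsed external event asks for the combined pair,
// which was never cached, so the observer falls back to diffImpl.
class DiffCacheMissDemo {
    public static void main(String[] args) {
        Map<String, String> diffCache = new HashMap<>();
        diffCache.put("/a/b@rA->rB", "^\"c\":{}");   // pushed during background read
        diffCache.put("/a/b@rB->rC", "^\"d\":{}");

        // the collapsed event compares rA against rC directly
        String hit = diffCache.get("/a/b@rA->rC");
        System.out.println(hit == null ? "cache miss -> diffImpl" : hit);
    }
}
{noformat}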



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3494) MemoryDiffCache should also check parent paths before falling to Loader (or returning null)

2015-12-01 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033912#comment-15033912
 ] 

Vikas Saurabh commented on OAK-3494:


[~mreutegg], I think we should backport this issue as it's useful with journal 
optimization.

> MemoryDiffCache should also check parent paths before falling to Loader (or 
> returning null)
> ---
>
> Key: OAK-3494
> URL: https://issues.apache.org/jira/browse/OAK-3494
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Vikas Saurabh
>Assignee: Marcel Reutegger
>  Labels: candidate_oak_1_0, candidate_oak_1_2, performance
> Fix For: 1.3.10
>
> Attachments: OAK-3494-1.patch, OAK-3494-2.patch, 
> OAK-3494-TestCase.patch, OAK-3494.patch
>
>
> Each entry in {{MemoryDiffCache}} is keyed with {{(path, fromRev, toRev)}} 
> for the list of modified children at {{path}}. A diff calculated by 
> {{DocumentNodeStore.diffImpl}} at '/' (passively via the loader) or by 
> {{JournalEntry.applyTo}} (actively) fills each path for which there are 
> modified children (including the hierarchy).
> But if an observer calls {{compareWithBaseState}} on an unmodified sub-tree, 
> the observer will still go down to {{diffImpl}}, although the cached parent 
> entry could be used to answer the query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3692) java.lang.NoClassDefFoundError: org/apache/lucene/index/sorter/Sorter$DocComparator

2015-12-01 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh resolved OAK-3692.

Resolution: Fixed

Talked to Chetan offline and it seems that whitelisting o.a.j.o.* makes more 
sense. Committed that, along with re-adding the {{lucene-misc}} dependency, on 
trunk at [r1717410|http://svn.apache.org/r1717410].

BTW, this should have been caught earlier (when lucene-misc wasn't included). 
But that didn't happen because we instruct bnd not to import lucene packages, so 
the requirement for the packages provided by lucene-misc wasn't detected at the 
OSGi level - the actual execution/usage then led to a CNFE.

> java.lang.NoClassDefFoundError: 
> org/apache/lucene/index/sorter/Sorter$DocComparator
> ---
>
> Key: OAK-3692
> URL: https://issues.apache.org/jira/browse/OAK-3692
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Blocker
> Fix For: 1.3.12
>
> Attachments: OAK-3692.patch
>
>
> I'm getting following exception while trying to include oak trunk build into 
> AEM:
> {noformat}
> 27.11.2015 20:41:25.946 *ERROR* [oak-lucene-2] 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexProvider Uncaught 
> exception in 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexProvider@36ba558b
> java.lang.NoClassDefFoundError: 
> org/apache/lucene/index/sorter/Sorter$DocComparator
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.util.SuggestHelper.getLookup(SuggestHelper.java:108)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexNode.(IndexNode.java:106)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexNode.open(IndexNode.java:69)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker$1.leave(IndexTracker.java:98)
> at 
> org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:153)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord$3.childNodeChanged(MapRecord.java:444)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:487)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compareBranch(MapRecord.java:565)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:470)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:436)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583)
> at 
> org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:148)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord$2.childNodeChanged(MapRecord.java:403)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord$3.childNodeChanged(MapRecord.java:444)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:487)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:436)
> at 
> org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:394)
> at 
> org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:583)
> at 
> org.apache.jackrabbit.oak.spi.commit.EditorDiff.process(EditorDiff.java:52)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexTracker.update(IndexTracker.java:108)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexProvider.contentChanged(LuceneIndexProvider.java:73)
> at 
> org.apache.jackrabbit.oak.spi.commit.BackgroundObserver$1$1.call(BackgroundObserver.java:131)
> at 
> org.apache.jackrabbit.oak.spi.commit.BackgroundObserver$1$1.call(BackgroundObserver.java:125)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.lucene.index.sorter.Sorter$DocComparator not found by 
> org.apache.jackrabbit.oak-lucene [95]
> at 
> org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1573)
> at 
> 

[jira] [Commented] (OAK-3649) Extract node document cache from Mongo and RDB document stores

2015-12-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033642#comment-15033642
 ] 

Tomek Rękawek commented on OAK-3649:


Please use this one: https://github.com/apache/jackrabbit-oak/pull/47.diff

> Extract node document cache from Mongo and RDB document stores
> --
>
> Key: OAK-3649
> URL: https://issues.apache.org/jira/browse/OAK-3649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Reporter: Tomek Rękawek
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.3.12
>
>
> MongoDocumentStore and RDBDocumentStore contain copy-pasted methods 
> responsible for handling the node document cache. Extract these into a new 
> NodeDocumentCache.
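
A simplified sketch of what such an extracted cache could look like, using a 
reentrant lock stripe so the cache operations themselves are thread-safe (see 
the thread-safety discussion later in this thread); the names and structure are 
illustrative and do not match the actual pull request:
{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;

import com.google.common.util.concurrent.Striped;

// Hypothetical sketch: cache writes take a per-key lock from a reentrant lock
// stripe, and the calling DocumentStore can reuse the same stripe when it needs
// to guard a larger read-modify-write sequence.
class NodeDocumentCacheSketch<T> {
    private final Map<String, T> cache = new ConcurrentHashMap<>();
    private final Striped<Lock> locks = Striped.lock(64);

    T getIfPresent(String id) {
        return cache.get(id);
    }

    void put(String id, T doc) {
        Lock lock = locks.get(id);
        lock.lock();
        try {
            cache.put(id, doc);
        } finally {
            lock.unlock();
        }
    }

    // Exposed so the document store can lock the same stripe around bigger sections.
    Lock lockFor(String id) {
        return locks.get(id);
    }
}
{noformat}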



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1844) Verify resilience goals

2015-12-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1844:
---
Assignee: Tomek Rękawek  (was: Michael Dürig)

> Verify resilience goals
> ---
>
> Key: OAK-1844
> URL: https://issues.apache.org/jira/browse/OAK-1844
> Project: Jackrabbit Oak
>  Issue Type: Epic
>Reporter: Michael Dürig
>Assignee: Tomek Rękawek
>  Labels: resilience
> Fix For: 1.4
>
>
> This is a container issue for verifying the resilience goals set out for Oak. 
> See https://wiki.apache.org/jackrabbit/Resilience for the work in progress of 
> such goals. Once we have an agreement on that, subtasks of this issue could 
> be used to track the verification process of each of the individual goals. 
> Discussion here: http://jackrabbit.markmail.org/thread/5cndir5sjrc5dtla



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3705) Change default of compaction.forceAfterFail to false

2015-12-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-3705.

   Resolution: Fixed
Fix Version/s: 1.3.12

Fixed at http://svn.apache.org/viewvc?rev=1717411=rev

> Change default of compaction.forceAfterFail to false
> 
>
> Key: OAK-3705
> URL: https://issues.apache.org/jira/browse/OAK-3705
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: compaction, gc
> Fix For: 1.3.12
>
>
> Having {{compaction.forceAfterFail}} set to {{true}} will block repository 
> writes for an extended period of time (minutes, probably hours) if all 
> previous compaction cycles couldn't catch up with the latest changes. I think 
> this is not acceptable and we should change the default to {{false}}: if 
> compaction is not able to catch up, the recommendation should be to move it 
> to a quieter time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3703) Improve handling of IOException

2015-12-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033538#comment-15033538
 ] 

Michael Dürig commented on OAK-3703:


I experimented with two approaches to this, driven by the desire to make the 
usage of {{IOException}} in {{SegmentWriter}} consistent:

# throw {{IOException}} from all write methods: 
https://github.com/mduerig/jackrabbit-oak/commit/b991d10885dbd48d0185f16034d668808ad37463
# don't throw {{IOException}} from any of the write methods, but throw an 
{{ISE}} with the {{IOException}} as its cause from where the actual IO error 
occurred: 
https://github.com/mduerig/jackrabbit-oak/commit/2cd3d401a46ea4a217e353cedccdf15058dbf0b2

Given that we use the 1st approach for these nasty checked exceptions in the rest 
of Oak's code base, I'd prefer that one. 

[~alex.parvulescu] WDYT?
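
A minimal sketch of the two options described above, reduced to a single write 
method; the names are illustrative and the linked commits contain the real 
changes:
{noformat}
import java.io.IOException;

// Hypothetical sketch contrasting the two approaches.
interface SegmentSink {
    // Option 1: propagate the checked exception to every caller.
    void writeSegmentChecked(byte[] data) throws IOException;

    // Option 2: never throw IOException from the API; wrap it where the IO
    // error actually occurs.
    void writeSegmentUnchecked(byte[] data);
}

class WrappingSink implements SegmentSink {
    @Override
    public void writeSegmentChecked(byte[] data) throws IOException {
        doWrite(data);
    }

    @Override
    public void writeSegmentUnchecked(byte[] data) {
        try {
            doWrite(data);
        } catch (IOException e) {
            throw new IllegalStateException("Failed to write segment", e);
        }
    }

    private void doWrite(byte[] data) throws IOException {
        // actual IO elided
    }
}
{noformat}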

> Improve handling of IOException
> ---
>
> Key: OAK-3703
> URL: https://issues.apache.org/jira/browse/OAK-3703
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segmentmk
>Reporter: Michael Dürig
>Assignee: Michael Dürig
>  Labels: resilience, technical_debt
>
> {{SegmentStore.writeSegment}} doesn't specify its behaviour in the face of IO 
> errors. Moreover {{FileStore.writeSegment}} just catches any {{IOException}} 
> and throws a {{RuntimeException}} with the former as its cause. 
> I think we need to clarify this as an immediate cause of the current state is 
> that some of the {{SegmentWriter}} write methods *do* throw an 
> {{IOException}} and some *don't*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3649) Extract node document cache from Mongo and RDB document stores

2015-12-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15028543#comment-15028543
 ] 

Tomek Rękawek edited comment on OAK-3649 at 12/1/15 1:07 PM:
-

[~reschke], thanks for the comment. Indeed, it shouldn't be the developer's 
responsibility to remember whether a method requires a lock or not - it should 
be enforced somehow. Locks are reentrant, so maybe we can just acquire a lock 
every time there's a destructive or complex operation which should be performed 
atomically?

I pass the NodeDocumentLocks to the NodeDocumentCache, so now all cache 
operations are thread-safe. If the DocumentStore needs to synchronize a bigger 
part of the code, it can use the same reentrant lock stripe.

The side effect is that TreeLock has been re-introduced into the 
RDBDocumentStore, but only the acquire() method is used, so it behaves like a 
normal lock stripe. I hope you don't mind this.

Let me know what you think.


was (Author: tomek.rekawek):
[~reschke], thanks for the comment. Indeed, it shouldn't be the developer's 
responsibility to remember whether a method requires a lock or not - it should 
be enforced somehow. Locks are reentrant, so maybe we can just acquire a lock 
every time there's a destructive or complex operation which should be performed 
atomically?

In the following branch I pass the NodeDocumentLocks to the NodeDocumentCache, 
so now all cache operations are thread-safe. If the DocumentStore needs to 
synchronize a bigger part of the code, it can use the same reentrant lock stripe.

The side effect is that TreeLock has been re-introduced into the 
RDBDocumentStore, but only the acquire() method is used, so it behaves like a 
normal lock stripe. I hope you don't mind this.

Branch:
https://github.com/trekawek/jackrabbit-oak/tree/OAK-3649-thread-safe

Updated patch:
https://github.com/trekawek/jackrabbit-oak/pull/3.diff

Let me know what you think.

> Extract node document cache from Mongo and RDB document stores
> --
>
> Key: OAK-3649
> URL: https://issues.apache.org/jira/browse/OAK-3649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Reporter: Tomek Rękawek
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.3.12
>
>
> MongoDocumentStore and RDBDocumentStore contain copy-pasted methods 
> responsible for handling the node document cache. Extract these into a new 
> NodeDocumentCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2858) Test failure: ObservationRefreshTest

2015-12-01 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-2858.
-
Resolution: Fixed

Re-enabled test in http://svn.apache.org/r1717406

> Test failure: ObservationRefreshTest
> 
>
> Key: OAK-2858
> URL: https://issues.apache.org/jira/browse/OAK-2858
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
> Environment: https://builds.apache.org/
>Reporter: Michael Dürig
>Assignee: Julian Reschke
>  Labels: ci, jenkins
> Fix For: 1.3.12
>
>
> {{org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest}} fails on 
> Jenkins when running the {{DOCUMENT_RDB}} fixture.
> {noformat}
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 16.174 sec 
> <<< FAILURE!
> observation[0](org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest)
>   Time elapsed: 16.173 sec  <<< ERROR!
> javax.jcr.InvalidItemStateException: OakMerge0001: OakMerge0001: Failed to 
> merge changes to the underlying store (retries 5, 5286 ms)
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:239)
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:212)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:664)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:489)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:424)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:268)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:421)
>   at 
> org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest.observation(ObservationRefreshTest.java:176)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:24)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> 

[jira] [Updated] (OAK-2858) Test failure: ObservationRefreshTest

2015-12-01 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2858:

Fix Version/s: (was: 1.4)
   1.3.12

> Test failure: ObservationRefreshTest
> 
>
> Key: OAK-2858
> URL: https://issues.apache.org/jira/browse/OAK-2858
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: rdbmk
> Environment: https://builds.apache.org/
>Reporter: Michael Dürig
>Assignee: Julian Reschke
>  Labels: ci, jenkins
> Fix For: 1.3.12
>
>
> {{org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest}} fails on 
> Jenkins when running the {{DOCUMENT_RDB}} fixture.
> {noformat}
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 16.174 sec 
> <<< FAILURE!
> observation[0](org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest)
>   Time elapsed: 16.173 sec  <<< ERROR!
> javax.jcr.InvalidItemStateException: OakMerge0001: OakMerge0001: Failed to 
> merge changes to the underlying store (retries 5, 5286 ms)
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:239)
>   at 
> org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:212)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:664)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:489)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.performVoid(SessionImpl.java:424)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:268)
>   at 
> org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:421)
>   at 
> org.apache.jackrabbit.oak.jcr.observation.ObservationRefreshTest.observation(ObservationRefreshTest.java:176)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:24)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>

[jira] [Commented] (OAK-3649) Extract node document cache from Mongo and RDB document stores

2015-12-01 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033653#comment-15033653
 ] 

Julian Reschke commented on OAK-3649:
-

Thanks, that's better :-)

> Extract node document cache from Mongo and RDB document stores
> --
>
> Key: OAK-3649
> URL: https://issues.apache.org/jira/browse/OAK-3649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Reporter: Tomek Rękawek
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.3.12
>
>
> MongoDocumentStore and RDBDocumentStore contain copy-and-pasted methods 
> responsible for handling the node document cache. Extract these into a new 
> NodeDocumentCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2675) Include change type information in perf logs for diff logic

2015-12-01 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033659#comment-15033659
 ] 

Chetan Mehrotra commented on OAK-2675:
--

[~catholicon] Do you think having this support would help in 
debugging/understanding observation performance issues better? Otherwise we can 
close it.

> Include change type information in perf logs for diff logic
> ---
>
> Key: OAK-2675
> URL: https://issues.apache.org/jira/browse/OAK-2675
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Chetan Mehrotra
>Priority: Minor
>  Labels: observation, performance, resilience, tooling
> Fix For: 1.3.12
>
>
> Currently the diff perf logs in {{NodeObserver}} do not indicate what type 
> of change was processed, i.e. whether the change was an internal one or an 
> external one. 
> Having this information would allow us to determine how the cache is being 
> used. For example, if we see slower numbers even for local changes, that 
> would indicate that there is some issue with the diff cache and it is not 
> being utilized effectively.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3637) Bulk document updates in RDBDocumentStore

2015-12-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3637:
---
Attachment: (was: OAK-3637.patch)

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Tomek Rękawek
> Fix For: 1.4
>
> Attachments: OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3637) Bulk document updates in RDBDocumentStore

2015-12-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3637:
---
Attachment: OAK-3637.patch

> Bulk document updates in RDBDocumentStore
> -
>
> Key: OAK-3637
> URL: https://issues.apache.org/jira/browse/OAK-3637
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Reporter: Tomek Rękawek
> Fix For: 1.4
>
> Attachments: OAK-3637.patch, OAK-3637.patch
>
>
> Implement the [batch createOrUpdate|OAK-3662] in the RDBDocumentStore.
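
Not the actual patch, but for context, a hedged sketch of the JDBC mechanism a 
bulk createOrUpdate typically builds on, namely statement batching (the table 
and column names below are made up):

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;

// Hypothetical illustration: send many row updates in one JDBC batch
// instead of issuing one statement per document.
public class BulkUpdateSketch {

    public int[] bulkUpdate(Connection con, Map<String, String> docsById)
            throws SQLException {
        String sql = "UPDATE NODES SET DATA = ? WHERE ID = ?";
        try (PreparedStatement stmt = con.prepareStatement(sql)) {
            for (Map.Entry<String, String> e : docsById.entrySet()) {
                stmt.setString(1, e.getValue());
                stmt.setString(2, e.getKey());
                stmt.addBatch();
            }
            // per-row update counts; rows that were not updated would
            // need a separate INSERT (create) fallback
            return stmt.executeBatch();
        }
    }
}
{code}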



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3649) Extract node document cache from Mongo and RDB document stores

2015-12-01 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033637#comment-15033637
 ] 

Julian Reschke commented on OAK-3649:
-

https://github.com/trekawek/jackrabbit-oak/pull/3.diff doesn't apply cleanly; 
do I need something else?

> Extract node document cache from Mongo and RDB document stores
> --
>
> Key: OAK-3649
> URL: https://issues.apache.org/jira/browse/OAK-3649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Reporter: Tomek Rękawek
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.3.12
>
>
> MongoDocumentStore and RDBDocumentStore contain copy-and-pasted methods 
> responsible for handling the node document cache. Extract these into a new 
> NodeDocumentCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3649) Extract node document cache from Mongo and RDB document stores

2015-12-01 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033790#comment-15033790
 ] 

Julian Reschke commented on OAK-3649:
-

OK, tests pass for me. Now reviewing the code:

- Do we really need the lockAcquisitionCounter? IMHO, AtomicLongs can be a 
bottleneck on certain non-Oracle Java versions (see the sketch below).
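
For illustration, a hedged sketch of the trade-off (the class and field names 
below are made up): if such a counter is kept at all, {{LongAdder}} (Java 8+) 
is usually a lower-contention choice than a single {{AtomicLong}} for counters 
that are incremented often but read only occasionally, e.g. for statistics.

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

public class CounterContention {

    // A single AtomicLong: every increment CASes on the same memory
    // location, which can become a hot spot under many threads.
    private final AtomicLong atomicCounter = new AtomicLong();

    // LongAdder spreads increments over internal cells and only sums
    // them when the value is read - a better fit for write-heavy,
    // read-rarely counters.
    private final LongAdder adderCounter = new LongAdder();

    void onLockAcquired() {
        atomicCounter.incrementAndGet(); // contended variant
        adderCounter.increment();        // low-contention variant
    }

    long snapshot() {
        return adderCounter.sum();       // approximate under concurrent updates
    }
}
{code}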


> Extract node document cache from Mongo and RDB document stores
> --
>
> Key: OAK-3649
> URL: https://issues.apache.org/jira/browse/OAK-3649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Reporter: Tomek Rękawek
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.3.12
>
>
> MongoDocumentStore and RDBDocumentStore contain copy-and-pasted methods 
> responsible for handling the node document cache. Extract these into a new 
> NodeDocumentCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2675) Include change type information in perf logs for diff logic

2015-12-01 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033663#comment-15033663
 ] 

Vikas Saurabh commented on OAK-2675:


I think it's useful to know the type of change from a debugging point of view 
(in fact, I had added some clumsy logs along similar lines while identifying 
OAK-3372 and OAK-3494).

I'd have to look back at which of those were actually useful. I'll put up a 
patch here and maybe update the issue accordingly. Nonetheless, we surely have 
scope for improvement here, so we shouldn't close it. I'm assigning it to 
myself.
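
As a purely hypothetical sketch of what such a tag could look like (the flag, 
method name and message format below are made up, not the actual NodeObserver 
code), using plain SLF4J:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: tag the existing diff timing log with whether the
// processed change was local (same cluster node) or external.
public class DiffPerfLogging {

    private static final Logger LOG = LoggerFactory.getLogger(DiffPerfLogging.class);

    void logDiff(String path, boolean external, long nanos) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Processed {} change under {} in {} us",
                    external ? "external" : "local", path, nanos / 1000);
        }
    }
}
{code}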

> Include change type information in perf logs for diff logic
> ---
>
> Key: OAK-2675
> URL: https://issues.apache.org/jira/browse/OAK-2675
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Chetan Mehrotra
>Priority: Minor
>  Labels: observation, performance, resilience, tooling
> Fix For: 1.3.12
>
>
> Currently the diff perf logs in {{NodeObserver}} do not indicate what type 
> of change was processed, i.e. whether the change was an internal one or an 
> external one. 
> Having this information would allow us to determine how the cache is being 
> used. For example, if we see slower numbers even for local changes, that 
> would indicate that there is some issue with the diff cache and it is not 
> being utilized effectively.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-2675) Include change type information in perf logs for diff logic

2015-12-01 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh reassigned OAK-2675:
--

Assignee: Vikas Saurabh

> Include change type information in perf logs for diff logic
> ---
>
> Key: OAK-2675
> URL: https://issues.apache.org/jira/browse/OAK-2675
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Vikas Saurabh
>Priority: Minor
>  Labels: observation, performance, resilience, tooling
> Fix For: 1.3.12
>
>
> Currently the diff perf logs in {{NodeObserver}} do not indicate what type 
> of change was processed, i.e. whether the change was an internal one or an 
> external one. 
> Having this information would allow us to determine how the cache is being 
> used. For example, if we see slower numbers even for local changes, that 
> would indicate that there is some issue with the diff cache and it is not 
> being utilized effectively.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3707) Register composite commit hook with whiteboard

2015-12-01 Thread Davide Giannella (JIRA)
Davide Giannella created OAK-3707:
-

 Summary: Register composite commit hook with whiteboard
 Key: OAK-3707
 URL: https://issues.apache.org/jira/browse/OAK-3707
 Project: Jackrabbit Oak
  Issue Type: Improvement
Affects Versions: 1.3.11
Reporter: Davide Giannella
Assignee: Davide Giannella
 Fix For: 1.4


Register, during repository initialisation, the composite CommitHook with the 
whiteboard.
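
A minimal sketch of the intended direction, assuming the plain Oak Whiteboard 
SPI ({{register}}/{{track}}); the helper class and wiring below are 
illustrative only, not the actual change:

{code:java}
import java.util.Collections;
import java.util.List;

import org.apache.jackrabbit.oak.spi.commit.CommitHook;
import org.apache.jackrabbit.oak.spi.whiteboard.Registration;
import org.apache.jackrabbit.oak.spi.whiteboard.Tracker;
import org.apache.jackrabbit.oak.spi.whiteboard.Whiteboard;

public final class CommitHookRegistration {

    private CommitHookRegistration() {
    }

    // Register the already-composed hook during repository initialisation;
    // keeping the Registration allows clean unregistration on shutdown.
    public static Registration register(Whiteboard whiteboard, CommitHook compositeHook) {
        return whiteboard.register(CommitHook.class, compositeHook, Collections.emptyMap());
    }

    // Later, other components (e.g. the atomic counter logic of OAK-2472)
    // can look the registered hook(s) up again through a Tracker.
    public static List<CommitHook> lookup(Whiteboard whiteboard) {
        Tracker<CommitHook> tracker = whiteboard.track(CommitHook.class);
        try {
            return tracker.getServices();
        } finally {
            tracker.stop();
        }
    }
}
{code}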



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2472) Add support for atomic counters on cluster solutions

2015-12-01 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033883#comment-15033883
 ] 

Davide Giannella edited comment on OAK-2472 at 12/1/15 4:20 PM:


After offline discussions with [~mduerig], [~alex.parvulescu],
[~tmueller], [~tomek.rekawek], [~mreutegg], [~frm], [~teofili],
[~anchela] it's been decided to try registering the composed
commit hooks with the whiteboard.

It will then be possible to retrieve the registered commit hooks from
there.

Filed OAK-3707



was (Author: edivad):
After offline discussions with [~mduerig], [~alex.parvulescu],
[~tmueller], [~tomek.rekawek], [~mreutegg], [~frm], [~teofili],
[~anchela] it's been decided to try registering the composed
commit hooks with the whiteboard.

It will then be possible to retrieve the registered commit hooks from
there.



> Add support for atomic counters on cluster solutions
> 
>
> Key: OAK-2472
> URL: https://issues.apache.org/jira/browse/OAK-2472
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.3.0
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: scalability
> Fix For: 1.4
>
> Attachments: atomic-counter.md
>
>
> As of OAK-2220 we added support for atomic counters in a non-clustered 
> setup. 
> This ticket is about covering the clustered case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3656) Expose CommitHook as OSGi service

2015-12-01 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella resolved OAK-3656.
---
Resolution: Won't Fix

Decided to take a different direction for addressing the OAK-2472 dependencies; 
see OAK-3707 for details.

> Expose CommitHook as OSGi service
> -
>
> Key: OAK-3656
> URL: https://issues.apache.org/jira/browse/OAK-3656
> Project: Jackrabbit Oak
>  Issue Type: Wish
>  Components: core
>Affects Versions: 1.3.10
>Reporter: Davide Giannella
>Assignee: Davide Giannella
> Fix For: 1.4
>
>
> In Oak we currently expose two commit hook types as OSGi services via a 
> Provider wrapper: IndexEditor and Editor.
> We don't do the same for ConflictHandler and CommitHook itself. 
> It would be nice to have those exposed as OSGi services so that an OSGi 
> repository could leverage them as @Reference.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)