[jira] [Updated] (OAK-3634) RDB/MongoDocumentStore may return stale documents

2015-11-19 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3634:

Affects Version/s: 1.2.7
   1.0.23

> RDB/MongoDocumentStore may return stale documents
> -
>
> Key: OAK-3634
> URL: https://issues.apache.org/jira/browse/OAK-3634
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk, rdbmk
>Affects Versions: 1.2.7, 1.3.10, 1.0.23
>Reporter: Julian Reschke
> Attachments: OAK-3634.diff
>
>
> It appears that the implementations of the {{update}} method sometimes 
> populate the memory cache with documents that do not reflect any current or 
> previous state in the persistence layer (that is, they miss changes made by 
> another node).
> (will attach a test)
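
The following is a rough sketch of the scenario described above (not the attached test); it uses the public {{DocumentStore}} API, while the two store instances, the document id and the property names are assumptions for illustration:

{code:java}
import java.util.Collections;

import org.apache.jackrabbit.oak.plugins.document.Collection;
import org.apache.jackrabbit.oak.plugins.document.DocumentStore;
import org.apache.jackrabbit.oak.plugins.document.NodeDocument;
import org.apache.jackrabbit.oak.plugins.document.UpdateOp;

public class StaleUpdateSketch {

    // store1 and store2 stand for two DocumentStore instances (e.g. two cluster
    // nodes) backed by the same MongoDB/RDB persistence; their setup is assumed.
    static void demo(DocumentStore store1, DocumentStore store2) {
        String id = "1:/test";

        UpdateOp create = new UpdateOp(id, true);
        create.set("p", "a");
        store1.create(Collection.NODES, Collections.singletonList(create));

        // store2 reads the document and puts it into its memory cache
        store2.find(Collection.NODES, id);

        // store1 changes the document; store2's cache does not notice
        UpdateOp change = new UpdateOp(id, false);
        change.set("p", "b");
        store1.createOrUpdate(Collection.NODES, change);

        // update() on store2 may now populate its cache with a document that
        // combines the stale cached state with the new change ...
        UpdateOp other = new UpdateOp(id, false);
        other.set("q", "c");
        store2.update(Collection.NODES, Collections.singletonList(id), other);

        // ... so this read can show p == "a" together with q == "c", a state
        // that never existed in the persistence layer
        NodeDocument stale = store2.find(Collection.NODES, id);
    }
}
{code}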



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3653) Incorrect last revision of cached node state

2015-11-19 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-3653:
--
Fix Version/s: 1.4

> Incorrect last revision of cached node state
> 
>
> Key: OAK-3653
> URL: https://issues.apache.org/jira/browse/OAK-3653
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Vikas Saurabh
> Fix For: 1.4
>
> Attachments: Capture.JPG, Capture1.JPG, Capture2.JPG, oplog.json
>
>
> While installing a package on one of the systems (Oak-1.0.16), we observed a 
> broken workflow model. Upon further investigation, it was found that the 
> {{baseVersion}} of the model was pointing to a node in version storage which 
> wasn't visible on the instance. This node was available in mongo and could 
> also be seen on a different instance of the cluster.
> The node was created on the same instance where it wasn't visible (cluster id 
> 3).
> Further investigation showed that, for some (yet unknown) reason, the 
> {{lastRevision}} of the cached node state of the node's parent, at the revision 
> where the node was created, was stale (an older revision). This must have led 
> to an invalid children list and hence an unavailable node.
> Attaching a few files which were captured during the investigation:
> * [^oplog.json] - oplog for +-10s for r151191c7601-0-3
> * [^Capture.JPG]- snapshot of groovy console output using script at \[0] to 
> list node children cache entry for parent
> * [^Capture1.JPG]- snapshot of groovy console output using script at \[1] to 
> traverse to invisible node
> * [^Capture2.JPG]- Almost same as Capture.jpg but this time for node cache 
> instead (using script at \[2])
> The node which wasn't visible on the instance, as stored in mongo:
> {noformat}
> db.nodes.findOne({_id: "7:/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e/1.7"})
> {
>     "_id" : "7:/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e/1.7",
>     "_deleted" : {
>         "r151191c7601-0-3" : "false"
>     },
>     "_commitRoot" : {
>         "r151191c7601-0-3" : "0"
>     },
>     "_lastRev" : {
>         "r0-0-3" : "r151191c7601-0-3"
>     },
>     "jcr:primaryType" : {
>         "r151191c7601-0-3" : "\"nam:nt:version\""
>     },
>     "jcr:uuid" : {
>         "r151191c7601-0-3" : "\"40234f1d-2435-4c6b-9962-20af22b1c948\""
>     },
>     "_modified" : NumberLong(1447825270),
>     "jcr:created" : {
>         "r151191c7601-0-3" : "\"dat:2015-11-17T21:41:14.058-08:00\""
>     },
>     "_children" : true,
>     "jcr:predecessors" : {
>         "r151191c7601-0-3" : "[\"ref:6cecc77b-3020-4a47-b9cc-084a618aa957\"]"
>     },
>     "jcr:successors" : {
>         "r151191c7601-0-3" : "\"[0]:Reference\""
>     },
>     "_modCount" : NumberLong(1)
> }
> {noformat}
> The last revision entry of the cached node state of 
> {{/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e}} 
> (the parent) at revision {{r151191c7601-0-3}} was {{r14f035ebdf8-0-3}} (the last 
> revs were propagated correctly up to the root for r151191c7601-0-3).
> As a work-around, doing {{cache.invalidate(k)}} for the node cache (inside the 
> loop of the script at \[2]) restored the visibility of the node.
> *Disclaimer*: _The issue was reported and investigated on a cluster backed by 
> mongo running on Oak-1.0.16, but the problem itself doesn't seem to match any 
> that has been fixed in later versions._
> \[0]: {noformat}
> def printNodeChildrenCache(def path) {
>   def session = 
> osgi.getService(org.apache.sling.jcr.api.SlingRepository.class).loginAdministrative(null)
>   try {
> def rootNode = session.getRootNode()
> def cache = rootNode.sessionDelegate.root.store.nodeChildrenCache
> def cacheMap = cache.asMap()
> cacheMap.each{k,v -> 
>   if (k.toString().startsWith(path + "@")) {
> println "${k}:${v}"
>   }
> }
>   } finally {
> session.logout()
>   }
> }
>  
> printNodeChildrenCache("/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e")
> {noformat}
> \[1]: {noformat}
> def traverse(def path) {
>   def session = 
> osgi.getService(org.apache.sling.jcr.api.SlingRepository.class).loginAdministrative(null)
>   try {
> def rootNode = session.getRootNode()
> def nb = rootNode.sessionDelegate.root.store.root
> def p = ''
> path.tokenize('/').each() {
>   p = p + '/' + it
>   nb = nb.getChildNode(it)
>   println "${p} ${nb}"
> }
>   } finally {
> session.logout()
>   }
> }
> traverse("/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e/1.7")
> {noformat}
> \[2]: {noformat}
> def printNodeChildrenCache(def path) {
>   

[jira] [Assigned] (OAK-3653) Incorrect last revision of cached node state

2015-11-19 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger reassigned OAK-3653:
-

Assignee: Marcel Reutegger

> Incorrect last revision of cached node state
> 
>
> Key: OAK-3653
> URL: https://issues.apache.org/jira/browse/OAK-3653
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Vikas Saurabh
>Assignee: Marcel Reutegger
> Fix For: 1.4
>
> Attachments: Capture.JPG, Capture1.JPG, Capture2.JPG, oplog.json
>
>
> While installing a package on one of the systems (Oak-1.0.16), we observed a 
> broken workflow model. Upon further investigation, it was found that the 
> {{baseVersion}} of the model was pointing to a node in version storage which 
> wasn't visible on the instance. This node was available in mongo and could 
> also be seen on a different instance of the cluster.
> The node was created on the same instance where it wasn't visible (cluster id 
> 3).
> Further investigation showed that, for some (yet unknown) reason, the 
> {{lastRevision}} of the cached node state of the node's parent, at the revision 
> where the node was created, was stale (an older revision). This must have led 
> to an invalid children list and hence an unavailable node.
> Attaching a few files which were captured during the investigation:
> * [^oplog.json] - oplog for +-10s for r151191c7601-0-3
> * [^Capture.JPG]- snapshot of groovy console output using script at \[0] to 
> list node children cache entry for parent
> * [^Capture1.JPG]- snapshot of groovy console output using script at \[1] to 
> traverse to invisible node
> * [^Capture2.JPG]- Almost same as Capture.jpg but this time for node cache 
> instead (using script at \[2])
> The node which wasn't visible on the instance, as stored in mongo:
> {noformat}
> db.nodes.findOne({_id: "7:/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e/1.7"})
> {
>     "_id" : "7:/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e/1.7",
>     "_deleted" : {
>         "r151191c7601-0-3" : "false"
>     },
>     "_commitRoot" : {
>         "r151191c7601-0-3" : "0"
>     },
>     "_lastRev" : {
>         "r0-0-3" : "r151191c7601-0-3"
>     },
>     "jcr:primaryType" : {
>         "r151191c7601-0-3" : "\"nam:nt:version\""
>     },
>     "jcr:uuid" : {
>         "r151191c7601-0-3" : "\"40234f1d-2435-4c6b-9962-20af22b1c948\""
>     },
>     "_modified" : NumberLong(1447825270),
>     "jcr:created" : {
>         "r151191c7601-0-3" : "\"dat:2015-11-17T21:41:14.058-08:00\""
>     },
>     "_children" : true,
>     "jcr:predecessors" : {
>         "r151191c7601-0-3" : "[\"ref:6cecc77b-3020-4a47-b9cc-084a618aa957\"]"
>     },
>     "jcr:successors" : {
>         "r151191c7601-0-3" : "\"[0]:Reference\""
>     },
>     "_modCount" : NumberLong(1)
> }
> {noformat}
> The last revision entry of the cached node state of 
> {{/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e}} 
> (the parent) at revision {{r151191c7601-0-3}} was {{r14f035ebdf8-0-3}} (the last 
> revs were propagated correctly up to the root for r151191c7601-0-3).
> As a work-around, doing {{cache.invalidate(k)}} for the node cache (inside the 
> loop of the script at \[2]) restored the visibility of the node.
> *Disclaimer*: _The issue was reported and investigated on a cluster backed by 
> mongo running on Oak-1.0.16, but the problem itself doesn't seem to match any 
> that has been fixed in later versions._
> \[0]: {noformat}
> def printNodeChildrenCache(def path) {
>   def session = 
> osgi.getService(org.apache.sling.jcr.api.SlingRepository.class).loginAdministrative(null)
>   try {
> def rootNode = session.getRootNode()
> def cache = rootNode.sessionDelegate.root.store.nodeChildrenCache
> def cacheMap = cache.asMap()
> cacheMap.each{k,v -> 
>   if (k.toString().startsWith(path + "@")) {
> println "${k}:${v}"
>   }
> }
>   } finally {
> session.logout()
>   }
> }
>  
> printNodeChildrenCache("/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e")
> {noformat}
> \[1]: {noformat}
> def traverse(def path) {
>   def session = 
> osgi.getService(org.apache.sling.jcr.api.SlingRepository.class).loginAdministrative(null)
>   try {
> def rootNode = session.getRootNode()
> def nb = rootNode.sessionDelegate.root.store.root
> def p = ''
> path.tokenize('/').each() {
>   p = p + '/' + it
>   nb = nb.getChildNode(it)
>   println "${p} ${nb}"
> }
>   } finally {
> session.logout()
>   }
> }
> traverse("/jcr:system/jcr:versionStorage/d9/9f/22/d99f2279-dcc9-40a0-9aa4-c91fe310287e/1.7")
> {noformat}
> \[2]: 

[jira] [Updated] (OAK-3652) RDB support: extend RDB export tool for CSV export

2015-11-19 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3652:

Labels: candidate_oak_1_0 candidate_oak_1_2  (was: )

> RDB support: extend RDB export tool for CSV export
> --
>
> Key: OAK-3652
> URL: https://issues.apache.org/jira/browse/OAK-3652
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.10
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.3.11
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2948) Expose DefaultSyncHandler

2015-11-19 Thread Nicolas Peltier (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013187#comment-15013187
 ] 

Nicolas Peltier commented on OAK-2948:
--

[~tripod], so there might be something I don't understand: to implement my custom 
solution, I need to implement my own SyncHandler, right? And if I'm happy with 
the current config, I'd like to reuse the implementation of DefaultSyncConfig in 
the same way DefaultSyncHandler does [0]. Ideally I could extend 
DefaultSyncHandler, but that one is not exposed either.

[0] 
https://github.com/apache/jackrabbit-oak/blob/trunk/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/DefaultSyncHandler.java#L87

> Expose DefaultSyncHandler
> -
>
> Key: OAK-2948
> URL: https://issues.apache.org/jira/browse/OAK-2948
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: Konrad Windszus
> Fix For: 1.3.2, 1.2.7, 1.0.22
>
>
> We do have the use case of extending the user sync. Unfortunately 
> {{DefaultSyncHandler}} is not exposed, so if you want to change one single 
> aspect of the user synchronisation you have to copy over the code from the 
> {{DefaultSyncHandler}}. Would it be possible to make that class part of the 
> exposed classes, so that deriving your own class from that DefaultSyncHandler 
> is possible?
> Very often company LDAPs are not very standardized. In our case we face the 
> issue that the membership is listed in a user attribute rather than in a 
> group attribute.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3509) Lucene suggestion results should have 1 row per suggestion with appropriate column names

2015-11-19 Thread Vikas Saurabh (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013233#comment-15013233
 ] 

Vikas Saurabh commented on OAK-3509:


[~teofili], I can put up a patch for this one. But it'd be convenient to change 
both the suggestion and spellcheck result rows. Btw, I'm just planning to support 2 
columns (which can probably be extended later, as you hinted above) - 
rep:spellcheck and jcr:score (where I'm planning to put the weight of each 
suggestion as jcr:score). Does that sound fine?

Also, this is clearly a backward compatibility issue. I'm not sure how we deal 
with that. Do we just document it? What is our usual take on backporting such 
issues (to, say, 1.2 or 1.0)?
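
For context, a hedged sketch of how a consumer might read per-row results under the proposal: the SUGGEST syntax and the {{rep:suggest()}} column follow the existing Oak query support, while the one-suggestion-per-row shape and exposing the weight through the score are only the proposal above, not the current behaviour.

{code:java}
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.Row;
import javax.jcr.query.RowIterator;

public class SuggestionRows {

    static void printSuggestions(Session session) throws RepositoryException {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery(
                "SELECT [rep:suggest()] FROM [nt:base] WHERE SUGGEST('oa')",
                Query.JCR_SQL2);
        RowIterator rows = q.execute().getRows();
        while (rows.hasNext()) {
            Row row = rows.nextRow();
            // one suggestion per row; its weight surfaced through the score column
            System.out.println(row.getValue("rep:suggest()").getString()
                    + " (weight: " + row.getScore() + ")");
        }
    }
}
{code}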

> Lucene suggestion results should have 1 row per suggestion with appropriate 
> column names
> 
>
> Key: OAK-3509
> URL: https://issues.apache.org/jira/browse/OAK-3509
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Tommaso Teofili
>Priority: Minor
> Fix For: 1.3.11
>
>
> Currently the suggest query returns just one row, with the {{rep:suggest()}} 
> column containing a string that needs to be parsed.
> It'd be better if each suggestion were returned as an individual row with 
> column names such as {{suggestion}}, {{weight}} (???), etc.
> (cc [~teofili])



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2948) Expose DefaultSyncHandler

2015-11-19 Thread Tobias Bocanegra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014123#comment-15014123
 ] 

Tobias Bocanegra commented on OAK-2948:
---

yeah, the {{DefaultSyncConfigImpl}} separates the configuration for the 
{{DefaultSyncHandler}}:

{noformat}
@Component(
        label = "Apache Jackrabbit Oak Default Sync Handler",
        name = "org.apache.jackrabbit.oak.spi.security.authentication.external.impl.DefaultSyncHandler",
        configurationFactory = true,
        metatype = true,
        ds = false
)
{noformat}

So basically it's the same as if it were in the same class as 
DefaultSyncHandler; we just separated it so that we can have our own POJO for the 
config for non-OSGi use cases.

If you create your own sync handler, you obviously need your own component 
configuration. Currently you need to copy-paste it, correct.




> Expose DefaultSyncHandler
> -
>
> Key: OAK-2948
> URL: https://issues.apache.org/jira/browse/OAK-2948
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: Konrad Windszus
> Fix For: 1.3.2, 1.2.7, 1.0.22
>
>
> We do have the use case of extending the user sync. Unfortunately 
> {{DefaultSyncHandler}} is not exposed, so if you want to change one single 
> aspect of the user synchronisation you have to copy over the code from the 
> {{DefaultSyncHandler}}. Would it be possible to make that class part of the 
> exposed classes, so that deriving your own class from that DefaultSyncHandler 
> is possible?
> Very often company LDAPs are not very standardized. In our case we face the 
> issue that the membership is listed in a user attribute rather than in a 
> group attribute.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2472) Add support for atomic counters on cluster solutions

2015-11-19 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013396#comment-15013396
 ] 

Davide Giannella commented on OAK-2472:
---

Had another extensive chat with [~fmeschbe] on the topic and we have
two possible ways out: expose the CommitHooks as services and then
reference those in the OSGi artifact we want, or expose a plain Java
CommitHook accessor from Oak and Jcr that returns the CommitHooks added
so far. The latter would then be manually injected using the
OSGi bind/unbind methods in the actual object.

I will follow up on the two efforts in separate issues.
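
A minimal sketch of the second (plain Java) option, under the assumption that externally contributed hooks are collected and handed to the Oak builder; the class and method names are illustrative, not the actual API tracked in the follow-up issues:

{code:java}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.apache.jackrabbit.oak.Oak;
import org.apache.jackrabbit.oak.spi.commit.CommitHook;

// Collects CommitHooks contributed from outside (e.g. via OSGi bind/unbind)
// and adds them to the Oak builder when the repository is assembled.
public class ExternalCommitHooks {

    private final List<CommitHook> hooks = new CopyOnWriteArrayList<>();

    // these two would be wired to the OSGi bind/unbind methods
    public void bind(CommitHook hook) { hooks.add(hook); }
    public void unbind(CommitHook hook) { hooks.remove(hook); }

    public Oak contributeTo(Oak oak) {
        for (CommitHook hook : hooks) {
            oak = oak.with(hook);
        }
        return oak;
    }
}
{code}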



> Add support for atomic counters on cluster solutions
> 
>
> Key: OAK-2472
> URL: https://issues.apache.org/jira/browse/OAK-2472
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: scalability
> Fix For: 1.4
>
> Attachments: atomic-counter.md
>
>
> As of OAK-2220 we added support for atomic counters in a non-clustered 
> situation. 
> This ticket is about covering the clustered ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3654) Integrate with Metrics for various stats collection

2015-11-19 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3654:
-
Attachment: OAK-3654-v1.patch

> Integrate with Metrics for various stats collection 
> 
>
> Key: OAK-3654
> URL: https://issues.apache.org/jira/browse/OAK-3654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3654-v1.patch
>
>
> As suggested by [~ianeboston] in OAK-3478, the current approach of collecting 
> TimeSeries data is not easily consumable by other monitoring systems. Also, 
> just extracting the most recent data point and exposing it in a simple form 
> would not be useful.
> Instead we should look into using the Metrics library [1] for collecting 
> metrics. To avoid having a dependency on the Metrics API all over Oak, we can 
> come up with minimal interfaces which can be used in Oak and then provide an 
> implementation backed by Metrics.
> This task is meant to explore that aspect and come up with proposed changes 
> to see if it's feasible to make this change:
> * metrics-core is ~100KB in size with no dependencies
> * ASL licensed
> [1] http://metrics.dropwizard.io



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3657) RDBDocumentStore: cache update logic introduced for OAK-3566 should only be used for NODES collection

2015-11-19 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-3657:
---

 Summary: RDBDocumentStore: cache update logic introduced for 
OAK-3566 should only be used for NODES collection
 Key: OAK-3657
 URL: https://issues.apache.org/jira/browse/OAK-3657
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: rdbmk
Affects Versions: 1.3.11, 1.2.8, 1.0.24
Reporter: Julian Reschke
Assignee: Julian Reschke
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2472) Add support for atomic counters on cluster solutions

2015-11-19 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013399#comment-15013399
 ] 

Davide Giannella commented on OAK-2472:
---

Filed OAK-3656 for tracking the CommitHook as OSGi service.

> Add support for atomic counters on cluster solutions
> 
>
> Key: OAK-2472
> URL: https://issues.apache.org/jira/browse/OAK-2472
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: scalability
> Fix For: 1.4
>
> Attachments: atomic-counter.md
>
>
> As of OAK-2220 we added support for atomic counters in a non-clustered 
> situation. 
> This ticket is about covering the clustered ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3654) Integrate with Metrics for various stats collection

2015-11-19 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013403#comment-15013403
 ] 

Chetan Mehrotra edited comment on OAK-3654 at 11/19/15 12:10 PM:
-

Attached is an [initial implementation|^OAK-3654-v1.patch] to seek feedback on the 
approach. (No test case yet.)

The approach taken here is a pluggable one - it allows Oak to run without a 
required dependency on the metrics library, and any stats collection can be 
completely switched off. It introduces a new {{StatisticsProvider}} interface 
which allows access to various types of meters (Counter, Timer, Meter). These 
meters mimic the Metrics API. The default implementation is based on 
AtomicLong, building on the counters managed in {{TimeSeriesRecorder}}.

At runtime one can deploy the Metrics bundle and then configure the 
{{MetricStatisticsProvider}} OSGi component [1]. This would then be used by 
{{StatisticsManager}}, and thus by other Oak subsystems like QueryEngine and 
Observation, and stats would be routed to it. Other parts of Oak have *no 
dependency* on the Metrics API. That's it! All the stats are now published on JMX.

!query-stats.png!

The above image shows the JMX stats for query duration.

*Usage*

Using this API in other places in Oak is also pretty easy. Just obtain a 
reference to {{StatisticsProvider}}, get the correct type of meter and 
use it:

{code:java}
@Reference
private StatisticsProvider provider;

public void readFromMongo() {
    // better to keep an instance variable reference and avoid the lookup
    MeterStats readCounter = provider.getMeter("mongo-reads");

    readCounter.mark();
}
{code}

No other work is required. The stats would be published to JMX with no extra 
effort.

*TODO*

* Clock usage - the Metrics Meter and Counter by default invoke System.nanoTime 
for every call on the counter. I am not sure if that is an issue. If it turns 
out to be an issue we can use Oak's Clock.Fast for time calculation.

[~ianeboston] [~mduerig] Can you review the approach?

[1] It has to be ensured that it gets activated before the repository is created 
(can be done by adding a dependency on the OSGi component which registers the 
Repository, say RepositoryManager).




was (Author: chetanm):
Attached is an [initial implementation|^OAK-3654-v1.patch] to seek feedback on the 
approach. (No test case yet.)

The approach taken here is a pluggable one - it allows Oak to run without a 
required dependency on the metrics library, and any stats collection can be 
completely switched off. It introduces a new {{StatisticsProvider}} interface 
which allows access to various types of meters (Counter, Timer, Meter). These 
meters mimic the Metrics API. The default implementation is based on 
AtomicLong, building on the counters managed in {{TimeSeriesRecorder}}.

At runtime one can deploy the Metrics bundle and then configure the 
{{MetricStatisticsProvider}} OSGi component [1]. This would then be used by 
{{StatisticsManager}}, and thus by other Oak subsystems like QueryEngine and 
Observation, and stats would be routed to it. Other parts of Oak have *no 
dependency* on the Metrics API. That's it! All the stats are now published on JMX.

!query-stats.png!

The above image shows the JMX stats for query duration.

*Usage*

Using this API in other places in Oak is also pretty easy. Just obtain a 
reference to {{StatisticsProvider}}, get the correct type of meter and 
use it:

{code:java}
@Reference
private StatisticsProvider provider;

public void readFromMongo() {
    // better to keep an instance variable reference and avoid the lookup
    MeterStats readCounter = provider.getMeter("mongo-reads");

    readCounter.mark();
}
{code}

No other work is required. The stats would be published to JMX with no extra 
effort.

[~ianeboston] [~mduerig] Can you review the approach?

[1] It has to be ensured that it gets activated before the repository is created 
(can be done by adding a dependency on the OSGi component which registers the 
Repository, say RepositoryManager).



> Integrate with Metrics for various stats collection 
> 
>
> Key: OAK-3654
> URL: https://issues.apache.org/jira/browse/OAK-3654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3654-v1.patch, query-stats.png
>
>
> As suggested by [~ianeboston] in OAK-3478, the current approach of collecting 
> TimeSeries data is not easily consumable by other monitoring systems. Also, 
> just extracting the most recent data point and exposing it in a simple form 
> would not be useful.
> Instead we should look into using the Metrics library [1] for collecting 
> metrics. To avoid having 

[jira] [Updated] (OAK-3654) Integrate with Metrics for various stats collection

2015-11-19 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-3654:
-
Attachment: query-stats.png

> Integrate with Metrics for various stats collection 
> 
>
> Key: OAK-3654
> URL: https://issues.apache.org/jira/browse/OAK-3654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3654-v1.patch, query-stats.png
>
>
> As suggested by [~ianeboston] in OAK-3478, the current approach of collecting 
> TimeSeries data is not easily consumable by other monitoring systems. Also, 
> just extracting the most recent data point and exposing it in a simple form 
> would not be useful.
> Instead we should look into using the Metrics library [1] for collecting 
> metrics. To avoid having a dependency on the Metrics API all over Oak, we can 
> come up with minimal interfaces which can be used in Oak and then provide an 
> implementation backed by Metrics.
> This task is meant to explore that aspect and come up with proposed changes 
> to see if it's feasible to make this change:
> * metrics-core is ~100KB in size with no dependencies
> * ASL licensed
> [1] http://metrics.dropwizard.io



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3656) Expose CommitHook as OSGi service

2015-11-19 Thread Davide Giannella (JIRA)
Davide Giannella created OAK-3656:
-

 Summary: Expose CommitHook as OSGi service
 Key: OAK-3656
 URL: https://issues.apache.org/jira/browse/OAK-3656
 Project: Jackrabbit Oak
  Issue Type: Wish
  Components: core
Reporter: Davide Giannella
Assignee: Davide Giannella
 Fix For: 1.3.10


In Oak we currently expose two commit hook types as OSGi services via a 
Provider wrapper: IndexEditor and Editor.

We don't do the same for ConflictHandler and CommitHook itself. 

It would be nice to have those exposed as OSGi services so that an OSGi 
repository could leverage them as @Reference.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3566) Stale documents in RDBDocumentStore cache

2015-11-19 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3566:

Issue Type: Technical task  (was: Bug)
Parent: OAK-1266

> Stale documents in RDBDocumentStore cache
> -
>
> Key: OAK-3566
> URL: https://issues.apache.org/jira/browse/OAK-3566
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, rdbmk
>Affects Versions: 1.0, 1.2
>Reporter: Marcel Reutegger
>Assignee: Julian Reschke
> Fix For: 1.3.11, 1.2.8, 1.0.24
>
> Attachments: OAK-3566-test.patch, OAK-3566.diff, OAK-3566.patch
>
>
> This issue is about the same problem as described in OAK-1897 but for the 
> RDBDocumentStore implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3654) Integrate with Metrics for various stats collection

2015-11-19 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013403#comment-15013403
 ] 

Chetan Mehrotra commented on OAK-3654:
--

Attached is an [initial implementation|^OAK-3654-v1.patch] to seek feedback on the 
approach. (No test case yet.)

The approach taken here is a pluggable one - it allows Oak to run without a 
required dependency on the metrics library, and any stats collection can be 
completely switched off. It introduces a new {{StatisticsProvider}} interface 
which allows access to various types of meters (Counter, Timer, Meter). These 
meters mimic the Metrics API. The default implementation is based on 
AtomicLong, building on the counters managed in {{TimeSeriesRecorder}}.

At runtime one can deploy the Metrics bundle and then configure the 
{{MetricStatisticsProvider}} OSGi component [1]. This would then be used by 
{{StatisticsManager}}, and thus by other Oak subsystems like QueryEngine and 
Observation, and stats would be routed to it. Other parts of Oak have *no 
dependency* on the Metrics API. That's it! All the stats are now published on JMX.

!query-stats.png!

The above image shows the JMX stats for query duration.

*Usage*

Using this API in other places in Oak is also pretty easy. Just obtain a 
reference to {{StatisticsProvider}}, get the correct type of meter and 
use it:

{code:java}
@Reference
private StatisticsProvider provider;

public void readFromMongo() {
    // better to keep an instance variable reference and avoid the lookup
    MeterStats readCounter = provider.getMeter("mongo-reads");

    readCounter.mark();
}
{code}

No other work is required. The stats would be published to JMX with no extra 
effort.

[~ianeboston] [~mduerig] Can you review the approach?

[1] It has to be ensured that it gets activated before the repository is created 
(can be done by adding a dependency on the OSGi component which registers the 
Repository, say RepositoryManager).



> Integrate with Metrics for various stats collection 
> 
>
> Key: OAK-3654
> URL: https://issues.apache.org/jira/browse/OAK-3654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3654-v1.patch, query-stats.png
>
>
> As suggested by [~ianeboston] in OAK-3478, the current approach of collecting 
> TimeSeries data is not easily consumable by other monitoring systems. Also, 
> just extracting the most recent data point and exposing it in a simple form 
> would not be useful.
> Instead we should look into using the Metrics library [1] for collecting 
> metrics. To avoid having a dependency on the Metrics API all over Oak, we can 
> come up with minimal interfaces which can be used in Oak and then provide an 
> implementation backed by Metrics.
> This task is meant to explore that aspect and come up with proposed changes 
> to see if it's feasible to make this change:
> * metrics-core is ~100KB in size with no dependencies
> * ASL licensed
> [1] http://metrics.dropwizard.io



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3654) Integrate with Metrics for various stats collection

2015-11-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013455#comment-15013455
 ] 

Thomas Mueller commented on OAK-3654:
-

It looks good to me. As far as I understand, it is useful for monitoring 
MongoDB and database operations. 

For me, statistics alone are rarely enough. What might make sense is to 
additionally log (info or warn level) the slowest operation per minute, if 
there was a slow operation. That way, the log file is not flooded, but you at 
least know which ones were the worst cases in the past. One per minute should 
be enough (or maybe configurable using a system property).
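
A rough sketch of that idea (not part of the attached patch; the class name, the fixed one-minute window and the log message are illustrative only):

{code:java}
import java.util.concurrent.TimeUnit;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Remembers the slowest operation seen in the current minute and logs it once
// when the window rolls over, so the log file is not flooded.
public class SlowestOperationLogger {

    private static final Logger LOG = LoggerFactory.getLogger(SlowestOperationLogger.class);
    private static final long WINDOW_MS = TimeUnit.MINUTES.toMillis(1);

    private long windowStart = System.currentTimeMillis();
    private long slowestNanos;
    private String slowestName;

    public synchronized void record(String operation, long durationNanos) {
        long now = System.currentTimeMillis();
        if (now - windowStart >= WINDOW_MS) {
            if (slowestName != null) {
                LOG.info("Slowest operation in the last minute: {} took {} ms",
                        slowestName, TimeUnit.NANOSECONDS.toMillis(slowestNanos));
            }
            windowStart = now;
            slowestNanos = 0;
            slowestName = null;
        }
        if (durationNanos > slowestNanos) {
            slowestNanos = durationNanos;
            slowestName = operation;
        }
    }
}
{code}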

> Integrate with Metrics for various stats collection 
> 
>
> Key: OAK-3654
> URL: https://issues.apache.org/jira/browse/OAK-3654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3654-v1.patch, query-stats.png
>
>
> As suggested by [~ianeboston] in OAK-3478, the current approach of collecting 
> TimeSeries data is not easily consumable by other monitoring systems. Also, 
> just extracting the most recent data point and exposing it in a simple form 
> would not be useful.
> Instead we should look into using the Metrics library [1] for collecting 
> metrics. To avoid having a dependency on the Metrics API all over Oak, we can 
> come up with minimal interfaces which can be used in Oak and then provide an 
> implementation backed by Metrics.
> This task is meant to explore that aspect and come up with proposed changes 
> to see if it's feasible to make this change:
> * metrics-core is ~100KB in size with no dependencies
> * ASL licensed
> [1] http://metrics.dropwizard.io



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3655) DocumentStore: clarify which methods allow UpdateOps with "isNew", and enforce this in implemenentations

2015-11-19 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3655:

Summary: DocumentStore: clarify which methods allow UpdateOps with "isNew", 
and enforce this in implemenentations  (was: DocumentStore: clarify which 
methods allow UpdateOps with "isNew", and enforce this implemenentations)

> DocumentStore: clarify which methods allow UpdateOps with "isNew", and 
> enforce this in implemenentations
> 
>
> Key: OAK-3655
> URL: https://issues.apache.org/jira/browse/OAK-3655
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Affects Versions: 1.2.7, 1.3.10, 1.0.23
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>
> For instance, implementations of {{update()}} currently ignore the flag. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3655) DocumentStore: clarify which methods allow UpdateOps with "isNew", and enforce this implemenentations

2015-11-19 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-3655:
---

 Summary: DocumentStore: clarify which methods allow UpdateOps with 
"isNew", and enforce this implemenentations
 Key: OAK-3655
 URL: https://issues.apache.org/jira/browse/OAK-3655
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: documentmk, mongomk, rdbmk
Affects Versions: 1.0.23, 1.3.10, 1.2.7
Reporter: Julian Reschke
Assignee: Julian Reschke
Priority: Minor


For instance, implementations of {{update()}} currently ignore the flag. 
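
To illustrate the ambiguity, a hedged example using the public {{DocumentStore}} and {{UpdateOp}} API (the document id and property are made up):

{code:java}
import java.util.Collections;

import org.apache.jackrabbit.oak.plugins.document.Collection;
import org.apache.jackrabbit.oak.plugins.document.DocumentStore;
import org.apache.jackrabbit.oak.plugins.document.UpdateOp;

public class UpdateWithIsNew {

    static void demo(DocumentStore store) {
        // isNew == true, yet the op is handed to update(), which only touches
        // existing documents - current implementations silently ignore the flag
        UpdateOp op = new UpdateOp("1:/foo", true);
        op.set("prop", "value");
        store.update(Collection.NODES, Collections.singletonList("1:/foo"), op);
    }
}
{code}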



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3649) Extract node document cache from Mongo and RDB document stores

2015-11-19 Thread Tomek Rękawek (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomek Rękawek updated OAK-3649:
---
Fix Version/s: (was: 1.4)
   1.3.11

> Extract node document cache from Mongo and RDB document stores
> --
>
> Key: OAK-3649
> URL: https://issues.apache.org/jira/browse/OAK-3649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Reporter: Tomek Rękawek
>Priority: Minor
> Fix For: 1.3.11
>
>
> MongoDocumentStore and RDBDocumentStore contain copy & pasted methods 
> responsible for handling the node document cache. Extract these into a new 
> NodeDocumentCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3649) Extract node document cache from Mongo and RDB document stores

2015-11-19 Thread Tomek Rękawek (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013692#comment-15013692
 ] 

Tomek Rękawek commented on OAK-3649:


Pull request:
https://github.com/apache/jackrabbit-oak/pull/47

Patch file:
https://github.com/apache/jackrabbit-oak/pull/47.diff

I created two new classes: NodeDocumentCache and NodeDocumentLocks, as it 
wasn't possible to encapsulate the whole synchronization logic inside the new 
cache class. Right now NodeDocumentCache is not thread-safe and it's up to the 
DocumentStore to take care of locking.
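
A hedged sketch of that division of responsibility: the store takes a per-id lock, then consults and updates the cache while holding it. The two nested interfaces only mirror the description above; they are assumptions, not the actual signatures from the pull request.

{code:java}
import java.util.concurrent.locks.Lock;

import org.apache.jackrabbit.oak.plugins.document.NodeDocument;

public class CacheUpdateSketch {

    interface NodeDocumentLocks {
        Lock acquire(String id);
    }

    interface NodeDocumentCache {
        NodeDocument getIfPresent(String id);
        void put(NodeDocument doc);
    }

    static void putNewer(NodeDocumentLocks locks, NodeDocumentCache cache, NodeDocument doc) {
        Lock lock = locks.acquire(doc.getId());
        try {
            NodeDocument cached = cache.getIfPresent(doc.getId());
            Long cachedMod = cached == null ? null : cached.getModCount();
            Long newMod = doc.getModCount();
            // only replace the cached entry if the incoming document is newer
            if (cachedMod == null || (newMod != null && newMod > cachedMod)) {
                cache.put(doc);
            }
        } finally {
            lock.unlock();
        }
    }
}
{code}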

> Extract node document cache from Mongo and RDB document stores
> --
>
> Key: OAK-3649
> URL: https://issues.apache.org/jira/browse/OAK-3649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Reporter: Tomek Rękawek
>Priority: Minor
> Fix For: 1.3.11
>
>
> MongoDocumentStore and RDBDocumentStore contain copy & pasted methods 
> responsible for handling the node document cache. Extract these into a new 
> NodeDocumentCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2843) Broadcasting cache

2015-11-19 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15015230#comment-15015230
 ] 

Amit Jain edited comment on OAK-2843 at 11/20/15 5:04 AM:
--

I get a test failure on {{BroadcastTest#broadcastTCP}} apparently only on 
windows [1]

[1]
{noformat}
broadcastTCP(org.apache.jackrabbit.oak.plugins.document.persistentCache.BroadcastTest)
  Time elapsed: 1.127 sec  <<< FAILURE!
java.lang.AssertionError: min: 90 got: 54
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.BroadcastTest.broadcast(BroadcastTest.java:215)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.BroadcastTest.broadcastTCP(BroadcastTest.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
{noformat}


was (Author: amitjain):
I get a test failure on {{BroadcastTest#broadcastTCP}} apparently only on 
windows [1]

[1]
{noquote}
broadcastTCP(org.apache.jackrabbit.oak.plugins.document.persistentCache.BroadcastTest)
  Time elapsed: 1.127 sec  <<< FAILURE!
java.lang.AssertionError: min: 90 got: 54
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.BroadcastTest.broadcast(BroadcastTest.java:215)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.BroadcastTest.broadcastTCP(BroadcastTest.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 

[jira] [Commented] (OAK-2843) Broadcasting cache

2015-11-19 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15015230#comment-15015230
 ] 

Amit Jain commented on OAK-2843:


I get a test failure on {{BroadcastTest#broadcastTCP}} apparently only on 
windows [1]

[1]
{noquote}
broadcastTCP(org.apache.jackrabbit.oak.plugins.document.persistentCache.BroadcastTest)
  Time elapsed: 1.127 sec  <<< FAILURE!
java.lang.AssertionError: min: 90 got: 54
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.BroadcastTest.broadcast(BroadcastTest.java:215)
at 
org.apache.jackrabbit.oak.plugins.document.persistentCache.BroadcastTest.broadcastTCP(BroadcastTest.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
{noquote}

> Broadcasting cache
> --
>
> Key: OAK-2843
> URL: https://issues.apache.org/jira/browse/OAK-2843
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.3.11
>
>
> In a cluster environment, we could speed up reading if the cache(s) broadcast 
> data to other instances. This would avoid bottlenecks at the storage layer 
> (MongoDB, RDBMs).
> The configuration metadata (IP addresses and ports of where to send data to, 
> a unique identifier of the repository and the cluster nodes, possibly 
> encryption key) rarely changes and can be stored in the same place as we 
> store cluster metadata (cluster info collection). That way, in many cases no 
> manual configuration is needed. We could use TCP/IP and / or UDP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3658) Test failures: JackrabbitNodeTest#testRename and testRenameEventHandling

2015-11-19 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3658:
---
Component/s: jcr

> Test failures: JackrabbitNodeTest#testRename and testRenameEventHandling
> 
>
> Key: OAK-3658
> URL: https://issues.apache.org/jira/browse/OAK-3658
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: jcr
>Reporter: Amit Jain
>Priority: Minor
>
> Tests fail regularly on trunk - {{JackrabbitNodeTest#testRename}} and 
> {{JackrabbitNodeTest#testRenameEventHandling}}.
> {noformat}
> Test set: org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest
> ---
> Tests run: 8, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 0.106 sec <<< 
> FAILURE!
> testRenameEventHandling(org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest)  
> Time elapsed: 0.01 sec  <<< ERROR!
> javax.jcr.nodetype.ConstraintViolationException: Item is protected.
>   at 
> org.apache.jackrabbit.oak.jcr.session.ItemImpl$ItemWriteOperation.checkPreconditions(ItemImpl.java:98)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.prePerform(SessionDelegate.java:614)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:270)
>   at 
> org.apache.jackrabbit.oak.jcr.session.NodeImpl.rename(NodeImpl.java:1485)
>   at 
> org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest.testRenameEventHandling(JackrabbitNodeTest.java:124)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at 
> org.apache.jackrabbit.test.AbstractJCRTest.run(AbstractJCRTest.java:464)
>   at junit.framework.TestSuite.runTest(TestSuite.java:252)
>   at junit.framework.TestSuite.run(TestSuite.java:247)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> testRename(org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest)  Time elapsed: 
> 0.007 sec  <<< FAILURE!
> junit.framework.ComparisonFailure: expected:<[a]> but was:<[rep:policy]>
>   at junit.framework.Assert.assertEquals(Assert.java:100)
>   at junit.framework.Assert.assertEquals(Assert.java:107)
>   at junit.framework.TestCase.assertEquals(TestCase.java:269)
>   at 
> org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest.testRename(JackrabbitNodeTest.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at 

[jira] [Created] (OAK-3658) Test failures: JackrabbitNodeTest#testRename and testRenameEventHandling

2015-11-19 Thread Amit Jain (JIRA)
Amit Jain created OAK-3658:
--

 Summary: Test failures: JackrabbitNodeTest#testRename and 
testRenameEventHandling
 Key: OAK-3658
 URL: https://issues.apache.org/jira/browse/OAK-3658
 Project: Jackrabbit Oak
  Issue Type: Bug
Reporter: Amit Jain
Priority: Minor


Tests fail regularly on trunk - {{JackrabbitNodeTest#testRename}} and 
{{JackrabbitNodeTest#testRenameEventHandling}}.

{noformat}
Test set: org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest
---
Tests run: 8, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 0.106 sec <<< 
FAILURE!
testRenameEventHandling(org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest)  Time 
elapsed: 0.01 sec  <<< ERROR!
javax.jcr.nodetype.ConstraintViolationException: Item is protected.
at 
org.apache.jackrabbit.oak.jcr.session.ItemImpl$ItemWriteOperation.checkPreconditions(ItemImpl.java:98)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.prePerform(SessionDelegate.java:614)
at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:270)
at 
org.apache.jackrabbit.oak.jcr.session.NodeImpl.rename(NodeImpl.java:1485)
at 
org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest.testRenameEventHandling(JackrabbitNodeTest.java:124)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at 
org.apache.jackrabbit.test.AbstractJCRTest.run(AbstractJCRTest.java:464)
at junit.framework.TestSuite.runTest(TestSuite.java:252)
at junit.framework.TestSuite.run(TestSuite.java:247)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRename(org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest)  Time elapsed: 
0.007 sec  <<< FAILURE!
junit.framework.ComparisonFailure: expected:<[a]> but was:<[rep:policy]>
at junit.framework.Assert.assertEquals(Assert.java:100)
at junit.framework.Assert.assertEquals(Assert.java:107)
at junit.framework.TestCase.assertEquals(TestCase.java:269)
at 
org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest.testRename(JackrabbitNodeTest.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at 
org.apache.jackrabbit.test.AbstractJCRTest.run(AbstractJCRTest.java:464)
at junit.framework.TestSuite.runTest(TestSuite.java:252)
at junit.framework.TestSuite.run(TestSuite.java:247)
at 

[jira] [Updated] (OAK-3649) Extract node document cache from Mongo and RDB document stores

2015-11-19 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3649:

Fix Version/s: (was: 1.3.11)
   1.3.12

> Extract node document cache from Mongo and RDB document stores
> --
>
> Key: OAK-3649
> URL: https://issues.apache.org/jira/browse/OAK-3649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Reporter: Tomek Rękawek
>Priority: Minor
> Fix For: 1.3.12
>
>
> MongoDocumentStore and RDBDocumentStore contain copy & pasted methods 
> responsible for handling the node document cache. Extract these into a new 
> NodeDocumentCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3649) Extract node document cache from Mongo and RDB document stores

2015-11-19 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3649:

Labels: candidate_oak_1_0 candidate_oak_1_2  (was: )

> Extract node document cache from Mongo and RDB document stores
> --
>
> Key: OAK-3649
> URL: https://issues.apache.org/jira/browse/OAK-3649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Reporter: Tomek Rękawek
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.3.12
>
>
> MongoDocumentStore and RDBDocumentStore contain copy & pasted methods 
> responsible for handling the node document cache. Extract these into a new 
> NodeDocumentCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3649) Extract node document cache from Mongo and RDB document stores

2015-11-19 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15015315#comment-15015315
 ] 

Julian Reschke commented on OAK-3649:
-

It's good to start work on this, but I believe we first should find out whether 
we can eliminate the {{TreeLock}}s altogether.

> Extract node document cache from Mongo and RDB document stores
> --
>
> Key: OAK-3649
> URL: https://issues.apache.org/jira/browse/OAK-3649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Reporter: Tomek Rękawek
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.3.12
>
>
> MongoDocumentStore and RDBDocumentStore contain copy-and-pasted methods 
> responsible for handling the node document cache. Extract these into a new 
> NodeDocumentCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3649) Extract node document cache from Mongo and RDB document stores

2015-11-19 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15015315#comment-15015315
 ] 

Julian Reschke edited comment on OAK-3649 at 11/20/15 6:47 AM:
---

It's good to start work on this, but I believe we first should find out whether 
we can eliminate the TreeLocks altogether.


was (Author: reschke):
It's good to start work on this, but I believe we first should find out whether 
we can eliminate the {{TreeLock}}s altogether.

> Extract node document cache from Mongo and RDB document stores
> --
>
> Key: OAK-3649
> URL: https://issues.apache.org/jira/browse/OAK-3649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Reporter: Tomek Rękawek
>Priority: Minor
>  Labels: candidate_oak_1_0, candidate_oak_1_2
> Fix For: 1.3.12
>
>
> MongoDocumentStore and RDBDocumentStore contain copy-and-pasted methods 
> responsible for handling the node document cache. Extract these into a new 
> NodeDocumentCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3649) Extract node document cache from Mongo and RDB document stores

2015-11-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013698#comment-15013698
 ] 

Tomek Rękawek commented on OAK-3649:


[~chetanm], I saw that you've recently worked in this area, removing the 
HierrachialCacheInvalidator. Could you take a look at this patch and apply it 
if you find it useful?

cc: [~mreutegg], [~tmueller] - if you have any feedback or could even help me 
get this merged, I would be grateful.

> Extract node document cache from Mongo and RDB document stores
> --
>
> Key: OAK-3649
> URL: https://issues.apache.org/jira/browse/OAK-3649
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: documentmk, mongomk, rdbmk
>Reporter: Tomek Rękawek
>Priority: Minor
> Fix For: 1.3.11
>
>
> MongoDocumentStore and RDBDocumentStore contain copy-and-pasted methods 
> responsible for handling the node document cache. Extract these into a new 
> NodeDocumentCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2539) SQL2 query not working with filter (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))

2015-11-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013704#comment-15013704
 ] 

Thomas Mueller commented on OAK-2539:
-

Right now, only "contains" full-text conditions are converted to a union, but not 
conditions of type "not contains", "similar", spellcheck, or suggest.

> SQL2 query not working with filter (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> 
>
> Key: OAK-2539
> URL: https://issues.apache.org/jira/browse/OAK-2539
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Reporter: Calvin Wong
>Assignee: Thomas Mueller
> Fix For: 1.3.11
>
>
> Create node /content/usergenerated/qtest with jcr:primaryType nt:unstructured.
> Add 2 String properties: stringa = "a", stringb = "b".
> Use query tool in CRX/DE to do SQL2 search:
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (CONTAINS(s.[stringb], 'b'))
> This search will not find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'x' OR 
> CONTAINS(s.[stringb], 'b'))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-1736) Support for Faceted Search

2015-11-19 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013744#comment-15013744
 ] 

Tommaso Teofili commented on OAK-1736:
--

Lucene and Solr based faceting (with raw ACL filtering) at 
https://github.com/tteofili/jackrabbit-oak/tree/oak-1736d

> Support for Faceted Search
> --
>
> Key: OAK-1736
> URL: https://issues.apache.org/jira/browse/OAK-1736
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: lucene, query, solr
>Reporter: Thomas Mueller
>Assignee: Tommaso Teofili
> Fix For: 1.4
>
> Attachments: OAK-1736.2.patch
>
>
> Details to be defined.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3656) Expose CommitHook as OSGi service

2015-11-19 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-3656:
--
Affects Version/s: 1.310

> Expose CommitHook as OSGi service
> -
>
> Key: OAK-3656
> URL: https://issues.apache.org/jira/browse/OAK-3656
> Project: Jackrabbit Oak
>  Issue Type: Wish
>  Components: core
>Affects Versions: 1.310
>Reporter: Davide Giannella
>Assignee: Davide Giannella
> Fix For: 1.3.10
>
>
> In Oak we currently expose two commit hook types as OSGi services via a 
> Provider wrapper: IndexEditor and Editor.
> We don't do the same for ConflictHandler and CommitHook itself. 
> It would be nice to have those exposed as OSGi services so that an OSGi 
> repository could leverage them as @Reference.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
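
As an illustration of the wish above, a CommitHook published in the OSGi service 
registry could look roughly like the sketch below (standard Declarative Services 
annotations assumed; the no-op body is a placeholder). The repository-side wiring 
that would consume it via @Reference is not shown.

{noformat}
import org.apache.jackrabbit.oak.api.CommitFailedException;
import org.apache.jackrabbit.oak.spi.commit.CommitHook;
import org.apache.jackrabbit.oak.spi.commit.CommitInfo;
import org.apache.jackrabbit.oak.spi.state.NodeState;
import org.osgi.service.component.annotations.Component;

// Registers a CommitHook as an OSGi service so a repository could pick it up.
@Component(service = CommitHook.class)
public class PassThroughCommitHook implements CommitHook {

    @Override
    public NodeState processCommit(NodeState before, NodeState after, CommitInfo info)
            throws CommitFailedException {
        return after; // pass the commit through unchanged
    }
}
{noformat}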


[jira] [Updated] (OAK-3634) RDB/MongoDocumentStore may return stale documents

2015-11-19 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3634:

Attachment: OAK-3634.diff

Updated patch: a naive way to address this is by letting update() just invalidate 
the cache. However, this leads to new failures in Mongo-specific test cases 
(currently @ignored); some of these might be intentional (CacheConsistencyIT) 
while others might not (CacheConsistency).

[~mreutegg]: once again I will need your feedback...
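
For illustration only (this is not the attached OAK-3634.diff, and all names are 
hypothetical), the "invalidate instead of populate" idea looks roughly like this: 
after the conditional update has been applied in the backend, the store drops the 
cached entries so the next read has to go back to the persistence.

{noformat}
import java.util.List;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

// Hypothetical names throughout; only the idea matters.
class InvalidatingUpdateSketch {

    private final Cache<String, Object> nodesCache =
            CacheBuilder.newBuilder().maximumSize(10_000).build();

    void update(List<String> keys) {
        applyUpdateInBackend(keys);        // placeholder for the MongoDB/RDB write
        for (String key : keys) {
            nodesCache.invalidate(key);    // drop the entry instead of guessing its new state
        }
    }

    private void applyUpdateInBackend(List<String> keys) {
        // backend-specific bulk update omitted
    }
}
{noformat}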

> RDB/MongoDocumentStore may return stale documents
> -
>
> Key: OAK-3634
> URL: https://issues.apache.org/jira/browse/OAK-3634
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: mongomk, rdbmk
>Affects Versions: 1.2.7, 1.3.10, 1.0.23
>Reporter: Julian Reschke
> Attachments: OAK-3634.diff, OAK-3634.diff
>
>
> It appears that the implementations of the {{update}} method sometimes 
> populate the memory cache with documents that do not reflect any current or 
> previous state in the persistence (that is, miss changes made by another 
> node).
> (will attach test)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2539) SQL2 query not working with filter (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))

2015-11-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013582#comment-15013582
 ] 

Thomas Mueller commented on OAK-2539:
-

With trunk, the debug log for this query (see below) indicates that the cost for 
traversal is low in this case, i.e. for the original query (with "or"). 
The query engine should probably detect that there is a fulltext condition that 
is not using an index, and rule out the non-union plan.
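
Purely as an illustration of such a check (not the actual QueryImpl code), the 
overhead could be applied as sketched below; the prohibitive value corresponds to 
the 1.7976931348623157E308 costOverhead visible in the log.

{noformat}
// Illustrative sketch: penalize plans that leave a fulltext condition unindexed,
// so that the union variant wins the overall cost comparison.
final class CostOverheadSketch {

    static double costOverhead(boolean hasFullTextCondition, boolean indexHandlesFullText) {
        return (hasFullTextCondition && !indexHandlesFullText) ? Double.MAX_VALUE : 0.0;
    }
}
{noformat}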

{noformat}
persistentCache="crx-quickstart/repository/cache,size\=1024,binary\=0,broadcast\=tcp:key
 123"

http://localhost:8080/crx/explorer/testing/createNodes.jsp

QueryEngineImpl Parsing JCR-SQL2 statement: SELECT * FROM [nt:base] AS s 
WHERE ISDESCENDANTNODE([/content/usergenerated/]) 
AND (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))
QueryEngineImpl Optimised query available. select [s].[jcr:primaryType] as 
[s.jcr:primaryType] 
from [nt:base] as [s] where ([s].[stringa] = 'a') 
and (isdescendantnode([s], [/content/usergenerated/])) 
union select [s].[jcr:primaryType] as [s.jcr:primaryType] 
from [nt:base] as [s] where (contains([s].[stringb], 'b')) 
and (isdescendantnode([s], [/content/usergenerated/]))
QueryEngineImpl Preparing: select [s].[jcr:primaryType] as [s.jcr:primaryType] 
from [nt:base] as [s] where (isdescendantnode([s], 
[/content/usergenerated/])) 
and ((contains([s].[stringb], 'b')) or ([s].[stringa] = 'a'))
QueryImpl cost using filter Filter(query=SELECT * FROM [nt:base] AS s 
WHERE ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'a' 
OR CONTAINS(s.[stringb], 'b')), path=/content/usergenerated//*)
QueryImpl cost for reference is Infinity
QueryImpl cost for property is Infinity
QueryImpl cost for nodeType is Infinity
QueryImpl cost for lucene-property is Infinity
QueryImpl cost for aggregate lucene is Infinity
QueryImpl cost for ordered is Infinity
QueryImpl cost for traverse is 2100.0
QueryEngineImpl actualCost: 2100.0 - costOverhead: 1.7976931348623157E308 - 
overallCost: 1.7976931348623157E308
QueryEngineImpl Preparing: select [s].[jcr:primaryType] as [s.jcr:primaryType] 
from [nt:base] as [s] where ([s].[stringa] = 'a') 
and (isdescendantnode([s], [/content/usergenerated/])) 
union select [s].[jcr:primaryType] as [s.jcr:primaryType] 
from [nt:base] as [s] where (contains([s].[stringb], 'b')) 
and (isdescendantnode([s], [/content/usergenerated/]))
QueryImpl cost using filter Filter(query=SELECT * FROM [nt:base] AS s 
WHERE ([s].[stringa] = 'a') and (isdescendantnode([s], 
[/content/usergenerated/])), 
path=/content/usergenerated//*, property=[stringa=[a]])
QueryImpl cost for reference is Infinity
QueryImpl cost for property is Infinity
QueryImpl cost for nodeType is Infinity
QueryImpl cost for lucene-property is Infinity
QueryImpl cost for aggregate lucene is Infinity
QueryImpl cost for ordered is Infinity
QueryImpl cost for traverse is 2100.0
QueryImpl cost using filter Filter(query=SELECT * FROM [nt:base] AS s 
WHERE (contains([s].[stringb], 'b')) and (isdescendantnode([s], 
[/content/usergenerated/])) 
fullText=stringb:"b", path=/content/usergenerated//*, property=[stringb=[is 
not null]])
QueryImpl cost for reference is Infinity
QueryImpl cost for property is Infinity
QueryImpl cost for nodeType is Infinity
QueryImpl cost for lucene-property is Infinity
QueryImpl cost for aggregate lucene is 198919.0
QueryImpl cost for ordered is Infinity
QueryImpl cost for traverse is Infinity
QueryEngineImpl actualCost: 201019.0 - costOverhead: 0.0 - overallCost: 201019.0
QueryEngineImpl Cheapest cost: 201019.0 - query: select [s].[jcr:primaryType] 
as [s.jcr:primaryType] 
from [nt:base] as [s] where ([s].[stringa] = 'a') 
and (isdescendantnode([s], [/content/usergenerated/])) 
union select [s].[jcr:primaryType] as [s.jcr:primaryType] 
from [nt:base] as [s] where (contains([s].[stringb], 'b')) 
and (isdescendantnode([s], [/content/usergenerated/]))
UnionQueryImpl query union plan [nt:base] as [s] /* traverse 
"/content/usergenerated//*" 
where ([s].[stringa] = 'a') and (isdescendantnode([s], 
[/content/usergenerated/])) */ 
union [nt:base] as [s] /* aggregate stringb:b ft:(stringb:"b") 
where (contains([s].[stringb], 'b')) 
and (isdescendantnode([s], [/content/usergenerated/])) */
QueryImpl query execute SELECT * FROM [nt:base] AS s WHERE ([s].[stringa] = 
'a') 
and (isdescendantnode([s], [/content/usergenerated/]))
QueryImpl query plan [nt:base] as [s] /* traverse "/content/usergenerated//*" 
where ([s].[stringa] = 'a') and (isdescendantnode([s], 
[/content/usergenerated/])) */
QueryImpl query execute SELECT * FROM [nt:base] AS s WHERE 
(contains([s].[stringb], 'b')) 
and (isdescendantnode([s], [/content/usergenerated/]))
QueryImpl query plan [nt:base] as [s] /* aggregate stringb:b ft:(stringb:"b") 
where 

[jira] [Updated] (OAK-2539) SQL2 query not working with filter (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))

2015-11-19 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2539:

Fix Version/s: (was: 1.4)
   1.3.11

> SQL2 query not working with filter (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> 
>
> Key: OAK-2539
> URL: https://issues.apache.org/jira/browse/OAK-2539
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Reporter: Calvin Wong
>Assignee: Thomas Mueller
> Fix For: 1.3.11
>
>
> Create node /content/usergenerated/qtest with jcr:primaryType nt:unstructured.
> Add 2 String properties: stringa = "a", stringb = "b".
> Use query tool in CRX/DE to do SQL2 search:
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (CONTAINS(s.[stringb], 'b'))
> This search will not find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'x' OR 
> CONTAINS(s.[stringb], 'b'))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-2539) SQL2 query not working with filter (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))

2015-11-19 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reassigned OAK-2539:
---

Assignee: Thomas Mueller  (was: Davide Giannella)

> SQL2 query not working with filter (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> 
>
> Key: OAK-2539
> URL: https://issues.apache.org/jira/browse/OAK-2539
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Reporter: Calvin Wong
>Assignee: Thomas Mueller
> Fix For: 1.4
>
>
> Create node /content/usergenerated/qtest with jcr:primaryType nt:unstructured.
> Add 2 String properties: stringa = "a", stringb = "b".
> Use query tool in CRX/DE to do SQL2 search:
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (CONTAINS(s.[stringb], 'b'))
> This search will not find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'x' OR 
> CONTAINS(s.[stringb], 'b'))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2539) SQL2 query not working with filter (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))

2015-11-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013648#comment-15013648
 ] 

Thomas Mueller commented on OAK-2539:
-

I will make some minor changes to the log message, rename a method (currently 
named oak2660CostOverhead), and get rid of an "instanceof".

> SQL2 query not working with filter (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> 
>
> Key: OAK-2539
> URL: https://issues.apache.org/jira/browse/OAK-2539
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Reporter: Calvin Wong
>Assignee: Thomas Mueller
> Fix For: 1.3.11
>
>
> Create node /content/usergenerated/qtest with jcr:primaryType nt:unstructured.
> Add 2 String properties: stringa = "a", stringb = "b".
> Use query tool in CRX/DE to do SQL2 search:
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (CONTAINS(s.[stringb], 'b'))
> This search will not find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'x' OR 
> CONTAINS(s.[stringb], 'b'))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3657) RDBDocumentStore: cache update logic introduced for OAK-3566 should only be used for NODES collection

2015-11-19 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3657:

Affects Version/s: (was: 1.3.11)
   1.3.10

> RDBDocumentStore: cache update logic introduced for OAK-3566 should only be 
> used for NODES collection
> -
>
> Key: OAK-3657
> URL: https://issues.apache.org/jira/browse/OAK-3657
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.10, 1.2.8, 1.0.24
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.3.11
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3657) RDBDocumentStore: cache update logic introduced for OAK-3566 should only be used for NODES collection

2015-11-19 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3657:

Fix Version/s: 1.3.11

> RDBDocumentStore: cache update logic introduced for OAK-3566 should only be 
> used for NODES collection
> -
>
> Key: OAK-3657
> URL: https://issues.apache.org/jira/browse/OAK-3657
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.10, 1.2.8, 1.0.24
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.3.11
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3657) RDBDocumentStore: cache update logic introduced for OAK-3566 should only be used for NODES collection

2015-11-19 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3657:

Fix Version/s: 1.0.25
   1.2.9

> RDBDocumentStore: cache update logic introduced for OAK-3566 should only be 
> used for NODES collection
> -
>
> Key: OAK-3657
> URL: https://issues.apache.org/jira/browse/OAK-3657
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.10, 1.2.8, 1.0.24
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.3.11, 1.2.9, 1.0.25
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3657) RDBDocumentStore: cache update logic introduced for OAK-3566 should only be used for NODES collection

2015-11-19 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-3657.
-
Resolution: Fixed

trunk: http://svn.apache.org/r1715191
1.2: http://svn.apache.org/r1715204
1.0: http://svn.apache.org/r1715191


> RDBDocumentStore: cache update logic introduced for OAK-3566 should only be 
> used for NODES collection
> -
>
> Key: OAK-3657
> URL: https://issues.apache.org/jira/browse/OAK-3657
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: rdbmk
>Affects Versions: 1.3.10, 1.2.8, 1.0.24
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.3.11, 1.2.9, 1.0.25
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2948) Expose DefaultSyncHandler

2015-11-19 Thread Konrad Windszus (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013534#comment-15013534
 ] 

Konrad Windszus commented on OAK-2948:
--

@Nicolas Peltier: It is correct that currently you have to reimplement your own 
{{DefaultSyncConfigImpl.of(ConfigurationParameters params)}}.

> Expose DefaultSyncHandler
> -
>
> Key: OAK-2948
> URL: https://issues.apache.org/jira/browse/OAK-2948
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: auth-external
>Reporter: Konrad Windszus
> Fix For: 1.3.2, 1.2.7, 1.0.22
>
>
> We do have the use case of extending the user sync. Unfortunately 
> {{DefaultSyncHandler}} is not exposed, so if you want to change one single 
> aspect of the user synchronisation you have to copy over the code from the 
> {{DefaultSyncHandler}}. Would it be possible to make that class part of the 
> exposed classes, so that deriving your own class from that DefaultSyncHandler 
> is possible?
> Very often company LDAPs are not very standardized. In our case we face the 
> issue that the membership is listed in a user attribute rather than in a 
> group attribute.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2843) Broadcasting cache

2015-11-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013544#comment-15013544
 ] 

Thomas Mueller commented on OAK-2843:
-

I implemented a new broadcast algorithm "tcp". The configuration options are: 
"sendTo" (default: "localhost"), the space-separated list of hosts (IP addresses); 
"ports", a start and end port (defaults 9800 and 9810); and "key", the unique 
repository id. Example configuration:

{noformat}
launchpad configuration (escaped):
persistentCache="crx-quickstart/repository/cache,size\=1024,binary\=0,broadcast\=tcp:key
 123"

persistent cache URI (not escaped):
repo/cache,size=1024,broadcast=tcp:key hello;sendTo 192.168.0.1 
192.168.0.2;ports 9100 9200
{noformat}

Currently, a unique "key" needs to be manually configured for each cluster 
node. This key needs to be unique for the repository (each repository needs a 
separate key), to make sure cluster nodes only talk to other cluster nodes of 
the same repository. It would be much better to auto-configure this setting. 
The "key" should not be stored in the repository once and then re-used, but 
generated each time a new cluster node is added or removed, to ensure the key 
is unique even if the repository backend storage is copied. Probably the best 
place for that is in org.apache.jackrabbit.oak.plugins.document.ClusterView 
(part of DocumentDiscoveryLiteService).
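
For illustration only, an auto-generated key could simply be a random UUID embedded 
in the (unescaped) persistent cache URI, reusing the sendTo/ports values from the 
example above; a real implementation would derive and refresh the key from the 
cluster/discovery state rather than from a one-off random value.

{noformat}
import java.util.UUID;

// Sketch only: build the unescaped persistent cache URI with a generated key.
public class BroadcastKeyExample {

    public static void main(String[] args) {
        String key = UUID.randomUUID().toString();
        String uri = "repo/cache,size=1024,broadcast=tcp:key " + key
                + ";sendTo 192.168.0.1 192.168.0.2;ports 9100 9200";
        System.out.println(uri);
    }
}
{noformat}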

> Broadcasting cache
> --
>
> Key: OAK-2843
> URL: https://issues.apache.org/jira/browse/OAK-2843
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.3.11
>
>
> In a cluster environment, we could speed up reading if the cache(s) broadcast 
> data to other instances. This would avoid bottlenecks at the storage layer 
> (MongoDB, RDBMs).
> The configuration metadata (IP addresses and ports of where to send data to, 
> a unique identifier of the repository and the cluster nodes, possibly 
> encryption key) rarely changes and can be stored in the same place as we 
> store cluster metadata (cluster info collection). That way, in many cases no 
> manual configuration is needed. We could use TCP/IP and / or UDP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2539) SQL2 query not working with filter (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))

2015-11-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013557#comment-15013557
 ] 

Thomas Mueller commented on OAK-2539:
-

OAK-1617 is already set to fixed, so in theory this should resolve OAK-2539 as 
well; I will check.

> SQL2 query not working with filter (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> 
>
> Key: OAK-2539
> URL: https://issues.apache.org/jira/browse/OAK-2539
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Reporter: Calvin Wong
>Assignee: Davide Giannella
> Fix For: 1.4
>
>
> Create node /content/usergenerated/qtest with jcr:primaryType nt:unstructured.
> Add 2 String properties: stringa = "a", stringb = "b".
> Use query tool in CRX/DE to do SQL2 search:
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (CONTAINS(s.[stringb], 'b'))
> This search will not find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'x' OR 
> CONTAINS(s.[stringb], 'b'))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2472) Add support for atomic counters on cluster solutions

2015-11-19 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-2472:
--
Affects Version/s: 1.3.0

> Add support for atomic counters on cluster solutions
> 
>
> Key: OAK-2472
> URL: https://issues.apache.org/jira/browse/OAK-2472
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.3.0
>Reporter: Davide Giannella
>Assignee: Davide Giannella
>  Labels: scalability
> Fix For: 1.4
>
> Attachments: atomic-counter.md
>
>
> As of OAK-2220 we added support for atomic counters in a non-clustered 
> situation. 
> This ticket is about covering the clustered ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3656) Expose CommitHook as OSGi service

2015-11-19 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-3656:
--
Affects Version/s: (was: 1.310)
   1.3.10

> Expose CommitHook as OSGi service
> -
>
> Key: OAK-3656
> URL: https://issues.apache.org/jira/browse/OAK-3656
> Project: Jackrabbit Oak
>  Issue Type: Wish
>  Components: core
>Affects Versions: 1.3.10
>Reporter: Davide Giannella
>Assignee: Davide Giannella
> Fix For: 1.4
>
>
> In Oak we currently expose two commit hook types as OSGi services via a 
> Provider wrapper: IndexEditor and Editor.
> We don't do the same for ConflictHandler and CommitHook itself. 
> It would be nice to have those exposed as OSGi services so that an OSGi 
> repository could leverage them as @Reference.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3656) Expose CommitHook as OSGi service

2015-11-19 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella updated OAK-3656:
--
Fix Version/s: (was: 1.3.10)
   1.4

> Expose CommitHook as OSGi service
> -
>
> Key: OAK-3656
> URL: https://issues.apache.org/jira/browse/OAK-3656
> Project: Jackrabbit Oak
>  Issue Type: Wish
>  Components: core
>Affects Versions: 1.3.10
>Reporter: Davide Giannella
>Assignee: Davide Giannella
> Fix For: 1.4
>
>
> In Oak we currently expose two commit hook types as OSGi services via a 
> Provider wrapper: IndexEditor and Editor.
> We don't do the same for ConflictHandler and CommitHook itself. 
> It would be nice to have those exposed as OSGi services so that an OSGi 
> repository could leverage them as @Reference.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2843) Broadcasting cache

2015-11-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013552#comment-15013552
 ] 

Thomas Mueller commented on OAK-2843:
-

http://svn.apache.org/r1715177 (trunk); this includes a command-line tool to 
listen to broadcasts on the network: start BroadcastTest.main.

> Broadcasting cache
> --
>
> Key: OAK-2843
> URL: https://issues.apache.org/jira/browse/OAK-2843
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.3.11
>
>
> In a cluster environment, we could speed up reading if the cache(s) broadcast 
> data to other instances. This would avoid bottlenecks at the storage layer 
> (MongoDB, RDBMs).
> The configuration metadata (IP addresses and ports of where to send data to, 
> a unique identifier of the repository and the cluster nodes, possibly 
> encryption key) rarely changes and can be stored in the same place as we 
> store cluster metadata (cluster info collection). That way, in many cases no 
> manual configuration is needed. We could use TCP/IP and / or UDP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2539) SQL2 query not working with filter (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))

2015-11-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013582#comment-15013582
 ] 

Thomas Mueller edited comment on OAK-2539 at 11/19/15 2:06 PM:
---

With trunk, the debug log for this query (see below) indicates the problem is 
solved:

{noformat}
QueryEngineImpl Parsing JCR-SQL2 statement: SELECT * FROM [nt:base] AS s 
WHERE ISDESCENDANTNODE([/content/usergenerated/]) 
AND (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))
QueryEngineImpl Optimised query available. select [s].[jcr:primaryType] as 
[s.jcr:primaryType] 
from [nt:base] as [s] where ([s].[stringa] = 'a') 
and (isdescendantnode([s], [/content/usergenerated/])) 
union select [s].[jcr:primaryType] as [s.jcr:primaryType] 
from [nt:base] as [s] where (contains([s].[stringb], 'b')) 
and (isdescendantnode([s], [/content/usergenerated/]))
QueryEngineImpl Preparing: select [s].[jcr:primaryType] as [s.jcr:primaryType] 
from [nt:base] as [s] where (isdescendantnode([s], 
[/content/usergenerated/])) 
and ((contains([s].[stringb], 'b')) or ([s].[stringa] = 'a'))
QueryImpl cost using filter Filter(query=SELECT * FROM [nt:base] AS s 
WHERE ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'a' 
OR CONTAINS(s.[stringb], 'b')), path=/content/usergenerated//*)
QueryImpl cost for reference is Infinity
QueryImpl cost for property is Infinity
QueryImpl cost for nodeType is Infinity
QueryImpl cost for lucene-property is Infinity
QueryImpl cost for aggregate lucene is Infinity
QueryImpl cost for ordered is Infinity
QueryImpl cost for traverse is 2100.0
QueryEngineImpl actualCost: 2100.0 - costOverhead: 1.7976931348623157E308 - 
overallCost: 1.7976931348623157E308
QueryEngineImpl Preparing: select [s].[jcr:primaryType] as [s.jcr:primaryType] 
from [nt:base] as [s] where ([s].[stringa] = 'a') 
and (isdescendantnode([s], [/content/usergenerated/])) 
union select [s].[jcr:primaryType] as [s.jcr:primaryType] 
from [nt:base] as [s] where (contains([s].[stringb], 'b')) 
and (isdescendantnode([s], [/content/usergenerated/]))
QueryImpl cost using filter Filter(query=SELECT * FROM [nt:base] AS s 
WHERE ([s].[stringa] = 'a') and (isdescendantnode([s], 
[/content/usergenerated/])), 
path=/content/usergenerated//*, property=[stringa=[a]])
QueryImpl cost for reference is Infinity
QueryImpl cost for property is Infinity
QueryImpl cost for nodeType is Infinity
QueryImpl cost for lucene-property is Infinity
QueryImpl cost for aggregate lucene is Infinity
QueryImpl cost for ordered is Infinity
QueryImpl cost for traverse is 2100.0
QueryImpl cost using filter Filter(query=SELECT * FROM [nt:base] AS s 
WHERE (contains([s].[stringb], 'b')) and (isdescendantnode([s], 
[/content/usergenerated/])) 
fullText=stringb:"b", path=/content/usergenerated//*, property=[stringb=[is 
not null]])
QueryImpl cost for reference is Infinity
QueryImpl cost for property is Infinity
QueryImpl cost for nodeType is Infinity
QueryImpl cost for lucene-property is Infinity
QueryImpl cost for aggregate lucene is 198919.0
QueryImpl cost for ordered is Infinity
QueryImpl cost for traverse is Infinity
QueryEngineImpl actualCost: 201019.0 - costOverhead: 0.0 - overallCost: 201019.0
QueryEngineImpl Cheapest cost: 201019.0 - query: select [s].[jcr:primaryType] 
as [s.jcr:primaryType] 
from [nt:base] as [s] where ([s].[stringa] = 'a') 
and (isdescendantnode([s], [/content/usergenerated/])) 
union select [s].[jcr:primaryType] as [s.jcr:primaryType] 
from [nt:base] as [s] where (contains([s].[stringb], 'b')) 
and (isdescendantnode([s], [/content/usergenerated/]))
UnionQueryImpl query union plan [nt:base] as [s] /* traverse 
"/content/usergenerated//*" 
where ([s].[stringa] = 'a') and (isdescendantnode([s], 
[/content/usergenerated/])) */ 
union [nt:base] as [s] /* aggregate stringb:b ft:(stringb:"b") 
where (contains([s].[stringb], 'b')) 
and (isdescendantnode([s], [/content/usergenerated/])) */
QueryImpl query execute SELECT * FROM [nt:base] AS s WHERE ([s].[stringa] = 
'a') 
and (isdescendantnode([s], [/content/usergenerated/]))
QueryImpl query plan [nt:base] as [s] /* traverse "/content/usergenerated//*" 
where ([s].[stringa] = 'a') and (isdescendantnode([s], 
[/content/usergenerated/])) */
QueryImpl query execute SELECT * FROM [nt:base] AS s WHERE 
(contains([s].[stringb], 'b')) 
and (isdescendantnode([s], [/content/usergenerated/]))
QueryImpl query plan [nt:base] as [s] /* aggregate stringb:b ft:(stringb:"b") 
where (contains([s].[stringb], 'b')) 
and (isdescendantnode([s], [/content/usergenerated/])) */
{noformat}


was (Author: tmueller):
With trunk, the debug log for this query (see below) indicates that the cost for 
traversal is low in this case, i.e. for the original query (with "or"). 
The query engine should probably detect 

[jira] [Commented] (OAK-2539) SQL2 query not working with filter (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))

2015-11-19 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013596#comment-15013596
 ] 

Thomas Mueller commented on OAK-2539:
-

[~edivad], the problem seems to be fixed, right?

> SQL2 query not working with filter (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> 
>
> Key: OAK-2539
> URL: https://issues.apache.org/jira/browse/OAK-2539
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Reporter: Calvin Wong
>Assignee: Davide Giannella
> Fix For: 1.4
>
>
> Create node /content/usergenerated/qtest with jcr:primaryType nt:unstructured.
> Add 2 String properties: stringa = "a", stringb = "b".
> Use query tool in CRX/DE to do SQL2 search:
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (CONTAINS(s.[stringb], 'b'))
> This search will not find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'x' OR 
> CONTAINS(s.[stringb], 'b'))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2539) SQL2 query not working with filter (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))

2015-11-19 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013600#comment-15013600
 ] 

Davide Giannella commented on OAK-2539:
---

If it returns the right result set, it is :)

Looking at the logs, I can see that the query engine detected a condition in the 
original query that is better suited to a union, even though the cost of the first 
plan is lower (see the cost overhead)

{noformat}
QueryEngineImpl actualCost: 2100.0 - costOverhead: 1.7976931348623157E308 - 
overallCost: 1.7976931348623157E308
{noformat}

so on paper I'd say it's resolved.

> SQL2 query not working with filter (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> 
>
> Key: OAK-2539
> URL: https://issues.apache.org/jira/browse/OAK-2539
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Reporter: Calvin Wong
>Assignee: Davide Giannella
> Fix For: 1.4
>
>
> Create node /content/usergenerated/qtest with jcr:primaryType nt:unstructured.
> Add 2 String properties: stringa = "a", stringb = "b".
> Use query tool in CRX/DE to do SQL2 search:
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'a' OR 
> CONTAINS(s.[stringb], 'b'))
> This search will find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (CONTAINS(s.[stringb], 'b'))
> This search will not find qtest:
> SELECT * FROM [nt:base] AS s WHERE 
> ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'x' OR 
> CONTAINS(s.[stringb], 'b'))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)