[jira] [Assigned] (OAK-3503) Upgrade Maven Bundle Plugin to 3.0.0

2015-10-22 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari reassigned OAK-3503:
---

Assignee: Francesco Mari

> Upgrade Maven Bundle Plugin to 3.0.0
> 
>
> Key: OAK-3503
> URL: https://issues.apache.org/jira/browse/OAK-3503
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: parent
>Affects Versions: 1.3.7
>Reporter: Oliver Lietz
>Assignee: Francesco Mari
> Fix For: 1.3.9
>
> Attachments: OAK-3503.patch
>
>
> This solves a problem with {{Require-Capability}} header (OAK-3083):
> {{MANIFEST.MF}} with Maven Bundle Plugin {{2.5.3}}:
> {noformat}
> Manifest-Version: 1.0
> Bnd-LastModified: 1443377959783
> Build-Jdk: 1.7.0_51
> Built-By: amjain
> Bundle-Category: oak
> Bundle-Description: The goal of the Oak effort within the Apache Jackrab
>  bit™ project isto implement a scalable and performant hierarchica
>  l content repositoryfor use as the foundation of modern world-class
>   web sites and otherdemanding content applications.
> Bundle-DocURL: http://jackrabbit.apache.org/oak/
> Bundle-License: http://www.apache.org/licenses/LICENSE-2.0.txt
> Bundle-ManifestVersion: 2
> Bundle-Name: Oak Core
> Bundle-SymbolicName: org.apache.jackrabbit.oak-core
> Bundle-Vendor: The Apache Software Foundation
> Bundle-Version: 1.3.7
> Created-By: Apache Maven Bundle Plugin
> DynamicImport-Package: org.apache.felix.jaas.boot
> Embed-Transitive: true
> Export-Package: org.apache.jackrabbit.oak;version="1.1.0";uses:="javax.a
>  nnotation,javax.management,org.apache.jackrabbit.oak.api,org.apache.jac
>  krabbit.oak.plugins.index,org.apache.jackrabbit.oak.query,org.apache.ja
>  ckrabbit.oak.spi.commit,org.apache.jackrabbit.oak.spi.lifecycle,org.apa
>  che.jackrabbit.oak.spi.query,org.apache.jackrabbit.oak.spi.security,org
>  .apache.jackrabbit.oak.spi.state,org.apache.jackrabbit.oak.spi.whiteboa
>  rd",org.apache.jackrabbit.oak.api;version="2.1";uses:="com.google.commo
>  n.base,javax.annotation,javax.jcr,javax.security.auth.login",org.apache
>  .jackrabbit.oak.api.jmx;version="2.0.0";uses:="javax.annotation,javax.m
>  anagement.openmbean,org.apache.jackrabbit.oak.api,org.apache.jackrabbit
>  .oak.commons.jmx",org.apache.jackrabbit.oak.stats;version="1.1";uses:="
>  javax.annotation,javax.management.openmbean,org.apache.jackrabbit.api.s
>  tats,org.apache.jackrabbit.oak.api.jmx,org.apache.jackrabbit.oak.spi.wh
>  iteboard,org.apache.jackrabbit.stats,org.slf4j",org.apache.jackrabbit.o
>  ak.json;version="1.0";uses:="org.apache.jackrabbit.oak.api,org.apache.j
>  ackrabbit.oak.commons.json,org.apache.jackrabbit.oak.spi.state",org.apa
>  che.jackrabbit.oak.management;version="1.1.0";uses:="javax.annotation,j
>  avax.management.openmbean,org.apache.jackrabbit.oak.api.jmx,org.apache.
>  jackrabbit.oak.commons.jmx,org.apache.jackrabbit.oak.spi.whiteboard",or
>  g.apache.jackrabbit.oak.util;version="1.3.0";uses:="com.google.common.i
>  o,javax.annotation,javax.jcr,javax.management.openmbean,org.apache.jack
>  rabbit.oak.api,org.apache.jackrabbit.oak.api.jmx,org.apache.jackrabbit.
>  oak.namepath,org.apache.jackrabbit.oak.spi.state,org.apache.jackrabbit.
>  oak.spi.whiteboard,org.slf4j",org.apache.jackrabbit.oak.namepath;versio
>  n="2.0";uses:="javax.annotation,javax.jcr,javax.jcr.nodetype,org.apache
>  .jackrabbit.oak.api,org.apache.jackrabbit.oak.plugins.identifier,org.ap
>  ache.jackrabbit.oak.spi.state",org.apache.jackrabbit.oak.osgi;version="
>  2.0";uses:="javax.annotation,org.apache.jackrabbit.oak.spi.commit,org.a
>  pache.jackrabbit.oak.spi.whiteboard,org.osgi.framework,org.osgi.service
>  .component,org.osgi.util.tracker",org.apache.jackrabbit.oak.plugins.ato
>  mic;version="1.0";uses:="javax.annotation,org.apache.jackrabbit.oak.api
>  ,org.apache.jackrabbit.oak.spi.commit,org.apache.jackrabbit.oak.spi.sta
>  te",org.apache.jackrabbit.oak.plugins.backup;version="1.0";uses:="javax
>  .annotation,javax.management.openmbean,org.apache.jackrabbit.oak.api,or
>  g.apache.jackrabbit.oak.spi.state",org.apache.jackrabbit.oak.plugins.co
>  mmit;version="1.1.0";uses:="javax.annotation,org.apache.jackrabbit.oak.
>  api,org.apache.jackrabbit.oak.spi.commit,org.apache.jackrabbit.oak.spi.
>  state",org.apache.jackrabbit.oak.plugins.identifier;version="1.0";uses:
>  ="javax.annotation,org.apache.jackrabbit.oak.api,org.apache.jackrabbit.
>  oak.spi.state",org.apache.jackrabbit.oak.plugins.index;version="3.0.0";
>  uses:="javax.annotation,javax.jcr,org.apache.jackrabbit.oak.api,org.apa
>  che.jackrabbit.oak.spi.commit,org.apache.jackrabbit.oak.spi.state,org.a
>  pache.jackrabbit.oak.spi.whiteboard,org.apache.jackrabbit.oak.util",org
>  .apache.jackrabbit.oak.plugins.index.fulltext;version="1.0.0";uses:="ja
>  vax.annotation,org.apache.jackrabbit.oak.api",org.ap

[jira] [Updated] (OAK-3480) Query engine: faster cost calculation (take 2)

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-3480:

Fix Version/s: (was: 1.3.9)
   1.3.8

> Query engine: faster cost calculation (take 2)
> --
>
> Key: OAK-3480
> URL: https://issues.apache.org/jira/browse/OAK-3480
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.3.8
>
>
> OAK-2679 improves cost calculation; however, there is a small bug in the code 
> that prevents the use of getMinimumCost.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3480) Query engine: faster cost calculation (take 2)

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-3480.
-
Resolution: Fixed

> Query engine: faster cost calculation (take 2)
> --
>
> Key: OAK-3480
> URL: https://issues.apache.org/jira/browse/OAK-3480
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.3.8
>
>
> OAK-2679 improves cost calculation; however, there is a small bug in the code 
> that prevents the use of getMinimumCost.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3480) Query engine: faster cost calculation (take 2)

2015-10-22 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968702#comment-14968702
 ] 

Thomas Mueller commented on OAK-3480:
-

http://svn.apache.org/r1707553 (trunk, before the 1.3.8 release)

> Query engine: faster cost calculation (take 2)
> --
>
> Key: OAK-3480
> URL: https://issues.apache.org/jira/browse/OAK-3480
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.3.8
>
>
> OAK-2679 improves cost calculation; however, there is a small bug in the code 
> that prevents the use of getMinimumCost.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2733) Option to convert "like" queries to range queries

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2733:

Fix Version/s: (was: 1.3.9)

> Option to convert "like" queries to range queries
> -
>
> Key: OAK-2733
> URL: https://issues.apache.org/jira/browse/OAK-2733
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: performance
>
> Queries with "like" conditions of the form "x like 'abc%'" are currently 
> always converted to range queries. With Apache Lucene, using "like" in some 
> cases is a bit faster (but not much, according to our tests).
> Converting "like" to range queries should be disabled by default.
> Potential patch:
> {noformat}
> --- src/main/java/org/apache/jackrabbit/oak/query/ast/ComparisonImpl.java 
> (revision 1672070)
> +++ src/main/java/org/apache/jackrabbit/oak/query/ast/ComparisonImpl.java 
> (working copy)
> @@ -31,11 +31,21 @@
>  import org.apache.jackrabbit.oak.query.fulltext.LikePattern;
>  import org.apache.jackrabbit.oak.query.index.FilterImpl;
>  import org.apache.jackrabbit.oak.spi.query.PropertyValues;
> +import org.slf4j.Logger;
> +import org.slf4j.LoggerFactory;
>  
>  /**
>   * A comparison operation (including "like").
>   */
>  public class ComparisonImpl extends ConstraintImpl {
> +
> +static final Logger LOG = LoggerFactory.getLogger(ComparisonImpl.class);
> +
> +private final static boolean CONVERT_LIKE_TO_RANGE = 
> Boolean.getBoolean("oak.convertLikeToRange");
> +
> +static {
> +LOG.info("Converting like to range queries is " + 
> (CONVERT_LIKE_TO_RANGE ? "enabled" : "disabled"));
> +}
>  
>  private final DynamicOperandImpl operand1;
>  private final Operator operator;
> @@ -193,7 +203,7 @@
>  if (lowerBound.equals(upperBound)) {
>  // no wildcards
>  operand1.restrict(f, Operator.EQUAL, v);
> -} else if (operand1.supportsRangeConditions()) {
> +} else if (operand1.supportsRangeConditions() && 
> CONVERT_LIKE_TO_RANGE) {
>  if (lowerBound != null) {
>  PropertyValue pv = 
> PropertyValues.newString(lowerBound);
>  operand1.restrict(f, Operator.GREATER_OR_EQUAL, 
> pv);
> @@ -203,7 +213,7 @@
>  operand1.restrict(f, Operator.LESS_OR_EQUAL, pv);
>  }
>  } else {
> -// path conditions
> +// path conditions, or conversion is disabled
>  operand1.restrict(f, operator, v);
>  }
>  } else {
> {noformat}
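With the patch above applied, the conversion would become opt-in via the {{oak.convertLikeToRange}} system property (the name comes from the patch itself). A minimal sketch of enabling it, assuming the flag is read once at class-load time as in the patch:

{code}
// Minimal sketch, assuming the patch above is applied: the flag is read when
// ComparisonImpl is loaded, so it must be set before the repository starts.
public class EnableLikeToRangeConversion {
    public static void main(String[] args) {
        System.setProperty("oak.convertLikeToRange", "true");
        // ... construct and start the Oak repository after this point ...
    }
}
{code}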



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3451) OrderedIndexIT fails

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-3451.
-
Resolution: Won't Fix

We don't plan to work on the (synchronous) ordered index right now.

> OrderedIndexIT fails
> 
>
> Key: OAK-3451
> URL: https://issues.apache.org/jira/browse/OAK-3451
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.3.9
>
>
> This test fails on oak-jcr:
> {noformat}
> mvn -PintegrationTesting clean install
> oak2035(org.apache.jackrabbit.oak.jcr.OrderedIndexIT)  Time elapsed: 0.979 
> sec  <<< FAILURE!
> java.lang.AssertionError: both path and date failed to match. Expected:
> /content/n1412 - 2012-12-24T23:00:00.000-05:00. 
> Obtained: 
> /content/n1092, 2012-12-24T20:00:00.000-08:00
>   at org.junit.Assert.fail(Assert.java:93)
>   at 
> org.apache.jackrabbit.oak.jcr.OrderedIndexIT.assertRightOrder(OrderedIndexIT.java:232)
>   at 
> org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035(OrderedIndexIT.java:203)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2902) Code coverage

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2902:

Fix Version/s: (was: 1.4)

> Code coverage
> -
>
> Key: OAK-2902
> URL: https://issues.apache.org/jira/browse/OAK-2902
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: technical_debt
>
> We should have automated code coverage results, and then decide upon minimum 
> numbers we want to achieve (for example, initially 100% package or class 
> coverage). Once we have reached the goal, we can increase the minimum coverage on 
> a module-by-module basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3387) Enable NodeLocalNameTest tests

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-3387:

Fix Version/s: (was: 1.4)
   1.3.9

> Enable NodeLocalNameTest tests
> --
>
> Key: OAK-3387
> URL: https://issues.apache.org/jira/browse/OAK-3387
> Project: Jackrabbit Oak
>  Issue Type: Test
>  Components: jcr
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.3.9
>
>
> Enable the tests that were disabled in OAK-3265, once Jackrabbit is released.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-327) XPath 'eq' support and related

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-327:
---
Fix Version/s: (was: 1.4)

> XPath 'eq' support and related 
> ---
>
> Key: OAK-327
> URL: https://issues.apache.org/jira/browse/OAK-327
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: jcr, query
>Reporter: Alex Parvulescu
>Assignee: Thomas Mueller
>Priority: Minor
>
> Failing test SimpleQueryTest#testGeneralComparison.
> There is no support for the 'eq' comparison in XPath currently: "@text eq 'foo'" 
> fails to parse.
> Jackrabbit 2.x actually supports more than just 'eq': it also supports not 
> equal, and so on (see the XPath specification).
> This is not required by the JCR 1.0 XPath spec, but we might still want to 
> support it at some point for Jackrabbit 2.x compatibility. See also 
> http://www.day.com/specs/jcr/1.0/6.6.4.11_Comparison_Operators.html 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1642) Long size queries causes huge logs

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-1642:

Fix Version/s: (was: 1.4)

> Long size queries causes huge logs
> --
>
> Key: OAK-1642
> URL: https://issues.apache.org/jira/browse/OAK-1642
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Rishabh Maurya
>Assignee: Thomas Mueller
>Priority: Minor
>
> The query below causes 320 MB of logs per execution, showing the same warning 
> messages 3000 times:
> {code}
> /jcr:root/content/dam//element(*, dam:Asset) 
> [
> (@cq:tags = 'geometrixx-outdoors:apparel/shirt' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/apparel/shirt' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:apparel/shirt/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/apparel/shirt/%')) 
> or (@cq:tags = 'geometrixx-outdoors:apparel/gloves' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/apparel/gloves' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:apparel/gloves/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/apparel/gloves/%')) 
> or (@cq:tags = 'geometrixx-outdoors:apparel/glasses' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/apparel/glasses' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:apparel/glasses/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/apparel/glasses/%')) 
> or (@cq:tags = 'geometrixx-outdoors:apparel/coat' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/apparel/coat' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:apparel/coat/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/apparel/coat/%')) 
> or (@cq:tags = 'geometrixx-outdoors:apparel/hat' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/apparel/hat' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:apparel/hat/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/apparel/hat/%')) 
> or (@cq:tags = 'geometrixx-outdoors:apparel/pants' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/apparel/pants' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:apparel/pants/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/apparel/pants/%')) 
> or (@cq:tags = 'geometrixx-outdoors:apparel/helmet' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/apparel/helmet' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:apparel/helmet/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/apparel/helmet/%')) 
> or (@cq:tags = 'geometrixx-outdoors:apparel/shorts' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/apparel/shorts' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:apparel/shorts/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/apparel/shorts/%')) 
> or (@cq:tags = 'geometrixx-outdoors:apparel/footwear' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/apparel/footwear' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:apparel/footwear/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/apparel/footwear/%')) 
> or (@cq:tags = 'geometrixx-outdoors:apparel/pancho' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/apparel/pancho' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:apparel/pancho/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/apparel/pancho/%')) 
> or (@cq:tags = 'geometrixx-outdoors:apparel/scarf' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/apparel/scarf' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:apparel/scarf/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/apparel/scarf/%')) 
> or (@cq:tags = 'geometrixx-outdoors:apparel' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/apparel' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:apparel/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/apparel/%')) 
> or (@cq:tags = 'geometrixx-outdoors:gender/men' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/gender/men' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:gender/men/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/gender/men/%')) 
> or (@cq:tags = 'geometrixx-outdoors:gender/women' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/gender/women' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:gender/women/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/gender/women/%')) 
> or (@cq:tags = 'geometrixx-outdoors:gender/unisex' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/gender/unisex' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:gender/unisex/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/gender/unisex/%')) 
> or (@cq:tags = 'geometrixx-outdoors:gender' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/gender' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:gender/%')
> or jcr:like(@cq:tags, '/etc/tags/geometrixx-outdoors/gender/%')) 
> or (@cq:tags = 'geometrixx-outdoors:activity/running' or @cq:tags = 
> '/etc/tags/geometrixx-outdoors/activity/running' or jcr:like(@cq:tags, 
> 'geometrixx-outdoors:activity/running/%')
> or

[jira] [Resolved] (OAK-260) Avoid the "Turkish Locale Problem"

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-260.

Resolution: Incomplete

> Avoid the "Turkish Locale Problem"
> --
>
> Key: OAK-260
> URL: https://issues.apache.org/jira/browse/OAK-260
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, jcr
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.4
>
>
> We currently use String.toUpperCase() and String.toLowerCase(), in some 
> cases where it is not appropriate. When running with the Turkish profile, 
> this will not work as expected. See also 
> http://mattryall.net/blog/2009/02/the-infamous-turkish-locale-bug
> Problematic are String.toUpperCase() and String.toLowerCase(); 
> String.equalsIgnoreCase(..) isn't a problem.
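For illustration, a minimal, self-contained sketch of the problem and the usual fix of passing an explicit locale; the class name is illustrative and not part of the Oak codebase:

{code}
import java.util.Locale;

public class TurkishLocaleDemo {
    public static void main(String[] args) {
        // Simulate a JVM started with the Turkish default locale.
        Locale.setDefault(new Locale("tr", "TR"));

        // Locale-sensitive: 'I' lower-cases to the dotless 'ı' in Turkish.
        System.out.println("TITLE".toLowerCase());               // tıtle
        // Locale-independent: the usual fix for technical strings.
        System.out.println("TITLE".toLowerCase(Locale.ENGLISH)); // title
        // equalsIgnoreCase is locale-independent, hence not a problem.
        System.out.println("TITLE".equalsIgnoreCase("title"));   // true
    }
}
{code}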



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2466) DataStoreBlobStore: chunk ids should not contain the size

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2466:

Fix Version/s: (was: 1.4)

> DataStoreBlobStore: chunk ids should not contain the size
> -
>
> Key: OAK-2466
> URL: https://issues.apache.org/jira/browse/OAK-2466
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: datastore, performance
>
> The blob store garbage collection (data store garbage collection) uses the 
> chunk ids to identify binaries to be deleted. The blob ids contain the size 
> now (#), and the blob id is currently equal to the chunk 
> id.
> It would be more efficient to _not_ use the size, and instead just use the 
> content hash, for the chunk ids. That way, enumerating the entries that are 
> in the store is potentially faster. Also, it allows us to change the blob id 
> in the future, for example add more information to it (for example the 
> creation time, or the first few bytes of the content) if we ever want to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1304) createQuery(query,Query.JCR_JQOM) is returning QueryImpl instead of QueryObjectModel

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-1304:

Fix Version/s: (was: 1.4)

> createQuery(query,Query.JCR_JQOM) is returning QueryImpl  instead of 
> QueryObjectModel
> -
>
> Key: OAK-1304
> URL: https://issues.apache.org/jira/browse/OAK-1304
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: jcr
>Reporter: Vijay Kumar j
>Assignee: Thomas Mueller
>Priority: Minor
>
> createQuery(query, Query.JCR_JQOM) is returning QueryImpl instead of 
> QueryObjectModel.
> According to the spec, the QueryManager implementation should return an instance 
> of QueryObjectModel when Query.JCR_JQOM is passed as the language.
> http://www.day.com/specs/jcr/2.0/6_Query.html#6.9%20Query%20Object
> http://www.day.com/maven/javax.jcr/javadocs/jcr-2.0/javax/jcr/query/QueryManager.html
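For illustration, a minimal sketch of the calling pattern the issue refers to; with the current behaviour described above, the cast would fail with a ClassCastException:

{code}
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.qom.QueryObjectModel;

public class JqomQueryExample {
    static QueryObjectModel createJqomQuery(Session session, String statement)
            throws RepositoryException {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query query = qm.createQuery(statement, Query.JCR_JQOM);
        // Expected per the spec when the language is Query.JCR_JQOM;
        // currently an Oak QueryImpl is returned instead.
        return (QueryObjectModel) query;
    }
}
{code}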



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1871) Support multi-column property indexes

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-1871:

Fix Version/s: (was: 1.4)

> Support multi-column property indexes
> -
>
> Key: OAK-1871
> URL: https://issues.apache.org/jira/browse/OAK-1871
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
>
> Currently, all property indexes are single-column. To speed up some use 
> cases, the property index should support multiple columns. Example use case: 
> Property "size" with low cardinality (low number of distinct values, for 
> example "S", "M", "L", "XL"). Property "color" with low number of cardinality 
> ("white", "black", "red",...). The query condition is "where size = 'L' and 
> color = 'white'". The number of matching nodes is small.
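For illustration, the example use case expressed as a JCR-SQL2 query; this is a sketch, and the node type and property names are taken from the example above rather than from an actual index definition:

{code}
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryResult;

public class MultiColumnIndexExample {
    // Both properties have low cardinality, but the combination is selective.
    // A multi-column property index on (size, color) could serve this query
    // directly instead of scanning a large single-column candidate set.
    static QueryResult query(Session session) throws RepositoryException {
        String sql2 = "SELECT * FROM [nt:unstructured] AS n"
                + " WHERE n.[size] = 'L' AND n.[color] = 'white'";
        return session.getWorkspace().getQueryManager()
                .createQuery(sql2, Query.JCR_SQL2)
                .execute();
    }
}
{code}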



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-862) Aggregate (count, group by) queries

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-862:
---
Fix Version/s: (was: 1.4)

> Aggregate (count, group by) queries
> ---
>
> Key: OAK-862
> URL: https://issues.apache.org/jira/browse/OAK-862
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: sakshi arora
>Assignee: Thomas Mueller
>Priority: Minor
>
> A 'group by' query to get the count (frequency) for each uid (which could be a 
> number or a string). This would extend to 'group by' on multiple fields, with 
> the usual predicates ('where', already available).
> The use case involves frequency-based summary charts, as charts are mostly 
> frequency based, e.g. on duration, category, or date/month/time/year.
> The data collection could range from live data to daily scheduled 
> synchronization.
> Required efficiency: queries should be pretty fast, as these charts are usually 
> located on dashboards (which are the home pages of most sites).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1150) NodeType index: don't index all primary and mixin types

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-1150:

Fix Version/s: (was: 1.4)

> NodeType index: don't index all primary and mixin types
> ---
>
> Key: OAK-1150
> URL: https://issues.apache.org/jira/browse/OAK-1150
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>
> Currently, the nodetype index indexes all primary types and mixin types 
> (including nt:base I think).
> This results in many nodes in this index, which unnecessarily increases the 
> repository size, but doesn't really help executing queries (running a query 
> to get all nt:base nodes doesn't benefit much from using the nodetype index).
> Not indexing these types should also help reduce writes when updating the index, 
> for example for OAK-1099.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-1910) The query engine cost calculation is incorrect

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-1910.
-
Resolution: Not A Problem

> The query engine cost calculation is incorrect
> --
>
> Key: OAK-1910
> URL: https://issues.apache.org/jira/browse/OAK-1910
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.4
>
>
> The cost calculation formula for the AdvancedQueryIndex doesn't take the cost 
> of loading a node (from the repository) into account. It currently uses:
> {noformat}
> double c = p.getCostPerExecution() + entryCount * p.getCostPerEntry();
> {noformat}
> However, the cost per entry is the cost of the index, not the cost of the 
> repository. It should probably be:
> {noformat}
> double c = p.getCostPerExecution() + entryCount * (1 + p.getCostPerEntry());
> {noformat}
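For illustration, a purely hypothetical comparison of the two formulas with made-up numbers (costPerExecution = 10, costPerEntry = 0.2, entryCount = 1000):

{code}
public class CostFormulaComparison {
    public static void main(String[] args) {
        double costPerExecution = 10;
        double costPerEntry = 0.2;
        long entryCount = 1000;

        // current formula: only the index cost per entry
        double current = costPerExecution + entryCount * costPerEntry;          // 210.0
        // proposed formula: plus a fixed cost of 1 per entry for loading the node
        double proposed = costPerExecution + entryCount * (1 + costPerEntry);   // 1210.0

        System.out.println(current + " vs " + proposed);
    }
}
{code}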



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1930) New method RangeIterator.getSize(int max)

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-1930:

Fix Version/s: (was: 1.4)

> New method RangeIterator.getSize(int max)
> -
>
> Key: OAK-1930
> URL: https://issues.apache.org/jira/browse/OAK-1930
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core, jcr, query
>Affects Versions: 1.0
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
>
> The method RangeIterator.getSize() is part of the JCR API, and returns the 
> number of items, but can also return -1 if not known.
> Currently, Oak doesn't return -1, but counts the items. This is slow 
> (potentially very slow) if there are many items, for example, in a query 
> result.
> I propose to add a new method RangeIterator.getSize(long max) that limits how 
> many entries are counted. That way, an application can use a reasonable limit, 
> and Oak doesn't need to count everything.
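For illustration, a sketch of what the proposed method could look like; this is not an existing JCR or Oak API, and the interface name is purely hypothetical:

{code}
import javax.jcr.RangeIterator;

interface SizeLimitedRangeIterator extends RangeIterator {

    /**
     * Count at most {@code max} entries.
     *
     * @param max the maximum number of entries to count
     * @return the number of entries if it is at most {@code max},
     *         otherwise -1 (meaning "more than max")
     */
    long getSize(long max);
}

// Usage idea: long size = it.getSize(1000);
//             String label = size < 0 ? "1000+ results" : size + " results";
{code}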



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-1910) The query engine cost calculation is incorrect

2015-10-22 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968718#comment-14968718
 ] 

Thomas Mueller commented on OAK-1910:
-

Thinking about it, sometimes loading the node is not needed, so the cost 
calculation is OK. It is the responsibility of the index implementation to take 
the cost of loading a node into account if needed.

> The query engine cost calculation is incorrect
> --
>
> Key: OAK-1910
> URL: https://issues.apache.org/jira/browse/OAK-1910
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
> Fix For: 1.4
>
>
> The cost calculation formula for the AdvancedQueryIndex doesn't take the cost 
> of loading a node (from the repository) into account. It currently uses:
> {noformat}
> double c = p.getCostPerExecution() + entryCount * p.getCostPerEntry();
> {noformat}
> However, the cost per entry is the cost of the index, not the cost of the 
> repository. It should probably be:
> {noformat}
> double c = p.getCostPerExecution() + entryCount * (1 + p.getCostPerEntry());
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1556) Document and test additional BlobStore contracts

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-1556:

Fix Version/s: (was: 1.4)

> Document and test additional BlobStore contracts
> 
>
> Key: OAK-1556
> URL: https://issues.apache.org/jira/browse/OAK-1556
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
>  Labels: api, documentation, test
>
> The original BlobStore implementations support an additional contract, which 
> is tested, but which applications can't rely on so far. The contract is that 
> concatenating multiple blobIds yields a valid blobId, and means the binaries 
> are concatenated.
> transfers. Depending on the backend, this can speed up transfers quite a bit. 
> Also, it allows new use cases, for example "resume upload" without having to 
> re-upload or stream the existing binary. 
> The DataStore implementations don't support those use cases. Now, with the 
> DataStoreBlobStore compatibility wrapper, this contract can't be supported by 
> all BlobStore implementations. That's fine. However, the tests against the 
> other BlobStores should still test this contract.
> I will add a new marker interface "ChunkingBlobStore" so the unit tests can 
> verify the contract.
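For illustration, a sketch of how a unit test could verify the contract; the {{writeBlob}}/{{getInputStream}} method names are assumed from the BlobStore API, and only stores implementing the proposed {{ChunkingBlobStore}} marker interface would be expected to pass:

{code}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.apache.commons.io.IOUtils;
import org.apache.jackrabbit.oak.spi.blob.BlobStore;

import static org.junit.Assert.assertEquals;

public class ConcatenationContractCheck {

    static void assertConcatenationContract(BlobStore store) throws IOException {
        String id1 = store.writeBlob(stream("hello "));
        String id2 = store.writeBlob(stream("world"));
        // The contract: concatenating two blob ids yields a valid blob id that
        // resolves to the concatenation of the two binaries.
        try (InputStream in = store.getInputStream(id1 + id2)) {
            assertEquals("hello world", IOUtils.toString(in, StandardCharsets.UTF_8));
        }
    }

    private static InputStream stream(String s) {
        return new ByteArrayInputStream(s.getBytes(StandardCharsets.UTF_8));
    }
}
{code}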



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1571) OSGi Configuration for Query Limits

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-1571:

Fix Version/s: (was: 1.4)

> OSGi Configuration for Query Limits
> ---
>
> Key: OAK-1571
> URL: https://issues.apache.org/jira/browse/OAK-1571
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
>  Labels: configuration
> Attachments: OAK-1571.patch
>
>
> In OAK-1395 we added limits for long-running queries. The limits can be 
> changed with system properties; now we should make the settings configurable 
> via OSGi.
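For illustration only, since the actual property names were introduced in OAK-1395 and are not repeated here: a sketch with a placeholder property name, showing that such system properties have to be set before the query engine reads them:

{code}
public class QueryLimitSetupSketch {
    public static void main(String[] args) {
        // "oak.query.someLimit" is a placeholder, not a real Oak property name.
        // System properties must be set before the repository is constructed.
        System.setProperty("oak.query.someLimit", "100000");
        // ... build and start the repository afterwards ...
    }
}
{code}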



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3534) Endless reindexing of async if no provider

2015-10-22 Thread Davide Giannella (JIRA)
Davide Giannella created OAK-3534:
-

 Summary: Endless reindexing of async if no provider
 Key: OAK-3534
 URL: https://issues.apache.org/jira/browse/OAK-3534
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, query
Affects Versions: 1.3.8
Reporter: Davide Giannella


Placeholder issue for the moment. Requires more investigation and a
test case.

It seems that if an index definition has {{reindex=true}} but no index
provider is able to serve the index type, the re-indexing process will
retry endlessly.

Possible way to reproduce

- create an initial content with an index definition of type lucene
  and reindex=true
- start the repository without the lucene IndexProvider or having it
  disabled.
- see in the logs that the reindex will always re-index from scratch
  as there are no checkpoints.

Should the reindex fail and stop after some attempts?





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-1571) OSGi Configuration for Query Limits

2015-10-22 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968731#comment-14968731
 ] 

Thomas Mueller commented on OAK-1571:
-

Clarification: the limits are configurable via system properties right now, but 
not yet via OSGi.

The configuration can be changed at runtime via JMX, and doing so also affects 
queries that are currently running.

> OSGi Configuration for Query Limits
> ---
>
> Key: OAK-1571
> URL: https://issues.apache.org/jira/browse/OAK-1571
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
>  Labels: configuration
> Attachments: OAK-1571.patch
>
>
> In OAK-1395 we added limits for long-running queries. The limits can be 
> changed with system properties; now we should make the settings configurable 
> via OSGi.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3140) DataStore / BlobStore: add a method to pass a "type" when writing

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-3140:

Fix Version/s: (was: 1.3.9)

> DataStore / BlobStore: add a method to pass a "type" when writing
> -
>
> Key: OAK-3140
> URL: https://issues.apache.org/jira/browse/OAK-3140
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: blob
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: performance
>
> Currently, the BlobStore interface has a method "String writeBlob(InputStream 
> in)". This issue is about adding a new method "String writeBlob(String type, 
> InputStream in)", for the following reasons (in no particular order):
> * Store some binaries (for example Lucene index files) in a different place, 
> in order to safely and quickly run garbage collection just on those files.
> * Store some binaries in a slow, some in a fast storage or location.
> * Disable calculating the content hash (de-duplication) for some binaries.
> * Store some binaries in a shared storage (for fast cross-repository 
> copying), and some in local storage.
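For illustration, a sketch of how the proposed overload could look; the single-argument method is quoted from the description above, while the typed overload and the example type values are hypothetical:

{code}
import java.io.IOException;
import java.io.InputStream;

interface TypedBlobStoreSketch {

    /** Existing method: content-addressed write without any hint. */
    String writeBlob(InputStream in) throws IOException;

    /**
     * Proposed method: the "type" hint lets an implementation pick a different
     * location, skip de-duplication, or choose shared vs. local storage
     * (e.g. a hypothetical "lucene-index" vs. "default" type).
     */
    String writeBlob(String type, InputStream in) throws IOException;
}
{code}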



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2037) Define standards for plan output

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2037:

Fix Version/s: (was: 1.3.9)

> Define standards for plan output
> 
>
> Key: OAK-2037
> URL: https://issues.apache.org/jira/browse/OAK-2037
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Justin Edelson
>Assignee: Thomas Mueller
>Priority: Minor
>  Labels: tooling
>
> Currently, the syntax of the plan output is chaotic, as it varies 
> significantly from index to index. While some of this is expected (each 
> index type has different data to output), Oak should provide some 
> standards for how a plan appears.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2686) Persistent cache: log activity and timing data, and possible optimizations

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2686:

Fix Version/s: (was: 1.3.9)

> Persistent cache: log activity and timing data, and possible optimizations
> --
>
> Key: OAK-2686
> URL: https://issues.apache.org/jira/browse/OAK-2686
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: tooling
>
> The persistent cache most likely reduces performance in some use cases, but 
> currently it's hard to find out whether that's the case or not.
> Activity should be captured (and logged with debug level) if possible, for 
> example writing, reading, writing in the foreground / background, opening and 
> closing, switching the generation, moving entries from old to new generation.
> Adding entries to the cache could be completely decoupled from the foreground 
> thread, if they are added to the persistent cache in a separate thread.
> It might be better to only write entries if they were accessed often. To do 
> this, entries could be put in the persistent cache once they are evicted from 
> the in-memory cache, instead of when they are added to the cache. If that's 
> done, we would maintain some data (for example access count) on which we can 
> filter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2761) Persistent cache: add data in a different thread

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2761:

Fix Version/s: (was: 1.3.9)

> Persistent cache: add data in a different thread
> 
>
> Key: OAK-2761
> URL: https://issues.apache.org/jira/browse/OAK-2761
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: cache, core, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: resilience
>
> The persistent cache usually stores data in a background thread, but 
> sometimes (if a lot of data is added quickly) the foreground thread is 
> blocked.
> Even worse, switching the cache file can happen in a foreground thread, with 
> the following stack trace.
> {noformat}
> "127.0.0.1 [1428931262206] POST /bin/replicate.json HTTP/1.1" prio=5 
> tid=0x7fe5df819800 nid=0x9907 runnable [0x000113fc4000]
>java.lang.Thread.State: RUNNABLE
> ...
>   at org.h2.mvstore.MVStoreTool.compact(MVStoreTool.java:404)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1.closeStore(PersistentCache.java:213)
>   - locked <0x000782483050> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache$1)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache.switchGenerationIfNeeded(PersistentCache.java:350)
>   - locked <0x000782455710> (a 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.PersistentCache)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.write(NodeCache.java:85)
>   at 
> org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.put(NodeCache.java:130)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.applyChanges(DocumentNodeStore.java:1060)
>   at 
> org.apache.jackrabbit.oak.plugins.document.Commit.applyToCache(Commit.java:599)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.afterTrunkCommit(CommitQueue.java:127)
>   - locked <0x000781890788> (a 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue)
>   at 
> org.apache.jackrabbit.oak.plugins.document.CommitQueue.done(CommitQueue.java:83)
>   at 
> org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.done(DocumentNodeStore.java:637)
> {noformat}
> To avoid blocking the foreground thread, one solution is to store all data in 
> a separate thread. If too much data is added, then some of the data is not 
> stored; preferably the data that was not referenced a lot, and/or old 
> revisions of documents (if newer revisions are available).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2745) PersistentCache should rely on eviction callback to add entry to the persistent cache

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-2745:

Fix Version/s: (was: 1.3.9)

> PersistentCache should rely on eviction callback to add entry to the 
> persistent cache
> -
>
> Key: OAK-2745
> URL: https://issues.apache.org/jira/browse/OAK-2745
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Chetan Mehrotra
>Assignee: Thomas Mueller
>  Labels: performance
>
> Currently, when the PersistentCache is enabled, any put results in the entry 
> being added to the in-memory cache and also to the backing persistent cache. 
> While adding the entry to the persistent cache, a slight serialization 
> overhead has to be paid.
> To avoid this overhead at read/write time on the in-memory cache, it would be 
> better to move the logic to a separate thread. PersistentCache can make use of 
> the Guava cache eviction callback and then add the entry to the backing 
> persistent store.
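For illustration, a minimal sketch of the eviction-callback idea using the Guava cache API; the {{PersistentStore}} interface and the key/value types are placeholders, not the actual Oak PersistentCache code:

{code}
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalListeners;

public class EvictionBackedCacheSketch {

    /** Placeholder for the backing persistent cache. */
    interface PersistentStore {
        void write(String key, Object value);
    }

    static Cache<String, Object> build(PersistentStore persistentStore) {
        Executor writer = Executors.newSingleThreadExecutor();
        RemovalListener<String, Object> onEviction = notification -> {
            // Only write entries that were actually evicted (not explicitly removed),
            // so the serialization cost is paid off the read/write path.
            if (notification.wasEvicted()) {
                persistentStore.write(notification.getKey(), notification.getValue());
            }
        };
        return CacheBuilder.newBuilder()
                .maximumSize(10_000)
                .removalListener(RemovalListeners.asynchronous(onEviction, writer))
                .build();
    }
}
{code}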



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1744) GQL queries with "jcr:primaryType='x'" don't use the node type index

2015-10-22 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-1744:

Fix Version/s: (was: 1.3.9)

> GQL queries with "jcr:primaryType='x'" don't use the node type index
> 
>
> Key: OAK-1744
> URL: https://issues.apache.org/jira/browse/OAK-1744
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Minor
>
> GQL queries (org.apache.jackrabbit.commons.query.GQL) with type restrictions 
> are converted to the XPath condition "jcr:primaryType = 'x'". This condition 
> is not currently interpreted as a regular node type restriction in the query 
> engine or the node type index, as one would expect. 
> Such restrictions could still be processed efficiently using the property 
> index on "jcr:primaryType", but if that one is disabled (by setting the cost 
> manually very high, as it is done now), then such queries don't use the 
> expected index.
> I'm not sure yet where this should be best fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3535) Update jackrabbit version to 2.11.2

2015-10-22 Thread Joel Richard (JIRA)
Joel Richard created OAK-3535:
-

 Summary: Update jackrabbit version to 2.11.2
 Key: OAK-3535
 URL: https://issues.apache.org/jira/browse/OAK-3535
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Joel Richard
 Fix For: 1.3.9






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3215) Solr test often fail with No such core: oak

2015-10-22 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968736#comment-14968736
 ] 

Davide Giannella commented on OAK-3215:
---

I was looking at the number of issues we have on Jenkins that are probably 
due to the lack of resources on the Jenkins machine. An idea you could try: 
having a local, underpowered VM with Jenkins in it could probably help.

> Solr test often fail with  No such core: oak
> 
>
> Key: OAK-3215
> URL: https://issues.apache.org/jira/browse/OAK-3215
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: solr
>Reporter: Chetan Mehrotra
>Assignee: Tommaso Teofili
>Priority: Minor
>  Labels: CI
> Fix For: 1.3.9
>
>
> Often all tests from the oak-solr module fail, and in all such failures the 
> following error is reported: 
> {noformat}
> org.apache.solr.common.SolrException: No such core: oak
>   at 
> org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:112)
>   at 
> org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:118)
>   at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
>   at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102)
>   at 
> org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest.testQueryOnIgnoredExistingProperty(SolrQueryIndexTest.java:330)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> {noformat}
> Most recent failure in 
> https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/325/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3536) Indexing with Lucene and copy-on-read generate too much garbage in the BlobStore

2015-10-22 Thread Francesco Mari (JIRA)
Francesco Mari created OAK-3536:
---

 Summary: Indexing with Lucene and copy-on-read generate too much 
garbage in the BlobStore
 Key: OAK-3536
 URL: https://issues.apache.org/jira/browse/OAK-3536
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: lucene
Affects Versions: 1.3.9
Reporter: Francesco Mari
Priority: Critical


The copy-on-read strategy when using Lucene indexing performs too many copies 
of the index files from the filesystem to the repository. Every copy discards 
the previously stored binary, which sits there as garbage until the binary 
garbage collection kicks in. When the load on the system is particularly 
intense, this behaviour makes the repository grow at an unreasonably high pace. 

I spotted this on a system where some content is generated every day at a 
specific time. The content generation process creates approx. 6 million new 
nodes, where each node contains 5 properties with small, random string values. 
Nodes were saved in batches of 1000 nodes each. At the end of the content 
generation process, the nodes are deleted to deliberately generate garbage in 
the Segment Store. This is part of a testing effort to assess the efficiency of 
the online compaction.

I was never able to complete the tests because the system ran out of disk space 
due to a lot of unused binary values. When debugging the system, on a 400 GB 
(full) disk, the segments containing nodes and property values occupied approx. 
3 GB. The rest of the space was occupied by binary values in the form of bulk 
segments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3534) Endless reindexing of async if no provider

2015-10-22 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968746#comment-14968746
 ] 

Francesco Mari edited comment on OAK-3534 at 10/22/15 8:01 AM:
---

Maybe the indexes should just be ignored during the reindexing instead, if a 
suitable {{IndexEditor}} is not installed in the system. 


was (Author: frm):
Maybe the indexes should just be ignored during the reindexing instead, if a 
suitable {{IndexProvider}} is not installed in the system. 

> Endless reindexing of async if no provider
> --
>
> Key: OAK-3534
> URL: https://issues.apache.org/jira/browse/OAK-3534
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Affects Versions: 1.3.8
>Reporter: Davide Giannella
>
> Placeholder issue for the moment. Requires more investigation and a
> test case.
> It seems that if an index definition has {{reindex=true}} but no index
> provider is able to serve the index type, the re-indexing process will
> retry endlessly.
> Possible way to reproduce
> - create an initial content with an index definition of type lucene
>   and reindex=true
> - start the repository without the lucene IndexProvider or having it
>   disabled.
> - see in the logs that the reindex will always re-index from scratch
>   as there are no checkpoints.
> Should the reindex fail and stop after some attempts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3534) Endless reindexing of async if no provider

2015-10-22 Thread Francesco Mari (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968746#comment-14968746
 ] 

Francesco Mari commented on OAK-3534:
-

Maybe the indexes should just be ignored during the reindexing instead, if a 
suitable {{IndexProvider}} is not installed in the system. 

> Endless reindexing of async if no provider
> --
>
> Key: OAK-3534
> URL: https://issues.apache.org/jira/browse/OAK-3534
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Affects Versions: 1.3.8
>Reporter: Davide Giannella
>
> Placeholder issue for the moment. Requires more investigation and a
> test case.
> It seems that if an index definition has {{reindex=true}} but no index
> provider is able to serve the index type, the re-indexing process will
> retry endlessly.
> Possible way to reproduce
> - create an initial content with an index definition of type lucene
>   and reindex=true
> - start the repository without the lucene IndexProvider or having it
>   disabled.
> - see in the logs that the reindex will always re-index from scratch
>   as there are no checkpoints.
> Should the reindex fail and stop after some attempts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2660) Wrong result when using multiple OR conditions, with a Lucene full-text index

2015-10-22 Thread Davide Giannella (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davide Giannella resolved OAK-2660.
---
Resolution: Fixed

Consider this resolved, as with OAK-1617 enabled the test passes successfully.

https://github.com/apache/jackrabbit-oak/blob/512c8bad4064f5bd392ee530990107f292bcac95/oak-lucene/src/test/java/org/apache/jackrabbit/oak/plugins/index/lucene/LuceneIndexQueryTestSQL2Optimisation.java

> Wrong result when using multiple OR conditions, with a Lucene full-text index
> -
>
> Key: OAK-2660
> URL: https://issues.apache.org/jira/browse/OAK-2660
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: query
>Affects Versions: 1.1.7
>Reporter: Davide Giannella
>Assignee: Davide Giannella
> Fix For: 1.3.9
>
>
> The following query returns the wrong result:
> {code}
> SELECT * 
> FROM [nt:unstructured] AS c
>  WHERE ( c.[name] = 'yes' 
> OR CONTAINS(c.[surname], 'yes') 
> OR CONTAINS(c.[description], 'yes') ) 
> AND ISDESCENDANTNODE(c, '/content') 
> ORDER BY added DESC 
> {code}
> There is a Lucene property index for the following properties: {{name, 
> surname, description, added}}.
> Internally, the FilterImpl passed to the indexes does not contain any 
> conditions except the order and path restrictions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3536) Indexing with Lucene and copy-on-read generate too much garbage in the BlobStore

2015-10-22 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-3536:

Fix Version/s: 1.3.9

> Indexing with Lucene and copy-on-read generate too much garbage in the 
> BlobStore
> 
>
> Key: OAK-3536
> URL: https://issues.apache.org/jira/browse/OAK-3536
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.3.9
>Reporter: Francesco Mari
>Priority: Critical
> Fix For: 1.4
>
>
> The copy-on-read strategy when using Lucene indexing performs too many copies 
> of the index files from the filesystem to the repository. Every copy discards 
> the previously stored binary, which sits there as garbage until the binary 
> garbage collection kicks in. When the load on the system is particularly 
> intense, this behaviour makes the repository grow at an unreasonably high 
> pace. 
> I spotted this on a system where some content is generated every day at a 
> specific time. The content generation process creates approx. 6 million new 
> nodes, where each node contains 5 properties with small, random string 
> values. Nodes were saved in batches of 1000 nodes each. At the end of the 
> content generation process, the nodes are deleted to deliberately generate 
> garbage in the Segment Store. This is part of a testing effort to assess the 
> efficiency of the online compaction.
> I was never able to complete the tests because the system ran out of disk 
> space due to a lot of unused binary values. When debugging the system, on a 
> 400 GB (full) disk, the segments containing nodes and property values 
> occupied approx. 3 GB. The rest of the space was occupied by binary values in 
> the form of bulk segments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3536) Indexing with Lucene and copy-on-read generate too much garbage in the BlobStore

2015-10-22 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-3536:

Fix Version/s: (was: 1.3.9)
   1.4

> Indexing with Lucene and copy-on-read generate too much garbage in the 
> BlobStore
> 
>
> Key: OAK-3536
> URL: https://issues.apache.org/jira/browse/OAK-3536
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Affects Versions: 1.3.9
>Reporter: Francesco Mari
>Priority: Critical
> Fix For: 1.4
>
>
> The copy-on-read strategy when using Lucene indexing performs too many copies 
> of the index files from the filesystem to the repository. Every copy discards 
> the previously stored binary, which sits there as garbage until the binary 
> garbage collection kicks in. When the load on the system is particularly 
> intense, this behaviour makes the repository grow at an unreasonably high 
> pace. 
> I spotted this on a system where some content is generated every day at a 
> specific time. The content generation process creates approx. 6 million new 
> nodes, where each node contains 5 properties with small, random string 
> values. Nodes were saved in batches of 1000 nodes each. At the end of the 
> content generation process, the nodes are deleted to deliberately generate 
> garbage in the Segment Store. This is part of a testing effort to assess the 
> efficiency of the online compaction.
> I was never able to complete the tests because the system ran out of disk 
> space due to a lot of unused binary values. When debugging the system, on a 
> 400 GB (full) disk, the segments containing nodes and property values 
> occupied approx. 3 GB. The rest of the space was occupied by binary values in 
> the form of bulk segments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2909) Review and improve Oak and Jcr repository setup

2015-10-22 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-2909:

Fix Version/s: (was: 1.4)

> Review and improve Oak and Jcr repository setup
> ---
>
> Key: OAK-2909
> URL: https://issues.apache.org/jira/browse/OAK-2909
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, jcr
>Reporter: Michael Dürig
>Assignee: Francesco Mari
>  Labels: modularization, technical_debt
>
> There is the {{Oak}} and {{Jcr}} builder classes for setting up Oak and Jcr 
> repositories. Both builders don't have clear semantics regarding the life 
> cycle of the individual components they register. On top of that the 
> requirements regarding those life cycles differ depending on whether the 
> individual components run within an OSGi container or not. In the former case 
> the container would already manage the life cycle so the builder should not. 
> IMO we should specify that the builders are only to be used for non-OSGi 
> deployments and have them manage the life cycles of the components they 
> instantiate. OTOH, for OSGi deployments we should leverage OSGi subsystems to 
> properly set things up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2932) Limit the scope of exported packages

2015-10-22 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari updated OAK-2932:

Fix Version/s: (was: 1.4)

> Limit the scope of exported packages
> 
>
> Key: OAK-2932
> URL: https://issues.apache.org/jira/browse/OAK-2932
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>Reporter: Michael Dürig
>Assignee: Francesco Mari
>  Labels: modularization, osgi, technical_debt
>
> Oak currently exports *a lot* of packages even though those are only used by 
> Oak itself. We should probably leverage OSGi subsystems here and only export 
> the bare minimum to the outside world. This will simplify evolution of Oak 
> internal APIs as with the current approach changes to such APIs always leak 
> to the outside world. 
> That is, we should have an Oak OSGi sub-system as an deployment option. 
> Clients would then only need to deploy that into their OSGi container and 
> would only see APIs actually meant to be exported for everyone (like e.g. the 
> JCR API). At the same time Oak could go on leveraging OSGi inside this 
> subsystem.
> cc [~bosschaert] as you introduced us to this idea. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3534) Endless reindexing of async if no provider

2015-10-22 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968763#comment-14968763
 ] 

Davide Giannella commented on OAK-3534:
---

I think it should try X times and then set some properties. The
{{IndexEditorProvider}} could come and go because of OSGi, so it could
fail on the first run but succeed on the second or third.

Probably a decent approach is retrying 5 times. If it keeps failing,
set {{reindex=failed}} so that the failure is visible in the repository
as well as tracked in the logs.
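
A rough sketch of that retry idea; the attempt-counter property name, the
method and its wiring into the async index update are assumptions made for
illustration, not Oak's actual indexing code:

{code}
import org.apache.jackrabbit.oak.api.PropertyState;
import org.apache.jackrabbit.oak.api.Type;
import org.apache.jackrabbit.oak.spi.state.NodeBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ReindexRetryTracker {

    private static final Logger LOG = LoggerFactory.getLogger(ReindexRetryTracker.class);

    private static final int MAX_ATTEMPTS = 5;

    /**
     * Called when no IndexEditorProvider can serve the index type: count the
     * failed attempts on the index definition and mark the index as failed
     * once the limit is reached, so the problem is visible in the repository
     * and in the logs instead of being retried endlessly.
     */
    public static void onMissingProvider(NodeBuilder indexDefinition, String indexType) {
        PropertyState attemptsProperty = indexDefinition.getProperty("reindex-attempts");
        long attempts = attemptsProperty == null ? 0 : attemptsProperty.getValue(Type.LONG);
        attempts++;
        if (attempts >= MAX_ATTEMPTS) {
            indexDefinition.setProperty("reindex", "failed");
            LOG.warn("No editor provider for index type {} after {} attempts, giving up",
                    indexType, attempts);
        } else {
            indexDefinition.setProperty("reindex-attempts", attempts);
            LOG.info("No editor provider for index type {}, attempt {} of {}",
                    indexType, attempts, MAX_ATTEMPTS);
        }
    }
}
{code}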



> Endless reindexing of async if no provider
> --
>
> Key: OAK-3534
> URL: https://issues.apache.org/jira/browse/OAK-3534
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, query
>Affects Versions: 1.3.8
>Reporter: Davide Giannella
>
> Placeholder issue for the moment. Requires more investigation and a
> test case.
> It seems that if an index definition has {{reindex=true}} but no index
> provider is able to serve the index type, the re-indexing process will
> retry endlessly.
> Possible way to reproduce:
> - create initial content with an index definition of type lucene
>   and reindex=true
> - start the repository without the lucene IndexProvider or having it
>   disabled.
> - see in the logs that the reindex will always re-index from scratch
>   as there are no checkpoints.
> Should the reindex fail and stop after some attempts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2106) Optimize reads from secondaries

2015-10-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968927#comment-14968927
 ] 

Tomek Rękawek commented on OAK-2106:


I'll continue work on this. The current state can be found on [my 
github|https://github.com/trekawek/jackrabbit-oak/tree/OAK-2106]. I've finished 
a [draft of the replication lag 
estimator|https://github.com/trekawek/jackrabbit-oak/blob/OAK-2106/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/mongo/ReplicationLagEstimator.java];
 now we should decide in which cases it can be used.

I asked [~catholicon] about the "trickier" case when the change is done 
locally. He meant the following situation: 

1. There's a branch.
2. We want to read the document xyz, belonging to this branch, modified at 12:10.
3. xyz can be found in the cache, with the modification date 12:00 (as the cache 
doesn't reflect branch changes).
4. Safe time for secondaries is 12:05 (>12:00).
5. We read the document from a secondary and get an old version.

So, basically, we shouldn't ask a secondary instance for a document belonging 
to a branch.
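
A tiny sketch of the resulting rule, using invented names and plain millisecond
timestamps instead of Oak's revision types, just to make the decision explicit:

{code}
public final class SecondaryReadPolicy {

    private SecondaryReadPolicy() {
    }

    /**
     * Decide whether a document may be served from a MongoDB secondary.
     * Documents that belong to a branch must always go to the primary,
     * because the cached copy may not reflect the branch changes; otherwise
     * the read is only safe if the parent's _lastRev is older than the
     * estimated "safe time" up to which all secondaries have replicated.
     */
    public static boolean canUseSecondary(boolean belongsToBranch,
                                          long parentLastRevMillis,
                                          long secondarySafeTimeMillis) {
        if (belongsToBranch) {
            return false;
        }
        return parentLastRevMillis < secondarySafeTimeMillis;
    }
}
{code}

As the follow-up comment below points out, the timestamp check alone can still
be wrong when a local change has not yet been propagated to lastRev by the
background update.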

> Optimize reads from secondaries
> ---
>
> Key: OAK-2106
> URL: https://issues.apache.org/jira/browse/OAK-2106
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: performance, scalability
>
> OAK-1645 introduced support for reads from secondaries under certain
> conditions. The current implementation checks the _lastRev on a potentially
> cached parent document and reads from a secondary if it has not been
> modified in the last 6 hours. This timespan is somewhat arbitrary but
> reflects the assumption that the replication lag of a secondary shouldn't
> be more than 6 hours.
> This logic should be optimized to take the actual replication lag into
> account. MongoDB provides information about the replication lag with
> the command rs.status().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2106) Optimize reads from secondaries

2015-10-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968997#comment-14968997
 ] 

Tomek Rękawek commented on OAK-2106:


There's also a second problem with local changes, pointed out by [~catholicon] 
(thanks!):

1. The last change on /a/b is done at 12:00.
2. Safe time for secondaries is 12:05.
3. /a/b/c is updated by the local node at 12:10.
4. The background update, which updates lastRev on /a/b and its ancestors, is 
delayed and runs at 12:15.
5. At 12:12 we want to get /a/b/c. The lastRev on /a/b is still 12:00 (<12:05 
safe time), so we use a secondary instance and get an old version.

In the case of a remote modification this problem doesn't exist, as the 
background read uses the lastRev at / to check whether something has changed (so 
it'll pull the changes after the background update finishes its work).

> Optimize reads from secondaries
> ---
>
> Key: OAK-2106
> URL: https://issues.apache.org/jira/browse/OAK-2106
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: performance, scalability
>
> OAK-1645 introduced support for reads from secondaries under certain
> conditions. The current implementation checks the _lastRev on a potentially
> cached parent document and reads from a secondary if it has not been
> modified in the last 6 hours. This timespan is somewhat arbitrary but
> reflects the assumption that the replication lag of a secondary shouldn't
> be more than 6 hours.
> This logic should be optimized to take the actual replication lag into
> account. MongoDB provides information about the replication lag with
> the command rs.status().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2106) Optimize reads from secondaries

2015-10-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968997#comment-14968997
 ] 

Tomek Rękawek edited comment on OAK-2106 at 10/22/15 11:47 AM:
---

There's also a second problem with local changes, pointed out by [~catholicon] 
(thanks!):

1. The last change on /a/b is done at 12:00.
2. Secondaries have been synced as of 12:05.
3. /a/b/c is updated by the local instance at 12:10.
4. The background update process, which updates lastRev on /a/b and its 
ancestors, is delayed and runs at 12:15.
5. At 12:12 we want to get /a/b/c. The lastRev on /a/b in the cache and in Mongo 
is still set to 12:00 (<12:05 safe time), so we use a secondary instance and get 
an old version.

In the case of a remote modification this problem doesn't exist, as the 
background read uses the lastRev at / to check whether something has changed (so 
it'll pull the changes after the background update finishes its work).


was (Author: tomek.rekawek):
There's also a second problem with local changes, pointed out by [~catholicon] 
(thanks!):

1. The last change on /a/b is done at 12:00.
2. Safe time for secondaries is 12:05.
3. /a/b/c is updated by the local node at 12:10.
4. The background update, which updates lastRev on /a/b and its ancestors, is 
delayed and runs at 12:15.
5. At 12:12 we want to get /a/b/c. The lastRev on /a/b is still 12:00 (<12:05 
safe time), so we use a secondary instance and get an old version.

In the case of a remote modification this problem doesn't exist, as the 
background read uses the lastRev at / to check whether something has changed (so 
it'll pull the changes after the background update finishes its work).

> Optimize reads from secondaries
> ---
>
> Key: OAK-2106
> URL: https://issues.apache.org/jira/browse/OAK-2106
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: performance, scalability
>
> OAK-1645 introduced support for reads from secondaries under certain
> conditions. The current implementation checks the _lastRev on a potentially
> cached parent document and reads from a secondary if it has not been
> modified in the last 6 hours. This timespan is somewhat arbitrary but
> reflects the assumption that the replication lag of a secondary shouldn't
> be more than 6 hours.
> This logic should be optimized to take the actual replication lag into
> account. MongoDB provides information about the replication lag with
> the command rs.status().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3503) Upgrade Maven Bundle Plugin to 3.0.0

2015-10-22 Thread Francesco Mari (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Mari resolved OAK-3503.
-
Resolution: Fixed

Fixed in r1709997.

> Upgrade Maven Bundle Plugin to 3.0.0
> 
>
> Key: OAK-3503
> URL: https://issues.apache.org/jira/browse/OAK-3503
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: parent
>Affects Versions: 1.3.7
>Reporter: Oliver Lietz
>Assignee: Francesco Mari
> Fix For: 1.3.9
>
> Attachments: OAK-3503.patch
>
>
> This solves a problem with {{Require-Capability}} header (OAK-3083):
> {{MANIFEST.MF}} with Maven Bundle Plugin {{2.5.3}}:
> {noformat}
> Manifest-Version: 1.0
> Bnd-LastModified: 1443377959783
> Build-Jdk: 1.7.0_51
> Built-By: amjain
> Bundle-Category: oak
> Bundle-Description: The goal of the Oak effort within the Apache Jackrab
>  bit™ project isto implement a scalable and performant hierarchica
>  l content repositoryfor use as the foundation of modern world-class
>   web sites and otherdemanding content applications.
> Bundle-DocURL: http://jackrabbit.apache.org/oak/
> Bundle-License: http://www.apache.org/licenses/LICENSE-2.0.txt
> Bundle-ManifestVersion: 2
> Bundle-Name: Oak Core
> Bundle-SymbolicName: org.apache.jackrabbit.oak-core
> Bundle-Vendor: The Apache Software Foundation
> Bundle-Version: 1.3.7
> Created-By: Apache Maven Bundle Plugin
> DynamicImport-Package: org.apache.felix.jaas.boot
> Embed-Transitive: true
> Export-Package: org.apache.jackrabbit.oak;version="1.1.0";uses:="javax.a
>  nnotation,javax.management,org.apache.jackrabbit.oak.api,org.apache.jac
>  krabbit.oak.plugins.index,org.apache.jackrabbit.oak.query,org.apache.ja
>  ckrabbit.oak.spi.commit,org.apache.jackrabbit.oak.spi.lifecycle,org.apa
>  che.jackrabbit.oak.spi.query,org.apache.jackrabbit.oak.spi.security,org
>  .apache.jackrabbit.oak.spi.state,org.apache.jackrabbit.oak.spi.whiteboa
>  rd",org.apache.jackrabbit.oak.api;version="2.1";uses:="com.google.commo
>  n.base,javax.annotation,javax.jcr,javax.security.auth.login",org.apache
>  .jackrabbit.oak.api.jmx;version="2.0.0";uses:="javax.annotation,javax.m
>  anagement.openmbean,org.apache.jackrabbit.oak.api,org.apache.jackrabbit
>  .oak.commons.jmx",org.apache.jackrabbit.oak.stats;version="1.1";uses:="
>  javax.annotation,javax.management.openmbean,org.apache.jackrabbit.api.s
>  tats,org.apache.jackrabbit.oak.api.jmx,org.apache.jackrabbit.oak.spi.wh
>  iteboard,org.apache.jackrabbit.stats,org.slf4j",org.apache.jackrabbit.o
>  ak.json;version="1.0";uses:="org.apache.jackrabbit.oak.api,org.apache.j
>  ackrabbit.oak.commons.json,org.apache.jackrabbit.oak.spi.state",org.apa
>  che.jackrabbit.oak.management;version="1.1.0";uses:="javax.annotation,j
>  avax.management.openmbean,org.apache.jackrabbit.oak.api.jmx,org.apache.
>  jackrabbit.oak.commons.jmx,org.apache.jackrabbit.oak.spi.whiteboard",or
>  g.apache.jackrabbit.oak.util;version="1.3.0";uses:="com.google.common.i
>  o,javax.annotation,javax.jcr,javax.management.openmbean,org.apache.jack
>  rabbit.oak.api,org.apache.jackrabbit.oak.api.jmx,org.apache.jackrabbit.
>  oak.namepath,org.apache.jackrabbit.oak.spi.state,org.apache.jackrabbit.
>  oak.spi.whiteboard,org.slf4j",org.apache.jackrabbit.oak.namepath;versio
>  n="2.0";uses:="javax.annotation,javax.jcr,javax.jcr.nodetype,org.apache
>  .jackrabbit.oak.api,org.apache.jackrabbit.oak.plugins.identifier,org.ap
>  ache.jackrabbit.oak.spi.state",org.apache.jackrabbit.oak.osgi;version="
>  2.0";uses:="javax.annotation,org.apache.jackrabbit.oak.spi.commit,org.a
>  pache.jackrabbit.oak.spi.whiteboard,org.osgi.framework,org.osgi.service
>  .component,org.osgi.util.tracker",org.apache.jackrabbit.oak.plugins.ato
>  mic;version="1.0";uses:="javax.annotation,org.apache.jackrabbit.oak.api
>  ,org.apache.jackrabbit.oak.spi.commit,org.apache.jackrabbit.oak.spi.sta
>  te",org.apache.jackrabbit.oak.plugins.backup;version="1.0";uses:="javax
>  .annotation,javax.management.openmbean,org.apache.jackrabbit.oak.api,or
>  g.apache.jackrabbit.oak.spi.state",org.apache.jackrabbit.oak.plugins.co
>  mmit;version="1.1.0";uses:="javax.annotation,org.apache.jackrabbit.oak.
>  api,org.apache.jackrabbit.oak.spi.commit,org.apache.jackrabbit.oak.spi.
>  state",org.apache.jackrabbit.oak.plugins.identifier;version="1.0";uses:
>  ="javax.annotation,org.apache.jackrabbit.oak.api,org.apache.jackrabbit.
>  oak.spi.state",org.apache.jackrabbit.oak.plugins.index;version="3.0.0";
>  uses:="javax.annotation,javax.jcr,org.apache.jackrabbit.oak.api,org.apa
>  che.jackrabbit.oak.spi.commit,org.apache.jackrabbit.oak.spi.state,org.a
>  pache.jackrabbit.oak.spi.whiteboard,org.apache.jackrabbit.oak.util",org
>  .apache.jackrabbit.oak.plugins.index.fulltext;version="1.0.0";uses:="ja
>  vax.annotation,org.apache.jackrabbit.oak.api

[jira] [Commented] (OAK-2733) Option to convert "like" queries to range queries

2015-10-22 Thread Davide Giannella (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969017#comment-14969017
 ] 

Davide Giannella commented on OAK-2733:
---

With OAK-1617 we added a second layer of optimisations on the query. There we 
could generate both variants, "like" converted to a range and "like" kept as a 
plain "like", and let the cost calculation decide which one is cheaper to run.
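
The conversion both variants are based on rewrites a prefix pattern such as
{{x like 'abc%'}} into the range {{'abc' <= x < 'abd'}}. A minimal sketch of the
upper-bound computation (not Oak's {{LikePattern}} implementation; escaping,
empty prefixes and Unicode edge cases are ignored):

{code}
public final class LikeToRange {

    private LikeToRange() {
    }

    /**
     * For a pattern of the form "prefix%", the matching values are exactly
     * those in the range [prefix, upperBound(prefix)): the upper bound is the
     * prefix with its last character incremented by one.
     */
    public static String upperBound(String prefix) {
        int last = prefix.length() - 1;
        char next = (char) (prefix.charAt(last) + 1);
        return prefix.substring(0, last) + next;
    }

    public static void main(String[] args) {
        // "x like 'abc%'"  =>  "x >= 'abc' and x < 'abd'"
        System.out.println(upperBound("abc")); // prints "abd"
    }
}
{code}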

> Option to convert "like" queries to range queries
> -
>
> Key: OAK-2733
> URL: https://issues.apache.org/jira/browse/OAK-2733
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>  Labels: performance
>
> Queries with "like" conditions of the form "x like 'abc%'" are currently 
> always converted to range queries. With Apache Lucene, using "like" in some 
> cases is a bit faster (but not much, according to our tests).
> Converting "like" to range queries should be disabled by default.
> Potential patch:
> {noformat}
> --- src/main/java/org/apache/jackrabbit/oak/query/ast/ComparisonImpl.java (revision 1672070)
> +++ src/main/java/org/apache/jackrabbit/oak/query/ast/ComparisonImpl.java (working copy)
> @@ -31,11 +31,21 @@
>  import org.apache.jackrabbit.oak.query.fulltext.LikePattern;
>  import org.apache.jackrabbit.oak.query.index.FilterImpl;
>  import org.apache.jackrabbit.oak.spi.query.PropertyValues;
> +import org.slf4j.Logger;
> +import org.slf4j.LoggerFactory;
>  
>  /**
>   * A comparison operation (including "like").
>   */
>  public class ComparisonImpl extends ConstraintImpl {
> +
> +    static final Logger LOG = LoggerFactory.getLogger(ComparisonImpl.class);
> +
> +    private final static boolean CONVERT_LIKE_TO_RANGE = Boolean.getBoolean("oak.convertLikeToRange");
> +
> +    static {
> +        LOG.info("Converting like to range queries is " + (CONVERT_LIKE_TO_RANGE ? "enabled" : "disabled"));
> +    }
>  
>      private final DynamicOperandImpl operand1;
>      private final Operator operator;
> @@ -193,7 +203,7 @@
>                  if (lowerBound.equals(upperBound)) {
>                      // no wildcards
>                      operand1.restrict(f, Operator.EQUAL, v);
> -                } else if (operand1.supportsRangeConditions()) {
> +                } else if (operand1.supportsRangeConditions() && CONVERT_LIKE_TO_RANGE) {
>                      if (lowerBound != null) {
>                          PropertyValue pv = PropertyValues.newString(lowerBound);
>                          operand1.restrict(f, Operator.GREATER_OR_EQUAL, pv);
> @@ -203,7 +213,7 @@
>                          operand1.restrict(f, Operator.LESS_OR_EQUAL, pv);
>                      }
>                  } else {
> -                    // path conditions
> +                    // path conditions, or conversion is disabled
>                      operand1.restrict(f, operator, v);
>                  }
>              } else {
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3537) Move the SegmentStore subsystem to its own set of bundles

2015-10-22 Thread Francesco Mari (JIRA)
Francesco Mari created OAK-3537:
---

 Summary: Move the SegmentStore subsystem to its own set of bundles
 Key: OAK-3537
 URL: https://issues.apache.org/jira/browse/OAK-3537
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segmentmk
Reporter: Francesco Mari
Assignee: Francesco Mari


The {{SegmentStore}} and its related code should be moved into their own 
bundles to ease the development and the deployment of this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3538) Move the o.a.j.o.api package to its own bundle

2015-10-22 Thread Francesco Mari (JIRA)
Francesco Mari created OAK-3538:
---

 Summary: Move the o.a.j.o.api package to its own bundle
 Key: OAK-3538
 URL: https://issues.apache.org/jira/browse/OAK-3538
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Francesco Mari
Assignee: Francesco Mari


The {{o.a.j.o.api}} package contains Oak's internal API. It should be moved to 
its own bundle to avoid circular dependencies with oak-core.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3539) Document interface should have entrySet() in addition to keySet()

2015-10-22 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-3539:
---

 Summary: Document interface should have entrySet() in addition to 
keySet()
 Key: OAK-3539
 URL: https://issues.apache.org/jira/browse/OAK-3539
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk, rdbmk
Affects Versions: 1.0.22, 1.2.7, 1.3.8
Reporter: Julian Reschke
Assignee: Julian Reschke
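
The issue has no description yet, so the following is only a guess at the
intended shape of the change: expose entries directly so callers can iterate
keys and values in one pass instead of calling {{get()}} for every key returned
by {{keySet()}}. The interface name and generics below are assumptions, not the
final API.

{code}
import java.util.Map;
import java.util.Set;

public interface DocumentLike {

    /** Existing style of access: keys only, values fetched one by one. */
    Set<String> keySet();

    Object get(String key);

    /** Proposed addition: keys and values together, avoiding per-key lookups. */
    Set<Map.Entry<String, Object>> entrySet();
}
{code}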






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3539) Document interface should have entrySet() in addition to keySet()

2015-10-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3539:

Fix Version/s: 1.3.9

> Document interface should have entrySet() in addition to keySet()
> -
>
> Key: OAK-3539
> URL: https://issues.apache.org/jira/browse/OAK-3539
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk, rdbmk
>Affects Versions: 1.3.8, 1.2.7, 1.0.22
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.3.9
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3300) Include parameter descriptions in test output when running parameterised tests

2015-10-22 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969297#comment-14969297
 ] 

Julian Reschke commented on OAK-3300:
-

Very cool indeed. Will backport.

> Include parameter descriptions in test output when running parameterised tests
> --
>
> Key: OAK-3300
> URL: https://issues.apache.org/jira/browse/OAK-3300
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Robert Munteanu
>Assignee: Francesco Mari
>Priority: Minor
> Fix For: 1.3.9
>
> Attachments: 
> 0001-OAK-3300-Include-parameter-descriptions-in-test-outp.patch, OAK-3300.png
>
>
> JUnit 4.11 or newer allows describing parameters, which makes it easier to 
> identify which fixture is running when not all tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-3540) DocumentStore tests: use named parametrization

2015-10-22 Thread Julian Reschke (JIRA)
Julian Reschke created OAK-3540:
---

 Summary: DocumentStore tests: use named parametrization
 Key: OAK-3540
 URL: https://issues.apache.org/jira/browse/OAK-3540
 Project: Jackrabbit Oak
  Issue Type: Technical task
  Components: core, mongomk, rdbmk
Reporter: Julian Reschke
Assignee: Julian Reschke
Priority: Trivial
 Fix For: 1.3.9






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3540) DocumentStore tests: use named parametrization

2015-10-22 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969301#comment-14969301
 ] 

Julian Reschke commented on OAK-3540:
-

(because of the JUnit version change)

> DocumentStore tests: use named parametrization
> --
>
> Key: OAK-3540
> URL: https://issues.apache.org/jira/browse/OAK-3540
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, mongomk, rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.3.9
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3541) VersionableState.copy doesn't respect OPV flag in the subtree

2015-10-22 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-3541:

Attachment: OAK-3541_test.patch

Simple test case illustrating the problem (it's the second test that fails).

> VersionableState.copy doesn't respect OPV flag in the subtree
> -
>
> Key: OAK-3541
> URL: https://issues.apache.org/jira/browse/OAK-3541
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: angela
>Priority: Critical
> Attachments: OAK-3541_test.patch
>
>
> While testing my work in OAK-1268 and OAK-2008, I found that items with OPV 
> IGNORE are being copied into the frozen node of a versionable node upon 
> checkin, and only the first-level child nodes are being tested for the OPV 
> flag.
> IMHO the OPV flag should be respected for all items in the subtree, and the 
> copy should act accordingly. The current bug might prevent versionable child 
> nodes from being properly versioned and will copy items that are expected to 
> be ignored (e.g. access control content) into the version store.
> If I am not mistaken, the properties are actually tested for their OPV 
> flag... If that is true, we might even have a bigger issue, as the content in 
> the version store is no longer complete and valid (e.g. 
> mandatory/protected/autocreated properties being ignored but the node still 
> being copied over and thus being invalid).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (OAK-3541) VersionableState.copy doesn't respect OPV flag in the subtree

2015-10-22 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela moved JCR-3921 to OAK-3541:
--

Component/s: (was: core)
 core
   Workflow: no-reopen-closed  (was: no-reopen-closed, patch-avail)
Key: OAK-3541  (was: JCR-3921)
Project: Jackrabbit Oak  (was: Jackrabbit Content Repository)

> VersionableState.copy doesn't respect OPV flag in the subtree
> -
>
> Key: OAK-3541
> URL: https://issues.apache.org/jira/browse/OAK-3541
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: angela
>Priority: Critical
>
> While testing my work in OAK-1268 and OAK-2008, I found that items with OPV 
> IGNORE are being copied into the frozen node of a versionable node upon 
> checkin, and only the first-level child nodes are being tested for the OPV 
> flag.
> IMHO the OPV flag should be respected for all items in the subtree, and the 
> copy should act accordingly. The current bug might prevent versionable child 
> nodes from being properly versioned and will copy items that are expected to 
> be ignored (e.g. access control content) into the version store.
> If I am not mistaken, the properties are actually tested for their OPV 
> flag... If that is true, we might even have a bigger issue, as the content in 
> the version store is no longer complete and valid (e.g. 
> mandatory/protected/autocreated properties being ignored but the node still 
> being copied over and thus being invalid).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3541) VersionableState.copy doesn't respect OPV flag in the subtree

2015-10-22 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-3541:

Attachment: OAK-3541.patch

The attached patch solves the OPV-IGNORE issue for me. However, since I am not 
too familiar with the other OPV flags, I don't know if it works properly for 
nodes with OPV-VERSION in the subtree.

One more thing: the original code used {{OPVForceCopy}} for all properties in 
the subtree. Calling {{createFrozenNode}} again for the subtree nodes (as 
proposed by the patch) changes this to use the anonymous inner implementation 
instead, which looks as follows:

{code}
new OPVProvider() {
    @Override
    public int getAction(NodeBuilder src,
                         NodeBuilder dest,
                         PropertyState prop)
            throws RepositoryException {
        String propName = prop.getName();
        if (BASIC_FROZEN_PROPERTIES.contains(propName)) {
            // OAK-940: do not overwrite basic frozen properties
            return IGNORE;
        } else if (isHiddenProperty(propName)) {
            // don't copy hidden properties except for :childOrder
            return IGNORE;
        }
        return getOPV(src, prop);
    }
}
{code}
In general that looks better to me, but it definitely needs a second look as 
well as some test coverage verifying that this is really correct.

[~mreutegg], wdyt?

> VersionableState.copy doesn't respect OPV flag in the subtree
> -
>
> Key: OAK-3541
> URL: https://issues.apache.org/jira/browse/OAK-3541
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core
>Reporter: angela
>Priority: Critical
> Attachments: OAK-3541.patch, OAK-3541_test.patch
>
>
> While testing my work in OAK-1268 and OAK-2008, I found that items with OPV 
> IGNORE are being copied into the frozen node of a versionable node upon 
> checkin, and only the first-level child nodes are being tested for the OPV 
> flag.
> IMHO the OPV flag should be respected for all items in the subtree, and the 
> copy should act accordingly. The current bug might prevent versionable child 
> nodes from being properly versioned and will copy items that are expected to 
> be ignored (e.g. access control content) into the version store.
> If I am not mistaken, the properties are actually tested for their OPV 
> flag... If that is true, we might even have a bigger issue, as the content in 
> the version store is no longer complete and valid (e.g. 
> mandatory/protected/autocreated properties being ignored but the node still 
> being copied over and thus being invalid).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3541) VersionableState.copy doesn't respect OPV flag in the subtree

2015-10-22 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-3541:

Component/s: jcr

> VersionableState.copy doesn't respect OPV flag in the subtree
> -
>
> Key: OAK-3541
> URL: https://issues.apache.org/jira/browse/OAK-3541
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, jcr
>Reporter: angela
>Priority: Critical
>  Labels: versioning
> Attachments: OAK-3541.patch, OAK-3541_test.patch
>
>
> While testing my work in OAK-1268 and OAK-2008, I found that items with OPV 
> IGNORE are being copied into the frozen node of a versionable node upon 
> checkin, and only the first-level child nodes are being tested for the OPV 
> flag.
> IMHO the OPV flag should be respected for all items in the subtree, and the 
> copy should act accordingly. The current bug might prevent versionable child 
> nodes from being properly versioned and will copy items that are expected to 
> be ignored (e.g. access control content) into the version store.
> If I am not mistaken, the properties are actually tested for their OPV 
> flag... If that is true, we might even have a bigger issue, as the content in 
> the version store is no longer complete and valid (e.g. 
> mandatory/protected/autocreated properties being ignored but the node still 
> being copied over and thus being invalid).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3541) VersionableState.copy doesn't respect OPV flag in the subtree

2015-10-22 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-3541:

Labels: versioning  (was: )

> VersionableState.copy doesn't respect OPV flag in the subtree
> -
>
> Key: OAK-3541
> URL: https://issues.apache.org/jira/browse/OAK-3541
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, jcr
>Reporter: angela
>Priority: Critical
>  Labels: versioning
> Attachments: OAK-3541.patch, OAK-3541_test.patch
>
>
> While testing my work in OAK-1268 and OAK-2008, I found that items with OPV 
> IGNORE are being copied into the frozen node of a versionable node upon 
> checkin, and only the first-level child nodes are being tested for the OPV 
> flag.
> IMHO the OPV flag should be respected for all items in the subtree, and the 
> copy should act accordingly. The current bug might prevent versionable child 
> nodes from being properly versioned and will copy items that are expected to 
> be ignored (e.g. access control content) into the version store.
> If I am not mistaken, the properties are actually tested for their OPV 
> flag... If that is true, we might even have a bigger issue, as the content in 
> the version store is no longer complete and valid (e.g. 
> mandatory/protected/autocreated properties being ignored but the node still 
> being copied over and thus being invalid).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3539) Document interface should have entrySet() in addition to keySet()

2015-10-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3539:

Fix Version/s: 1.2.8
   1.0.23

> Document interface should have entrySet() in addition to keySet()
> -
>
> Key: OAK-3539
> URL: https://issues.apache.org/jira/browse/OAK-3539
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk, rdbmk
>Affects Versions: 1.3.8, 1.2.7, 1.0.22
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.3.9, 1.0.23, 1.2.8
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3539) Document interface should have entrySet() in addition to keySet()

2015-10-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-3539.
-
Resolution: Fixed

trunk: http://svn.apache.org/r1710031
1.2: http://svn.apache.org/r1710034
1.0: http://svn.apache.org/r1710043

> Document interface should have entrySet() in addition to keySet()
> -
>
> Key: OAK-3539
> URL: https://issues.apache.org/jira/browse/OAK-3539
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, mongomk, rdbmk
>Affects Versions: 1.3.8, 1.2.7, 1.0.22
>Reporter: Julian Reschke
>Assignee: Julian Reschke
> Fix For: 1.3.9, 1.0.23, 1.2.8
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3300) Include parameter descriptions in test output when running parameterised tests

2015-10-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3300:

Fix Version/s: 1.2.8

> Include parameter descriptions in test output when running parameterised tests
> --
>
> Key: OAK-3300
> URL: https://issues.apache.org/jira/browse/OAK-3300
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Robert Munteanu
>Assignee: Francesco Mari
>Priority: Minor
> Fix For: 1.3.9, 1.2.8
>
> Attachments: 
> 0001-OAK-3300-Include-parameter-descriptions-in-test-outp.patch, OAK-3300.png
>
>
> JUnit 4.11 or newer allows describing parameters, which makes it easier to 
> identify which fixture is running when not all tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3300) Include parameter descriptions in test output when running parameterised tests

2015-10-22 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969297#comment-14969297
 ] 

Julian Reschke edited comment on OAK-3300 at 10/22/15 4:18 PM:
---

Very cool indeed.

1.2: http://svn.apache.org/r1710053


was (Author: reschke):
Very cool indeed. Will backport.

> Include parameter descriptions in test output when running parameterised tests
> --
>
> Key: OAK-3300
> URL: https://issues.apache.org/jira/browse/OAK-3300
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Robert Munteanu
>Assignee: Francesco Mari
>Priority: Minor
> Fix For: 1.3.9, 1.2.8
>
> Attachments: 
> 0001-OAK-3300-Include-parameter-descriptions-in-test-outp.patch, OAK-3300.png
>
>
> JUnit 4.11 or newer allows describing parameters, which makes it easier to 
> identify which fixture is running when not all tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-3300) Include parameter descriptions in test output when running parameterised tests

2015-10-22 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969297#comment-14969297
 ] 

Julian Reschke edited comment on OAK-3300 at 10/22/15 4:36 PM:
---

Very cool indeed.

1.2: http://svn.apache.org/r1710053
1.0: http://svn.apache.org/r1710060


was (Author: reschke):
Very cool indeed.

1.2: http://svn.apache.org/r1710053

> Include parameter descriptions in test output when running parameterised tests
> --
>
> Key: OAK-3300
> URL: https://issues.apache.org/jira/browse/OAK-3300
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Robert Munteanu
>Assignee: Francesco Mari
>Priority: Minor
> Fix For: 1.3.9, 1.0.23, 1.2.8
>
> Attachments: 
> 0001-OAK-3300-Include-parameter-descriptions-in-test-outp.patch, OAK-3300.png
>
>
> JUnit 4.11 or newer allows describing parameters, which makes it easier to 
> identify which fixture is running when not all tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3540) DocumentStore tests: use named parametrization

2015-10-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3540:

Fix Version/s: 1.2.8

> DocumentStore tests: use named parametrization
> --
>
> Key: OAK-3540
> URL: https://issues.apache.org/jira/browse/OAK-3540
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, mongomk, rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.3.9, 1.2.8
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3300) Include parameter descriptions in test output when running parameterised tests

2015-10-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3300:

Fix Version/s: 1.0.23

> Include parameter descriptions in test output when running parameterised tests
> --
>
> Key: OAK-3300
> URL: https://issues.apache.org/jira/browse/OAK-3300
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Robert Munteanu
>Assignee: Francesco Mari
>Priority: Minor
> Fix For: 1.3.9, 1.0.23, 1.2.8
>
> Attachments: 
> 0001-OAK-3300-Include-parameter-descriptions-in-test-outp.patch, OAK-3300.png
>
>
> JUnit 4.11 or newer allows describing parameters, which makes it easier to 
> identify which fixture is running when not all tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3540) DocumentStore tests: use named parametrization

2015-10-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-3540:

Fix Version/s: 1.0.23

> DocumentStore tests: use named parametrization
> --
>
> Key: OAK-3540
> URL: https://issues.apache.org/jira/browse/OAK-3540
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, mongomk, rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.3.9, 1.0.23, 1.2.8
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-3540) DocumentStore tests: use named parametrization

2015-10-22 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-3540.
-
Resolution: Fixed

trunk: http://svn.apache.org/r1710049
1.2: http://svn.apache.org/r1710059
1.0: http://svn.apache.org/r1710064

> DocumentStore tests: use named parametrization
> --
>
> Key: OAK-3540
> URL: https://issues.apache.org/jira/browse/OAK-3540
> Project: Jackrabbit Oak
>  Issue Type: Technical task
>  Components: core, mongomk, rdbmk
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Trivial
> Fix For: 1.3.9, 1.0.23, 1.2.8
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)