[jira] [Created] (OAK-2599) Allow excluding certain paths from getting indexed for particular index

2015-03-10 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-2599:


 Summary: Allow excluding certain paths from getting indexed for 
particular index
 Key: OAK-2599
 URL: https://issues.apache.org/jira/browse/OAK-2599
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Reporter: Chetan Mehrotra
 Fix For: 1.1.8, 1.0.13


Currently an {{IndexEditor}} gets to index all nodes under the tree where it is 
defined (post OAK-1980). Due to this the IndexEditor traverses the whole repo 
(or subtree, if configured at a non-root path) to perform a reindex. Depending 
on the repo size this process can take quite a bit of time. It would be faster 
if an IndexEditor could exclude certain paths from traversal.

Consider an application like Adobe AEM and an index which only indexes 
dam:Asset nodes, or the default full-text index. For a full-text index it might 
make sense to avoid indexing the version store. If the index editor skips such 
paths, a lot of redundant traversal can be avoided. 

Also see http://markmail.org/thread/4cuuicakagi6av4v
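As a rough illustration of the idea, an editor could consult a path filter before descending into a subtree. The sketch below is hypothetical: the class name and exclusion list are made up for illustration and are not the actual Oak API.

```java
import java.util.List;

// Hypothetical path filter an IndexEditor could consult before descending
// into a child node; not the actual Oak API.
public class PathFilter {
    private final List<String> excludedPaths;

    public PathFilter(List<String> excludedPaths) {
        this.excludedPaths = excludedPaths;
    }

    // Returns false when the given path lies under an excluded subtree,
    // letting the editor skip the whole branch during (re)indexing.
    public boolean includes(String path) {
        for (String excluded : excludedPaths) {
            if (path.equals(excluded) || path.startsWith(excluded + "/")) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        PathFilter f = new PathFilter(List.of("/jcr:system/jcr:versionStorage"));
        System.out.println(f.includes("/content/dam/asset.jpg"));           // true
        System.out.println(f.includes("/jcr:system/jcr:versionStorage/x")); // false
    }
}
```

With such a check in place, a full-text index definition could exclude the version store and the editor would never traverse it during reindexing.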



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-2598) Provide option to run async index as sync in repository upgrade

2015-03-10 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-2598:


 Summary: Provide option to run async index as sync in repository 
upgrade
 Key: OAK-2598
 URL: https://issues.apache.org/jira/browse/OAK-2598
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: upgrade
Reporter: Chetan Mehrotra
 Fix For: 1.1.8, 1.0.13


Currently, when performing a repository upgrade from JR2 to Oak, the migration 
logic only runs the synchronous index editors. Async indexes like Lucene/Solr 
are run post-migration, after the system starts.

Given that migration is a single-threaded operation, it would at times be 
helpful to allow executing such async indexes in sync mode during the migration 
phase. This would avoid rescanning the complete repository again for such async 
indexes.





[jira] [Commented] (OAK-2598) Provide option to run async index as sync in repository upgrade

2015-03-10 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354396#comment-14354396
 ] 

Chetan Mehrotra commented on OAK-2598:
--

One possible option would be to move the check for async in 
{{IndexUpdate#collectIndexEditors}} to an overridable method. Then in 
{{RepositoryUpgrade#createIndexEditorProvider}} we can override the default 
behaviour.

[~alex.parvulescu] Thoughts?
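The proposal above amounts to a template-method refactoring. A minimal sketch, with made-up class names standing in for {{IndexUpdate}} and the upgrade-time override (not the actual Oak code):

```java
import java.util.Map;

// Default behaviour: only synchronous index definitions (no "async" property)
// get an editor. A migration-time subclass can override the check.
class IndexUpdateSketch {
    protected boolean includeIndex(Map<String, Object> indexDefinition) {
        return indexDefinition.get("async") == null;
    }

    public static void main(String[] args) {
        Map<String, Object> asyncDef = Map.of("async", "async");
        System.out.println(new IndexUpdateSketch().includeIndex(asyncDef));        // false
        System.out.println(new UpgradeIndexUpdateSketch().includeIndex(asyncDef)); // true
    }
}

// During repository upgrade, run async indexes synchronously as well.
class UpgradeIndexUpdateSketch extends IndexUpdateSketch {
    @Override
    protected boolean includeIndex(Map<String, Object> indexDefinition) {
        return true;
    }
}
```

This keeps the default behaviour untouched for normal operation while letting the upgrade code opt async definitions into the synchronous pass.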

 Provide option to run async index as sync in repository upgrade
 ---

 Key: OAK-2598
 URL: https://issues.apache.org/jira/browse/OAK-2598
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: upgrade
Reporter: Chetan Mehrotra
 Fix For: 1.1.8, 1.0.13


 Currently, when performing a repository upgrade from JR2 to Oak, the migration 
 logic only runs the synchronous index editors. Async indexes like Lucene/Solr 
 are run post-migration, after the system starts.
 Given that migration is a single-threaded operation, it would at times be 
 helpful to allow executing such async indexes in sync mode during the 
 migration phase. This would avoid rescanning the complete repository again for 
 such async indexes.





[jira] [Commented] (OAK-2601) PerfLogger for NodeObserver.contentChanged()

2015-03-10 Thread Marcel Reutegger (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354629#comment-14354629
 ] 

Marcel Reutegger commented on OAK-2601:
---

Done in trunk: http://svn.apache.org/r1665436

A message with timing is logged at DEBUG if it takes more than 10 ms.
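The pattern described is roughly the following; the class here is a stub illustrating the threshold behaviour, not Oak's actual PerfLogger:

```java
// Stub illustrating the perf-logging pattern: measure the time spent in
// contentChanged() and emit a DEBUG-level message only when it exceeds a
// threshold (10 ms in the actual fix).
public class PerfLogSketch {
    static final long THRESHOLD_MS = 10;

    // Returns the log message, or null when the call was fast enough.
    static String logIfSlow(String operation, long elapsedMs) {
        if (elapsedMs > THRESHOLD_MS) {
            return "DEBUG " + operation + " took " + elapsedMs + " ms";
        }
        return null;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        // ... contentChanged() work would happen here ...
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        String msg = logIfSlow("contentChanged", elapsedMs);
        if (msg != null) {
            System.out.println(msg);
        }
    }
}
```

Gating on a threshold keeps the log quiet in the common fast case while still surfacing slow observers.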

 PerfLogger for NodeObserver.contentChanged()
 

 Key: OAK-2601
 URL: https://issues.apache.org/jira/browse/OAK-2601
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.1.8, 1.0.13


 There's existing performance logging available in EventGenerator, but the 
 scope is only for a single Continuation. It would be useful to have 
 information about how long it took to generate all events for the new root 
 state compared to the previousRoot.





[jira] [Created] (OAK-2601) PerfLogger for NodeObserver.contentChanged()

2015-03-10 Thread Marcel Reutegger (JIRA)
Marcel Reutegger created OAK-2601:
-

 Summary: PerfLogger for NodeObserver.contentChanged()
 Key: OAK-2601
 URL: https://issues.apache.org/jira/browse/OAK-2601
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.1.8, 1.0.13


There's existing performance logging available in EventGenerator, but the scope 
is only for a single Continuation. It would be useful to have information about 
how long it took to generate all events for the new root state compared to the 
previousRoot.





[jira] [Created] (OAK-2603) Failure in one of the batch in VersionGC might lead to orphaned nodes

2015-03-10 Thread Chetan Mehrotra (JIRA)
Chetan Mehrotra created OAK-2603:


 Summary: Failure in one of the batch in VersionGC might lead to 
orphaned nodes
 Key: OAK-2603
 URL: https://issues.apache.org/jira/browse/OAK-2603
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.1.8, 1.0.13


The VersionGC logic currently performs deletion of nodes in batches. For GC to 
work properly, NodeDocuments should always be removed bottom-up, i.e. a parent 
node should be removed *after* its children have been removed.

Currently the GC logic deletes the NodeDocuments in an undefined order. If one 
of the batches fails, it is possible that a parent was deleted while its child 
was not. 

In the next run the child node would then not be recognized as a deleted node, 
because its commit root would not be found.
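The bottom-up ordering requirement can be sketched as sorting the deletion candidates by path depth, deepest first, so a child is always deleted before its parent. This is illustrative only; the real fix would operate on NodeDocument ids rather than plain path strings.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Orders GC deletion candidates deepest-first so that, if a batch fails,
// a parent can never have been deleted before its children.
public class GcOrderSketch {
    static int depth(String path) {
        return path.equals("/") ? 0 : path.split("/").length - 1;
    }

    static List<String> bottomUp(List<String> paths) {
        List<String> sorted = new ArrayList<>(paths);
        sorted.sort(Comparator.comparingInt(GcOrderSketch::depth).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        // Prints deepest path first: [/a/b/c, /a/b, /a]
        System.out.println(bottomUp(List.of("/a", "/a/b/c", "/a/b")));
    }
}
```

With this ordering, a failed batch leaves at worst an orphan-free prefix: every surviving document still has its ancestors intact.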





[jira] [Resolved] (OAK-2262) Add metadata about the changed value to a PROPERTY_CHANGED event on a multivalued property

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-2262.

Resolution: Fixed

Fixed as discussed at: http://svn.apache.org/r1665478

For property events the info map now contains either or both of the keys 
{{beforeValue}} and {{afterValue}}. The values are of type {{Value}} for 
single-valued properties and {{Value[]}} for multi-valued properties. 
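A handler consuming these keys might look like the following sketch; the info map is a plain Map here and the values are plain objects rather than javax.jcr.Value instances, so the key names are the only part taken from the fix.

```java
import java.util.Map;

// Sketch of a handler using the beforeValue/afterValue info-map keys to
// find which positions of a multi-valued property changed.
public class PropertyChangeSketch {
    static String describeChange(Map<String, Object> info) {
        Object before = info.get("beforeValue");
        Object after = info.get("afterValue");
        if (before instanceof Object[] && after instanceof Object[]) {
            Object[] b = (Object[]) before;
            Object[] a = (Object[]) after;
            StringBuilder changed = new StringBuilder();
            for (int i = 0; i < Math.min(a.length, b.length); i++) {
                if (!a[i].equals(b[i])) {
                    changed.append(i).append(' ');
                }
            }
            return "changed indexes: " + changed.toString().trim();
        }
        return "single value changed";
    }

    public static void main(String[] args) {
        Map<String, Object> info = Map.of(
                "beforeValue", new Object[]{"x", "y"},
                "afterValue", new Object[]{"x", "z"});
        System.out.println(describeChange(info)); // changed indexes: 1
    }
}
```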

 Add metadata about the changed value to a PROPERTY_CHANGED event on a 
 multivalued property
 --

 Key: OAK-2262
 URL: https://issues.apache.org/jira/browse/OAK-2262
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, jcr
Affects Versions: 1.1.2
Reporter: Tommaso Teofili
Assignee: Michael Dürig
  Labels: observation
 Fix For: 1.1.8


 When getting _PROPERTY_CHANGED_ events on non-multivalued properties, only one 
 value can actually have changed, so handlers of such events do not need any 
 further information to process the event and work on the changed value. On the 
 other hand, _PROPERTY_CHANGED_ events on multivalued properties (e.g. 
 String[]) may relate to any of the values, which introduces uncertainty for 
 event handlers processing such changes: there is no means to understand which 
 property value has changed and therefore no way for them to react accordingly.
 A workaround is to create Oak-specific _Observers_ which can compute the diff 
 between the before and after state and create a specific event containing the 
 diff. However, this would add a non-trivial load to the repository, both from 
 the _Observer_ itself and from the additional events being generated. It would 
 be great if the 'default' events carried metadata, e.g. the index of the 
 changed value or similar information, to help understand which value has been 
 changed (added, deleted, updated). 





[jira] [Commented] (OAK-2262) Add metadata about the changed value to a PROPERTY_CHANGED event on a multivalued property

2015-03-10 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354786#comment-14354786
 ] 

Tommaso Teofili commented on OAK-2262:
--

thanks Michael!

 Add metadata about the changed value to a PROPERTY_CHANGED event on a 
 multivalued property
 --

 Key: OAK-2262
 URL: https://issues.apache.org/jira/browse/OAK-2262
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, jcr
Affects Versions: 1.1.2
Reporter: Tommaso Teofili
Assignee: Michael Dürig
  Labels: observation
 Fix For: 1.1.8


 When getting _PROPERTY_CHANGED_ events on non-multivalued properties, only one 
 value can actually have changed, so handlers of such events do not need any 
 further information to process the event and work on the changed value. On the 
 other hand, _PROPERTY_CHANGED_ events on multivalued properties (e.g. 
 String[]) may relate to any of the values, which introduces uncertainty for 
 event handlers processing such changes: there is no means to understand which 
 property value has changed and therefore no way for them to react accordingly.
 A workaround is to create Oak-specific _Observers_ which can compute the diff 
 between the before and after state and create a specific event containing the 
 diff. However, this would add a non-trivial load to the repository, both from 
 the _Observer_ itself and from the additional events being generated. It would 
 be great if the 'default' events carried metadata, e.g. the index of the 
 changed value or similar information, to help understand which value has been 
 changed (added, deleted, updated). 





[jira] [Resolved] (OAK-2601) PerfLogger for NodeObserver.contentChanged()

2015-03-10 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger resolved OAK-2601.
---
Resolution: Fixed

Merged into 1.0 branch: http://svn.apache.org/r1665443

 PerfLogger for NodeObserver.contentChanged()
 

 Key: OAK-2601
 URL: https://issues.apache.org/jira/browse/OAK-2601
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
Priority: Minor
 Fix For: 1.1.8, 1.0.13


 There's existing performance logging available in EventGenerator, but the 
 scope is only for a single Continuation. It would be useful to have 
 information about how long it took to generate all events for the new root 
 state compared to the previousRoot.





[jira] [Created] (OAK-2602) [Solr] Cost calculation takes time with solr pings even when not fulfilling query

2015-03-10 Thread Amit Jain (JIRA)
Amit Jain created OAK-2602:
--

 Summary: [Solr] Cost calculation takes time with solr pings even 
when not fulfilling query
 Key: OAK-2602
 URL: https://issues.apache.org/jira/browse/OAK-2602
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: oak-solr
Affects Versions: 1.1.7, 1.0.12
Reporter: Amit Jain


Cost calculation for queries which are fired quite often [1], and which are not 
going to be fulfilled by Solr, takes time, which makes the overall cost of the 
operation high. 

[1]
SELECT * FROM [nt:base] WHERE PROPERTY([rep:members], 'WeakReference') = $uuid 
SELECT * FROM [nt:base] WHERE [jcr:uuid] = $id





[jira] [Created] (OAK-2604) Backport LMSEstimator to branch 1.0

2015-03-10 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created OAK-2604:


 Summary: Backport LMSEstimator to branch 1.0
 Key: OAK-2604
 URL: https://issues.apache.org/jira/browse/OAK-2604
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-solr
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.0.13


The current estimation algorithm for {{AdvancedSolrQueryIndex}} is not smart, 
as it involves making a query which doesn't take the given filter into account; 
therefore the {{LMSEstimator}}, which works better and is more performant, 
should be backported.





[jira] [Updated] (OAK-2587) observation processing too eager/unfair under load

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2587:
---
Fix Version/s: 1.2

 observation processing too eager/unfair under load
 --

 Key: OAK-2587
 URL: https://issues.apache.org/jira/browse/OAK-2587
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Affects Versions: 1.0.12
Reporter: Stefan Egli
Priority: Critical
 Fix For: 1.2

 Attachments: OAK-2587.patch


 The current implementation of Oak's observation event processing is too eager 
 and thus unfair under load scenarios. 
 Consider having many (eg 200) EventListeners but only a relatively small 
 threadpool (eg 5, as is the default in Sling) backing them. When processing 
 changes for a particular BackgroundObserver, that one (in 
 BackgroundObserver.completionHandler.call) currently processes *all changes 
 irrespective of how many there are* - ie it is *eager*. Only once that 
 BackgroundObserver has processed all changes will it let go and 'pass the 
 thread' to the next BackgroundObserver. Now if for some reason changes (ie 
 commits) are coming in while a BackgroundObserver is busy processing an 
 earlier change, this will lengthen that while loop. As a result the remaining 
 (eg 195) *EventListeners will have to wait for a potentially long time* until 
 it's their turn - thus *unfair*.
 Now combine the above pattern with a scenario where mongo is used as the 
 underlying store. In that case, in order to remain highly performant, it is 
 important that the diffs (for compareAgainstBaseState) are served from the 
 MongoDiffCache in as many cases as possible, to avoid doing a round-trip to 
 mongoD. The unfairness in the BackgroundObservers can now result in a large 
 delay between the 'first' observers getting the event and the 'last' one (of 
 those 200). When this delay increases due to a burst in the load, there is a 
 risk that the diffs are no longer in the cache - those last observers are 
 basically kicked out of the (diff) cache. Once this happens, *the situation 
 gets even worse*, since now you have yet new commits coming in and old 
 changes still having to be processed - all of which are being processed 
 through in 'stripes of 5 listeners' before the next one gets a chance. This 
 at some point results in totally inefficient cache behavior; in other words, 
 at some point all diffs have to be read from mongoD.
 To avoid this there are probably a number of options - a few that come to 
 mind:
 * increase the thread-pool to match or be closer to the number of listeners 
 (but this has other disadvantages, eg the cost of thread-switching)
 * make BackgroundObservers fairer by limiting the number of changes they 
 process before they give others a chance to be served by the pool.
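The second option can be sketched as a worker that drains only a bounded number of changes per turn before yielding the pool thread. Names are illustrative, not the actual BackgroundObserver code:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A BackgroundObserver-like worker that handles at most maxBatch queued
// changes per turn, then yields so other observers get a pool thread.
public class FairObserverSketch {
    private final Queue<String> changes = new ArrayDeque<>();
    private final int maxBatch;

    FairObserverSketch(int maxBatch) { this.maxBatch = maxBatch; }

    void add(String change) { changes.add(change); }

    // Processes up to maxBatch changes; returns how many were handled so the
    // scheduler knows whether to re-queue this observer for another turn.
    int processOneTurn() {
        int handled = 0;
        while (handled < maxBatch && !changes.isEmpty()) {
            changes.poll(); // real code would dispatch to the listener here
            handled++;
        }
        return handled;
    }

    int pending() { return changes.size(); }

    public static void main(String[] args) {
        FairObserverSketch observer = new FairObserverSketch(2);
        observer.add("c1"); observer.add("c2"); observer.add("c3");
        System.out.println(observer.processOneTurn()); // 2: batch limit reached
        System.out.println(observer.pending());        // 1 left for the next turn
    }
}
```

Bounding the batch keeps the gap between the first and last observer small, which in turn keeps the diff-cache hit rate high under load.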





[jira] [Commented] (OAK-1356) Expose the preferred transient space size as repository descriptor

2015-03-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354963#comment-14354963
 ] 

Michael Dürig commented on OAK-1356:


I suggest resolving this as won't fix, as it is entirely unclear what 
'preferred transient space' means. Such a value might differ a great deal 
depending on the backend, cluster size, cache configuration, heap size, CPUs 
etc., which makes it hard to determine a reliable value. 

OTOH we should aim to make large transactions as performant as possible by e.g. 
following up on the issue with multiple passes mentioned by [~tmueller].



 Expose the preferred transient space size as repository descriptor 
 ---

 Key: OAK-1356
 URL: https://issues.apache.org/jira/browse/OAK-1356
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, jcr
Reporter: Tobias Bocanegra
Assignee: Chetan Mehrotra
  Labels: api

 The problem is that the different stores have different transient space 
 characteristics. For example, the MongoMK is very slow when handling large 
 saves.
 I suggest exposing a repository descriptor that can be used to estimate the 
 preferred transient space, for example when importing content.
 So either a boolean like: 
   {{option.infinite.transientspace}}
 or a number like:
   {{option.transientspace.preferred.size}}
 The latter would denote the average number of modified node states that should 
 be put in the transient space before the persistence starts to degrade.





[jira] [Updated] (OAK-2476) Move our CI to Jenkins

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2476:
---
Assignee: Tommaso Teofili  (was: Michael Dürig)

 Move our CI to Jenkins
 --

 Key: OAK-2476
 URL: https://issues.apache.org/jira/browse/OAK-2476
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
Priority: Critical
  Labels: CI, build, infrastructure
 Fix For: 1.1.8, 1.2


 We should strive for stabilization of our CI setup; so far we have had 
 Buildbot and Travis.
 It seems ASF Jenkins can run jobs on different environments (*nix, Windows and 
 others), so we can evaluate it and check whether it better addresses our 
 needs.





[jira] [Assigned] (OAK-2476) Move our CI to Jenkins

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-2476:
--

Assignee: Michael Dürig  (was: Tommaso Teofili)

 Move our CI to Jenkins
 --

 Key: OAK-2476
 URL: https://issues.apache.org/jira/browse/OAK-2476
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Tommaso Teofili
Assignee: Michael Dürig
Priority: Critical
  Labels: CI, build, infrastructure
 Fix For: 1.1.8, 1.2


 We should strive for stabilization of our CI setup; so far we have had 
 Buildbot and Travis.
 It seems ASF Jenkins can run jobs on different environments (*nix, Windows and 
 others), so we can evaluate it and check whether it better addresses our 
 needs.





[jira] [Assigned] (OAK-2602) [Solr] Cost calculation takes time with solr pings even when not fulfilling query

2015-03-10 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili reassigned OAK-2602:


Assignee: Tommaso Teofili

 [Solr] Cost calculation takes time with solr pings even when not fulfilling 
 query
 -

 Key: OAK-2602
 URL: https://issues.apache.org/jira/browse/OAK-2602
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: oak-solr
Affects Versions: 1.0.12, 1.1.7
Reporter: Amit Jain
Assignee: Tommaso Teofili

 Cost calculation for queries which are fired quite often [1], and which are 
 not going to be fulfilled by Solr, takes time, which makes the overall cost of 
 the operation high. 
 [1]
 SELECT * FROM [nt:base] WHERE PROPERTY([rep:members], 'WeakReference') = 
 $uuid 
 SELECT * FROM [nt:base] WHERE [jcr:uuid] = $id





[jira] [Updated] (OAK-2185) Fix intermittent failure in JaasConfigSpiTest

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2185:
---
Labels: CI buildbot test  (was: )

 Fix intermittent failure in JaasConfigSpiTest
 -

 Key: OAK-2185
 URL: https://issues.apache.org/jira/browse/OAK-2185
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: oak-pojosr
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
Priority: Minor
  Labels: CI, buildbot, test
 Fix For: 1.2


 Intermittent failures on Windows are observed in JaasConfigSpiTest with the 
 following exception
 {noformat}
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.841 sec  
 FAILURE!
 defaultConfigSpiAuth(org.apache.jackrabbit.oak.run.osgi.JaasConfigSpiTest)  
 Time elapsed: 3.835 sec   ERROR!
 java.lang.reflect.UndeclaredThrowableException
   at $Proxy7.login(Unknown Source)
   at javax.jcr.Repository$login.call(Unknown Source)
   at 
 org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
   at 
 org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
   at 
 org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
   at 
 org.apache.jackrabbit.oak.run.osgi.JaasConfigSpiTest.defaultConfigSpiAuth(JaasConfigSpiTest.groovy:75)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
 Caused by: java.lang.reflect.InvocationTargetException
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.jackrabbit.oak.run.osgi.OakOSGiRepositoryFactory$RepositoryProxy.invoke(OakOSGiRepositoryFactory.java:325)
   ... 37 more
 Caused by: javax.jcr.LoginException: No LoginModules configured for 
 jackrabbit.oak
   at 
 org.apache.jackrabbit.oak.jcr.repository.RepositoryImpl.login(RepositoryImpl.java:264)
   at 
 

[jira] [Updated] (OAK-2075) Travis build failing on OrderedIndexConcurrentClusterIT.deleteConcurrently()

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2075:
---
Labels: CI build travis  (was: )

 Travis build failing on OrderedIndexConcurrentClusterIT.deleteConcurrently()
 

 Key: OAK-2075
 URL: https://issues.apache.org/jira/browse/OAK-2075
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: it
Reporter: Davide Giannella
Priority: Minor
  Labels: CI, build, travis

 build failing during the execution of 
 {{deleteConcurrently(org.apache.jackrabbit.oak.jcr.OrderedIndexConcurrentClusterIT)}}
 https://travis-ci.org/apache/jackrabbit-oak/jobs/34294991#L2796





[jira] [Updated] (OAK-1904) testClockDrift seems to consistently fail on CI

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1904:
---
Labels: CI buildbot  (was: )

 testClockDrift seems to consistently fail on CI
 ---

 Key: OAK-1904
 URL: https://issues.apache.org/jira/browse/OAK-1904
 Project: Jackrabbit Oak
  Issue Type: Bug
 Environment: Apache CI
Reporter: Davide Giannella
  Labels: CI, buildbot

 On Apache CI the {{testClockDrift()}} test seems to be constantly failing.
 http://ci.apache.org/builders/oak-trunk/builds/252





[jira] [Updated] (OAK-2597) expose mongo's clusterNodes info more prominently

2015-03-10 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-2597:
-
Fix Version/s: 1.0.13

Setting fix version 1.0.13 - up for discussion

 expose mongo's clusterNodes info more prominently
 -

 Key: OAK-2597
 URL: https://issues.apache.org/jira/browse/OAK-2597
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Affects Versions: 1.0.12
Reporter: Stefan Egli
 Fix For: 1.0.13


 Suggestion: {{db.clusterNodes}} contains very useful information about how 
 many instances are currently (and have been) active in the oak-mongo-cluster. 
 While this should in theory match the topology reported via Sling's discovery 
 API, it might differ. It could be very helpful if this information were 
 exposed prominently in a UI (assuming this is not yet the case), e.g. in a 
 /system/console page.





[jira] [Updated] (OAK-2587) observation processing too eager/unfair under load

2015-03-10 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-2587:
-
Fix Version/s: 1.0.13

Setting fix version also 1.0.13 - up for discussion

 observation processing too eager/unfair under load
 --

 Key: OAK-2587
 URL: https://issues.apache.org/jira/browse/OAK-2587
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Affects Versions: 1.0.12
Reporter: Stefan Egli
Priority: Critical
 Fix For: 1.0.13, 1.2

 Attachments: OAK-2587.patch


 The current implementation of Oak's observation event processing is too eager 
 and thus unfair under load scenarios. 
 Consider having many (eg 200) EventListeners but only a relatively small 
 threadpool (eg 5, as is the default in Sling) backing them. When processing 
 changes for a particular BackgroundObserver, that one (in 
 BackgroundObserver.completionHandler.call) currently processes *all changes 
 irrespective of how many there are* - ie it is *eager*. Only once that 
 BackgroundObserver has processed all changes will it let go and 'pass the 
 thread' to the next BackgroundObserver. Now if for some reason changes (ie 
 commits) are coming in while a BackgroundObserver is busy processing an 
 earlier change, this will lengthen that while loop. As a result the remaining 
 (eg 195) *EventListeners will have to wait for a potentially long time* until 
 it's their turn - thus *unfair*.
 Now combine the above pattern with a scenario where mongo is used as the 
 underlying store. In that case, in order to remain highly performant, it is 
 important that the diffs (for compareAgainstBaseState) are served from the 
 MongoDiffCache in as many cases as possible, to avoid doing a round-trip to 
 mongoD. The unfairness in the BackgroundObservers can now result in a large 
 delay between the 'first' observers getting the event and the 'last' one (of 
 those 200). When this delay increases due to a burst in the load, there is a 
 risk that the diffs are no longer in the cache - those last observers are 
 basically kicked out of the (diff) cache. Once this happens, *the situation 
 gets even worse*, since now you have yet new commits coming in and old 
 changes still having to be processed - all of which are being processed 
 through in 'stripes of 5 listeners' before the next one gets a chance. This 
 at some point results in totally inefficient cache behavior; in other words, 
 at some point all diffs have to be read from mongoD.
 To avoid this there are probably a number of options - a few that come to 
 mind:
 * increase the thread-pool to match or be closer to the number of listeners 
 (but this has other disadvantages, eg the cost of thread-switching)
 * make BackgroundObservers fairer by limiting the number of changes they 
 process before they give others a chance to be served by the pool.





[jira] [Commented] (OAK-2089) Allow storing some metadata while creating checkpoint

2015-03-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354937#comment-14354937
 ] 

Michael Dürig commented on OAK-2089:


[~chetanm], I think this is a duplicate of OAK-2291 and can be resolved 
accordingly WDYT?

 Allow storing some metadata while creating checkpoint
 -

 Key: OAK-2089
 URL: https://issues.apache.org/jira/browse/OAK-2089
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
Priority: Minor
 Fix For: 1.2


 As mentioned by [~mmarth] in OAK-2087, it would be useful to store some 
 metadata while creating a checkpoint. Such metadata can be used to 
 differentiate between checkpoints created by backup, indexer, etc. A simple 
 string should serve the purpose.
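The idea can be sketched with an in-memory stand-in for the checkpoint API; the signature taking a metadata map is an assumption for illustration, not the actual NodeStore interface.

```java
import java.util.HashMap;
import java.util.Map;

// In-memory stand-in for a checkpoint API that records caller-supplied
// metadata (e.g. which component created the checkpoint).
public class CheckpointSketch {
    private final Map<String, Map<String, String>> checkpoints = new HashMap<>();
    private int counter;

    // Creates a checkpoint and remembers the metadata alongside it.
    String checkpoint(Map<String, String> metadata) {
        String name = "checkpoint-" + (counter++);
        checkpoints.put(name, metadata);
        return name;
    }

    // Retrieves the metadata stored for a checkpoint, or null if unknown.
    Map<String, String> checkpointInfo(String name) {
        return checkpoints.get(name);
    }

    public static void main(String[] args) {
        CheckpointSketch store = new CheckpointSketch();
        String cp = store.checkpoint(Map.of("creator", "backup"));
        System.out.println(store.checkpointInfo(cp)); // {creator=backup}
    }
}
```

With such metadata attached, an administrator could tell backup checkpoints apart from indexer checkpoints when deciding which ones are safe to release.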





[jira] [Commented] (OAK-1963) Expose file system path of Blob

2015-03-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354975#comment-14354975
 ] 

Michael Dürig commented on OAK-1963:


I think we should resolve this as won't fix, as such a feature breaks all 
levels of encapsulation and most likely also introduces security concerns. 



 Expose file system path of Blob
 ---

 Key: OAK-1963
 URL: https://issues.apache.org/jira/browse/OAK-1963
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Pralaypati Ta
Assignee: Chetan Mehrotra

 In some situations a direct file system path is more useful than a repository 
 path, e.g. native tools don't understand repository paths; instead, a file 
 system path can be passed directly to native tools for processing binaries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2440) local build 22 minutes

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2440:
---
Labels: build  (was: )

 local build 22 minutes
 --

 Key: OAK-2440
 URL: https://issues.apache.org/jira/browse/OAK-2440
 Project: Jackrabbit Oak
  Issue Type: Bug
Reporter: Davide Giannella
  Labels: build
 Attachments: OAK-2440-00.patch, elapsed-times-01.txt, 
 elasped-times.txt


 A build without any integrationTesting or pedantic profile completed locally 
 in 22:40.901s.
 While this is an exceptional case, on my machine it is common for the build 
 to last 12 minutes.
 Here is a list of the longest-running tests:
 {noformat}
 188.913   Running 
 org.apache.jackrabbit.oak.plugins.segment.standby.ExternalPrivateStoreIT
 187.421   Running 
 org.apache.jackrabbit.oak.plugins.segment.standby.ExternalSharedStoreIT
 66.123   Running 
 org.apache.jackrabbit.oak.plugins.document.BasicDocumentStoreTest
 50.314   Running org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT
 44.019   Running org.apache.jackrabbit.oak.jcr.query.QueryJcrTest
 40.476   Running 
 org.apache.jackrabbit.oak.plugins.segment.file.SegmentReferenceLimitTestIT
 35.61   Running org.apache.jackrabbit.oak.jcr.tck.ObservationIT
 32.004   Running org.apache.jackrabbit.oak.plugins.segment.standby.RecoverTest
 26.578   Running org.apache.jackrabbit.oak.core.RootFuzzIT
 24.861   Running 
 org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1763) OrderedIndex does not comply with JCR's compareTo semantics

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1763:
---
Fix Version/s: 1.2

 OrderedIndex does not comply with JCR's compareTo semantics
 ---

 Key: OAK-1763
 URL: https://issues.apache.org/jira/browse/OAK-1763
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Michael Dürig
 Fix For: 1.2


 The ordered index currently uses the lexicographical order of the string 
 representation of the values. This does not comply with [JCR's compareTo 
 semantics | 
 http://www.day.com/specs/jcr/2.0/3_Repository_Model.html#3.6.5.1%20CompareTo%20Semantics]
  for e.g. double values. 
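
The mismatch is easy to demonstrate with plain Java: comparing the string 
representations of doubles inverts their numeric order, e.g. "10.5" sorts 
before "9.1":

```java
// Demonstrates why ordering double values by their string representation
// violates JCR's CompareTo semantics: lexicographically "10.5" sorts before
// "9.1" (because '1' < '9' as characters), but numerically 10.5 is larger.
public class OrderingMismatch {
    static int lexicographic(double a, double b) {
        return Double.toString(a).compareTo(Double.toString(b));
    }

    static int numeric(double a, double b) {
        return Double.compare(a, b);
    }

    public static void main(String[] args) {
        System.out.println(lexicographic(10.5, 9.1) < 0); // true: "10.5" < "9.1"
        System.out.println(numeric(10.5, 9.1) > 0);       // true: 10.5 > 9.1
    }
}
```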



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1576) SegmentMK: Implement refined conflict resolution for addExistingNode conflicts

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1576:
---
Fix Version/s: (was: 1.2)
   1.4

 SegmentMK: Implement refined conflict resolution for addExistingNode conflicts
 --

 Key: OAK-1576
 URL: https://issues.apache.org/jira/browse/OAK-1576
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
 Fix For: 1.4


 Implement refined conflict resolution for addExistingNode conflicts as 
 defined in the parent issue for the SegmentMK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1698) Improve LargeOperationIT accuracy for document nodes store fixture

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1698:
---
Fix Version/s: (was: 1.2)
   1.4

 Improve LargeOperationIT accuracy for document nodes store fixture
 --

 Key: OAK-1698
 URL: https://issues.apache.org/jira/browse/OAK-1698
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: jcr
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: test
 Fix For: 1.4


 As [noted | 
 https://issues.apache.org/jira/browse/OAK-1414?focusedCommentId=13942016&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13942016]
  on OAK-1414, {{LargeOperationIT}} is somewhat inaccurate for the document 
 node store fixture, where the collected data tends to be noisy. We should look 
 into ways to make the test results more accurate for this case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2556) do intermediate commit during async indexing

2015-03-10 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-2556:
-
Fix Version/s: 1.4

 do intermediate commit during async indexing
 

 Key: OAK-2556
 URL: https://issues.apache.org/jira/browse/OAK-2556
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: oak-lucene
Affects Versions: 1.0.11
Reporter: Stefan Egli
 Fix For: 1.4


 A recent issue found at a customer reveals a potential problem with the async 
 indexer. Reading AsyncIndexUpdate.updateIndex, it looks like it does the 
 entire update of the async indexer *in one go*, i.e. in one commit.
 When there is - for some reason - a huge diff that the async indexer has to 
 process, the 'one big commit' can become gigantic; in fact there is no limit 
 to the size of the commit.
 So the suggestion is to do intermediate commits while the async indexer is 
 running. This is acceptable because, by doing async indexing, the index is 
 anyway not 100% up-to-date - so it would not make much of a difference if it 
 committed after every 100 or 1000 changes.
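
The suggested batching can be sketched as follows; BatchedIndexUpdate and the 
IndexCommit callback are hypothetical stand-ins for the real 
AsyncIndexUpdate/NodeStore interaction, not actual Oak code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of intermediate commits: instead of saving the whole index update in
// one go, commit after every BATCH_SIZE processed changes, then flush the
// remainder. The IndexCommit callback stands in for a NodeStore merge.
public class BatchedIndexUpdate {
    static final int BATCH_SIZE = 1000;

    interface IndexCommit {
        void commit(int changesInBatch);
    }

    /** Returns the number of commits performed for the given change count. */
    static int update(int totalChanges, IndexCommit committer) {
        int commits = 0;
        int sinceLastCommit = 0;
        for (int i = 0; i < totalChanges; i++) {
            // index change i here
            sinceLastCommit++;
            if (sinceLastCommit == BATCH_SIZE) {
                committer.commit(sinceLastCommit);
                commits++;
                sinceLastCommit = 0;
            }
        }
        if (sinceLastCommit > 0) {      // flush the remainder
            committer.commit(sinceLastCommit);
            commits++;
        }
        return commits;
    }

    public static void main(String[] args) {
        List<Integer> batches = new ArrayList<>();
        int commits = update(2500, batches::add);
        System.out.println(commits);   // 3
        System.out.println(batches);   // [1000, 1000, 500]
    }
}
```

This bounds the size of any single commit regardless of how large the pending 
diff has grown.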



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2500) checkDeepHistory/fixDeepHistory/prepareDeepHistory for oak-mongo.js

2015-03-10 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355194#comment-14355194
 ] 

Stefan Egli commented on OAK-2500:
--

[~chetanm], would it make sense to include that in 1.0 branch? (as it affects 
1.0.8 if I remember correctly)

 checkDeepHistory/fixDeepHistory/prepareDeepHistory for oak-mongo.js
 ---

 Key: OAK-2500
 URL: https://issues.apache.org/jira/browse/OAK-2500
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: run
Affects Versions: 1.0.8
Reporter: Stefan Egli
 Fix For: 1.0.13

 Attachments: oak-mongo-mod.js


 The oak-mongo.js currently contains checkHistory/fixHistory, which clean up 
 'dangling revisions/split-documents' on a particular path.
 Now it would be good to have a command that goes through the entire 
 repository and checks/fixes these dangling revisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2500) checkDeepHistory/fixDeepHistory/prepareDeepHistory for oak-mongo.js

2015-03-10 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-2500:
-
Affects Version/s: 1.0.8

 checkDeepHistory/fixDeepHistory/prepareDeepHistory for oak-mongo.js
 ---

 Key: OAK-2500
 URL: https://issues.apache.org/jira/browse/OAK-2500
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: run
Affects Versions: 1.0.8
Reporter: Stefan Egli
 Fix For: 1.0.13

 Attachments: oak-mongo-mod.js


 The oak-mongo.js currently contains checkHistory/fixHistory, which clean up 
 'dangling revisions/split-documents' on a particular path.
 Now it would be good to have a command that goes through the entire 
 repository and checks/fixes these dangling revisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2605) Support for additional encodings needed in ReversedLinesFileReader

2015-03-10 Thread Leandro Reis (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leandro Reis updated OAK-2605:
--
Description: 
I'm working on a product that uses Commons IO via Jackrabbit Oak. In the 
process of testing the launch of such product on Japanese Windows 2012
Server R2, I came across the following exception: 
(java.io.UnsupportedEncodingException: Encoding windows-31j is not supported 
yet (feel free to submit a patch))

windows-31j is the IANA name for Windows code page 932 (Japanese), and is 
returned by Charset.defaultCharset(), used in 
org.apache.commons.io.input.ReversedLinesFileReader [0].

A patch for this issue was provided in 
https://issues.apache.org/jira/browse/IO-471 .  

It also includes changes needed to support Chinese Simplified, Chinese 
Traditional and Korean.



  was:
I'm working on a product that uses Commons IO via Jackrabbit Oak. In the
process of testing the launch of such product on Japanese Windows 2012
Server R2, I came across the following exception:
(java.io.UnsupportedEncodingException: Encoding windows-31j is not
supported yet (feel free to submit a patch))

windows-31j is the IANA name for Windows code page 932 (Japanese), and
is returned by Charset.defaultCharset(), used in 
org.apache.commons.io.input.ReversedLinesFileReader [0].

This issue can be resolved by adding a check for
'windows-31j' to ReversedLinesFileReader.

A patch for this issue was provided in 
https://issues.apache.org/jira/browse/IO-471 .  It also includes changes needed 
to support Chinese Simplified, Chinese Traditional and Korean.




 Support for additional encodings needed in ReversedLinesFileReader
 --

 Key: OAK-2605
 URL: https://issues.apache.org/jira/browse/OAK-2605
 Project: Jackrabbit Oak
  Issue Type: Bug
Affects Versions: 1.1.7
 Environment: Windows 2012 R2 Japanese
 Windows 2012 R2 Korean
 Windows 2012 R2 Simplified Chinese
 Windows 2012 R2 Traditional Chinese
Reporter: Leandro Reis
Priority: Critical

 I'm working on a product that uses Commons IO via Jackrabbit Oak. In the 
 process of testing the launch of such product on Japanese Windows 2012
 Server R2, I came across the following exception: 
 (java.io.UnsupportedEncodingException: Encoding windows-31j is not supported 
 yet (feel free to submit a patch))
 windows-31j is the IANA name for Windows code page 932 (Japanese), and is 
 returned by Charset.defaultCharset(), used in 
 org.apache.commons.io.input.ReversedLinesFileReader [0].
 A patch for this issue was provided in 
 https://issues.apache.org/jira/browse/IO-471 .  
 It also includes changes needed to support Chinese Simplified, Chinese 
 Traditional and Korean.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-1451) Expose index size

2015-03-10 Thread Michael Marth (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355351#comment-14355351
 ] 

Michael Marth commented on OAK-1451:


[~tmueller], afair we now use the index size in the query planning. Maybe we 
can implement this MBean in 1.3.x?

 Expose index size
 -

 Key: OAK-1451
 URL: https://issues.apache.org/jira/browse/OAK-1451
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Michael Marth
Priority: Minor
  Labels: production, resilience
 Fix For: 1.3.1


 At the moment some MKs' disk usage comes largely from the indexes. Maybe we 
 can do something about this, but in the meantime it would be helpful if we 
 could expose the index sizes (number of indexed nodes) via JMX so that they 
 could be easily monitored.
 This would also be helpful to see at which point an index becomes useless (if 
 the majority of content nodes are indexed one might as well not have an index)
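
A minimal sketch of exposing such a count over JMX using only the JDK; the 
MBean name and attribute below are illustrative, not Oak's actual management 
beans:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Registers a standard MBean whose IndexedNodeCount attribute a monitoring
// tool (e.g. jconsole) could watch. Names are hypothetical examples.
public class IndexSizeExample {

    public interface IndexStatsMBean {
        long getIndexedNodeCount();
    }

    public static class IndexStats implements IndexStatsMBean {
        private volatile long indexedNodeCount;

        public void setIndexedNodeCount(long count) { indexedNodeCount = count; }

        @Override
        public long getIndexedNodeCount() { return indexedNodeCount; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("org.example:type=IndexStats,name=lucene");
        IndexStats stats = new IndexStats();
        stats.setIndexedNodeCount(42000L);
        server.registerMBean(stats, name);

        // A monitoring tool would read the attribute like this:
        System.out.println(server.getAttribute(name, "IndexedNodeCount")); // 42000
    }
}
```

The indexer would update the counter as it runs; the MBean server handles the 
remote-access side.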



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1558) Expose FileStoreBackupRestoreMBean for supported NodeStores

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1558:
---
Fix Version/s: (was: 1.2)
   1.4

 Expose FileStoreBackupRestoreMBean for supported NodeStores
 ---

 Key: OAK-1558
 URL: https://issues.apache.org/jira/browse/OAK-1558
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk, segmentmk
Reporter: Michael Dürig
  Labels: monitoring
 Fix For: 1.4


 {{NodeStore}} implementations should expose the 
 {{FileStoreBackupRestoreMBean}} in order to be interoperable with 
 {{RepositoryManagementMBean}}. See OAK-1160.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2539) SQL2 query not working with filter (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))

2015-03-10 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2539:

Assignee: Davide Giannella

 SQL2 query not working with filter (s.[stringa] = 'a' OR 
 CONTAINS(s.[stringb], 'b'))
 

 Key: OAK-2539
 URL: https://issues.apache.org/jira/browse/OAK-2539
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Calvin Wong
Assignee: Davide Giannella

 Create node /content/usergenerated/qtest with jcr:primaryType nt:unstructured.
 Add 2 String properties: stringa = a, stringb = b.
 Use the query tool in CRX/DE to do a SQL2 search:
 This search will find qtest:
 SELECT * FROM [nt:base] AS s WHERE 
 ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'a' OR 
 CONTAINS(s.[stringb], 'b'))
 This search will not find qtest:
 SELECT * FROM [nt:base] AS s WHERE 
 ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'x' OR 
 CONTAINS(s.[stringb], 'b'))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1844) Verify resilience goals

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1844:
---
Fix Version/s: (was: 1.2)
   1.4

 Verify resilience goals
 ---

 Key: OAK-1844
 URL: https://issues.apache.org/jira/browse/OAK-1844
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: resilience
 Fix For: 1.4


 This is a container issue for verifying the resilience goals set out for Oak. 
 See https://wiki.apache.org/jackrabbit/Resilience for the work in progress of 
 such goals. Once we have an agreement on that, subtasks of this issue could 
 be used to track the verification process of each of the individual goals. 
 Discussion here: http://jackrabbit.markmail.org/thread/5cndir5sjrc5dtla



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2382) Move NodeStore implementations to separate modules

2015-03-10 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2382:

Fix Version/s: (was: 1.2)
   1.4

 Move NodeStore implementations to separate modules
 --

 Key: OAK-2382
 URL: https://issues.apache.org/jira/browse/OAK-2382
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core, mk, segmentmk
Reporter: angela
 Fix For: 1.4


 as discussed in the oak-call yesterday,  i think we should take another look 
 at the modularization of the oak-core module.
 some time ago i proposed to move the NodeStore implementations into separate 
 modules.
 to begin with i just tried 2 separate modules:
 - oak-ns-document: everything below oak.plugins.document
 - oak-ns-segment: everything below oak.plugins.segment plus the segment 
 specific parts of oak.plugins.backup
 i found the following issues:
 - org.apache.jackrabbit.oak.plugins.cache is not part of the exported 
 packages
 - oak.plugins.backup contains both public API and implementations without 
 separation
 - the following test-classes have a hard dependency on one or more ns 
 implementations: KernelNodeStoreCacheTest, ClusterPermissionsTest, 
 NodeStoreFixture; to fix those we would need to be able to run the tests 
 with the individual nodestore modules and move those tests that are just 
 intended to work with a particular impl.
 such a move would not only prevent us from introducing unintended package 
 dependencies but would also reduce the number of dependencies present with 
 oak-core. 
 as discussed yesterday we may want to pick this up again this year.
 see also http://markmail.org/message/6cpbyuthub4jxase for the whole 
 discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2479) FileStoreBackup throws SegmentNotFoundException

2015-03-10 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355205#comment-14355205
 ] 

Stefan Egli commented on OAK-2479:
--

[~mduerig] what's your take on this one, 1.2 or 1.4 ? thx

 FileStoreBackup throws SegmentNotFoundException
 ---

 Key: OAK-2479
 URL: https://issues.apache.org/jira/browse/OAK-2479
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: run
Affects Versions: 1.1.5, 1.0.11
Reporter: Stefan Egli
 Attachments: BackupTest.java, oak-2479.cloneBinaries.partial.patch


 Running the FileStoreBackup (in oak-run) results in a 
 SegmentNotFoundException to be thrown.
 Narrowed this down to a regression introduced with 
 https://github.com/apache/jackrabbit-oak/commit/6129da4251d36e3f2e1ac4b72ebf3602d0073d47
 The issue seems to be related to the fact that it creates a Compactor with 
 the parameter cloneBinaries set to false. This results in blobs being copied 
 by reference rather than by value, which in turn results in segments not 
 being found in the backup (since they only exist in the origin store).
 Creating the Compactor with cloneBinaries set to true fixes this (not saying 
 that this is the correct fix - as I understand it, at least in the 
 online-compaction case you want cloneBinaries set to false, for example).
 Attaching a test case later which reproduces the problem and currently 
 results in the following exception (in trunk):
 {code}
 org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 
 7433ef00-fac5-48d5-b91d-923c517c4a5b not found
   at 
 org.apache.jackrabbit.oak.plugins.segment.file.FileStore.readSegment(FileStore.java:711)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentTracker.getSegment(SegmentTracker.java:122)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentId.getSegment(SegmentId.java:108)
   at 
 org.apache.jackrabbit.oak.plugins.segment.Record.getSegment(Record.java:82)
   at 
 org.apache.jackrabbit.oak.plugins.segment.BlockRecord.read(BlockRecord.java:55)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentStream.read(SegmentStream.java:171)
   at com.google.common.io.ByteStreams.read(ByteStreams.java:828)
   at com.google.common.io.ByteSource.contentEquals(ByteSource.java:303)
   at com.google.common.io.ByteStreams.equal(ByteStreams.java:661)
   at 
 org.apache.jackrabbit.oak.plugins.memory.AbstractBlob.equal(AbstractBlob.java:58)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.equals(SegmentBlob.java:211)
   at 
 org.apache.jackrabbit.oak.plugins.segment.Compactor.compact(Compactor.java:229)
   at 
 org.apache.jackrabbit.oak.plugins.segment.Compactor.compact(Compactor.java:185)
   at 
 org.apache.jackrabbit.oak.plugins.segment.Compactor.access$0(Compactor.java:181)
   at 
 org.apache.jackrabbit.oak.plugins.segment.Compactor$CompactDiff.propertyAdded(Compactor.java:115)
   at 
 org.apache.jackrabbit.oak.plugins.memory.EmptyNodeState.compareAgainstEmptyState(EmptyNodeState.java:155)
   at 
 org.apache.jackrabbit.oak.plugins.segment.Compactor$CompactDiff.childNodeAdded(Compactor.java:137)
   at 
 org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:488)
   at 
 org.apache.jackrabbit.oak.plugins.segment.Compactor.process(Compactor.java:92)
   at 
 org.apache.jackrabbit.oak.plugins.segment.Compactor.compact(Compactor.java:97)
   at 
 org.apache.jackrabbit.oak.plugins.backup.FileStoreBackup.backup(FileStoreBackup.java:81)
   at 
 org.apache.jackrabbit.oak.run.BackupTest.testInMemoryBackup(BackupTest.java:55)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at 

[jira] [Updated] (OAK-2605) Support for additional encodings needed in ReversedLinesFileReader

2015-03-10 Thread Leandro Reis (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leandro Reis updated OAK-2605:
--
Priority: Major  (was: Critical)

 Support for additional encodings needed in ReversedLinesFileReader
 --

 Key: OAK-2605
 URL: https://issues.apache.org/jira/browse/OAK-2605
 Project: Jackrabbit Oak
  Issue Type: Bug
Affects Versions: 1.1.7
 Environment: Windows 2012 R2 Japanese
 Windows 2012 R2 Korean
 Windows 2012 R2 Simplified Chinese
 Windows 2012 R2 Traditional Chinese
Reporter: Leandro Reis

 I'm working on a product that uses Commons IO via Jackrabbit Oak. In the 
 process of testing the launch of such product on Japanese Windows 2012
 Server R2, I came across the following exception: 
 (java.io.UnsupportedEncodingException: Encoding windows-31j is not supported 
 yet (feel free to submit a patch))
 windows-31j is the IANA name for Windows code page 932 (Japanese), and is 
 returned by Charset.defaultCharset(), used in 
 org.apache.commons.io.input.ReversedLinesFileReader [0].
 A patch for this issue was provided in 
 https://issues.apache.org/jira/browse/IO-471 .  
 It also includes changes needed to support Chinese Simplified, Chinese 
 Traditional and Korean.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1453) MongoMK failover support for replica sets (esp. shards)

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1453:
---
Fix Version/s: (was: 1.2)
   1.4

 MongoMK failover support for replica sets (esp. shards)
 ---

 Key: OAK-1453
 URL: https://issues.apache.org/jira/browse/OAK-1453
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Michael Marth
  Labels: production, resilience
 Fix For: 1.4


 With OAK-759 we have introduced replica support in MongoMK. I think we still 
 need to address resilience for failover from primary to secondary:
 Consider a case where Oak writes to the primary. Replication to the secondary 
 is ongoing. During that period the primary goes down and the secondary becomes 
 primary. There could be some half-replicated MVCC revisions, which need to be 
 either discarded or ignored after the failover.
 This might not be an issue if there is only one shard, as the commit root is 
 written last (and replicated last).
 But with 2 shards the replication state of these 2 shards could be 
 inconsistent. Oak needs to handle such a situation without falling over.
 If we can detect a Mongo failover we could query Mongo which revisions are 
 fully replicated to the new primary and discard the potentially 
 half-replicated revisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1451) Expose index size

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1451:
---
Fix Version/s: (was: 1.2)
   1.3.1

 Expose index size
 -

 Key: OAK-1451
 URL: https://issues.apache.org/jira/browse/OAK-1451
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Michael Marth
Priority: Minor
  Labels: production, resilience
 Fix For: 1.3.1


 At the moment some MKs' disk usage comes largely from the indexes. Maybe we 
 can do something about this, but in the meantime it would be helpful if we 
 could expose the index sizes (number of indexed nodes) via JMX so that they 
 could be easily monitored.
 This would also be helpful to see at which point an index becomes useless (if 
 the majority of content nodes are indexed one might as well not have an index)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2246) UUID collision check does not work in transient space

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2246:
---
Fix Version/s: (was: 1.2)
   1.4

 UUID collision check does not work in transient space
 

 Key: OAK-2246
 URL: https://issues.apache.org/jira/browse/OAK-2246
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: jcr
Affects Versions: 1.1.1
Reporter: Tobias Bocanegra
Assignee: Chetan Mehrotra
 Fix For: 1.4


 I think OAK-1037 broke the system view import.
 test case:
 1. create a new node with a uuid (referenceable, or new user)
 2. import systemview with IMPORT_UUID_COLLISION_REPLACE_EXISTING
 3. save()
 result:
 {noformat}
 javax.jcr.nodetype.ConstraintViolationException: OakConstraint0030: 
 Uniqueness constraint violated at path [/] for one of the property in 
 [jcr:uuid] having value e358efa4-89f5-3062-b10d-d7316b65649e
 {noformat}
 expected:
 * imported content should replace the existing node - even in transient space.
 note:
 * if you perform a save() after step 1, everything works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1368) Only one Observer per session

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-1368:
---
Fix Version/s: (was: 1.2)
   1.4

 Only one Observer per session
 -

 Key: OAK-1368
 URL: https://issues.apache.org/jira/browse/OAK-1368
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: jcr
Reporter: Jukka Zitting
Assignee: Michael Dürig
  Labels: observation
 Fix For: 1.4


 As mentioned in OAK-1332, a case where a single session registers multiple 
 observation listeners can be troublesome if events are delivered concurrently 
 to all of those listeners, since in such a case the {{NamePathMapper}} and 
 other session internals will likely suffer from lock contention.
 A good way to avoid this would be to have all the listeners registered within 
 a single session be tied to a single {{Observer}} and thus processed 
 sequentially.
 Doing so would also improve performance as the listeners could leverage the 
 same content diff. As the listeners come from a single session and thus 
 presumably from a single client, there's no need to worry about one client 
 blocking the work of another.
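
The idea can be sketched as a single observer that receives the diff once and 
feeds all of the session's listeners sequentially; the names below are 
illustrative, not the Oak API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed design: all listeners registered by one session hang
// off a single observer, which delivers each change to the listeners one
// after the other. The diff (here just a string) is computed once and shared,
// and sequential delivery avoids lock contention on session internals.
public class SessionObserver {

    interface Listener {
        void onEvent(String diff);
    }

    private final List<Listener> listeners = new ArrayList<>();

    void addListener(Listener listener) {
        listeners.add(listener);
    }

    /** Called once per revision; all listeners see the same diff in turn. */
    void contentChanged(String diff) {
        for (Listener listener : listeners) {  // sequential, single-threaded
            listener.onEvent(diff);
        }
    }

    public static void main(String[] args) {
        SessionObserver observer = new SessionObserver();
        List<String> received = new ArrayList<>();
        observer.addListener(diff -> received.add("a:" + diff));
        observer.addListener(diff -> received.add("b:" + diff));
        observer.contentChanged("r1");
        System.out.println(received); // [a:r1, b:r1]
    }
}
```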



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1649) NamespaceException: OakNamespace0005 on save, after replica crash

2015-03-10 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-1649:
-
Fix Version/s: (was: 1.2)
   1.4

moving this to 1.4 - this is a rather old issue which would require being 
reproduced on a replica set first

 NamespaceException: OakNamespace0005 on save, after replica crash
 -

 Key: OAK-1649
 URL: https://issues.apache.org/jira/browse/OAK-1649
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 0.19
 Environment: 0.20-SNAPSHOT as of March 31
Reporter: Stefan Egli
 Fix For: 1.4

 Attachments: OverwritePropertyTest.java


 After running a test that creates a couple thousand nodes and overwrites the 
 same properties a couple thousand times, then crashing the replica primary, 
 the exception below occurs.
 The exception can be reproduced on the db and with the test case I'll attach 
 in a minute.
 {code}javax.jcr.NamespaceException: OakNamespace0005: Namespace modification 
 not allowed: rep:nsdata
   at 
 org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:227)
   at 
 org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:212)
   at 
 org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:679)
   at 
org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:553)
  at org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:417)
  at org.apache.jackrabbit.oak.jcr.session.SessionImpl$8.perform(SessionImpl.java:1)
  at org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:308)
  at org.apache.jackrabbit.oak.jcr.session.SessionImpl.perform(SessionImpl.java:127)
  at org.apache.jackrabbit.oak.jcr.session.SessionImpl.save(SessionImpl.java:414)
  at org.apache.jackrabbit.oak.run.OverwritePropertyTest.testReplicaCrashResilience(OverwritePropertyTest.java:74)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
  at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
  at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
  at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
  at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
  at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
  at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
  at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
  at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: org.apache.jackrabbit.oak.api.CommitFailedException: OakNamespace0005: Namespace modification not allowed: rep:nsdata
  at org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.modificationNotAllowed(NamespaceEditor.java:122)
  at org.apache.jackrabbit.oak.plugins.name.NamespaceEditor.childNodeChanged(NamespaceEditor.java:140)
  at org.apache.jackrabbit.oak.spi.commit.CompositeEditor.childNodeChanged(CompositeEditor.java:122)
  at org.apache.jackrabbit.oak.spi.commit.EditorDiff.childNodeChanged(EditorDiff.java:143)
  at 
 

[jira] [Updated] (OAK-2500) checkDeepHistory/fixDeepHistory/prepareDeepHistory for oak-mongo.js

2015-03-10 Thread Stefan Egli (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Egli updated OAK-2500:
-
Fix Version/s: 1.0.13

 checkDeepHistory/fixDeepHistory/prepareDeepHistory for oak-mongo.js
 ---

 Key: OAK-2500
 URL: https://issues.apache.org/jira/browse/OAK-2500
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: run
Affects Versions: 1.0.8
Reporter: Stefan Egli
 Fix For: 1.0.13

 Attachments: oak-mongo-mod.js


 The oak-mongo.js currently contains checkHistory/fixHistory which are 
 cleaning up 'dangling revisions/split-documents' on a particular path.
 Now it would be good to have a command that goes through the entire 
 repository and checks/fixes these dangling revisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2582) RDB: improve memory cache handling

2015-03-10 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-2582:

Fix Version/s: 1.0.13

 RDB: improve memory cache handling
 --

 Key: OAK-2582
 URL: https://issues.apache.org/jira/browse/OAK-2582
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: rdbmk
Affects Versions: 1.0.11, 1.1.6
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.1.8, 1.0.13


 Improve memory cache handling:
 - invalidateCache should just mark cache entries as to be revalidated
 - to-be revalidated cache entries can be used for conditional retrieval from 
 DB



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2246) UUID collision check does not work in transient space

2015-03-10 Thread angela (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355164#comment-14355164
 ] 

angela commented on OAK-2246:
-

[~anchela] to look into whether OAK-1037 is the culprit.

 UUID collision check does not work in transient space
 

 Key: OAK-2246
 URL: https://issues.apache.org/jira/browse/OAK-2246
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: jcr
Affects Versions: 1.1.1
Reporter: Tobias Bocanegra
Assignee: Chetan Mehrotra
 Fix For: 1.4


 I think OAK-1037 broke the system view import.
 test case:
 1. create a new node with a uuid (referenceable, or new user)
 2. import systemview with IMPORT_UUID_COLLISION_REPLACE_EXISTING
 3. save()
 result:
 {noformat}
 javax.jcr.nodetype.ConstraintViolationException: OakConstraint0030: 
 Uniqueness constraint violated at path [/] for one of the property in 
 [jcr:uuid] having value e358efa4-89f5-3062-b10d-d7316b65649e
 {noformat}
 expected:
 * imported content should replace the existing node - even in transient space.
 note:
 * if you perform a save() after step 1, everything works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2539) SQL2 query not working with filter (s.[stringa] = 'a' OR CONTAINS(s.[stringb], 'b'))

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2539:
---
Fix Version/s: 1.1.8

 SQL2 query not working with filter (s.[stringa] = 'a' OR 
 CONTAINS(s.[stringb], 'b'))
 

 Key: OAK-2539
 URL: https://issues.apache.org/jira/browse/OAK-2539
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Calvin Wong
Assignee: Davide Giannella
 Fix For: 1.1.8


 Create node /content/usergenerated/qtest with jcr:primaryType nt:unstructured.
 Add 2 String properties: stringa = a, stringb = b.
 Use the query tool in CRX/DE to do a SQL2 search:
 This search will find qtest:
 SELECT * FROM [nt:base] AS s WHERE 
 ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'a' OR 
 CONTAINS(s.[stringb], 'b'))
 This search will not find qtest:
 SELECT * FROM [nt:base] AS s WHERE 
 ISDESCENDANTNODE([/content/usergenerated/]) AND (s.[stringa] = 'x' OR 
 CONTAINS(s.[stringb], 'b'))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1327) Cleanup NodeStore and MK implementations

2015-03-10 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-1327:

Fix Version/s: (was: 1.2)
   1.4

 Cleanup NodeStore and MK implementations
 

 Key: OAK-1327
 URL: https://issues.apache.org/jira/browse/OAK-1327
 Project: Jackrabbit Oak
  Issue Type: Wish
  Components: core, mk, segmentmk
Reporter: angela
 Fix For: 1.4

 Attachments: OAK-1327.patch


 as discussed during the oak-call today, i would like to clean up the code base 
 before we officially release OAK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2480) Incremental (FileStore)Backup copies the entire source instead of just the delta

2015-03-10 Thread Stefan Egli (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355201#comment-14355201
 ] 

Stefan Egli commented on OAK-2480:
--

[~alex.parvulescu], what's your take on this one, something for 1.2 or 1.4 ? 
thx.

 Incremental (FileStore)Backup copies the entire source instead of just the 
 delta
 

 Key: OAK-2480
 URL: https://issues.apache.org/jira/browse/OAK-2480
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: run
Affects Versions: 1.1.5
Reporter: Stefan Egli
 Attachments: IncrementalBackupTest.java, 
 oak-2480.incremental.partial.patch


 Running the FileStoreBackup (in oak-run) sequentially should correspond to an 
 incremental backup. This implies the expectation that the incremental backup 
 is very resource-friendly, i.e. that it only adds the delta/diff that changed 
 since the last backup. Instead, what can be seen at the moment is that it 
 copies the entire source-store again on each 'incremental' backup.
 Tested with the latest trunk snapshot.
 Suspecting the problem to be as follows: on the first backup the 
 FileStoreBackup stores a checkpoint created in the source-store and adds it 
 as a property checkpoint to the backup root node, besides the actual backup 
 which is stored in '/root'. 
 On subsequent incremental runs, the backup tries to retrieve said property 
 checkpoint from the backup and uses that in the compactor to do the diff 
 based upon.
 Now the problem seems to be that in Compactor.compact it goes to call 
 process(), which does a writer.writeNode(before) (where before is the 
 checkpoint in the origin store but writer is a writer of the backup store). 
 And in this SegmentWriter.writeNode() it fails to find the 'before' segment, 
 and thus traverses the entire tree and copies it from the origin to the 
 backup.
 So the problem looks to be in the area where it assumes to find this 
 'checkpoint-before' in the backup but that's not the case.
 So a solution would have been to not do the diff between the checkpoint and 
 the current origin-head, but between the backup-head and the origin-head 
 instead. Now apparently this was not the intention though, as that would mean 
 to read through the entire backup for doing the diffing - and that would be 
 inefficient...
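The intended delta computation can be sketched generically: an incremental backup should only copy entries that differ between the state captured by the stored checkpoint and the current head. The sketch below is hypothetical and simplified (plain maps stand in for node stores; {{diff}} is not Oak's Compactor API):

```java
import java.util.Map;
import java.util.TreeMap;

public class CheckpointBackupSketch {

    /** Entries present/changed in 'after' relative to 'before' (the checkpoint). */
    static Map<String, String> diff(Map<String, String> before, Map<String, String> after) {
        Map<String, String> delta = new TreeMap<>();
        for (Map.Entry<String, String> e : after.entrySet()) {
            if (!e.getValue().equals(before.get(e.getKey()))) {
                delta.put(e.getKey(), e.getValue()); // changed or added since the checkpoint
            }
        }
        return delta;
    }

    public static void main(String[] args) {
        Map<String, String> checkpoint = Map.of("/a", "1", "/b", "2"); // state at last backup
        Map<String, String> head = Map.of("/a", "1", "/b", "3", "/c", "4");
        System.out.println(diff(checkpoint, head)); // {/b=3, /c=4} -- only the delta is copied
    }
}
```

If the 'before' state cannot be resolved on the backup side (as described above for the missing checkpoint segment), the diff degenerates to copying everything, which matches the observed behavior.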



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2165) Observation tests sporadically failing

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2165:
---
Labels: CI buildbot observation test  (was: observation test)

 Observation tests sporadically failing
 --

 Key: OAK-2165
 URL: https://issues.apache.org/jira/browse/OAK-2165
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: jcr
 Environment: http://ci.apache.org/builders/oak-trunk-win7/
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: CI, buildbot, observation, test

 {{JackrabbitNodeTest#testRenameEventHandling}} fails sporadically on the 
 Apache buildbot with missing events (e.g. 
 http://ci.apache.org/builders/oak-trunk-win7/builds/642). 
 Same holds for other tests in the {{ObservationIT}} suite. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-2587) observation processing too eager/unfair under load

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-2587:
--

Assignee: Michael Dürig

 observation processing too eager/unfair under load
 --

 Key: OAK-2587
 URL: https://issues.apache.org/jira/browse/OAK-2587
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Affects Versions: 1.0.12
Reporter: Stefan Egli
Assignee: Michael Dürig
Priority: Critical
 Fix For: 1.1.8

 Attachments: OAK-2587.patch


 The current implementation of oak's observation event processing is too eager 
 and thus unfair under load scenarios. 
 Consider having many (eg 200) Eventlisteners but only a relatively small 
 threadpool (eg 5 as is the default in sling) backing them. When processing 
 changes for a particular BackgroundObserver, that one (in 
 BackgroundObserver.completionHandler.call) currently processes *all changes 
 irrespective of how many there are* - ie it is *eager*. Only once that 
 BackgroundObserver processed all changes will it let go and 'pass the thread' 
 to the next BackgroundObserver. Now if for some reason changes (ie commits) 
 are coming in while a BackgroundObserver is busy processing an earlier 
 change, this will lengthen that while loop. As a result the remaining (eg 
 195) *EventListeners will have to wait for a potentially long time* until 
 it's their turn - thus *unfair*.
 Now combine the above pattern with a scenario where mongo is used as the 
 underlying store. In that case in order to remain highly performant it is 
 important that the diffs (for compareAgainstBaseState) are served from the 
 MongoDiffCache for as many cases as possible to avoid doing a round-trip to 
 mongoD. The unfairness in the BackgroundObservers can now result in a large 
 delay between the 'first' observers getting the event and the 'last' one (of 
 those 200). When this delay increases due to a burst in the load, there is a 
 risk of the diffs to no longer be in the cache - those last observers are 
 basically kicked out of the (diff) cache. Once this happens, *the situation 
 gets even worse*, since now you have yet new commits coming in and old 
 changes still having to be processed - all of which are being processed 
 through in 'stripes of 5 listeners' before the next one gets a chance. This 
 at some point results in a totally inefficient cache behavior, or in other 
 words, at some point all diffs have to be read from mongoD.
 To avoid this there are probably a number of options - a few that come to 
 mind:
 * increase thread-pool to match or be closer to the number of listeners (but 
 this has other disadvantages, eg cost of thread-switching)
 * make BackgroundObservers fairer by limiting the number of changes they 
 process before they give others a chance to be served by the pool.
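The second option can be sketched as a queue drain with a per-turn cap: the observer processes at most N changes, then yields the pooled thread so other listeners get served. This is a hypothetical illustration, not the actual BackgroundObserver API:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class FairDrainDemo {

    /** Process at most 'limit' queued changes, then yield the pooled thread. */
    static int drain(Deque<String> queue, int limit) {
        int processed = 0;
        while (processed < limit && !queue.isEmpty()) {
            String change = queue.removeFirst();
            // ... deliver 'change' to this observer's listener here ...
            processed++;
        }
        // The caller reschedules this observer if the queue is still non-empty,
        // instead of looping until the queue drains completely (the eager behavior).
        return processed;
    }

    public static void main(String[] args) {
        Deque<String> changes =
                new ArrayDeque<>(Arrays.asList("c1", "c2", "c3", "c4", "c5"));
        System.out.println(drain(changes, 2)); // 2 -> yields after two changes
        System.out.println(changes.size());    // 3 -> the rest wait for the next turn
    }
}
```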



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2492) Flag Document having many children

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2492:
---
Fix Version/s: (was: 1.1.8)
   (was: 1.2)
   1.4

 Flag Document having many children
 --

 Key: OAK-2492
 URL: https://issues.apache.org/jira/browse/OAK-2492
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.4


 Current DocumentMK logic, while performing a diff for child nodes, works as 
 below:
 # Get children for the _before_ revision up to MANY_CHILDREN_THRESHOLD (which 
 defaults to 50). Note that the current logic of fetching child nodes 
 also adds the children {{NodeDocument}}s to the {{Document}} cache and reads the 
 complete Document for those children
 # Get children for the _after_ revision with the same limit
 # If the child list is complete, it does a direct diff on the fetched 
 children
 # If the list is not complete, i.e. the number of children exceeds the 
 threshold, it falls back to a query-based diff (also see OAK-1970)
 So in those cases where the number of children is large, all the work done in #1 
 above is wasted and should be avoided. To do that we can mark parent 
 nodes which have many children via a special flag like {{_manyChildren}}. Once 
 such nodes are marked, the diff logic can check the flag and skip the work 
 done in #1
 This is kind of similar to way we mark nodes which have at least one child 
 (OAK-1117)
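The proposed short-circuit can be sketched as below; {{Doc}}, {{manyChildren}} and {{diffStrategy}} are hypothetical stand-ins for the flag check, not Oak's actual DocumentMK code:

```java
import java.util.ArrayList;
import java.util.List;

public class ManyChildrenFlagDemo {
    static final int MANY_CHILDREN_THRESHOLD = 50;

    // Hypothetical parent document: only the flag matters for this sketch.
    static class Doc {
        boolean manyChildren;                       // stand-in for a '_manyChildren' marker
        List<String> children = new ArrayList<>();  // bounded fetch result
    }

    /** Skip the bounded child fetch entirely when the flag is set. */
    static String diffStrategy(Doc parent) {
        if (parent.manyChildren) {
            return "query-based diff";              // go straight to the OAK-1970 path
        }
        return parent.children.size() <= MANY_CHILDREN_THRESHOLD
                ? "direct diff" : "query-based diff";
    }

    public static void main(String[] args) {
        Doc flagged = new Doc();
        flagged.manyChildren = true;
        System.out.println(diffStrategy(flagged));  // query-based diff, no wasted fetch
    }
}
```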



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-2399) Custom scorer for modifying score per documents

2015-03-10 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller reassigned OAK-2399:
---

Assignee: Thomas Mueller  (was: Chetan Mehrotra)

 Custom scorer for modifying score per documents
 ---

 Key: OAK-2399
 URL: https://issues.apache.org/jira/browse/OAK-2399
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: oak-lucene
Reporter: Rishabh Maurya
Assignee: Thomas Mueller
 Fix For: 1.1.8, 1.2

 Attachments: OAK-2399_scorer.patch


 We have search enhancement requests based on search result relevance, like:
 1. boosting the score of recently modified documents.
 2. boosting documents which are created/last updated by the current session 
 user (or boosting on the basis of a specific field value).
 3. boosting documents with a field value in a certain range.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2477) Move suggester specific config to own configuration node

2015-03-10 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2477:

Assignee: Tommaso Teofili

 Move suggester specific config to own configuration node
 

 Key: OAK-2477
 URL: https://issues.apache.org/jira/browse/OAK-2477
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-lucene
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 1.1.8, 1.2


 Currently the suggester configuration is controlled via properties defined on 
 the main config / props node, but it would be good to have a dedicated place to 
 configure the whole suggest feature, so as not to mix it up with the 
 configuration of other features / parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-495) Run TCK with local namespace remapping

2015-03-10 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-495:
---
Fix Version/s: (was: 1.2)
   1.4

 Run TCK with local namespace remapping
 --

 Key: OAK-495
 URL: https://issues.apache.org/jira/browse/OAK-495
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: it
Reporter: angela
  Labels: test
 Fix For: 1.4


 jukka suggested that we run the TCK with session local namespace
 remappings in order to be able to detect more issues we might have
 with usage of oak vs jcr names.
 +1 from my side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1736) Support for Faceted Search

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-1736:
---
Fix Version/s: (was: 1.2)
   1.4

 Support for Faceted Search
 --

 Key: OAK-1736
 URL: https://issues.apache.org/jira/browse/OAK-1736
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: oak-lucene, oak-solr, query
Reporter: Thomas Mueller
Assignee: Tommaso Teofili
 Fix For: 1.4


 Details to be defined.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-260) Avoid the Turkish Locale Problem

2015-03-10 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller updated OAK-260:
---
Fix Version/s: (was: 1.2)
   1.4

 Avoid the Turkish Locale Problem
 --

 Key: OAK-260
 URL: https://issues.apache.org/jira/browse/OAK-260
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, jcr
Reporter: Thomas Mueller
Assignee: Thomas Mueller
Priority: Minor
 Fix For: 1.4


 We currently use String.toUpperCase() and String.toLowerCase() and in some 
 cases where it is not appropriate. When running using the Turkish profile, 
 this will not work as expected. See also 
 http://mattryall.net/blog/2009/02/the-infamous-turkish-locale-bug
 Problematic are String.toUpperCase(), String.toLowerCase(). 
 String.equalsIgnoreCase(..) isn't a problem.
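A minimal demonstration of the problem and the usual fix (passing an explicit locale such as Locale.ROOT to the case-mapping methods):

```java
import java.util.Locale;

public class TurkishLocaleDemo {
    public static void main(String[] args) {
        // Under the Turkish locale, lowercase 'i' upper-cases to dotted capital
        // 'İ' (U+0130), so locale-sensitive case mapping breaks ASCII
        // keyword/name comparisons.
        String upperTr = "mixin".toUpperCase(Locale.forLanguageTag("tr"));
        System.out.println(upperTr);     // MİXİN (contains U+0130, not 'I')

        // Locale.ROOT gives the locale-independent ASCII result.
        String upperRoot = "mixin".toUpperCase(Locale.ROOT);
        System.out.println(upperRoot);   // MIXIN

        // equalsIgnoreCase uses per-character case folding and is safe here.
        System.out.println("mixin".equalsIgnoreCase("MIXIN")); // true
    }
}
```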



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-1956) Set correct OSGi package export version

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-1956:
--

Assignee: Michael Dürig

 Set correct OSGi package export version
 ---

 Key: OAK-1956
 URL: https://issues.apache.org/jira/browse/OAK-1956
 Project: Jackrabbit Oak
  Issue Type: Task
Reporter: Michael Dürig
Assignee: Michael Dürig
Priority: Critical
 Fix For: 1.2


 This issue serves as a reminder to set the correct OSGi package export 
 versions before we release 1.2.
 OAK-1536 added support for the BND baseline feature: the baseline.xml files 
 in the target directories should help us figuring out the correct versions. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-1515) Implement low disk space and low memory monitoring

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig resolved OAK-1515.

Resolution: Won't Fix

Won't fix as discussed

 Implement low disk space and low memory monitoring
 --

 Key: OAK-1515
 URL: https://issues.apache.org/jira/browse/OAK-1515
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core
Reporter: Michael Dürig
Assignee: Michael Dürig
Priority: Minor
  Labels: monitoring
 Fix For: 1.2


 We should implement these monitoring for those MKs where it makes sense. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (OAK-2605) Support for additional encodings needed in ReversedLinesFileReader

2015-03-10 Thread Leandro Reis (JIRA)
Leandro Reis created OAK-2605:
-

 Summary: Support for additional encodings needed in 
ReversedLinesFileReader
 Key: OAK-2605
 URL: https://issues.apache.org/jira/browse/OAK-2605
 Project: Jackrabbit Oak
  Issue Type: Bug
Affects Versions: 1.1.7
 Environment: Windows 2012 R2 Japanese
Windows 2012 R2 Korean
Windows 2012 R2 Simplified Chinese
Windows 2012 R2 Traditional Chinese
Reporter: Leandro Reis
Priority: Critical


I'm working on a product that uses Commons IO via Jackrabbit Oak. In the
process of testing the launch of such product on Japanese Windows 2012
Server R2, I came across the following exception:
(java.io.UnsupportedEncodingException: Encoding windows-31j is not
supported yet (feel free to submit a patch))

windows-31j is the IANA name for Windows code page 932 (Japanese), and
is returned by Charset.defaultCharset(), used in 
org.apache.commons.io.input.ReversedLinesFileReader [0].

This issue can be resolved by adding a check for
'windows-31j' to ReversedLinesFileReader.

A patch for this issue was provided in 
https://issues.apache.org/jira/browse/IO-471 .  It also includes changes needed 
to support Chinese Simplified, Chinese Traditional and Korean.
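A quick way to inspect the charsets involved (the default charset printed depends on the host platform; the limitation reported above was in ReversedLinesFileReader's own per-encoding handling, not in the JDK's charset support):

```java
import java.nio.charset.Charset;

public class CharsetDemo {
    public static void main(String[] args) {
        // On Japanese Windows the platform default reports as windows-31j
        // (code page 932); on other systems it will differ.
        System.out.println(Charset.defaultCharset().name());

        // The JDK can look up charsets by IANA name or alias; whether a given
        // encoding like windows-31j is available depends on the installed JDK.
        System.out.println(Charset.isSupported("windows-31j"));
    }
}
```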





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-1449) Troubleshooting tool to inspect hidden items

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth resolved OAK-1449.

   Resolution: Won't Fix
Fix Version/s: (was: 1.2)

 Troubleshooting tool to inspect hidden items
 

 Key: OAK-1449
 URL: https://issues.apache.org/jira/browse/OAK-1449
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Michael Marth
Priority: Minor
  Labels: production, resilience, tools

 For troubleshooting borked instances we should have a way to inspect (and 
 write to?) the complete tree, including items that are hidden. Permission 
 tree, version tree, indexes, etc come to mind.
 We need to design this tool in a way that does not compromise security, but 
 on the other hand I think we need write capabilities in order to fix broken 
 items if needed. Not sure, how to best do this...
 The tool ideally can be limited to look at
 * specific paths only and
 * ranges of MVCC revisions



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-1450) Run repair tool if repo cannot start (automatically or explicitly)

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth resolved OAK-1450.

Resolution: Won't Fix

 Run repair tool if repo cannot start (automatically or explicitly)
 --

 Key: OAK-1450
 URL: https://issues.apache.org/jira/browse/OAK-1450
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: segmentmk
Reporter: Michael Marth
Priority: Minor
  Labels: production, resilience, tools
 Fix For: 1.2


 If the repo does not come up should we automatically run the repair tool 
 (OAK-1446) and then retry...? Not sure about this one. Maybe starting the 
 repo again with an explicit flag --repair would be better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2605) Support for additional encodings needed in ReversedLinesFileReader

2015-03-10 Thread Leandro Reis (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356107#comment-14356107
 ] 

Leandro Reis edited comment on OAK-2605 at 3/11/15 2:17 AM:


[~mduerig], your patch correctly includes my changes. There are 3 unit tests 
[*] for the ReversedLinesFileReader class, I uploaded a patch to 
https://issues.apache.org/jira/browse/IO-471 that adds tests for the 4 
encodings I added support for, as well as test files. 

[*]
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestSimple.java
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestParamFile.java
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestParamBlockSize.java

test files - 
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/resources/



was (Author: lreis):
@Michael Dürig, your patch correctly includes my changes. There are 3 unit 
tests [*] for the ReversedLinesFileReader class, I uploaded a patch to 
https://issues.apache.org/jira/browse/IO-471 that adds tests for the 4 
encodings I added support for, as well as test files. 

[*]
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestSimple.java
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestParamFile.java
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestParamBlockSize.java

test files - 
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/resources/


 Support for additional encodings needed in ReversedLinesFileReader
 --

 Key: OAK-2605
 URL: https://issues.apache.org/jira/browse/OAK-2605
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Affects Versions: 1.1.7
 Environment: Windows 2012 R2 Japanese
 Windows 2012 R2 Korean
 Windows 2012 R2 Simplified Chinese
 Windows 2012 R2 Traditional Chinese
Reporter: Leandro Reis
Assignee: Michael Dürig
 Fix For: 1.1.8

 Attachments: OAK-2605.patch


 I'm working on a product that uses Commons IO via Jackrabbit Oak. In the 
 process of testing the launch of such product on Japanese Windows 2012
 Server R2, I came across the following exception: 
 (java.io.UnsupportedEncodingException: Encoding windows-31j is not supported 
 yet (feel free to submit a patch))
 windows-31j is the IANA name for Windows code page 932 (Japanese), and is 
 returned by Charset.defaultCharset(), used in 
 org.apache.commons.io.input.ReversedLinesFileReader [0].
 A patch for this issue was provided in 
 https://issues.apache.org/jira/browse/IO-471 .  
 It also includes changes needed to support Chinese Simplified, Chinese 
 Traditional and Korean.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2605) Support for additional encodings needed in ReversedLinesFileReader

2015-03-10 Thread Leandro Reis (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356107#comment-14356107
 ] 

Leandro Reis edited comment on OAK-2605 at 3/11/15 2:16 AM:


@Michael Dürig, your patch correctly includes my changes. There are 3 unit 
tests [*] for the ReversedLinesFileReader class, I uploaded a patch to 
https://issues.apache.org/jira/browse/IO-471 that adds tests for the 4 
encodings I added support for, as well as test files. 

[*]
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestSimple.java
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestParamFile.java
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestParamBlockSize.java

test files - 
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/resources/



was (Author: lreis):
@Michael Dürig, your patch correctly includes my changes. There are 3 unit 
tests (*) for the ReversedLinesFileReader class, I uploaded a patch to 
https://issues.apache.org/jira/browse/IO-471 that adds tests for the 4 
encodings I added support for, as well as test files. 

(*)
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestSimple.java
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestParamFile.java
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestParamBlockSize.java

test files - 
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/resources/


 Support for additional encodings needed in ReversedLinesFileReader
 --

 Key: OAK-2605
 URL: https://issues.apache.org/jira/browse/OAK-2605
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Affects Versions: 1.1.7
 Environment: Windows 2012 R2 Japanese
 Windows 2012 R2 Korean
 Windows 2012 R2 Simplified Chinese
 Windows 2012 R2 Traditional Chinese
Reporter: Leandro Reis
Assignee: Michael Dürig
 Fix For: 1.1.8

 Attachments: OAK-2605.patch


 I'm working on a product that uses Commons IO via Jackrabbit Oak. In the 
 process of testing the launch of such product on Japanese Windows 2012
 Server R2, I came across the following exception: 
 (java.io.UnsupportedEncodingException: Encoding windows-31j is not supported 
 yet (feel free to submit a patch))
 windows-31j is the IANA name for Windows code page 932 (Japanese), and is 
 returned by Charset.defaultCharset(), used in 
 org.apache.commons.io.input.ReversedLinesFileReader [0].
 A patch for this issue was provided in 
 https://issues.apache.org/jira/browse/IO-471 .  
 It also includes changes needed to support Chinese Simplified, Chinese 
 Traditional and Korean.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (OAK-2557) VersionGC uses way too much memory if there is a large pile of garbage

2015-03-10 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352867#comment-14352867
 ] 

Chetan Mehrotra edited comment on OAK-2557 at 3/11/15 4:25 AM:
---

After an offline discussion with [~mreutegg] we came to the following conclusions

# Deletion logic has to work like creation logic and perform deletion from 
bottom to top. Creation currently works from top to bottom, i.e. it ensures 
that parents get created first and children thereafter. So deletion logic has to 
complement that
# Current logic also has a potential issue: if the system performing GC 
crashes in between, it might lead to a state where a parent got 
removed before its child, and such a child document can never be GCed. So 
as a fix we should first sort the batch via {{PathComparator}} in reverse and 
then perform deletion from child to parent. -{color:brown}Open a new issue for 
that{color}- OAK-2603
# For large deletions (like the current case) we make use of {{ExternalSort}}, 
where sorting is performed on disk and the paths are read back from a file. This 
would make use of all the support developed for Blob GC in 
{{MarkSweepGarbageCollector}}

All in all this will not be the simple fix I initially thought :(
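The child-before-parent ordering in point #2 can be illustrated with a plain reverse lexicographic sort: since an ancestor path is always a strict prefix of its descendants, it sorts earlier in ascending order, so reversing the order puts every child before its parent. (Oak's actual PathComparator sorts by depth; this is a simplified stand-in.)

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ReverseDeleteOrderDemo {
    public static void main(String[] args) {
        List<String> paths = new ArrayList<>(Arrays.asList(
                "/a", "/a/b/c", "/a/b", "/d"));

        // Descending order: every descendant path now precedes its ancestors,
        // so deleting in list order removes children before their parents.
        paths.sort(Comparator.reverseOrder());
        System.out.println(paths); // [/d, /a/b/c, /a/b, /a]
    }
}
```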


was (Author: chetanm):
After an offline discussion with [~mreutegg] we came to the following conclusions:

# Deletion logic has to mirror creation logic and perform deletions bottom-up. 
Creation currently works top-down, i.e. it ensures that a parent gets created 
first and its children thereafter, so deletion has to complement that.
# The current logic also has a potential issue: if the system performing GC 
crashes in between, it can leave a state where a parent was removed before its 
child, and such a child document can then never be GCed. As a fix we should 
first sort the batch via {{PathComparator}} in reverse and then perform 
deletion from child to parent. {color:brown}Open a new issue for 
that{color}
# For large deletions (like the current case) we can make use of 
{{ExternalSort}}, where sorting is performed on disk and the paths are read 
back from a file. This would reuse the support developed for Blob GC in 
{{MarkSweepGarbageCollector}}

All in all this will not be the simple fix I initially thought :(

 VersionGC uses way too much memory if there is a large pile of garbage
 --

 Key: OAK-2557
 URL: https://issues.apache.org/jira/browse/OAK-2557
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.11
Reporter: Stefan Egli
Assignee: Chetan Mehrotra
Priority: Blocker
 Fix For: 1.0.13, 1.2

 Attachments: OAK-2557.patch


 It has been noticed on a system where revision-gc (the 
 VersionGarbageCollector of mongomk) did not run for a few days (so as not to 
 interfere with some tests/large bulk operations) that such a large pile of 
 garbage had accumulated that the following code
 {code}
 VersionGarbageCollector.collectDeletedDocuments
 {code}
 creates, in its for loop, such a large list of NodeDocuments to delete 
 (docIdsToDelete) that it uses up too much memory, causing the JVM's GC to 
 constantly spin in full GCs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2603) Failure in one of the batch in VersionGC might lead to orphaned nodes

2015-03-10 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-2603.
--
Resolution: Fixed

Fixed the issue by sorting the list before passing it to {{DocumentStore}} for 
deletion. This ensures that children get deleted before their parents.
* trunk - http://svn.apache.org/r1665758
* 1.0 - http://svn.apache.org/r1665759

 Failure in one of the batch in VersionGC might lead to orphaned nodes
 -

 Key: OAK-2603
 URL: https://issues.apache.org/jira/browse/OAK-2603
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.1.8, 1.0.13


 VersionGC logic currently performs deletion of nodes in batches. For GC to 
 work properly, NodeDocuments should always be removed bottom-up, i.e. a 
 parent node should be removed *after* its children have been removed.
 Currently the GC logic deletes NodeDocuments in an undefined order. In that 
 mode, if one of the batches fails, it is possible that a parent got deleted 
 while its child did not. 
 In the next run such a child node would not be recognized as a deleted node 
 because its commit root would not be found.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2605) Support for additional encodings needed in ReversedLinesFileReader

2015-03-10 Thread Leandro Reis (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356107#comment-14356107
 ] 

Leandro Reis commented on OAK-2605:
---

@Michael Dürig, your patch correctly includes my changes. There are 3 unit 
tests (*) for the ReversedLinesFileReader class. I uploaded a patch to 
https://issues.apache.org/jira/browse/IO-471 that adds tests for the 4 
encodings I added support for, as well as the corresponding test files. 

(*)
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestSimple.java
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestParamFile.java
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/java/org/apache/commons/io/input/ReversedLinesFileReaderTestParamBlockSize.java

test files - 
http://svn.apache.org/viewvc/commons/proper/io/trunk/src/test/resources/


 Support for additional encodings needed in ReversedLinesFileReader
 --

 Key: OAK-2605
 URL: https://issues.apache.org/jira/browse/OAK-2605
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: segmentmk
Affects Versions: 1.1.7
 Environment: Windows 2012 R2 Japanese
 Windows 2012 R2 Korean
 Windows 2012 R2 Simplified Chinese
 Windows 2012 R2 Traditional Chinese
Reporter: Leandro Reis
Assignee: Michael Dürig
 Fix For: 1.1.8

 Attachments: OAK-2605.patch


 I'm working on a product that uses Commons IO via Jackrabbit Oak. While 
 testing the launch of this product on Japanese Windows Server 2012 R2, I 
 came across the following exception: 
 (java.io.UnsupportedEncodingException: Encoding windows-31j is not supported 
 yet (feel free to submit a patch))
 windows-31j is the IANA name for Windows code page 932 (Japanese), and is 
 returned by Charset.defaultCharset(), which is used in 
 org.apache.commons.io.input.ReversedLinesFileReader [0].
 A patch for this issue was provided in 
 https://issues.apache.org/jira/browse/IO-471 .  
 It also includes the changes needed to support Simplified Chinese, 
 Traditional Chinese and Korean.
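A small self-contained snippet illustrating the failure mode (this is not Commons IO code; the charset is looked up via {{Charset.forName}} so it behaves the same on any machine, whereas on Japanese Windows the same value comes back from {{Charset.defaultCharset()}}):

```java
import java.nio.charset.Charset;

// windows-31j (MS932) is a standard JDK charset; it was the platform default
// that ReversedLinesFileReader rejected before the IO-471 patch.
public class DefaultCharsetDemo {
    public static void main(String[] args) {
        Charset ms932 = Charset.forName("windows-31j"); // IANA name of code page 932
        System.out.println(ms932.name());
        // Until a patched Commons IO is available, a workaround is to construct
        // the reader with an explicitly supported charset such as UTF-8 instead
        // of relying on Charset.defaultCharset().
    }
}
```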



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2586) Support including and excluding paths during upgrade

2015-03-10 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding updated OAK-2586:

Attachment: OAK-2586.patch

Fixed a minor issue with the patch.

 Support including and excluding paths during upgrade
 

 Key: OAK-2586
 URL: https://issues.apache.org/jira/browse/OAK-2586
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: upgrade
Affects Versions: 1.1.6
Reporter: Julian Sedding
  Labels: patch
 Attachments: OAK-2586.patch


 When upgrading from Jackrabbit 2 to an Oak repository it can be desirable to 
 constrain which paths/sub-trees should be copied from the source repository, 
 not least because this can (drastically) reduce the amount of content that 
 needs to be traversed, copied and indexed.
 I suggest allowing the content visible from the source repository to be 
 filtered by wrapping the JackrabbitNodeState instance and hiding selected 
 paths.
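To illustrate the suggested approach, here is a hedged sketch using invented stand-in types ({{Node}}, {{FilteredNode}}) rather than Oak's real NodeState API: the wrapper simply stops reporting children whose absolute path is excluded, so the upgrade never traverses, copies or indexes them.

```java
import java.util.*;

// Minimal stand-in tree; in the real patch the delegate would be a
// JackrabbitNodeState and the wrapper would implement NodeState.
class Node {
    final String name;
    final List<Node> children = new ArrayList<>();
    Node(String name) { this.name = name; }
    Node add(Node child) { children.add(child); return this; }
}

class FilteredNode {
    private final Node delegate;
    private final Set<String> excludedPaths; // absolute paths to hide
    private final String path;

    FilteredNode(Node delegate, Set<String> excludedPaths, String path) {
        this.delegate = delegate;
        this.excludedPaths = excludedPaths;
        this.path = path;
    }

    String name() { return delegate.name; }

    // Children on an excluded path are simply not reported, so a copier
    // walking this view never sees the hidden subtrees.
    List<FilteredNode> children() {
        List<FilteredNode> visible = new ArrayList<>();
        for (Node child : delegate.children) {
            String childPath = (path.equals("/") ? "/" : path + "/") + child.name;
            if (!excludedPaths.contains(childPath)) {
                visible.add(new FilteredNode(child, excludedPaths, childPath));
            }
        }
        return visible;
    }

    public static void main(String[] args) {
        Node root = new Node("").add(new Node("content")).add(new Node("jcr:system"));
        FilteredNode filtered =
                new FilteredNode(root, Collections.singleton("/jcr:system"), "/");
        for (FilteredNode c : filtered.children()) {
            System.out.println(c.name()); // only "content" is visible
        }
    }
}
```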



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2586) Support including and excluding paths during upgrade

2015-03-10 Thread Julian Sedding (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Sedding updated OAK-2586:

Attachment: (was: OAK-2586.patch)

 Support including and excluding paths during upgrade
 

 Key: OAK-2586
 URL: https://issues.apache.org/jira/browse/OAK-2586
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: upgrade
Affects Versions: 1.1.6
Reporter: Julian Sedding
  Labels: patch
 Attachments: OAK-2586.patch


 When upgrading from Jackrabbit 2 to an Oak repository it can be desirable to 
 constrain which paths/sub-trees should be copied from the source repository, 
 not least because this can (drastically) reduce the amount of content that 
 needs to be traversed, copied and indexed.
 I suggest allowing the content visible from the source repository to be 
 filtered by wrapping the JackrabbitNodeState instance and hiding selected 
 paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1849) DataStore GC support for heterogeneous deployments using a shared datastore

2015-03-10 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-1849:

Component/s: blob

 DataStore GC support for heterogeneous deployments using a shared datastore
 ---

 Key: OAK-1849
 URL: https://issues.apache.org/jira/browse/OAK-1849
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: blob
Reporter: Amit Jain
Assignee: Thomas Mueller
 Fix For: 1.1.8, 1.2

 Attachments: OAK-1849-PART-MBEAN.patch, OAK-1849-PART-TEST.patch, 
 OAK-1849-PART1.patch, OAK-1849-v2.patch, OAK-1849.patch


 If the deployment is such that there are 2 or more different instances with a 
 shared datastore, triggering Datastore GC from one instance will result in 
 blobs used by another instance getting deleted, causing data loss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2357) Aggregate index: Abnormal behavior of and not(..) clause in XPath

2015-03-10 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-2357.
-
Resolution: Won't Fix

We decided not to fix this issue. Instead, if this feature is needed, the new 
Lucene full-text index should be used, which aggregates at index time (rather 
than at query time).

The old Lucene full-text index is still needed in case its added security is 
required (which is not possible with the new Lucene index).

 Aggregate index: Abnormal behavior of and not(..) clause in XPath
 ---

 Key: OAK-2357
 URL: https://issues.apache.org/jira/browse/OAK-2357
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: query
Affects Versions: 1.0.8, 1.1.3
Reporter: Geoffroy Schneck
Assignee: Thomas Mueller
Priority: Minor
 Fix For: 1.0.13, 1.2


 Create a node {{/tmp/node1}} with property {{prop1 = 'foobar'}}. 
 Perform the following query:
 {noformat}
 /jcr:root/tmp//*[prop1 = 'foobar' and not(prop1 = 'fooba')]
 {noformat}
 {{/tmp/node1}} is returned by the search.
 Now replace the {{=}} clause with {{jcr:contains()}}:
 {noformat}
 /jcr:root/tmp//*[jcr:contains(., 'foobar') and not(jcr:contains(., 
 'foobar'))]
 {noformat}
 No result is returned, despite the presence of {{/tmp/node1}} in the results 
 of {{/jcr:root/tmp//\*\[not(jcr:contains(., 'foobar'))\]}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2588) MultiDocumentStoreTest.testInvalidateCache failing for Mongo

2015-03-10 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved OAK-2588.
-
Resolution: Fixed

 MultiDocumentStoreTest.testInvalidateCache failing for Mongo
 

 Key: OAK-2588
 URL: https://issues.apache.org/jira/browse/OAK-2588
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Julian Reschke
Priority: Minor
 Fix For: 1.1.8


 {{MultiDocumentStoreTest.testInvalidateCache}} failing for Mongo
 {noformat}
 Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.255 sec <<< 
 FAILURE!
 testInvalidateCache[0](org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest)
   Time elapsed: 0.343 sec <<< FAILURE!
 java.lang.AssertionError: modcount should have incremented again expected:<3> 
 but was:<2>
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.failNotEquals(Assert.java:647)
 at org.junit.Assert.assertEquals(Assert.java:128)
 at org.junit.Assert.assertEquals(Assert.java:472)
 at 
 org.apache.jackrabbit.oak.plugins.document.MultiDocumentStoreTest.testInvalidateCache(MultiDocumentStoreTest.java:184)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-2110) performance issues with VersionGarbageCollector

2015-03-10 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke reassigned OAK-2110:
---

Assignee: Julian Reschke

 performance issues with VersionGarbageCollector
 ---

 Key: OAK-2110
 URL: https://issues.apache.org/jira/browse/OAK-2110
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Julian Reschke
Assignee: Julian Reschke
 Fix For: 1.4


 This one currently special-cases Mongo. For other persistences, it
 - fetches *all* documents
 - filters by SD_TYPE
 - filters by lastmod of versions
 - deletes what remains
 This is not only inefficient but also fails with OutOfMemory for any larger 
 repo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2413) Clarify Editor.childNodeChanged()

2015-03-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355007#comment-14355007
 ] 

Michael Dürig commented on OAK-2413:


Committed improved Javadoc at http://svn.apache.org/r1665571.

[~anchela], could you have a look at the implementation of 
{{PrivilegeValidator.childNodeChanged}} to see whether it assumes there are 
indeed changes? If so, we should fix this. 

 Clarify Editor.childNodeChanged()
 -

 Key: OAK-2413
 URL: https://issues.apache.org/jira/browse/OAK-2413
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Marcel Reutegger
Priority: Minor
 Fix For: 1.2


 The current contract for {{Editor.childNodeChanged()}} does not specify if 
 this method may also be called when the child node did not actually change. 
 The method {{NodeStateDiff.childNodeChanged()}} explicitly states that there 
 may be such calls. Looking at the implementation connecting the two classes, 
 {{EditorDiff.childNodeChange()}} simply calls the editor without checking 
 whether the child node did in fact change.
 I think we either have to change the {{EditorDiff}} or update the contract 
 for the Editor and adjust implementations. E.g. right now PrivilegeValidator 
 (which implements Editor) assumes that a call to {{childNodeChange()}} indeed 
 means the child node changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-1963) Expose file system path of Blob

2015-03-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-1963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355031#comment-14355031
 ] 

Michael Dürig commented on OAK-1963:


I remember a similar discussion from a while back, which came to the conclusion 
not to expose paths directly. I need to find the reference though...

 Expose file system path of Blob
 ---

 Key: OAK-1963
 URL: https://issues.apache.org/jira/browse/OAK-1963
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Pralaypati Ta
Assignee: Chetan Mehrotra
 Fix For: 1.2


 In some situations direct file system path is more useful than repository 
 path e.g. native tools don't understand repository path, instead file system 
 path can be passed directly to native tools for processing binary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2405) Monitoring to track old NodeStates

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2405:
---
Fix Version/s: (was: 1.0.13)

 Monitoring to track old NodeStates
 --

 Key: OAK-2405
 URL: https://issues.apache.org/jira/browse/OAK-2405
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: segmentmk
Reporter: Michael Dürig
Assignee: Michael Dürig
  Labels: gc, monitoring
 Fix For: 1.2


 We should add some monitoring that allows us to track old node states, 
 which potentially block revision gc. 
 Possible approaches:
 * Add monitoring of old revisions (root node states) along with the stack 
 traces from where they were acquired.
 * Include the RecordId of the root node state in the {{SessionMBean}}.
 * Add additional tooling on top of the {{SessionMBean}} to make it easier to 
 make sense of the wealth of information provided. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2106) Optimize reads from secondaries

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2106:
---
Fix Version/s: (was: 1.0.13)

 Optimize reads from secondaries
 ---

 Key: OAK-2106
 URL: https://issues.apache.org/jira/browse/OAK-2106
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mongomk
Reporter: Marcel Reutegger
Assignee: Marcel Reutegger
 Fix For: 1.2


 OAK-1645 introduced support for reads from secondaries under certain
 conditions. The current implementation checks the _lastRev on a potentially
 cached parent document and reads from a secondary if it has not been
 modified in the last 24 hours. This timespan is somewhat arbitrary but
 reflects the assumption that the replication lag of a secondary shouldn't
 be more than 24 hours.
 This logic should be optimized to take the actual replication lag into
 account. MongoDB provides information about the replication lag with
 the command rs.status().
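A sketch of the decision the optimization would make (all names here are illustrative; in practice the lag would be derived from the optimes reported by rs.status()/replSetGetStatus rather than passed in):

```java
// Hypothetical helper, not Oak code: a secondary read is safe only if the
// document was last modified before the secondary's replication horizon.
public class SecondaryReadDecision {

    // Safety margin so that clock skew between nodes does not flip the decision.
    static final long MARGIN_MILLIS = 60_000;

    static boolean canReadFromSecondary(long lastModifiedMillis,
                                        long replicationLagMillis,
                                        long nowMillis) {
        return lastModifiedMillis < nowMillis - replicationLagMillis - MARGIN_MILLIS;
    }

    public static void main(String[] args) {
        long now = 1_000_000_000L;
        // Modified a day ago, lag of 5s: safe to read from a secondary.
        System.out.println(canReadFromSecondary(now - 86_400_000, 5_000, now));
        // Modified 1s ago: must go to the primary.
        System.out.println(canReadFromSecondary(now - 1_000, 5_000, now));
    }
}
```

Compared to the fixed 24-hour threshold, this shrinks the window during which reads are forced to the primary down to roughly the actual lag plus the margin.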



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2494) Shared DataStore GC support for S3DataStore

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2494:
---
Fix Version/s: 1.1.8

 Shared DataStore GC support for S3DataStore
 ---

 Key: OAK-2494
 URL: https://issues.apache.org/jira/browse/OAK-2494
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core
Reporter: Amit Jain
Assignee: Amit Jain
 Fix For: 1.1.8, 1.2

 Attachments: OAK-2494.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-2310) [Property Index] Adding new propertyName to an existing index doesn't update index to reflect the same

2015-03-10 Thread Thomas Mueller (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-2310.
-
   Resolution: Won't Fix
Fix Version/s: (was: 1.1.8)

We decided that an explicit (manual) re-index is needed.

 [Property Index] Adding new propertyName to an existing index doesn't update 
 index to reflect the same
 --

 Key: OAK-2310
 URL: https://issues.apache.org/jira/browse/OAK-2310
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Affects Versions: 1.0.7
Reporter: Vikas Saurabh
Assignee: Thomas Mueller
Priority: Critical
 Fix For: 1.2


 It seems intuitive to add multiple property names to a given property index. 
 But, currently, adding a new {{propertyName}} to an existing index doesn't 
 update indexed content to reflect the newly added property name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2444) Enable the persistent cache by default

2015-03-10 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355138#comment-14355138
 ] 

Thomas Mueller commented on OAK-2444:
-

It should be enabled by default to get higher test coverage.

 Enable the persistent cache by default
 --

 Key: OAK-2444
 URL: https://issues.apache.org/jira/browse/OAK-2444
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Thomas Mueller
Assignee: Thomas Mueller
 Fix For: 1.1.8, 1.2


 The persistent cache (for MongoDB and RDBMS storage) should be enabled and 
 tested by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2413) Clarify Editor.childNodeChanged()

2015-03-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/OAK-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355144#comment-14355144
 ] 

Michael Dürig commented on OAK-2413:


bq. is there an easy way to identify if there are changes?

If you don't need the diff, just use {{equals}}.

Re. a test: just call {{PrivilegeValidator#childNodeChanged}} directly from a 
unit test passing the same node state for {{before}} and {{after}}. 
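A stand-in sketch of both halves of this comment, using plain maps instead of Oak NodeStates (the class and field names are invented): the guard uses {{equals}} to turn a no-op "change" into a no-op call, and the test passes the identical state as before and after.

```java
import java.util.Collections;
import java.util.Map;

// Illustrative only; not PrivilegeValidator's actual code.
public class ChildNodeChangedGuard {

    static int validations = 0;

    static void childNodeChanged(Map<String, String> before, Map<String, String> after) {
        if (before.equals(after)) {
            return; // no real change; skip the (potentially expensive) validation
        }
        validations++;
    }

    public static void main(String[] args) {
        Map<String, String> state = Collections.singletonMap("jcr:primaryType", "rep:Privilege");
        // Same state for before and after: must be a no-op, per the contract.
        childNodeChanged(state, state);
        // A genuine change is validated.
        childNodeChanged(state, Collections.singletonMap("jcr:primaryType", "nt:base"));
        System.out.println(validations); // 1
    }
}
```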


 Clarify Editor.childNodeChanged()
 -

 Key: OAK-2413
 URL: https://issues.apache.org/jira/browse/OAK-2413
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Marcel Reutegger
Assignee: Michael Dürig
Priority: Minor
 Fix For: 1.2


 The current contract for {{Editor.childNodeChanged()}} does not specify if 
 this method may also be called when the child node did not actually change. 
 The method {{NodeStateDiff.childNodeChanged()}} explicitly states that there 
 may be such calls. Looking at the implementation connecting the two classes, 
 {{EditorDiff.childNodeChange()}} simply calls the editor without checking 
 whether the child node did in fact change.
 I think we either have to change the {{EditorDiff}} or update the contract 
 for the Editor and adjust implementations. E.g. right now, PrivilegeValidator 
 (implements Editor), assumes a call to {{childNodeChange()}} indeed means the 
 child node changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2602) [Solr] Cost calculation takes time with solr pings even when not fulfilling query

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2602:
---
Issue Type: Improvement  (was: Bug)

 [Solr] Cost calculation takes time with solr pings even when not fulfilling 
 query
 -

 Key: OAK-2602
 URL: https://issues.apache.org/jira/browse/OAK-2602
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-solr
Affects Versions: 1.0.12, 1.1.7
Reporter: Amit Jain
Assignee: Tommaso Teofili
 Fix For: 1.1.8, 1.0.13


 Cost calculation for queries which are fired quite often [1] and which are 
 not going to be fulfilled by Solr takes time, due to which the overall cost 
 of the operation is high. 
 [1]
 SELECT * FROM [nt:base] WHERE PROPERTY([rep:members], 'WeakReference') = 
 $uuid 
 SELECT * FROM [nt:base] WHERE [jcr:uuid] = $id



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2460) Resolve the base directory path of persistent cache against repository home

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2460:
---
Fix Version/s: (was: 1.1.8)
   (was: 1.2)
   1.4

 Resolve the base directory path of persistent cache against repository home
 ---

 Key: OAK-2460
 URL: https://issues.apache.org/jira/browse/OAK-2460
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Chetan Mehrotra
Priority: Minor
 Fix For: 1.4


 Currently PersistentCache uses the directory path directly. Various other 
 parts of Oak which need access to the filesystem make use of the 
 {{repository.home}} framework property in an OSGi env [1].
 The same should also be used in PersistentCache.
 [1] http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2492) Flag Document having many children

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2492:
---
Fix Version/s: (was: 1.0.13)

 Flag Document having many children
 --

 Key: OAK-2492
 URL: https://issues.apache.org/jira/browse/OAK-2492
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Chetan Mehrotra
Assignee: Chetan Mehrotra
 Fix For: 1.1.8, 1.2


 Current DocumentMK logic for performing a diff of child nodes works as 
 follows:
 # Get children for the _before_ revision up to MANY_CHILDREN_THRESHOLD (which 
 defaults to 50). Note that the current logic for fetching child nodes also 
 adds the children's {{NodeDocument}}s to the {{Document}} cache and reads the 
 complete Document for those children.
 # Get children for the _after_ revision with the same limit.
 # If the child list is complete then it does a direct diff on the fetched 
 children.
 # If the list is not complete, i.e. the number of children is more than the 
 threshold, then it falls back to a query-based diff (also see OAK-1970).
 So in cases where the number of children is large, all the work done in #1 
 above is wasted and should be avoided. To do that we can mark parent nodes 
 which have many children with a special flag like {{_manyChildren}}. Once 
 such nodes are marked, the diff logic can check for the flag and skip the 
 work done in #1.
 This is similar to the way we mark nodes which have at least one child 
 (OAK-1117)
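A hedged sketch of the proposal (the {{_manyChildren}} name comes from this issue; the {{Doc}} type and the simplified diff flow are invented stand-ins):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: decide up front whether the direct child-list diff is
// worth attempting, so a flagged parent skips the wasted prefetch in step #1.
public class ManyChildrenDiff {

    static final int MANY_CHILDREN_THRESHOLD = 50;

    static class Doc {
        boolean manyChildren;                       // persisted _manyChildren flag
        List<String> childNames = new ArrayList<>();
    }

    // true  -> do the direct diff on the fetched children
    // false -> go straight to the query-based diff
    static boolean tryDirectDiff(Doc parent) {
        if (parent.manyChildren) {
            return false; // flag set on a previous run: children exceed threshold
        }
        if (parent.childNames.size() > MANY_CHILDREN_THRESHOLD) {
            parent.manyChildren = true; // remember for subsequent diffs
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Doc small = new Doc();
        small.childNames.add("a");
        System.out.println(tryDirectDiff(small)); // direct diff is fine

        Doc big = new Doc();
        for (int i = 0; i < 100; i++) big.childNames.add("c" + i);
        System.out.println(tryDirectDiff(big));   // falls back; flag now set
    }
}
```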



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2533) TimeSeries for DocumentStoreException

2015-03-10 Thread Marcel Reutegger (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcel Reutegger updated OAK-2533:
--
Fix Version/s: (was: 1.1.8)
   1.4

 TimeSeries for DocumentStoreException
 -

 Key: OAK-2533
 URL: https://issues.apache.org/jira/browse/OAK-2533
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core, mongomk
Reporter: Marcel Reutegger
Priority: Minor
 Fix For: 1.4


 Create a TimeSeries that counts runtime DocumentStoreExceptions at the 
 DocumentStore level. This allows monitoring of I/O exceptions to the backend 
 system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2008) authorization setup for closed user groups

2015-03-10 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2008:

Fix Version/s: (was: 1.2)

 authorization setup for closed user groups
 --

 Key: OAK-2008
 URL: https://issues.apache.org/jira/browse/OAK-2008
 Project: Jackrabbit Oak
  Issue Type: Sub-task
  Components: core
Reporter: angela
Assignee: angela





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (OAK-190) Use JCR API defined by JSR-333

2015-03-10 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved OAK-190.

Resolution: Later

 Use JCR API defined by JSR-333
 --

 Key: OAK-190
 URL: https://issues.apache.org/jira/browse/OAK-190
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: core, jcr
Reporter: angela
  Labels: api
 Attachments: OAK-190.patch, OAK-190_2.patch, OAK-190_3.patch


 There are quite a few improvements in JSR-333 (both spec- and API-wise) and
 I think it would make sense to develop jackrabbit3 on the latest version
 of the specification.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2231) Searching authorizables with ' and ] in authorizable id and/or principal name

2015-03-10 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2231:

Fix Version/s: (was: 1.2)
   1.4

 Searching authorizables with ' and ] in authorizable id and/or principal name
 -

 Key: OAK-2231
 URL: https://issues.apache.org/jira/browse/OAK-2231
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, query
Reporter: angela
Assignee: Dominique Jäggi
Priority: Minor
 Fix For: 1.4

 Attachments: OAK-2231_jr.patch, OAK-2231_oak.patch


 See the attached test cases for Oak and for Jackrabbit 2.x.
 Note that only the test using an authorizable query seems to fail in 
 Jackrabbit 2.x, while all tests fail in Oak.
 Addressing this issue involves analysis of both the query engine and the user 
 query part, as both can be involved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (OAK-2557) VersionGC uses way too much memory if there is a large pile of garbage

2015-03-10 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra reassigned OAK-2557:


Assignee: Chetan Mehrotra

 VersionGC uses way too much memory if there is a large pile of garbage
 --

 Key: OAK-2557
 URL: https://issues.apache.org/jira/browse/OAK-2557
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core, mongomk
Affects Versions: 1.0.11
Reporter: Stefan Egli
Assignee: Chetan Mehrotra
Priority: Blocker
 Fix For: 1.0.13, 1.2

 Attachments: OAK-2557.patch


 It has been noticed on a system where revision-gc (the 
 VersionGarbageCollector of mongomk) did not run for a few days (so as not to 
 interfere with some tests/large bulk operations) that such a large pile of 
 garbage had accumulated that the following code
 {code}
 VersionGarbageCollector.collectDeletedDocuments
 {code}
 creates, in its for loop, such a large list of NodeDocuments to delete 
 (docIdsToDelete) that it uses up too much memory, causing the JVM's GC to 
 constantly spin in full GCs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2086) tarmk-failover: test failures in MBeanTest.testClientAndServerEmptyConfig

2015-03-10 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2086:

Component/s: oak-tarmk-standby

 tarmk-failover: test failures in MBeanTest.testClientAndServerEmptyConfig
 -

 Key: OAK-2086
 URL: https://issues.apache.org/jira/browse/OAK-2086
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: oak-tarmk-standby
Reporter: Julian Reschke

 On a Win7 Core i7 desktop machine, this test frequently fails, either with:
 Failed tests:   
 testClientAndServerEmptyConfig(org.apache.jackrabbit.oak.plugins
 .segment.failover.MBeanTest): expected:<2> but was:<1>
 or because later on, restarting the slave doesn't work (failure when checking 
 the status). In that case I also see:
 assertEquals(true, jmxServer.getAttribute(clientStatus, 
 "Running"));
 and a
 java.net.ConnectException: Connection refused: no further information: 
 /127.0.0.1:52808
   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
 ~[na:1.7.0_40]
   at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source) 
 ~[na:1.7.0_40]
   at 
 io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:208)
  ~[netty-transport-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:287)
  ~[netty-transport-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528) 
 ~[netty-transport-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
  ~[netty-transport-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) 
 ~[netty-transport-4.0.23.Final.jar:4.0.23.Final]
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) 
 ~[netty-transport-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
  ~[netty-common-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
  ~[netty-common-4.0.23.Final.jar:4.0.23.Final]
   at java.lang.Thread.run(Unknown Source) ~[na:1.7.0_40]
 apparently thrown in FailoverClient:
 try {
 // Start the client.
 running = true;
 state = STATUS_RUNNING;
 ChannelFuture f = b.connect(host, port).sync();
 // Wait until the connection is closed.
 f.channel().closeFuture().sync();
 } catch (Exception e) {
 log.error("Failed synchronizing state.", e);
 stop();
 }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2545) oak-core IT run out of memory

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig updated OAK-2545:
---
Labels: CI travis  (was: )

 oak-core IT run out of memory
 -

 Key: OAK-2545
 URL: https://issues.apache.org/jira/browse/OAK-2545
 Project: Jackrabbit Oak
  Issue Type: Test
  Components: core
Reporter: Marcel Reutegger
Assignee: Alex Parvulescu
  Labels: CI, travis
 Fix For: 1.1.8


 Seen on the 1.0 branch only so far when running ITs on my local machine, but 
 travis reports the same:
 https://travis-ci.org/apache/jackrabbit-oak/builds/51589769
 It doesn't necessarily mean the problem is with SegmentReferenceLimitTestIT, 
 even though the heap dump shows most of the memory consumed by Segments and 
 SegmentWriter. A recent build on trunk, where we have the same test, was 
 successful for me.





[jira] [Commented] (OAK-1963) Expose file system path of Blob

2015-03-10 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-1963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355009#comment-14355009
 ] 

Chetan Mehrotra commented on OAK-1963:
--

Not necessarily. For some large batch-processing requirements such an approach 
has provided considerably better throughput in test runs, so I would prefer to 
have it supported.

 Expose file system path of Blob
 ---

 Key: OAK-1963
 URL: https://issues.apache.org/jira/browse/OAK-1963
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Pralaypati Ta
Assignee: Chetan Mehrotra
 Fix For: 1.2


 In some situations direct file system path is more useful than repository 
 path e.g. native tools don't understand repository path, instead file system 
 path can be passed directly to native tools for processing binary.





[jira] [Updated] (OAK-1963) Expose file system path of Blob

2015-03-10 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra updated OAK-1963:
-
Fix Version/s: 1.2

 Expose file system path of Blob
 ---

 Key: OAK-1963
 URL: https://issues.apache.org/jira/browse/OAK-1963
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Pralaypati Ta
Assignee: Chetan Mehrotra
 Fix For: 1.2


 In some situations direct file system path is more useful than repository 
 path e.g. native tools don't understand repository path, instead file system 
 path can be passed directly to native tools for processing binary.





[jira] [Resolved] (OAK-2089) Allow storing some metadata while creating checkpoint

2015-03-10 Thread Chetan Mehrotra (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Mehrotra resolved OAK-2089.
--
   Resolution: Duplicate
Fix Version/s: (was: 1.2)

 Allow storing some metadata while creating checkpoint
 -

 Key: OAK-2089
 URL: https://issues.apache.org/jira/browse/OAK-2089
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Reporter: Chetan Mehrotra
Priority: Minor

 As mentioned by [~mmarth] in OAK-2087 it would be useful to store some 
 metadata while creating a checkpoint. Such metadata can be used to 
 differentiate between checkpoints created by backup, the indexer, etc. A 
 simple string should serve the purpose.
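A minimal sketch of the suggested idea: a registry that hands out checkpoint ids and remembers a small metadata map (e.g. who created the checkpoint) per id. The class and method names here are invented for illustration; they are not the actual Oak NodeStore checkpoint API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;

public class CheckpointRegistry {

    // checkpoint id -> metadata, e.g. {"creator": "backup"} or {"creator": "indexer"}
    private final Map<String, Map<String, String>> checkpoints = new LinkedHashMap<>();

    // Create a checkpoint and attach caller-supplied metadata to it.
    String checkpoint(Map<String, String> metadata) {
        String id = UUID.randomUUID().toString();
        checkpoints.put(id, new LinkedHashMap<>(metadata));
        return id;
    }

    // Look up the metadata for a checkpoint; null if the id is unknown.
    Map<String, String> checkpointInfo(String id) {
        return checkpoints.get(id);
    }
}
```

A tool listing checkpoints can then tell a backup checkpoint from an indexer checkpoint by inspecting the stored map instead of guessing from timestamps.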





[jira] [Updated] (OAK-2543) Service user session creation isn't fast enough

2015-03-10 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-2543:

Component/s: core

 Service user session creation isn't fast enough
 ---

 Key: OAK-2543
 URL: https://issues.apache.org/jira/browse/OAK-2543
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Laurie byrum
  Labels: performance

 We have some (very commonly hit) bits of code that need to read configs (for 
 example) and thus need higher privileges. At one point we were advised to 
 create short-lived service sessions to handle this. We did, and found our 
 performance absolutely abysmal. We are now working through our third 
 bottleneck, and all of them have pointed to session creation. Each individual 
 creation may not be too bad, but in aggregate it is much slower than, for 
 example, the actual reads or anything else.
 I was able to make the code usually avoid session creation in the first two 
 cases, but earlier this week we hit the third example, where the answer seems 
 to be one of: 1) make creating sessions ignorably fast even when they are 
 created a lot; 2) cache whatever read is requiring the escalation and clean 
 up in event listeners (those listeners will invariably have to listen to 
 non-local events, but the events should be uncommon so far); 3) use 
 long-lived sessions for reads across threads.
 Per Michael Duerig, #1 is the goal. Can we see if the current situation can 
 be improved? Because it isn't ignorably fast today. Thanks!





[jira] [Assigned] (OAK-2596) more (jmx) instrumentation for observation queue

2015-03-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/OAK-2596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Dürig reassigned OAK-2596:
--

Assignee: Michael Dürig

 more (jmx) instrumentation for observation queue
 

 Key: OAK-2596
 URL: https://issues.apache.org/jira/browse/OAK-2596
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core
Affects Versions: 1.0.12
Reporter: Stefan Egli
Assignee: Michael Dürig
 Fix For: 1.0.13


 While debugging issues with the observation queue it would be handy to have 
 more detailed information available. At the moment you can only see one value 
 wrt the length of the queue: the maximum across all queues. It is unclear 
 whether the queue is that long for only one listener or for many, and whether 
 the listener is slow or the engine that produces the events for the listener 
 is.
 So I'd suggest adding the following details, possibly exposed via JMX:
 * add queue length details to each of the observation listeners
 * keep a history of the last, e.g., 1000 events per listener, showing a) how 
 long the event took to be created/generated and b) how long the listener took 
 to process it. Sometimes averages are not detailed enough, so such in-depth 
 information might become useful. (Not sure about the feasibility of '1000' 
 here; maybe that could be made configurable, though - just putting the idea 
 out here.)
 ** have some information about whether a listener is currently reading events 
 from the cache or whether it has to go to, e.g., mongo
 * maybe have a 'top 10' of the listeners with the largest queues at the 
 moment, to allow easy navigation instead of having to go through all (e.g. 
 200) listeners manually each time.
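The per-listener statistics suggested above could be modeled roughly as below. This is a hedged sketch with invented names, not the actual Oak observation code; in a real implementation each per-listener Stats object would additionally be registered as an MBean with the platform MBeanServer so a /system/console page can list and sort them:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Collectors;

public class ObservationQueueStats {

    public static final class Stats {
        final AtomicInteger queueLength = new AtomicInteger();
        final AtomicLong maxGenerationNanos = new AtomicLong();
        final AtomicLong maxProcessingNanos = new AtomicLong();

        // Record current queue length plus how long event generation and
        // listener processing took for the most recent event.
        void record(int length, long genNanos, long procNanos) {
            queueLength.set(length);
            maxGenerationNanos.accumulateAndGet(genNanos, Math::max);
            maxProcessingNanos.accumulateAndGet(procNanos, Math::max);
        }
    }

    // One Stats instance per observation listener, keyed by name.
    private final Map<String, Stats> byListener = new ConcurrentHashMap<>();

    Stats statsFor(String listenerName) {
        return byListener.computeIfAbsent(listenerName, n -> new Stats());
    }

    // The suggested "top N" view: listeners with the largest current queues.
    List<String> topByQueueLength(int n) {
        return byListener.entrySet().stream()
                .sorted((a, b) -> b.getValue().queueLength.get()
                        - a.getValue().queueLength.get())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```

Separating generation time from processing time is what lets you tell a slow listener apart from a slow event producer, which is exactly the ambiguity described above.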





[jira] [Updated] (OAK-2597) expose mongo's clusterNodes info more prominently

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2597:
---
Fix Version/s: 1.1.8

 expose mongo's clusterNodes info more prominently
 -

 Key: OAK-2597
 URL: https://issues.apache.org/jira/browse/OAK-2597
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Affects Versions: 1.0.12
Reporter: Stefan Egli
 Fix For: 1.1.8, 1.0.13


 Suggestion: {{db.clusterNodes}} contains very useful information about how 
 many instances are currently (and have been) active in the Oak Mongo cluster. 
 While this should in theory match the topology reported via Sling's discovery 
 API, it might differ. It would be very helpful if this information were 
 exposed prominently in a UI (assuming this is not yet the case), e.g. in a 
 /system/console page.





[jira] [Updated] (OAK-2037) Define standards for plan output

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2037:
---
Issue Type: Improvement  (was: Task)

 Define standards for plan output
 

 Key: OAK-2037
 URL: https://issues.apache.org/jira/browse/OAK-2037
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: query
Reporter: Justin Edelson
Assignee: Thomas Mueller
 Fix For: 1.1.8, 1.2


 Currently, the syntax of the plan output is chaotic, as it varies 
 significantly from index to index. While some of this is expected (each 
 index type has different data to output), Oak should provide some standards 
 for how a plan appears.





[jira] [Updated] (OAK-2110) performance issues with VersionGarbageCollector

2015-03-10 Thread Michael Marth (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Marth updated OAK-2110:
---
Fix Version/s: (was: 1.1.8)
   1.4

 performance issues with VersionGarbageCollector
 ---

 Key: OAK-2110
 URL: https://issues.apache.org/jira/browse/OAK-2110
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mongomk
Reporter: Julian Reschke
 Fix For: 1.4


 This one currently special-cases Mongo. For other persistences, it
 - fetches *all* documents
 - filters by SD_TYPE
 - filters by lastmod of versions
 - deletes what remains
 This is not only inefficient but also fails with OutOfMemory for any larger 
 repo.
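The fetch-everything-then-filter pattern described above can be replaced by filtering while streaming over the documents, so memory use stays bounded by one document at a time. A generic sketch under invented names; this is not the actual VersionGarbageCollector API, and the real filters would test SD_TYPE and the lastmod of versions:

```java
import java.util.Iterator;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class StreamingGc {

    // Walk the documents lazily; never materialize the full result set.
    // isGarbage stands in for the SD_TYPE / lastmod checks; delete stands
    // in for the actual removal against the persistence.
    static <T> int collect(Iterator<T> documents, Predicate<T> isGarbage,
                           Consumer<T> delete) {
        int deleted = 0;
        while (documents.hasNext()) {
            T doc = documents.next();
            if (isGarbage.test(doc)) {
                delete.accept(doc);
                deleted++;
            }
        }
        return deleted;
    }
}
```

The Mongo special case effectively pushes the same predicate down into the query; for other persistences a streaming cursor like this avoids the OutOfMemory failure even when the predicate cannot be pushed down.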





[jira] [Updated] (OAK-790) ResultRow#getSize() always returns -1

2015-03-10 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela updated OAK-790:
---
Fix Version/s: (was: 1.2)

 ResultRow#getSize() always returns -1
 -

 Key: OAK-790
 URL: https://issues.apache.org/jira/browse/OAK-790
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, query
Reporter: angela
  Labels: api

 Just had a failing test due to the fact that my code was trying to find out 
 whether the query produced any results before starting to loop over the 
 result entries. That didn't work, since ResultRow#getSize() always returned 
 -1. I fixed my problem by just getting rid of the ResultRow#getSize() call.
 I would suggest we either
 - implement getSize() for the 'nothing found' and/or few-results-found cases,
 - drop the method from the Oak interface altogether if we are never going to 
   implement it, or
 - add #isEmpty for those cases where someone just wants to know whether a 
   query found something, without needing the exact number.
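The third option, an #isEmpty check, needs no total count at all: peeking at the first row answers "did the query find anything?" A minimal sketch with an invented helper name (note that on a one-shot result the peeked iterator would consume the first row, so a real implementation would cache or re-run it):

```java
public class QueryResultUtil {

    // "Found anything?" without computing a total size, which is exactly
    // the question the failing test above was asking.
    static <T> boolean isEmpty(Iterable<T> rows) {
        return !rows.iterator().hasNext();
    }
}
```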




