[jira] [Commented] (OAK-2843) Broadcasting cache

2015-11-22 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021522#comment-15021522
 ] 

Amit Jain commented on OAK-2843:


Disabled the test temporarily with r1715726
Failed on Travis too: https://travis-ci.org/apache/jackrabbit-oak/builds/92624568

> Broadcasting cache
> --
>
> Key: OAK-2843
> URL: https://issues.apache.org/jira/browse/OAK-2843
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.3.12
>
>
> In a cluster environment, we could speed up reading if the cache(s) broadcast 
> data to other instances. This would avoid bottlenecks at the storage layer 
> (MongoDB, RDBMSs).
> The configuration metadata (IP addresses and ports of where to send data to, 
> a unique identifier of the repository and the cluster nodes, possibly an 
> encryption key) rarely changes and can be stored in the same place as we 
> store cluster metadata (the cluster info collection). That way, in many cases no 
> manual configuration is needed. We could use TCP/IP and/or UDP.
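
A minimal, illustrative sketch of the broadcast idea, not the OAK-2843 implementation: one node sends a cache entry over UDP after loading it from storage, and other nodes apply received entries to their local cache. Encryption, the repository/cluster identifiers and the real cache integration mentioned above are left out.

{noformat}
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UdpCacheBroadcaster {

    private final InetAddress target;
    private final int port;
    private final Map<String, String> localCache = new ConcurrentHashMap<String, String>();

    public UdpCacheBroadcaster(String host, int port) throws Exception {
        this.target = InetAddress.getByName(host);
        this.port = port;
    }

    // called after an entry was loaded from the storage layer (MongoDB, RDBMS)
    public void broadcast(String key, String value) throws Exception {
        byte[] data = (key + "=" + value).getBytes(StandardCharsets.UTF_8);
        DatagramSocket socket = new DatagramSocket();
        try {
            socket.send(new DatagramPacket(data, data.length, target, port));
        } finally {
            socket.close();
        }
    }

    // run on a background thread: apply entries broadcast by other cluster nodes
    public void listen() throws Exception {
        DatagramSocket socket = new DatagramSocket(port);
        byte[] buf = new byte[65507];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);
            String message = new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
            int sep = message.indexOf('=');
            localCache.put(message.substring(0, sep), message.substring(sep + 1));
        }
    }
}
{noformat}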



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-3658) Test failures: JackrabbitNodeTest#testRename and testRenameEventHandling

2015-11-22 Thread Amit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021520#comment-15021520
 ] 

Amit Jain commented on OAK-3658:


Disabled tests temporarily with r1715725

> Test failures: JackrabbitNodeTest#testRename and testRenameEventHandling
> 
>
> Key: OAK-3658
> URL: https://issues.apache.org/jira/browse/OAK-3658
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: jcr
>Reporter: Amit Jain
>Priority: Minor
> Fix For: 1.3.12
>
>
> Tests fail regularly on trunk - {{JackrabbitNodeTest#testRename}} and 
> {{JackrabbitNodeTest#testRenameEventHandling}}.
> {noformat}
> Test set: org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest
> ---
> Tests run: 8, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 0.106 sec <<< 
> FAILURE!
> testRenameEventHandling(org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest)  
> Time elapsed: 0.01 sec  <<< ERROR!
> javax.jcr.nodetype.ConstraintViolationException: Item is protected.
>   at 
> org.apache.jackrabbit.oak.jcr.session.ItemImpl$ItemWriteOperation.checkPreconditions(ItemImpl.java:98)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.prePerform(SessionDelegate.java:614)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:270)
>   at 
> org.apache.jackrabbit.oak.jcr.session.NodeImpl.rename(NodeImpl.java:1485)
>   at 
> org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest.testRenameEventHandling(JackrabbitNodeTest.java:124)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at 
> org.apache.jackrabbit.test.AbstractJCRTest.run(AbstractJCRTest.java:464)
>   at junit.framework.TestSuite.runTest(TestSuite.java:252)
>   at junit.framework.TestSuite.run(TestSuite.java:247)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> testRename(org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest)  Time elapsed: 
> 0.007 sec  <<< FAILURE!
> junit.framework.ComparisonFailure: expected:<[a]> but was:<[rep:policy]>
>   at junit.framework.Assert.assertEquals(Assert.java:100)
>   at junit.framework.Assert.assertEquals(Assert.java:107)
>   at junit.framework.TestCase.assertEquals(TestCase.java:269)
>   at 
> org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest.testRename(JackrabbitNodeTest.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at 

[jira] [Resolved] (OAK-3509) Lucene suggestion results should have 1 row per suggestion with appropriate column names

2015-11-22 Thread Vikas Saurabh (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Saurabh resolved OAK-3509.

Resolution: Fixed
  Assignee: Vikas Saurabh  (was: Tommaso Teofili)

Fixed in [r1715716|http://svn.apache.org/r1715716] and 
[r1715717|http://svn.apache.org/r1715717] (documentation)

[~teofili], does committing lucene.md to trunk update the site automatically?

> Lucene suggestion results should have 1 row per suggestion with appropriate 
> column names
> 
>
> Key: OAK-3509
> URL: https://issues.apache.org/jira/browse/OAK-3509
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
> Fix For: 1.3.11
>
>
> Currently the suggest query returns just one row, with the {{rep:suggest()}} 
> column containing a string that needs to be parsed.
> It'd be better if each suggestion were returned as an individual row with 
> column names such as {{suggestion}}, {{weight}} (???), etc.
> (cc [~teofili])
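
A hedged usage sketch of how such a result could be consumed once each suggestion is its own row; the query syntax follows the Oak Lucene suggestion support, while the {{suggestion}} column name is only the name proposed above, not an existing API.

{noformat}
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.Row;
import javax.jcr.query.RowIterator;

public class SuggestQueryExample {

    public static void printSuggestions(Session session) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query query = qm.createQuery(
                "SELECT [rep:suggest()] FROM [nt:base] WHERE SUGGEST('oa')",
                Query.JCR_SQL2);
        RowIterator rows = query.execute().getRows();
        while (rows.hasNext()) {
            Row row = rows.nextRow();
            // one row per suggestion instead of one string that needs parsing
            System.out.println(row.getValue("suggestion").getString());
        }
    }
}
{noformat}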



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3090) Caching BlobStore implementation

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3090:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Caching BlobStore implementation 
> -
>
> Key: OAK-3090
> URL: https://issues.apache.org/jira/browse/OAK-3090
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: blob
>Reporter: Chetan Mehrotra
>  Labels: performance, resilience
> Fix For: 1.3.12
>
>
> Storing binaries in Mongo puts a lot of pressure on MongoDB for reads. To 
> reduce the read load it would be useful to have a filesystem-based cache of 
> frequently used binaries. 
> This would be similar to CachingFDS (OAK-3005) but would be implemented on 
> top of the BlobStore API. 
> Requirements
> * Specify the max binary size which can be cached on the file system
> * Limit the total size of all binary content present in the cache
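
A rough sketch of how the two requirements above could be combined in a read-through file cache. This is not the actual OAK-3090 design; the {{BlobReader}} interface and the oldest-file eviction policy are invented purely for illustration.

{noformat}
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import java.util.Arrays;
import java.util.Comparator;

public class FileBlobCache {

    private final File cacheDir;
    private final long maxEntrySize;   // binaries larger than this bypass the cache
    private final long maxCacheSize;   // upper bound for all cached binaries together

    public FileBlobCache(File cacheDir, long maxEntrySize, long maxCacheSize) {
        this.cacheDir = cacheDir;
        this.maxEntrySize = maxEntrySize;
        this.maxCacheSize = maxCacheSize;
    }

    public InputStream getInputStream(String blobId, BlobReader remote) throws IOException {
        File cached = new File(cacheDir, blobId);
        if (!cached.exists()) {
            if (remote.getBlobLength(blobId) > maxEntrySize) {
                return remote.getInputStream(blobId); // too large, read remotely
            }
            try (InputStream in = remote.getInputStream(blobId)) {
                Files.copy(in, cached.toPath(), StandardCopyOption.REPLACE_EXISTING);
            }
            evictIfNeeded();
        }
        return Files.newInputStream(cached.toPath());
    }

    // delete the oldest files until the cache is below the configured limit
    private void evictIfNeeded() {
        File[] files = cacheDir.listFiles();
        if (files == null) {
            return;
        }
        long total = Arrays.stream(files).mapToLong(File::length).sum();
        Arrays.sort(files, Comparator.comparingLong(File::lastModified));
        for (File f : files) {
            if (total <= maxCacheSize) {
                break;
            }
            total -= f.length();
            f.delete();
        }
    }

    /** Hypothetical abstraction over the remote blob store. */
    public interface BlobReader {
        long getBlobLength(String blobId) throws IOException;
        InputStream getInputStream(String blobId) throws IOException;
    }
}
{noformat}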



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1819) oak-solr-core test failures on Java 8

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-1819:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> oak-solr-core test failures on Java 8
> -
>
> Key: OAK-1819
> URL: https://issues.apache.org/jira/browse/OAK-1819
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: solr
>Affects Versions: 1.0
> Environment: {noformat}
> Apache Maven 3.1.0 (893ca28a1da9d5f51ac03827af98bb730128f9f2; 2013-06-27 
> 22:15:32-0400)
> Maven home: c:\Program Files\apache-maven-3.1.0
> Java version: 1.8.0, vendor: Oracle Corporation
> Java home: c:\Program Files\Java\jdk1.8.0\jre
> Default locale: en_US, platform encoding: Cp1252
> OS name: "windows 7", version: "6.1", arch: "amd64", family: "dos"
> {noformat}
>Reporter: Jukka Zitting
>Assignee: Tommaso Teofili
>Priority: Minor
>  Labels: java8, test
> Fix For: 1.3.12
>
>
> The following {{oak-solr-core}} test failures occur when building Oak with 
> Java 8:
> {noformat}
> Failed tests:
>   
> testNativeMLTQuery(org.apache.jackrabbit.oak.plugins.index.solr.query.SolrIndexQueryTest):
>  expected: but was:
>   
> testNativeMLTQueryWithStream(org.apache.jackrabbit.oak.plugins.index.solr.query.SolrIndexQueryTest):
>  expected: but was:
> {noformat}
> The cause of this might well be something as simple as the test case 
> incorrectly expecting a specific ordering of search results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3303) FileStore flush thread can get stuck

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3303:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> FileStore flush thread can get stuck
> 
>
> Key: OAK-3303
> URL: https://issues.apache.org/jira/browse/OAK-3303
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
> Fix For: 1.3.12
>
>
> In some very rare circumstances the flush thread was seen as possibly stuck 
> for a while following a restart of the system. This results in data loss on 
> restart (the system will roll back to the latest persisted revision on 
> restart), and worse, there's no way of extracting the latest head revision 
> using the tar files, so recovery is not (yet) possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3436) Prevent missing checkpoint due to unstable topology from causing complete reindexing

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3436:
---
Fix Version/s: (was: 1.0.25)
   (was: 1.2.9)
   (was: 1.3.11)
   1.3.12

> Prevent missing checkpoint due to unstable topology from causing complete 
> reindexing
> 
>
> Key: OAK-3436
> URL: https://issues.apache.org/jira/browse/OAK-3436
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.3.12
>
> Attachments: AsyncIndexUpdateClusterTest.java, OAK-3436-0.patch
>
>
> Async indexing logic relies on the embedding application to ensure that the async 
> indexing job is run as a singleton in a cluster. For Sling based apps it 
> depends on Sling Discovery support. At times it is being seen that if the 
> topology is not stable then different cluster nodes can consider themselves the 
> leader and execute the async indexing job concurrently.
> This can cause problems as both cluster nodes might not see the same repository 
> state (due to write skew and eventual consistency) and might remove the 
> checkpoint which the other cluster node is still relying upon. For e.g. consider 
> a 2 node cluster N1 and N2 where both are performing async indexing.
> # Base state - CP1 is the checkpoint for the "async" job
> # N2 starts indexing and removes changes CP1 to CP2. For Mongo the 
> checkpoints are saved in the {{settings}} collection
> # N1 also decides to execute indexing but has not yet seen the latest 
> repository state, so it still thinks that CP1 is the base checkpoint and tries to 
> read it. However CP1 is already removed from {{settings}} and this makes N1 
> think that the checkpoint is missing, and it decides to reindex everything!
> To avoid this the topology must be stable, but at the Oak level we should still handle 
> such a case and avoid doing a full reindexing. So we would need to have a 
> {{MissingCheckpointStrategy}} similar to the {{MissingIndexEditorStrategy}} as 
> done in OAK-2203 
> Possible approaches
> # A1 - Fail the indexing run if the checkpoint is missing - A checkpoint can be 
> missing for valid and for invalid reasons; we need to see what the valid 
> scenarios are in which a checkpoint can go missing
> # A2 - When a checkpoint is created, also store the creation time. When a 
> checkpoint is found to be missing and it is a *recent* checkpoint then fail the 
> run. For e.g. we would fail the run as long as the missing checkpoint is 
> less than an hour old (for a just-started process take the startup time into account)
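
A minimal sketch of approach A2, with assumed names rather than the actual Oak API: the checkpoint creation time is recorded when the checkpoint is made, and a missing but recent checkpoint fails the run instead of triggering a full reindex.

{noformat}
import java.util.concurrent.TimeUnit;

public class MissingCheckpointGuard {

    private static final long RECENT_WINDOW_MS = TimeUnit.HOURS.toMillis(1);

    private final long processStartTime = System.currentTimeMillis();

    /**
     * @param checkpointCreationTime creation time recorded when the checkpoint
     *        was made, or -1 if unknown (e.g. checkpoint created before this change)
     * @return true if the indexing run should fail instead of reindexing
     */
    public boolean failRun(long checkpointCreationTime) {
        long now = System.currentTimeMillis();
        // for a just-started process, take the startup time into account
        long reference = Math.max(checkpointCreationTime, processStartTime);
        return now - reference < RECENT_WINDOW_MS;
    }
}
{noformat}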



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2910) oak-jcr bundle should be usable as a standalone bundle

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2910:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> oak-jcr bundle should be usable as a standalone bundle
> --
>
> Key: OAK-2910
> URL: https://issues.apache.org/jira/browse/OAK-2910
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: jcr
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: modularization, osgi, technical_debt
> Fix For: 1.3.12
>
>
> Currently the oak-jcr bundle needs to be embedded within some other bundle if 
> Oak is to be properly configured in an OSGi env. We need to revisit this aspect 
> and see what needs to be done to enable Oak to be properly configured without 
> requiring the oak-jcr bundle to be embedded in the repo



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3159) Extend documentation for SegmentNodeStoreService in http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3159:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Extend documentation for SegmentNodeStoreService in 
> http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore
> ---
>
> Key: OAK-3159
> URL: https://issues.apache.org/jira/browse/OAK-3159
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: doc
>Reporter: Konrad Windszus
> Fix For: 1.3.12
>
>
> Currently the documentation at 
> http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore only 
> documents the properties
> # repository.home and
> # tarmk.size
> All the other properties, like customBlobStore, tarmk.mode and others, are not 
> documented. Please extend that. Also it would be good if the table could be 
> extended with the supported type for each individual property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2722) IndexCopier fails to delete older index directory upon reindex

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2722:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> IndexCopier fails to delete older index directory upon reindex
> --
>
> Key: OAK-2722
> URL: https://issues.apache.org/jira/browse/OAK-2722
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: resilience
> Fix For: 1.3.12
>
>
> {{IndexCopier}} tries to remove the older index directory in case of reindex. 
> This might fail on platforms like Windows if the files are still memory 
> mapped or locked.
> For deleting directories we would need to take a similar approach to the one 
> used for deleting old index files, i.e. retry later.
> Due to this the following test fails on Windows (per [~julian.resc...@gmx.de]):
> {noformat}
> Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec <<< 
> FAILURE!
> deleteOldPostReindex(org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest)
>   Time elapsed: 0.02 sec  <<< FAILURE!
> java.lang.AssertionError: Old index directory should have been removed
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertFalse(Assert.java:68)
> at 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.deleteOldPostReindex(IndexCopierTest.java:160)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> {noformat}
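
A sketch of the retry approach described above (names invented): directories that cannot be deleted right away are remembered and retried later, mirroring what is already done for individual index files.

{noformat}
import java.io.File;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

public class FailedDeleteRetrier {

    private final Set<File> pendingDeletes = new CopyOnWriteArraySet<File>();

    /** Try to delete now; remember the directory for later if it fails. */
    public void delete(File dir) {
        if (!deleteRecursive(dir)) {
            pendingDeletes.add(dir);
        }
    }

    /** Invoked periodically (or on the next reindex) to retry earlier failures. */
    public void retryPendingDeletes() {
        for (File dir : pendingDeletes) {
            if (deleteRecursive(dir)) {
                pendingDeletes.remove(dir);
            }
        }
    }

    private boolean deleteRecursive(File dir) {
        File[] children = dir.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursive(child);
            }
        }
        return dir.delete();
    }
}
{noformat}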



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3450) Configuration to have case insensitive suggestions

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3450:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Configuration to have case insensitive suggestions
> --
>
> Key: OAK-3450
> URL: https://issues.apache.org/jira/browse/OAK-3450
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
> Fix For: 1.3.12
>
>
> Currently suggestions follow the same case as requested in the query parameter. 
> It makes sense to allow them to be case insensitive, e.g. asking for 
> suggestions for {{cat}} should give {{Cat is an animal}} as well as 
> {{category needs to be assigned}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2629) Cleanup Oak Travis jobs

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2629:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Cleanup Oak Travis jobs
> ---
>
> Key: OAK-2629
> URL: https://issues.apache.org/jira/browse/OAK-2629
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: it
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
>  Labels: CI
> Fix For: 1.3.12
>
>
> Since we're moving toward Jenkins, let's remove the Travis jobs for Oak. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3362) Estimate compaction based on diff to previous compacted head state

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3362:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Estimate compaction based on diff to previous compacted head state
> --
>
> Key: OAK-3362
> URL: https://issues.apache.org/jira/browse/OAK-3362
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: segmentmk
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Minor
>  Labels: compaction, gc
> Fix For: 1.3.12
>
>
> Food for thought: try to base the compaction estimation on a diff between the 
> latest compacted state and the current state.
> Pros
> * estimation duration would be proportional to the number of changes on the 
> current head state
> * using the size on disk as a reference, we could actually stop the 
> estimation early when we go over the gc threshold.
> * data collected during this diff could in theory be passed as input to the 
> compactor so it could focus on compacting a specific subtree
> Cons
> * need to keep a reference to a previous compacted state. Post-startup and 
> pre-compaction this might prove difficult (except maybe if we only persist 
> the revision similar to what the async indexer is doing currently)
> * coming up with a threshold for running compaction might prove difficult
> * diff might be costly, but still cheaper than the current full diff



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2592) Commit does not ensure w:majority

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2592:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Commit does not ensure w:majority
> -
>
> Key: OAK-2592
> URL: https://issues.apache.org/jira/browse/OAK-2592
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: core, mongomk
>Reporter: Marcel Reutegger
>Assignee: Marcel Reutegger
>  Labels: resilience
> Fix For: 1.3.12
>
>
> The MongoDocumentStore uses {{findAndModify()}} to commit a transaction. This 
> operation does not allow an application-specified write concern and always 
> uses the MongoDB default write concern {{Acknowledged}}. This means a commit 
> may not make it to a majority of a replica set when the primary fails. From a 
> MongoDocumentStore perspective it may appear as if a write was successful and 
> later reverted. See also the test in OAK-1641.
> To fix this, we'd probably have to change the MongoDocumentStore to avoid 
> {{findAndModify()}} and use {{update()}} instead.
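
As an illustration of the proposed direction, a hedged sketch using the MongoDB Java driver 2.x API: {{update()}}, unlike {{findAndModify()}}, accepts an explicit write concern. The collection and field names are placeholders, and note that {{update()}} does not return the previous document the way {{findAndModify()}} does, which is part of what makes the change non-trivial.

{noformat}
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.WriteConcern;
import com.mongodb.WriteResult;

public class MajorityCommitExample {

    public static WriteResult commit(DB db, String id, long revision) {
        DBCollection nodes = db.getCollection("nodes");
        BasicDBObject query = new BasicDBObject("_id", id);
        BasicDBObject update = new BasicDBObject("$set",
                new BasicDBObject("_lastRev", revision));
        // upsert = false, multi = false, write concern = MAJORITY
        return nodes.update(query, update, false, false, WriteConcern.MAJORITY);
    }
}
{noformat}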



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2835) TARMK Cold Standby inefficient cleanup

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2835:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> TARMK Cold Standby inefficient cleanup
> --
>
> Key: OAK-2835
> URL: https://issues.apache.org/jira/browse/OAK-2835
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: segmentmk, tarmk-standby
>Reporter: Alex Parvulescu
>Assignee: Alex Parvulescu
>Priority: Critical
>  Labels: compaction, gc, production, resilience
> Fix For: 1.3.12
>
> Attachments: OAK-2835.patch
>
>
> Following OAK-2817, it turns out that patching the data corruption issue 
> revealed an inefficiency of the cleanup method. Similar to the online 
> compaction situation, the standby has issues clearing some of the in-memory 
> references to old revisions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-937) Query engine index selection tweaks: shortcut and hint

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-937:
--
Fix Version/s: (was: 1.3.11)
   1.3.12

> Query engine index selection tweaks: shortcut and hint
> --
>
> Key: OAK-937
> URL: https://issues.apache.org/jira/browse/OAK-937
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core, query
>Reporter: Alex Parvulescu
>Priority: Minor
>  Labels: performance
> Fix For: 1.3.12
>
>
> This issue covers 2 different changes related to the way the QueryEngine 
> selects a query index:
>  Firstly there could be a way to end the index selection process early via a 
> known constant value: if an index returns a known value token (like -1000) 
> then the query engine would effectively stop iterating through the existing 
> index impls and use that index directly.
>  Secondly it would be nice to be able to specify a desired index (if one is 
> known to perform better) thus skipping the existing selection mechanism (cost 
> calculation and comparison). This could be done via certain query hints [0].
> [0] http://en.wikipedia.org/wiki/Hint_(SQL)
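
An illustrative sketch of both tweaks with invented names (this is not the Oak QueryEngine API): a sentinel cost value lets an index end the selection loop early, and an optional index-name hint bypasses the cost calculation and comparison entirely.

{noformat}
import java.util.List;

public class IndexSelectionSketch {

    /** Sentinel cost: "use this index and stop evaluating the others". */
    static final double USE_THIS_INDEX = -1000;

    interface QueryIndex {
        String getName();
        double getCost(String statement);
    }

    static QueryIndex select(List<QueryIndex> indexes, String statement, String hintedIndexName) {
        // query hint: a desired index skips the cost calculation and comparison
        if (hintedIndexName != null) {
            for (QueryIndex index : indexes) {
                if (hintedIndexName.equals(index.getName())) {
                    return index;
                }
            }
        }
        QueryIndex best = null;
        double bestCost = Double.POSITIVE_INFINITY;
        for (QueryIndex index : indexes) {
            double cost = index.getCost(statement);
            if (cost == USE_THIS_INDEX) {
                return index; // shortcut: end the selection process early
            }
            if (cost < bestCost) {
                bestCost = cost;
                best = index;
            }
        }
        return best;
    }
}
{noformat}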



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3185) Port and refactor jackrabbit-webapp module to Oak

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3185:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Port and refactor jackrabbit-webapp module to Oak 
> --
>
> Key: OAK-3185
> URL: https://issues.apache.org/jira/browse/OAK-3185
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: webapp
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.3.12
>
>
> As mentioned at [1] we should port the jackrabbit-webapp [2] module to Oak 
> and refactor it to run the complete Oak stack. The purpose of this module would be to 
> demonstrate
> # How to embed Oak in standalone web applications which are not based on OSGi
> # How to configure various aspects of Oak via config
> h3. Proposed Approach
> # Copy jackrabbit-webapp to the Oak repo under oak-webapp
> # Refactor the repository initialization logic to use Oak Pojosr to configure 
> the Repository [3]
> # Bonus: configure Felix WebConsole to enable users to see which OSGi 
> services are exposed and which config options are supported
> This would also enable us to document which third-party dependencies are 
> required for getting Oak to work in such environments
> [1] 
> http://mail-archives.apache.org/mod_mbox/jackrabbit-oak-dev/201508.mbox/%3CCAHCW-mkbpS6qSkgFe1h1anFcD-dYWFrcr9xBWx9dpKaxr91Q3Q%40mail.gmail.com%3E
> [2] 
> https://jackrabbit.apache.org/jcr/components/jackrabbit-web-application.html
> [3] https://github.com/apache/jackrabbit-oak/tree/trunk/oak-pojosr



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1610) Improved default indexing by JCR type in SolrIndexEditor

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-1610:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Improved default indexing by JCR type in SolrIndexEditor
> 
>
> Key: OAK-1610
> URL: https://issues.apache.org/jira/browse/OAK-1610
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: solr
>Reporter: Tommaso Teofili
> Fix For: 1.3.12
>
>
> It'd be good to provide a typed indexing default so that properties of a 
> certain type can be mapped to certain Solr dynamic fields with dedicated 
> types. The infrastructure for doing that is already in place as per 
> OakSolrConfiguration#getFieldNameFor(Type) but the default configuration is 
> not properly set with a good mapping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2891) Use more efficient approach to manage in memory map in LengthCachingDataStore

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2891:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Use more efficient approach to manage in memory map in LengthCachingDataStore
> -
>
> Key: OAK-2891
> URL: https://issues.apache.org/jira/browse/OAK-2891
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: upgrade
>Reporter: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.12
>
>
> LengthCachingDataStore introduced in OAK-2882 has an in-memory map for 
> keeping the mapping between blobId and length. This would pose an issue when 
> the number of binaries is very large.
> Instead of an in-memory map we should use some off-heap store like MVStore or 
> MapDB
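
For illustration, a minimal sketch using the H2 MVStore API (one of the two options mentioned); the file name and map name are arbitrary, and the wiring into LengthCachingDataStore is not shown.

{noformat}
import org.h2.mvstore.MVMap;
import org.h2.mvstore.MVStore;

public class LengthMappingExample {

    public static void main(String[] args) {
        // opens (or creates) a disk-backed store instead of a java.util.Map
        MVStore store = MVStore.open("blob-lengths.mv");
        MVMap<String, Long> lengths = store.openMap("blobIdToLength");

        lengths.put("blobId-1", 12345L);
        Long length = lengths.get("blobId-1");
        System.out.println("length = " + length);

        store.commit();
        store.close();
    }
}
{noformat}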



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2477) Move suggester specific config to own configuration node

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2477:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Move suggester specific config to own configuration node
> 
>
> Key: OAK-2477
> URL: https://issues.apache.org/jira/browse/OAK-2477
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
>  Labels: technical_debt
> Fix For: 1.3.12
>
>
> Currently the suggester configuration is controlled via properties defined on the 
> main config / props node, but it'd be good to have its own place to 
> configure the whole suggest feature, so as not to mix it up with the configuration of other 
> features / parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2843) Broadcasting cache

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2843:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Broadcasting cache
> --
>
> Key: OAK-2843
> URL: https://issues.apache.org/jira/browse/OAK-2843
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
> Fix For: 1.3.12
>
>
> In a cluster environment, we could speed up reading if the cache(s) broadcast 
> data to other instances. This would avoid bottlenecks at the storage layer 
> (MongoDB, RDBMSs).
> The configuration metadata (IP addresses and ports of where to send data to, 
> a unique identifier of the repository and the cluster nodes, possibly an 
> encryption key) rarely changes and can be stored in the same place as we 
> store cluster metadata (the cluster info collection). That way, in many cases no 
> manual configuration is needed. We could use TCP/IP and/or UDP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2719) Warn about local copy size different than remote copy in oak-lucene with copyOnRead enabled

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2719:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Warn about local copy size different than remote copy in oak-lucene with 
> copyOnRead enabled
> ---
>
> Key: OAK-2719
> URL: https://issues.apache.org/jira/browse/OAK-2719
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
>  Labels: resilience
> Fix For: 1.3.12
>
>
> At times the following warning is seen in the logs:
> {noformat}
> 31.03.2015 14:04:57.610 *WARN* [pool-6-thread-7] 
> org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopier Found local copy 
> for _0.cfs in 
> NIOFSDirectory@/path/to/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/1
>  
> lockFactory=NativeFSLockFactory@/path/to/index/e5a943cdec3000bd8ce54924fd2070ab5d1d35b9ecf530963a3583d43bf28293/1
>  but size of local 1040384 differs from remote 1958385. Content would be read 
> from remote file only
> {noformat}
> The file length check provides a weak check around index file consistency. In 
> some cases this warning is misleading. For example:
> # Index version Rev1 - Task submitted to copy index file F1 
> # Index updated to Rev2 - Directory bound to Rev1 is closed
> # Read is performed with Rev2 for F1 - Here, as the file is being created 
> locally, the size differs while the copying is in progress
> In such a case the logic should ensure that once the copy is done the local file 
> gets used
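
An illustrative sketch of that behaviour (not the IndexCopier code): reads keep going to the remote file while the background copy runs, and switch to the local copy only after the copy has completed with matching sizes.

{noformat}
import java.util.concurrent.atomic.AtomicBoolean;

public class LocalCopySwitch {

    private final AtomicBoolean copyDone = new AtomicBoolean(false);

    /** Called by the background copy task once the local file is fully written. */
    public void markCopyDone(long localLength, long remoteLength) {
        if (localLength == remoteLength) {
            copyDone.set(true);
        }
    }

    /**
     * Decide per read which source to use. While the copy is in progress a
     * size mismatch is expected and not worth a warning; only a mismatch
     * after completion would be suspicious.
     */
    public boolean useLocalCopy() {
        return copyDone.get();
    }
}
{noformat}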



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2478) Move spellcheck config to own configuration node

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2478:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Move spellcheck config to own configuration node
> 
>
> Key: OAK-2478
> URL: https://issues.apache.org/jira/browse/OAK-2478
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Tommaso Teofili
>  Labels: technical_debt
> Fix For: 1.3.12
>
>
> Currently the spellcheck configuration is controlled via properties defined on the 
> main config / props node, but it'd be good to have its own place to 
> configure the whole spellcheck feature, so as not to mix it up with the configuration of other 
> features / parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3071) Add a compound index for _modified + _id

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3071:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Add a compound index for _modified + _id
> 
>
> Key: OAK-3071
> URL: https://issues.apache.org/jira/browse/OAK-3071
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: mongomk
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>  Labels: performance, resilience
> Fix For: 1.3.12
>
>
> As explained in OAK-1966 the diff logic makes a call like
> bq. db.nodes.find({ _id: { $gt: "3:/content/foo/01/", $lt: 
> "3:/content/foo010" }, _modified: { $gte: 1405085300 } }).sort({_id:1})
> For better and deterministic query performance we would need to create a 
> compound index like \{_modified:1, _id:1\}. This index would ensure that 
> Mongo does not have to perform an object scan while evaluating such a query.
> Care must be taken that the index is only created by default for fresh setups. For 
> existing setups we should expose a JMX operation which can be invoked by a 
> system admin to create the required index during a maintenance window
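
A hedged sketch of the index creation with the MongoDB Java driver 2.x API. Whether it runs only on fresh setups or behind the proposed JMX operation is exactly the open point above, and the background-build option is just one reasonable choice, not part of this issue.

{noformat}
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;

public class CompoundIndexExample {

    public static void createModifiedIdIndex(DB db) {
        DBCollection nodes = db.getCollection("nodes");
        // the proposed compound index {_modified: 1, _id: 1}
        BasicDBObject keys = new BasicDBObject("_modified", 1).append("_id", 1);
        // background build to avoid blocking the collection on existing setups
        BasicDBObject options = new BasicDBObject("background", true);
        nodes.createIndex(keys, options);
    }
}
{noformat}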



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2847) Dependency cleanup

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2847:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Dependency cleanup 
> ---
>
> Key: OAK-2847
> URL: https://issues.apache.org/jira/browse/OAK-2847
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>Reporter: Michael Dürig
>Assignee: Vikas Saurabh
>  Labels: technical_debt
> Fix For: 1.3.12
>
>
> Early in the next release cycle we should go through the list of Oak's 
> dependencies and decide whether we have candidates we want to upgrade and 
> remove orphaned dependencies. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1828) Improved SegmentWriter

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-1828:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Improved SegmentWriter
> --
>
> Key: OAK-1828
> URL: https://issues.apache.org/jira/browse/OAK-1828
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: segmentmk
>Reporter: Jukka Zitting
>Assignee: Alex Parvulescu
>Priority: Minor
>  Labels: technical_debt
> Fix For: 1.3.12
>
>
> At about 1kLOC and dozens of methods, the SegmentWriter class is currently a bit 
> too complex for one of the key components of the TarMK. It also uses a 
> somewhat non-obvious mix of synchronized and unsynchronized code to 
> coordinate multiple concurrent threads that may be writing content at the 
> same time. The synchronization blocks are also broader than what really would 
> be needed, which in some cases causes unnecessary lock contention under 
> concurrent write loads.
> To improve the readability and maintainability of the code, and to increase 
> the performance of concurrent writes, it would be useful to split part of the 
> SegmentWriter functionality into a separate RecordWriter class that would be 
> responsible for writing individual records into a segment. The 
> SegmentWriter.prepare() method would return a new RecordWriter instance, and 
> the higher-level SegmentWriter methods would use the returned instance for 
> all the work that's currently guarded in synchronization blocks.
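
A very rough sketch of the split described above; the interface name comes from the description, but the signature and the usage comment are assumptions, not the eventual Oak design.

{noformat}
/** Writes individual records into a single segment (sketch, assumed signature). */
public interface RecordWriter {

    /** Writes one record and returns its record id / offset within the segment. */
    int writeRecord(byte[] data);
}

// A higher-level SegmentWriter method would then look roughly like:
//
//     RecordWriter writer = prepare(recordType, size);   // synchronized, short
//     int recordId = writer.writeRecord(bytes);          // no shared lock needed
{noformat}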



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2797) Closeable aspect of Analyzer should be accounted for

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2797:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Closeable aspect of Analyzer should be accounted for
> 
>
> Key: OAK-2797
> URL: https://issues.apache.org/jira/browse/OAK-2797
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Chetan Mehrotra
>  Labels: technical_debt
> Fix For: 1.3.12
>
>
> Lucene {{Analyzer}} implements the {{Closeable}} [1] interface and internally it 
> has ThreadLocal storage of some persistent resources.
> So far in oak-lucene we do not take care of closing any analyzer. In fact we 
> use a singleton Analyzer in all cases. Opening this bug to think about this 
> aspect and see if our usage of Analyzer follows best practices
> [1] 
> http://lucene.apache.org/core/4_7_0/core/org/apache/lucene/analysis/Analyzer.html#close%28%29
> /cc [~teofili] [~alex.parvulescu]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3149) SuggestHelper should manage a suggestor per index definition

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3149:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> SuggestHelper should manage a suggestor per index definition
> 
>
> Key: OAK-3149
> URL: https://issues.apache.org/jira/browse/OAK-3149
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Tommaso Teofili
> Fix For: 1.3.12
>
>
> {{SuggestHelper}} currently keeps a static reference to the suggestor and thus 
> has a singleton suggestor for the whole repo. Instead it should be implemented 
> in such a way that a suggestor instance is associated with each index definition. 
> Logically the suggestor instance should be part of IndexNode, similar to how 
> {{IndexSearcher}} instances are managed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-1695) Document Solr index

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-1695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-1695:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Document Solr index
> ---
>
> Key: OAK-1695
> URL: https://issues.apache.org/jira/browse/OAK-1695
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: doc, solr
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
>  Labels: documentation
> Fix For: 1.3.12
>
>
> Provide documentation about the Oak Solr index. That should contain 
> information about the design and how to configure it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3642) Enable failing of commit upon Missing IndexEditor implementation by default

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3642:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Enable failing of commit upon Missing IndexEditor implementation by default
> ---
>
> Key: OAK-3642
> URL: https://issues.apache.org/jira/browse/OAK-3642
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.12
>
>
> With OAK-3505 we exposed a config option to enable failing the commit if any 
> IndexEditor is missing. The default should be changed to always fail the 
> commit



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3286) Persistent Cache improvements

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3286:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Persistent Cache improvements
> -
>
> Key: OAK-3286
> URL: https://issues.apache.org/jira/browse/OAK-3286
> Project: Jackrabbit Oak
>  Issue Type: Epic
>  Components: cache
>Reporter: Michael Marth
>Priority: Minor
> Fix For: 1.3.12
>
>
> Issue for collecting various improvements to the persistent cache



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3406) Configuration to rank exact match suggestions over partial match suggestions

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3406:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Configuration to rank exact match suggestions over partial match suggestions
> 
>
> Key: OAK-3406
> URL: https://issues.apache.org/jira/browse/OAK-3406
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Vikas Saurabh
>Assignee: Vikas Saurabh
>Priority: Minor
> Fix For: 1.3.12
>
>
> Currently, a suggestion query ranks the results according to popularity. But 
> at times it's desirable for suggested phrases based on exact matches to 
> be ranked above a more popular suggestion based on a partial match. E.g. a 
> repository might have 1000 docs with {{windows is a very popular OS}} and 
> say 4 with {{win over them}} - it's a useful case to configure suggestions 
> such that for a suggestion query for {{win}}, we'd get {{win over them}} as 
> a higher-ranked suggestion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2808) Active deletion of 'deleted' Lucene index files from DataStore without relying on full scale Blob GC

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2808:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Active deletion of 'deleted' Lucene index files from DataStore without 
> relying on full scale Blob GC
> 
>
> Key: OAK-2808
> URL: https://issues.apache.org/jira/browse/OAK-2808
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Thomas Mueller
>  Labels: datastore, performance
> Fix For: 1.3.12
>
> Attachments: OAK-2808-1.patch, copyonread-stats.png
>
>
> With the storing of Lucene index files within the DataStore, our usage pattern
> of the DataStore has changed between JR2 and Oak.
> With JR2 the writes were mostly application based, i.e. if the application
> stores a pdf/image file then that would be stored in the DataStore. JR2 by
> default would not write its own data to the DataStore. Further, in deployments
> where a large amount of binary content is present, systems tend to
> share the DataStore to avoid duplication of storage. In such cases
> running Blob GC is a non-trivial task as it involves a manual step and
> coordination across multiple deployments. Due to this, systems tend to
> run GC less frequently.
> Now with Oak, apart from the application, the Oak system itself *actively*
> uses the DataStore to store the index files for Lucene, and there the
> churn might be much higher, i.e. the frequency of creation and deletion of
> index files is a lot higher. This would accelerate the rate of garbage
> generation and thus put a lot more pressure on the DataStore storage
> requirements.
> Discussion thread http://markmail.org/thread/iybd3eq2bh372zrl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3598) Export cache related classes for usage in other oak bundle

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3598:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Export cache related classes for usage in other oak bundle
> --
>
> Key: OAK-3598
> URL: https://issues.apache.org/jira/browse/OAK-3598
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: cache
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.3.12
>
>
> For OAK-3092 oak-lucene would need to access classes from the 
> {{org.apache.jackrabbit.oak.cache}} package. For now it's limited to 
> {{CacheStats}} to expose the cache-related statistics.
> This task is meant to determine the steps needed to export the package 
> * Update the pom.xml to export the package
> * Review the current set of classes to see if they need to be reviewed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2675) Include change type information in perf logs for diff logic

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2675:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Include change type information in perf logs for diff logic
> ---
>
> Key: OAK-2675
> URL: https://issues.apache.org/jira/browse/OAK-2675
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: core
>Reporter: Chetan Mehrotra
>Priority: Minor
>  Labels: observation, performance, resilience, tooling
> Fix For: 1.3.12
>
>
> Currently the diff perf logs in {{NodeObserver}} do not indicate what type 
> of change was processed, i.e. whether the change was an internal one or an external 
> one. 
> Having this information would allow us to determine how the cache is being 
> used. For e.g. if we see slower numbers even for local changes then that would 
> indicate that there is some issue with the diff cache and it's not being 
> utilized effectively 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3629) Index corruption seen with CopyOnRead when index definition is recreated

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3629:
---
Fix Version/s: (was: 1.0.25)
   (was: 1.2.9)
   (was: 1.3.11)
   1.3.12

> Index corruption seen with CopyOnRead when index definition is recreated
> ---
>
> Key: OAK-3629
> URL: https://issues.apache.org/jira/browse/OAK-3629
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: lucene
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
>Priority: Minor
> Fix For: 1.3.12
>
>
> CopyOnRead logic relies on {{reindexCount}} to determine the name of the 
> directory into which index files are copied. In the normal flow, if the index 
> is reindexed then this count is increased and newer index files 
> get copied to a new directory.
> However, if the index definition node gets recreated due to some deployment 
> process then this count gets reset to 0. Because of this, newly created index 
> files from reindexing start getting copied to an already existing 
> directory, and that can lead to corruption.
> So what happened here was
> # The system started with index definition I1 and indexing completed with 
> index files saved under index/hash(indexpath)/1 (where 1 is the current reindex 
> count)
> # A new index definition package was deployed which reset the index count. 
> Now reindexing happened again and the CopyOnRead logic, per the current design, reused 
> the existing index directory. And it so happens that Lucene creates a file with the 
> same name and same size but different content. This defeats the CopyOnRead 
> length-based index corruption check and thus causes the new local Lucene 
> index to be corrupted
> *Note that the corruption here is transient, i.e. the persisted index is not 
> corrupted*. Only the locally copied index gets corrupted. Cleaning up the 
> index directory fixes the issue and can be used as a workaround.
> *Fix*
> After discussing with [~tmueller], the following approach can be used.
> Instead of relying on the reindex count, we can maintain a hidden, randomly 
> generated uuid and store it in the index config. This would be used to derive 
> the name of the directory on the filesystem. If the index definition gets reset then 
> the uuid can be regenerated. 
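
A small sketch of that fix with assumed names (a plain {{Map}} stands in for the index definition node, and {{:uid}} is a hypothetical hidden property): the local directory name is derived from a stored uuid instead of the reindex count.

{noformat}
import java.util.Map;
import java.util.UUID;

public class IndexDirNameExample {

    static final String PROP_UID = ":uid"; // hypothetical hidden property name

    /** Returns the stored uid, generating and storing one if it is missing. */
    static String getOrCreateUid(Map<String, String> indexDefinition) {
        String uid = indexDefinition.get(PROP_UID);
        if (uid == null) {
            uid = UUID.randomUUID().toString();
            indexDefinition.put(PROP_UID, uid);
        }
        return uid;
    }

    /** Directory name derived from hash(indexPath) plus the uid, not the reindex count. */
    static String localDirName(String indexPath, String uid) {
        return Integer.toHexString(indexPath.hashCode()) + "-" + uid;
    }
}
{noformat}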



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3368) Speed up ExternalPrivateStoreIT and ExternalSharedStoreIT

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3368:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Speed up ExternalPrivateStoreIT and ExternalSharedStoreIT
> -
>
> Key: OAK-3368
> URL: https://issues.apache.org/jira/browse/OAK-3368
> Project: Jackrabbit Oak
>  Issue Type: Sub-task
>  Components: tarmk-standby
>Reporter: Marcel Reutegger
>Assignee: Manfred Baedke
> Fix For: 1.3.12
>
>
> Both tests run for more than 5 minutes. Most of the time the tests are 
> somehow stuck in shutting down the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3658) Test failures: JackrabbitNodeTest#testRename and testRenameEventHandling

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3658:
---
Fix Version/s: 1.3.12

> Test failures: JackrabbitNodeTest#testRename and testRenameEventHandling
> 
>
> Key: OAK-3658
> URL: https://issues.apache.org/jira/browse/OAK-3658
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: jcr
>Reporter: Amit Jain
>Priority: Minor
> Fix For: 1.3.12
>
>
> Tests fail regularly on trunk - {{JackrabbitNodeTest#testRename}} and 
> {{JackrabbitNodeTest#testRenameEventHandling}}.
> {noformat}
> Test set: org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest
> ---
> Tests run: 8, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 0.106 sec <<< 
> FAILURE!
> testRenameEventHandling(org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest)  
> Time elapsed: 0.01 sec  <<< ERROR!
> javax.jcr.nodetype.ConstraintViolationException: Item is protected.
>   at 
> org.apache.jackrabbit.oak.jcr.session.ItemImpl$ItemWriteOperation.checkPreconditions(ItemImpl.java:98)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.prePerform(SessionDelegate.java:614)
>   at 
> org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.performVoid(SessionDelegate.java:270)
>   at 
> org.apache.jackrabbit.oak.jcr.session.NodeImpl.rename(NodeImpl.java:1485)
>   at 
> org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest.testRenameEventHandling(JackrabbitNodeTest.java:124)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at 
> org.apache.jackrabbit.test.AbstractJCRTest.run(AbstractJCRTest.java:464)
>   at junit.framework.TestSuite.runTest(TestSuite.java:252)
>   at junit.framework.TestSuite.run(TestSuite.java:247)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> testRename(org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest)  Time elapsed: 
> 0.007 sec  <<< FAILURE!
> junit.framework.ComparisonFailure: expected:<[a]> but was:<[rep:policy]>
>   at junit.framework.Assert.assertEquals(Assert.java:100)
>   at junit.framework.Assert.assertEquals(Assert.java:107)
>   at junit.framework.TestCase.assertEquals(TestCase.java:269)
>   at 
> org.apache.jackrabbit.oak.jcr.JackrabbitNodeTest.testRename(JackrabbitNodeTest.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at 

[jira] [Updated] (OAK-2556) do intermediate commit during async indexing

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2556:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> do intermediate commit during async indexing
> 
>
> Key: OAK-2556
> URL: https://issues.apache.org/jira/browse/OAK-2556
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.0.11
>Reporter: Stefan Egli
>  Labels: resilience
> Fix For: 1.3.12
>
>
> A recent issue found at a customer reveals a potential issue with the async 
> indexer. Reading AsyncIndexUpdate.updateIndex, it looks like it does 
> the entire update of the async indexer *in one go*, i.e. in one commit.
> When, however, there is for some reason a huge diff that the async indexer 
> has to process, the 'one big commit' can become gigantic. There is in fact no limit 
> to the size of the commit.
> So the suggestion is to do intermediate commits while the async indexer is 
> running. The reason this is acceptable is that with async 
> indexing the index is not 100% up-to-date anyway - so it would not make 
> much of a difference if it committed after every 100 or 1000 changes 
> either.
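
A minimal sketch of the suggestion (invented names, not the AsyncIndexUpdate code): count the processed changes and commit every N of them instead of committing the whole diff at once.

{noformat}
public class BatchingIndexUpdate {

    private static final int BATCH_SIZE = 1000; // e.g. commit every 1000 changes

    private int pendingChanges;

    interface Committer {
        void commit();
    }

    /** Called for every change processed by the async index update. */
    void onChange(Committer committer) {
        if (++pendingChanges >= BATCH_SIZE) {
            committer.commit();      // intermediate commit
            pendingChanges = 0;
        }
    }

    /** Final commit at the end of the async indexing run. */
    void done(Committer committer) {
        if (pendingChanges > 0) {
            committer.commit();
        }
        pendingChanges = 0;
    }
}
{noformat}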



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-2618) Improve performance of queries with ORDER BY and multiple OR filters

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-2618:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Improve performance of queries with ORDER BY and multiple OR filters
> 
>
> Key: OAK-2618
> URL: https://issues.apache.org/jira/browse/OAK-2618
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Amit Jain
>Assignee: Amit Jain
>  Labels: performance
> Fix For: 1.3.12
>
>
> When multiple OR constraints are specified in the XPATH query, it is broken up 
> into a union of multiple clauses. If the query includes an order by clause, the 
> sorting in this case is done by traversing the result set in memory, leading 
> to slow query performance.
> Possible improvements could include:
> * For indexes which can support multiple filters (like lucene, solr) such 
> queries should be efficient and the query engine can pass the query through as 
> is.
> ** Possibly also needed for other cases. So we could have some sort of 
> capability advertiser for indexes which can hint the query engine 
> and/or
> * Batched merging of the sorted iterators returned for the multiple union 
> queries (possibly externally).
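
The second improvement is essentially a k-way merge; the sketch below (generic, not tied to Oak's query internals) merges the already-sorted iterators returned by the union branches without buffering and sorting the whole result set in memory.

{noformat}
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.PriorityQueue;

public class SortedUnionMerge<T> implements Iterator<T> {

    private static class Head<T> {
        final T value;
        final Iterator<T> source;
        Head(T value, Iterator<T> source) { this.value = value; this.source = source; }
    }

    private final PriorityQueue<Head<T>> heads;

    public SortedUnionMerge(List<Iterator<T>> sortedIterators, final Comparator<T> order) {
        heads = new PriorityQueue<Head<T>>(Math.max(1, sortedIterators.size()),
                new Comparator<Head<T>>() {
                    public int compare(Head<T> a, Head<T> b) {
                        return order.compare(a.value, b.value);
                    }
                });
        // seed the heap with the first element of each sorted branch
        for (Iterator<T> it : sortedIterators) {
            if (it.hasNext()) {
                heads.add(new Head<T>(it.next(), it));
            }
        }
    }

    public boolean hasNext() {
        return !heads.isEmpty();
    }

    public T next() {
        if (heads.isEmpty()) {
            throw new NoSuchElementException();
        }
        Head<T> smallest = heads.poll();
        if (smallest.source.hasNext()) {
            heads.add(new Head<T>(smallest.source.next(), smallest.source));
        }
        return smallest.value;
    }
}
{noformat}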



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3253) Support caching in FileDataStoreService

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3253:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Support caching in FileDataStoreService
> ---
>
> Key: OAK-3253
> URL: https://issues.apache.org/jira/browse/OAK-3253
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: blob
>Affects Versions: 1.3.3
>Reporter: Shashank Gupta
>Assignee: Shashank Gupta
>  Labels: candidate_oak_1_0, candidate_oak_1_2, docs-impacting, 
> features, performance
> Fix For: 1.3.12
>
>
> FDS on SAN/NAS storage is not efficient, as every access involves a network 
> call. In Oak, indexes are stored on SAN/NAS, and even an idle system does a lot 
> of reads of system-generated data.
> Enable caching in FDS so that reads are served locally and uploads to SAN/NAS 
> happen asynchronously.
> See [previous 
> discussions|https://issues.apache.org/jira/browse/OAK-3005?focusedCommentId=14700801=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14700801]
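
As an illustration only, the caching idea roughly amounts to the following sketch; 
the {{Backend}} interface and the class name are assumptions for the example and 
not Oak's DataStore API:

{noformat}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustration of the caching idea only; "Backend" stands in for the slow
// SAN/NAS storage and is not Oak's DataStore API.
public class CachingFileStore {

    interface Backend {
        void write(String id, Path file) throws IOException;   // copy to SAN/NAS
        Path read(String id) throws IOException;                // fetch from SAN/NAS
    }

    private final Path cacheDir;
    private final Backend backend;
    private final ExecutorService uploader = Executors.newSingleThreadExecutor();

    public CachingFileStore(Path cacheDir, Backend backend) {
        this.cacheDir = cacheDir;
        this.backend = backend;
    }

    /** Store the binary locally first, then upload to SAN/NAS in the background. */
    public void add(String id, Path file) throws IOException {
        Path local = cacheDir.resolve(id);
        Files.copy(file, local, StandardCopyOption.REPLACE_EXISTING);
        uploader.submit(() -> {
            try {
                backend.write(id, local);
            } catch (IOException e) {
                // a real implementation needs retry / staging bookkeeping here
            }
        });
    }

    /** Serve reads from the local cache; fall back to the backend on a miss. */
    public Path get(String id) throws IOException {
        Path local = cacheDir.resolve(id);
        if (!Files.exists(local)) {
            Files.copy(backend.read(id), local, StandardCopyOption.REPLACE_EXISTING);
        }
        return local;
    }
}
{noformat}

Writes land in the local cache immediately and are pushed to SAN/NAS by a 
background uploader; reads are served locally whenever possible.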



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3176) Provide an option to include a configured boost query while searching

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3176:
---
Fix Version/s: (was: 1.3.11)
   1.3.12

> Provide an option to include a configured boost query while searching
> -
>
> Key: OAK-3176
> URL: https://issues.apache.org/jira/browse/OAK-3176
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: solr
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 1.2.9, 1.3.12
>
>
> For tweaking relevance it's sometimes useful to include a boost query that is 
> applied at query time and modifies the ranking accordingly.
> This can also be done by hand, by setting it as a default parameter on the 
> /select request handler, but for convenience it would be better if the Solr 
> instance configuration files did not have to be touched.
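
A small sketch of what passing such a configured boost query at query time could 
look like, assuming SolrJ and the edismax query parser (class and field names 
below are illustrative):

{noformat}
import org.apache.solr.client.solrj.SolrQuery;

// Sketch only: apply a configured boost query ("bq") at query time via SolrJ,
// assuming the edismax parser, instead of hard-coding it as a default parameter
// of the /select request handler.
public class BoostedQueryFactory {

    private final String configuredBoostQuery;  // e.g. taken from the index configuration

    public BoostedQueryFactory(String configuredBoostQuery) {
        this.configuredBoostQuery = configuredBoostQuery;
    }

    public SolrQuery create(String userQuery) {
        SolrQuery query = new SolrQuery(userQuery);
        query.set("defType", "edismax");
        if (configuredBoostQuery != null && !configuredBoostQuery.isEmpty()) {
            // documents matching the boost query get an additive score boost
            query.set("bq", configuredBoostQuery);
        }
        return query;
    }
}
{noformat}

This way the boost query could come from the Oak-side configuration rather than 
from the Solr instance's own configuration files.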



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (OAK-3436) Prevent missing checkpoint due to unstable topology from causing complete reindexing

2015-11-22 Thread Amit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-3436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Jain updated OAK-3436:
---
Fix Version/s: 1.0.25
   1.2.9

> Prevent missing checkpoint due to unstable topology from causing complete 
> reindexing
> 
>
> Key: OAK-3436
> URL: https://issues.apache.org/jira/browse/OAK-3436
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: query
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.2.9, 1.0.25, 1.3.12
>
> Attachments: AsyncIndexUpdateClusterTest.java, OAK-3436-0.patch
>
>
> The async indexing logic relies on the embedding application to ensure that the 
> async indexing job runs as a singleton in the cluster. For Sling based apps it 
> depends on Sling Discovery support. It has been observed that if the topology is 
> not stable, different cluster nodes can each consider themselves the leader and 
> execute the async indexing job concurrently.
> This can cause problems, because the two cluster nodes might not see the same 
> repository state (due to write skew and eventual consistency) and one might 
> remove the checkpoint that the other is still relying upon. For example, 
> consider a 2-node cluster N1 and N2 where both are performing async indexing:
> # Base state: CP1 is the checkpoint for the "async" job.
> # N2 starts indexing, processes the changes from CP1 to CP2, and removes CP1. 
> For Mongo the checkpoints are saved in the {{settings}} collection.
> # N1 also decides to execute indexing, but has not yet seen the latest 
> repository state, so it still thinks CP1 is the base checkpoint and tries to 
> read it. However, CP1 has already been removed from {{settings}}, which makes N1 
> conclude that the checkpoint is missing, and it decides to reindex everything!
> To avoid this the topology must be stable, but at the Oak level we should still 
> handle such a case and avoid a full reindexing. So we would need a 
> {{MissingCheckpointStrategy}} similar to the {{MissingIndexEditorStrategy}} 
> introduced in OAK-2203.
> Possible approaches:
> # A1 - Fail the indexing run if the checkpoint is missing. A missing checkpoint 
> can have valid and invalid reasons; we need to see in which scenarios a 
> checkpoint can legitimately go missing.
> # A2 - When a checkpoint is created, also store its creation time. When a 
> checkpoint is found to be missing and it is a *recent* checkpoint, fail the run. 
> For example, fail the run as long as the missing checkpoint is less than an hour 
> old (for a freshly started instance, take the startup time into account). A 
> sketch of this check follows below.
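
One possible reading of approach A2, sketched with illustrative names (this is not 
existing Oak API): fail the indexing run when a recently created checkpoint has 
gone missing, and fall back to full reindexing only for old checkpoints:

{noformat}
import java.util.concurrent.TimeUnit;

// One possible reading of approach A2, with illustrative names only: fail the
// indexing run (instead of reindexing) when the missing checkpoint is recent.
public class MissingCheckpointCheck {

    private static final long MAX_RECENT_AGE_MILLIS = TimeUnit.HOURS.toMillis(1);

    /**
     * @param checkpointCreatedMillis creation time stored alongside the checkpoint (A2)
     * @param startupMillis           startup time of this cluster node
     * @param nowMillis               current time
     * @return true if the run should fail rather than trigger a full reindex
     */
    public boolean failRun(long checkpointCreatedMillis, long startupMillis, long nowMillis) {
        // for a freshly started instance, count from startup rather than creation
        long reference = Math.max(checkpointCreatedMillis, startupMillis);
        return nowMillis - reference < MAX_RECENT_AGE_MILLIS;
    }
}
{noformat}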



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (OAK-2549) Persistent Cache: support append-only mode

2015-11-22 Thread Dheeraj Khanna (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021162#comment-15021162
 ] 

Dheeraj Khanna commented on OAK-2549:
-

Hi [~tmueller],

We saw the same *WARN* message again while shutting down the 561->61 upgraded 
server after the first restart of AEM61.
1. I see this is only a warning message, but I want to check whether it has any 
impact other than being unable to read the particular node.
2. Should we delete the persistentCache directory whenever this issue occurs?

AEM61/MongoDB/OAK122-R1689912

{noformat}
22.11.2015 18:39:14.985 *INFO* [FelixStartLevel] 
org.apache.sling.commons.threads.impl.DefaultThreadPool Thread pool 
[ThreadPool-6ebc78e4-3ec4-454b-9c35-d781e66b52c5 
(com.apple.haiku.services.bulkoperation.impl.BulkConfigImpl)] is shut down.
22.11.2015 18:39:14.985 *INFO* [FelixStartLevel] 
org.apache.sling.commons.threads.impl.DefaultThreadPoolManager Stopped Apache 
Sling Thread Pool Manager
22.11.2015 18:39:14.986 *INFO* [FelixStartLevel] 
org.apache.sling.commons.threads BundleEvent STOPPED
22.11.2015 18:39:14.986 *INFO* [FelixStartLevel] org.apache.sling.commons.json 
BundleEvent STOPPING
22.11.2015 18:39:14.987 *INFO* [FelixStartLevel] org.apache.sling.commons.json 
BundleEvent STOPPED
22.11.2015 18:39:14.987 *INFO* [FelixStartLevel] 
org.apache.jackrabbit.jackrabbit-jcr-rmi BundleEvent STOPPING
22.11.2015 18:39:14.987 *INFO* [FelixStartLevel] 
org.apache.jackrabbit.jackrabbit-jcr-rmi BundleEvent STOPPED
22.11.2015 18:39:14.988 *INFO* [FelixStartLevel] 
org.apache.jackrabbit.jackrabbit-jcr-commons BundleEvent STOPPING
22.11.2015 18:39:14.988 *INFO* [FelixStartLevel] 
org.apache.jackrabbit.jackrabbit-jcr-commons BundleEvent STOPPED
22.11.2015 18:39:14.988 *INFO* [FelixStartLevel] 
org.apache.jackrabbit.jackrabbit-api BundleEvent STOPPING
22.11.2015 18:39:14.988 *INFO* [FelixStartLevel] 
org.apache.jackrabbit.jackrabbit-api BundleEvent STOPPED
22.11.2015 18:39:14.989 *INFO* [FelixStartLevel] 
com.day.commons.day-commons-text BundleEvent STOPPING
22.11.2015 18:39:14.989 *INFO* [FelixStartLevel] 
com.day.commons.day-commons-text BundleEvent STOPPED
22.11.2015 18:39:14.990 *INFO* [FelixStartLevel] com.day.crx.crx-api 
BundleEvent STOPPING
22.11.2015 18:39:14.990 *INFO* [FelixStartLevel] com.day.crx.crx-api 
BundleEvent STOPPED
22.11.2015 18:39:14.990 *WARN* [pool-5-thread-2] org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap Re-opening map NODE
java.lang.IllegalStateException: Chunk 3709 no longer exists [1.4.185/9]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:773)
at org.h2.mvstore.MVStore.getChunkIfFound(MVStore.java:888)
at org.h2.mvstore.MVStore.getChunk(MVStore.java:871)
at org.h2.mvstore.MVStore.readPage(MVStore.java:1821)
at org.h2.mvstore.MVMap.readPage(MVMap.java:736)
at org.h2.mvstore.Page.getChildPage(Page.java:216)
at org.h2.mvstore.MVMap.binarySearch(MVMap.java:468)
at org.h2.mvstore.MVMap.get(MVMap.java:450)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.CacheMap.get(CacheMap.java:87)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.MultiGenerationMap.get(MultiGenerationMap.java:56)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.readIfPresent(NodeCache.java:80)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.getIfPresent(NodeCache.java:101)
at org.apache.jackrabbit.oak.plugins.document.persistentCache.NodeCache.get(NodeCache.java:112)
at org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore.getNode(DocumentNodeStore.java:814)
at org.apache.jackrabbit.oak.plugins.document.DocumentNodeState.getChildNode(DocumentNodeState.java:252)
at org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.<init>(MemoryNodeBuilder.java:143)
at org.apache.jackrabbit.oak.plugins.document.AbstractDocumentNodeBuilder.<init>(AbstractDocumentNodeBuilder.java:42)
at org.apache.jackrabbit.oak.plugins.document.DocumentNodeBuilder.<init>(DocumentNodeBuilder.java:47)
at org.apache.jackrabbit.oak.plugins.document.DocumentNodeBuilder.createChildBuilder(DocumentNodeBuilder.java:63)
at org.apache.jackrabbit.oak.plugins.document.AbstractDocumentNodeBuilder.getChildNode(AbstractDocumentNodeBuilder.java:74)
at org.apache.jackrabbit.oak.plugins.document.AbstractDocumentNodeBuilder.getChildNode(AbstractDocumentNodeBuilder.java:34)
at org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory.openInput(OakDirectory.java:129)
at org.apache.lucene.codecs.lucene40.BitVector.<init>(BitVector.java:327)
at org.apache.lucene.codecs.lucene40.Lucene40LiveDocsFormat.readLiveDocs(Lucene40LiveDocsFormat.java:90)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:104)
at 

[jira] [Commented] (OAK-3654) Integrate with Metrics for various stats collection

2015-11-22 Thread Chetan Mehrotra (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021081#comment-15021081
 ] 

Chetan Mehrotra commented on OAK-3654:
--

{quote}
The Timer.update(...) isn't very clean compared to the other patterns that 
leave performing the timing operation to the implementation.
Perhaps consider exposing a wrapper for Context in the TimerStats interface.
{quote}
Ack, and that would come next. For the first cut I wanted to reduce the amount 
of refactoring required.

bq. nanoTimer and performance. I have measured the overhead of the metrics 
library to be about 10% for timing calls of about 1E-7s.

Good to know, then, that the overhead is not appreciable.
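
For reference, a rough sketch of the Context-wrapper idea discussed above, 
assuming the Dropwizard Metrics {{Timer}} as the backing implementation; the 
{{TimerStats}} and {{Context}} names here are illustrative, not the final Oak API:

{noformat}
import java.util.concurrent.TimeUnit;

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

// Sketch of a minimal Oak-side timer interface that exposes a Context
// (try-with-resources friendly) and is backed by the Dropwizard Metrics Timer.
// The TimerStats / Context names are illustrative, not the final API.
public final class MetricsTimerStats {

    public interface TimerStats {
        void update(long duration, TimeUnit unit);
        Context time();

        interface Context extends AutoCloseable {
            long stop();                          // elapsed time in nanoseconds
            @Override
            default void close() {
                stop();
            }
        }
    }

    /** Adapts a Dropwizard Timer to the minimal TimerStats interface. */
    public static TimerStats timer(MetricRegistry registry, String name) {
        Timer timer = registry.timer(name);
        return new TimerStats() {
            @Override
            public void update(long duration, TimeUnit unit) {
                timer.update(duration, unit);
            }
            @Override
            public Context time() {
                Timer.Context ctx = timer.time();
                return ctx::stop;                 // Context has a single abstract method
            }
        };
    }

    private MetricsTimerStats() {
    }
}
{noformat}

A caller could then time a block with try-with-resources, e.g. open 
TimerStats.Context c = stats.time() in a try block, leaving the actual timing to 
the implementation.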

> Integrate with Metrics for various stats collection 
> 
>
> Key: OAK-3654
> URL: https://issues.apache.org/jira/browse/OAK-3654
> Project: Jackrabbit Oak
>  Issue Type: New Feature
>  Components: core
>Reporter: Chetan Mehrotra
>Assignee: Chetan Mehrotra
> Fix For: 1.4
>
> Attachments: OAK-3654-v1.patch, query-stats.png
>
>
> As suggested by [~ianeboston] in OAK-3478, the current approach of collecting 
> TimeSeries data is not easily consumable by other monitoring systems. Also, just 
> extracting the most recent data point and exposing it in a simple form would not 
> be very useful.
> Instead, we should look into using the Metrics library [1] for collecting 
> metrics. To avoid having a dependency on the Metrics API all over Oak, we can 
> come up with minimal interfaces to be used within Oak and then provide an 
> implementation backed by Metrics.
> This task is meant to explore that aspect and come up with proposed changes to 
> see if it's feasible to make this change.
> * metrics-core is ~100KB in size and has no dependencies
> * ASL licensed
> [1] http://metrics.dropwizard.io



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)