[jira] [Updated] (OAK-4726) Add Metrics coverage to the Lucene Indexer
[ https://issues.apache.org/jira/browse/OAK-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Saar updated OAK-4726: Fix Version/s: 1.5.17 > Add Metrics coverage to the Lucene Indexer > -- > > Key: OAK-4726 > URL: https://issues.apache.org/jira/browse/OAK-4726 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene >Affects Versions: 1.5.8 >Reporter: Ian Boston >Assignee: Chetan Mehrotra > Fix For: 1.6, 1.5.17 > > > Although there are some stats surrounding the Lucene indexing processes, it > would be useful to have metrics-style stats available. > Looking at the code, the IndexWriter implementations look like the right > place to add the metrics. These could be global aggregate metrics, i.e. one set > of metrics covering all IndexWriter implementations, or there could be > individual metrics for each Lucene index definition. The latter would be more > useful, as it would allow detailed stats on individual indexes. > These metrics will only give information on the writer operations, and not > any tokenizing operations. > A patch in the form of a pull request will follow. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
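The per-index option proposed above can be sketched with a small counter wrapper keyed by index definition path. This is a minimal illustration only: the class and method names below are hypothetical, not Oak's actual IndexWriter or StatisticsProvider API, and a real implementation would report through Oak's metrics layer rather than plain counters.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical per-index write counters, keyed by index definition path.
// Not Oak's actual API: a real version would go through Oak's metrics layer.
class IndexWriterMetrics {
    private final Map<String, LongAdder> updates = new ConcurrentHashMap<>();

    // Record one document update against the given index definition.
    void markUpdate(String indexPath) {
        updates.computeIfAbsent(indexPath, k -> new LongAdder()).increment();
    }

    // Current update count for one index definition (0 if never written).
    long updateCount(String indexPath) {
        LongAdder adder = updates.get(indexPath);
        return adder == null ? 0L : adder.sum();
    }
}
```

Keying the counters by index definition path is what makes the per-index variant more useful than a single global aggregate: each index definition gets its own figure.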
[jira] [Updated] (OAK-4809) JMX Stats for NRT Indexing
[ https://issues.apache.org/jira/browse/OAK-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Saar updated OAK-4809: Fix Version/s: 1.5.17 > JMX Stats for NRT Indexing > -- > > Key: OAK-4809 > URL: https://issues.apache.org/jira/browse/OAK-4809 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene >Reporter: Chetan Mehrotra >Assignee: Chetan Mehrotra >Priority: Minor > Fix For: 1.6, 1.5.17 > > > The hybrid index feature currently exposes some stats via metrics. It would > be good to expose some more stats around: > # How many times the NRT index got updated and refreshed > # The current generation of the index - the NRT index gets switched to a new generation after each > async index cycle and old generations are deleted, so some stats around that > # Exposing the NRT index size as part of the LuceneIndexStats MBean
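The three stats listed above could be surfaced over JMX roughly as follows. The MXBean interface, attribute names, and registration are illustrative assumptions for the sketch, not the actual LuceneIndexStats MBean from the Oak codebase.

```java
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Illustrative MXBean shape for the three stats named in the issue.
// Interface and attribute names are assumptions, not Oak's LuceneIndexStats.
interface NrtIndexStatsMXBean {
    long getNrtIndexSizeBytes(); // current size of the NRT index
    long getGeneration();        // switched to a new generation per async cycle
    long getRefreshCount();      // how many times the NRT index was refreshed
}

class NrtIndexStats implements NrtIndexStatsMXBean {
    private volatile long sizeBytes;
    private volatile long generation;
    private volatile long refreshCount;

    // Called on each refresh of the NRT index with its current size.
    void recordRefresh(long newSizeBytes) {
        refreshCount++;
        sizeBytes = newSizeBytes;
    }

    // Called when an async index cycle switches to a new generation.
    void nextGeneration() { generation++; }

    @Override public long getNrtIndexSizeBytes() { return sizeBytes; }
    @Override public long getGeneration() { return generation; }
    @Override public long getRefreshCount() { return refreshCount; }

    // Register under a caller-supplied (hypothetical) ObjectName.
    void register(MBeanServer server, ObjectName name) throws Exception {
        server.registerMBean(this, name);
    }
}
```

Because the interface name ends in `MXBean`, the platform MBean server derives the JMX attributes directly from the getters, so no descriptor boilerplate is needed.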
[jira] [Updated] (OAK-4960) Review the support for wildcards in bundling pattern
[ https://issues.apache.org/jira/browse/OAK-4960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Saar updated OAK-4960: Fix Version/s: 1.5.17 > Review the support for wildcards in bundling pattern > > > Key: OAK-4960 > URL: https://issues.apache.org/jira/browse/OAK-4960 > Project: Jackrabbit Oak > Issue Type: Task > Components: documentmk >Reporter: Chetan Mehrotra >Assignee: Chetan Mehrotra > Labels: bundling > Fix For: 1.6, 1.5.17 > > > The bundling pattern currently supports wildcards. This makes it powerful, > but at the same time it can cause issues if it is misconfigured. > We should review this aspect before the 1.6 release to determine whether this feature > needs to be exposed or not.
[jira] [Updated] (OAK-4959) Review the security aspect of bundling configuration
[ https://issues.apache.org/jira/browse/OAK-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Saar updated OAK-4959: Fix Version/s: 1.5.17 > Review the security aspect of bundling configuration > > > Key: OAK-4959 > URL: https://issues.apache.org/jira/browse/OAK-4959 > Project: Jackrabbit Oak > Issue Type: Task > Components: documentmk >Reporter: Chetan Mehrotra >Assignee: Chetan Mehrotra > Labels: bundling > Fix For: 1.6, 1.5.17 > > > The config for the node bundling feature in DocumentNodeStore is currently stored > under {{jcr:system/rep:documentStore/bundlor}}. This task is meant to: > * Review the access control aspect - this config should only be updatable > by a system admin > * The config under this path should be writable via the JCR API
[jira] [Commented] (OAK-5135) The flush of written data via TarRevisions is asynchronous in relation to FileStore.close()
[ https://issues.apache.org/jira/browse/OAK-5135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683894#comment-15683894 ] Francesco Mari commented on OAK-5135: - The timeout doesn't mean data loss. In the particular context of the flush background operation, a timeout is tolerable. The Segment Store will in any case be flushed synchronously before closing the underlying resources, to avoid losing commits that happened between the latest asynchronous flush and the close operation. > The flush of written data via TarRevisions is asynchronous in relation to > FileStore.close() > --- > > Key: OAK-5135 > URL: https://issues.apache.org/jira/browse/OAK-5135 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.13 >Reporter: Arek Kita >Assignee: Francesco Mari >Priority: Critical > Fix For: 1.6, 1.5.15 > > > *Expected behavior*: {{FileStore.close()}} should wait for any written data > to be flushed to disk > *Actual behavior*: {{FileStore.close()}} does not wait for flushing data > {code:title=Migration log} > 21.11.2016 12:00:59.792 WARN o.a.j.o.s.f.Scheduler: The scheduler FileStore > background tasks takes too long to shutdown > 21.11.2016 12:01:10.363 INFO o.a.j.o.s.f.FileStore: TarMK closed: > /data/cq/crx-quickstart/repository-segment-tar-20161121-120039/segmentstore > 21.11.2016 12:01:10.625 INFO o.a.j.o.p.s.f.FileStore: TarMK closed: > /data/cq/crx-quickstart/repository/segmentstore > ... > ### The above directories are swapped/moved here etc ### > ... 
> 21.11.2016 12:01:10.646 WARN o.a.j.o.s.f.FileStore: Failed to flush the > TarMK at > /data/cq/crx-quickstart/repository-segment-tar-20161121-120039/segmentstore > java.io.IOException: Stream Closed > at java.io.RandomAccessFile.writeBytes(Native Method) ~[na:1.8.0_101] > at java.io.RandomAccessFile.writeBytes(RandomAccessFile.java:1100) > ~[na:1.8.0_101] > at > org.apache.jackrabbit.oak.segment.file.TarRevisions.flush(TarRevisions.java:209) > > at > org.apache.jackrabbit.oak.segment.file.FileStore.flush(FileStore.java:358) > at > org.apache.jackrabbit.oak.segment.file.FileStore$2.run(FileStore.java:221) > at > org.apache.jackrabbit.oak.segment.file.SafeRunnable.run(SafeRunnable.java:67) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_101] > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > [na:1.8.0_101] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > [na:1.8.0_101] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > [na:1.8.0_101] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_101] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_101] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
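The final synchronous flush discussed in this thread comes down to a close() ordering: stop the background scheduler, flush once synchronously, and only then invalidate the underlying stream. The class below is a hypothetical skeleton of that ordering, not Oak's actual FileStore code; it only models why flushing after the stream is closed produces the "Stream Closed" failure in the log.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the close ordering: scheduler first, then a final
// synchronous flush, then resource invalidation. Not Oak's FileStore code.
class StoreLifecycle implements Closeable {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private volatile boolean flushed;
    private volatile boolean closed;

    // Flushing after close mirrors the "Stream Closed" failure in the log.
    void flush() {
        if (closed) {
            throw new IllegalStateException("Stream Closed");
        }
        flushed = true;
    }

    @Override
    public void close() throws IOException {
        scheduler.shutdown();             // no new background flushes
        try {
            scheduler.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        flush();                          // final synchronous flush
        closed = true;                    // only now invalidate the stream
    }

    boolean isFlushed() { return flushed; }
}
```

Whatever happens to the timed-out background task, the final flush runs before the stream is invalidated, so commits made after the last asynchronous flush are not lost.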
[jira] [Commented] (OAK-5135) The flush of written data via TarRevisions is asynchronous in relation to FileStore.close()
[ https://issues.apache.org/jira/browse/OAK-5135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683884#comment-15683884 ] Axel Hanikel commented on OAK-5135: --- [~frm] Why don't we wait forever? A timeout always means data loss, doesn't it?
[jira] [Updated] (OAK-5098) improve DocumentNodeStoreService robustness for RDB configs
[ https://issues.apache.org/jira/browse/OAK-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-5098: Issue Type: Technical task (was: Task) Parent: OAK-1266 > improve DocumentNodeStoreService robustness for RDB configs > --- > > Key: OAK-5098 > URL: https://issues.apache.org/jira/browse/OAK-5098 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: documentmk, rdbmk >Affects Versions: 1.5.14 >Reporter: Julian Reschke >Assignee: Julian Reschke >Priority: Minor > Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, > patch-available, resilience > Fix For: 1.5.15 > > Attachments: OAK-5098.diff > > > It appears that very strange things can happen when the system isn't > configured properly, for instance when multiple DataSources with the same > name are configured. > TODO: > - log.info unbindDataSource() calls > - prevent registrations of multiple node stores when bindDataSource() is > called multiple times > - in unbindDataSource(), check that the datasource is actually the one that > was bound before -- This message was sent by Atlassian JIRA (v6.3.4#6332)
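The bind/unbind guards in the TODO list above can be sketched with an atomic reference that accepts only the first registration and ignores unbinds of a different instance. The names are illustrative, not the actual DocumentNodeStoreService code, and the logging mentioned in the TODO is reduced to comments.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative guard for the TODO items: register at most once, and only
// clear state when the unbound instance is the one that was bound.
// Not the actual DocumentNodeStoreService implementation.
class DataSourceTracker<T> {
    private final AtomicReference<T> bound = new AtomicReference<>();

    // True only for the first successful bind; repeated binds (e.g. multiple
    // DataSources with the same name) must not register another node store.
    boolean bind(T dataSource) {
        return bound.compareAndSet(null, dataSource);
    }

    // True only if this exact instance was bound; otherwise a no-op.
    // A real service would also log.info the unbind call here.
    boolean unbind(T dataSource) {
        return bound.compareAndSet(dataSource, null);
    }

    T current() {
        return bound.get();
    }
}
```

Using compare-and-set for both directions covers all three TODO items at once: duplicate binds are rejected, and a stray unbind of a never-bound instance cannot tear down the active registration.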
[jira] [Resolved] (OAK-5098) improve DocumentNodeStoreService robustness for RDB configs
[ https://issues.apache.org/jira/browse/OAK-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke resolved OAK-5098. - Resolution: Fixed Fix Version/s: 1.5.15
[jira] [Commented] (OAK-5098) improve DocumentNodeStoreService robustness for RDB configs
[ https://issues.apache.org/jira/browse/OAK-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683809#comment-15683809 ] Julian Reschke commented on OAK-5098: - trunk: [r1770694|http://svn.apache.org/r1770694]
[jira] [Updated] (OAK-5098) improve DocumentNodeStoreService robustness for RDB configs
[ https://issues.apache.org/jira/browse/OAK-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-5098: Affects Version/s: 1.5.14
[jira] [Commented] (OAK-5098) improve DocumentNodeStoreService robustness for RDB configs
[ https://issues.apache.org/jira/browse/OAK-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683729#comment-15683729 ] Chetan Mehrotra commented on OAK-5098: -- +1. Looks fine
[jira] [Resolved] (OAK-5134) temporarily allow prefiltering test mode to be configured via an osgi config
[ https://issues.apache.org/jira/browse/OAK-5134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Egli resolved OAK-5134. -- Resolution: Fixed There is now an osgi config (default: false) which allows turning on the prefiltering test mode (instead of the system property) - PID: {{org.apache.jackrabbit.oak.jcr.observation.ChangeProcessorPrefilterTestConfig}} * http://svn.apache.org/viewvc?rev=1770692&view=rev * http://svn.apache.org/viewvc?rev=1770693&view=rev > temporarily allow prefiltering test mode to be configured via an osgi config > > > Key: OAK-5134 > URL: https://issues.apache.org/jira/browse/OAK-5134 > Project: Jackrabbit Oak > Issue Type: Task > Components: jcr >Affects Versions: 1.5.14 >Reporter: Stefan Egli >Assignee: Stefan Egli >Priority: Trivial > Fix For: 1.5.15 > > > The prefiltering feature (of ChangeProcessor) currently has a system property > "{{oak.observation.prefilteringTestMode}}" which allows turning a so-called > _test mode_ on and off. In the test mode, prefiltering is evaluated but its result > is not applied. Instead, the prefiltering result (include vs. > exclude) is compared to whether any event is actually delivered to the > listener. If an event is delivered and the prefiltering result is _exclude_, then > a log.warn is issued. > The system property mechanism is generally slightly suboptimal, and in this > case it makes testing in a broader (AEM) context more difficult. It would be > useful to temporarily (i.e. until 1.6) have an osgi config to > turn this feature on or off.
[jira] [Created] (OAK-5136) remove prefiltering testmode (feature/config) before 1.6
Stefan Egli created OAK-5136: Summary: remove prefiltering testmode (feature/config) before 1.6 Key: OAK-5136 URL: https://issues.apache.org/jira/browse/OAK-5136 Project: Jackrabbit Oak Issue Type: Task Components: jcr Affects Versions: 1.5.14 Reporter: Stefan Egli Assignee: Stefan Egli Priority: Critical Fix For: 1.6 OAK-5134 introduces an osgi config for the prefiltering test mode (previously it was a system property). This prefiltering test mode should be removed before 1.6 - it should not become part of the stable release - it's only meant for early testing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-5135) The flush of written data via TarRevisions is asynchronous in relation to FileStore.close()
[ https://issues.apache.org/jira/browse/OAK-5135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683670#comment-15683670 ] Francesco Mari commented on OAK-5135: - At r1770690 I increased the amount of time the Scheduler waits for its tasks to complete. Moreover, if a timeout occurs, a forced shutdown is started. [~arkadius], can you try this solution locally and provide feedback?
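The "wait for a bounded time, then force" shutdown described in this thread maps directly onto the standard ExecutorService API. The sketch below shows the generic pattern, not the actual Scheduler change in r1770690.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Generic graceful-then-forced shutdown pattern; not Oak's Scheduler code.
final class SchedulerShutdown {
    private SchedulerShutdown() {}

    // Returns the runnables that never started: empty on a clean shutdown,
    // non-empty when the timeout expired and a forced shutdown was needed.
    static List<Runnable> shutdownGracefully(
            ExecutorService scheduler, long timeout, TimeUnit unit) {
        scheduler.shutdown();                         // stop accepting tasks
        try {
            if (scheduler.awaitTermination(timeout, unit)) {
                return Collections.emptyList();       // clean shutdown
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return scheduler.shutdownNow();               // forced after timeout
    }
}
```

On the forced path, `shutdownNow()` interrupts running tasks and returns the queued ones, which is exactly the point at which a caller like FileStore.close() still has to perform its own final synchronous flush.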
[jira] [Updated] (OAK-5135) The flush of written data of FileStore is asynchronous in relation to FileStore.close()
[ https://issues.apache.org/jira/browse/OAK-5135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Mari updated OAK-5135: Component/s: segment-tar
[jira] [Assigned] (OAK-5135) The flush of written data of FileStore is asynchronous in relation to FileStore.close()
[ https://issues.apache.org/jira/browse/OAK-5135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Mari reassigned OAK-5135: --- Assignee: Francesco Mari
[jira] [Updated] (OAK-5135) The flush of written data via TarRevisions is asynchronous in relation to FileStore.close()
[ https://issues.apache.org/jira/browse/OAK-5135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arek Kita updated OAK-5135: --- Summary: The flush of written data via TarRevisions is asynchronous in relation to FileStore.close() (was: The flush of written data of FileStore is asynchronous in relation to FileStore.close())
[jira] [Updated] (OAK-5135) The flush of written data of FileStore is asynchronous in relation to FileStore.close()
[ https://issues.apache.org/jira/browse/OAK-5135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Mari updated OAK-5135: Fix Version/s: 1.5.15
[jira] [Created] (OAK-5135) The flush of written data of FileStore is asynchronous in relation to FileStore.close()
Arek Kita created OAK-5135: -- Summary: The flush of written data of FileStore is asynchronous in relation to FileStore.close() Key: OAK-5135 URL: https://issues.apache.org/jira/browse/OAK-5135 Project: Jackrabbit Oak Issue Type: Bug Affects Versions: 1.5.13 Reporter: Arek Kita Priority: Critical Fix For: 1.6 *Expected behavior*: {{FileStore.close()}} should wait for any written data to be flushed to disk *Actual behavior*: {{FileStore.close()}} does not wait for flushing data {code:title=Migration log} 21.11.2016 12:00:59.792 WARN o.a.j.o.s.f.Scheduler: The scheduler FileStore background tasks takes too long to shutdown 21.11.2016 12:01:10.363 INFO o.a.j.o.s.f.FileStore: TarMK closed: /data/cq/crx-quickstart/repository-segment-tar-20161121-120039/segmentstore 21.11.2016 12:01:10.625 INFO o.a.j.o.p.s.f.FileStore: TarMK closed: /data/cq/crx-quickstart/repository/segmentstore ... ### The above directories are swapped/moved here etc ### ... 21.11.2016 12:01:10.646 WARN o.a.j.o.s.f.FileStore: Failed to flush the TarMK at /data/cq/crx-quickstart/repository-segment-tar-20161121-120039/segmentstore java.io.IOException: Stream Closed at java.io.RandomAccessFile.writeBytes(Native Method) ~[na:1.8.0_101] at java.io.RandomAccessFile.writeBytes(RandomAccessFile.java:1100) ~[na:1.8.0_101] at org.apache.jackrabbit.oak.segment.file.TarRevisions.flush(TarRevisions.java:209) at org.apache.jackrabbit.oak.segment.file.FileStore.flush(FileStore.java:358) at org.apache.jackrabbit.oak.segment.file.FileStore$2.run(FileStore.java:221) at org.apache.jackrabbit.oak.segment.file.SafeRunnable.run(SafeRunnable.java:67) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101] at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_101] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101] at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_101] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101] {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
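The close() sequence the report expects (stop the background flush scheduler, wait for in-flight flush tasks, run one final synchronous flush, only then release the file) can be sketched in plain Java. The class below is a hypothetical illustration of that contract, not Oak's FileStore:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical store sketching the expected close() contract: the background
// flush task must never run against an already closed file.
public class SafeClosingStore implements AutoCloseable {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final AtomicBoolean closed = new AtomicBoolean();
    private final AtomicBoolean flushedOnClose = new AtomicBoolean();

    public SafeClosingStore() {
        // periodic background flush, analogous to the FileStore flush task
        scheduler.scheduleWithFixedDelay(this::backgroundFlush, 5, 5, TimeUnit.SECONDS);
    }

    private void backgroundFlush() {
        // write pending data to disk (elided); must not run after close()
    }

    @Override
    public void close() {
        // 1. stop scheduling new flushes and wait for running ones to finish
        scheduler.shutdown();
        try {
            if (!scheduler.awaitTermination(10, TimeUnit.SECONDS)) {
                scheduler.shutdownNow();
            }
        } catch (InterruptedException e) {
            scheduler.shutdownNow();
            Thread.currentThread().interrupt();
        }
        // 2. final synchronous flush while the file is still writable
        flushedOnClose.set(true);
        // 3. only now mark the store closed and release the file handle
        closed.set(true);
    }

    public boolean wasFlushedBeforeClose() {
        return flushedOnClose.get();
    }
}
```

In the reported log the order is inverted: "TarMK closed" is logged before the flush task fails with "Stream Closed", i.e. the file handle is released while the flush is still pending.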
[jira] [Updated] (OAK-4069) Use read concern majority when connected to a replica set
[ https://issues.apache.org/jira/browse/OAK-4069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-4069: -- Fix Version/s: (was: 1.5.14) 1.5.15 > Use read concern majority when connected to a replica set > - > > Key: OAK-4069 > URL: https://issues.apache.org/jira/browse/OAK-4069 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: mongomk >Reporter: Tomek Rękawek >Assignee: Marcel Reutegger > Labels: resilience > Fix For: 1.6, 1.5.15 > > > MongoDB 3.2 introduces a new query option: {{readConcern}}. It allows reading > only those changes that have already been committed to the majority of > replica set members. > It prevents stale reads - a situation in which a change has been committed on > the primary (and read from it), but due to a network partition a new > primary is elected and the change is rolled back. > We should use this new option (together with {{w:majority}} implemented in > OAK-3559) when running Oak on a MongoDB replica set. > References: > * [Jepsen: MongoDB stale > reads|https://aphyr.com/posts/322-jepsen-mongodb-stale-reads] > * [MongoDB documentation: Read > Concern|https://docs.mongodb.org/manual/reference/read-concern/] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
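To make the stale-read guarantee concrete, here is a self-contained illustration (not MongoDB driver code) of what majority read concern promises: a read may only observe the newest revision that a majority of replica-set members have already replicated, so it cannot be rolled back after a failover.

```java
import java.util.Arrays;

// Conceptual illustration of readConcern=majority, not a driver API:
// the majority-committed point is the newest revision that at least
// a strict majority of replica-set members already hold.
public class MajorityReadConcern {

    // lastReplicated[i] = newest revision replicated to member i
    public static long majorityCommittedRevision(long[] lastReplicated) {
        long[] sorted = lastReplicated.clone();
        Arrays.sort(sorted); // ascending
        int majority = lastReplicated.length / 2 + 1;
        // the newest revision that at least `majority` members have
        return sorted[sorted.length - majority];
    }
}
```

For a 3-member set at revisions {7, 5, 3}, a majority read would observe revision 5: revision 7 exists only on one member and could still be rolled back.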
[jira] [Updated] (OAK-5103) Backup is not incremental i.e. previous tar files are duplicated
[ https://issues.apache.org/jira/browse/OAK-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5103: -- Fix Version/s: (was: 1.5.14) 1.5.15 > Backup is not incremental i.e. previous tar files are duplicated > > > Key: OAK-5103 > URL: https://issues.apache.org/jira/browse/OAK-5103 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.12 >Reporter: Andrei Dulceanu >Assignee: Andrei Dulceanu > Labels: production, tooling > Fix For: 1.6, 1.5.15 > > > When performing two backups via {{RepositoryManagementMBean.startBackup}}, > the directory size increases not only by the delta, but also again by the > size of the existing tar files. This led me to the conclusion that backup is > not incremental. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
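The incremental behaviour the reporter expects can be sketched as a plain file-set difference (a hypothetical helper, not the actual backup implementation): only tar files missing from the backup directory should be copied again.

```java
import java.util.Set;
import java.util.TreeSet;

// Sketch of incremental backup planning: tar files already present in the
// backup directory are skipped, only new ones are scheduled for copying.
public class IncrementalBackupPlan {

    public static Set<String> filesToCopy(Set<String> sourceTars,
                                          Set<String> backupTars) {
        Set<String> missing = new TreeSet<>(sourceTars);
        missing.removeAll(backupTars); // already backed up -> skip
        return missing;
    }
}
```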
[jira] [Updated] (OAK-4954) SetPropertyTest benchmark fails on Segment Tar
[ https://issues.apache.org/jira/browse/OAK-4954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-4954: -- Fix Version/s: (was: 1.5.14) 1.5.15 > SetPropertyTest benchmark fails on Segment Tar > -- > > Key: OAK-4954 > URL: https://issues.apache.org/jira/browse/OAK-4954 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: test-failure > Fix For: 1.6, 1.5.15 > > > The {{SetPropertyTest}} fails on Oak Segment Tar: > {noformat} > javax.jcr.InvalidItemStateException: This item > [/testfb3e8f1a/ca1ef350-f650-4466-b9e3-7f77d83e6303] does not exist anymore > at > org.apache.jackrabbit.oak.jcr.delegate.ItemDelegate.checkAlive(ItemDelegate.java:86) > at > org.apache.jackrabbit.oak.jcr.session.ItemImpl$ItemWriteOperation.checkPreconditions(ItemImpl.java:96) > at > org.apache.jackrabbit.oak.jcr.session.NodeImpl$35.checkPreconditions(NodeImpl.java:1366) > at > org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.prePerform(SessionDelegate.java:615) > at > org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(SessionDelegate.java:205) > at > org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(ItemImpl.java:112) > at > org.apache.jackrabbit.oak.jcr.session.NodeImpl.internalSetProperty(NodeImpl.java:1363) > at > org.apache.jackrabbit.oak.jcr.session.NodeImpl.setProperty(NodeImpl.java:506) > at > org.apache.jackrabbit.oak.benchmark.SetPropertyTest.runTest(SetPropertyTest.java:65) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest.execute(AbstractTest.java:372) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest.runTest(AbstractTest.java:221) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest.run(AbstractTest.java:197) > at > org.apache.jackrabbit.oak.benchmark.BenchmarkRunner.main(BenchmarkRunner.java:456) > at > org.apache.jackrabbit.oak.run.BenchmarkCommand.execute(BenchmarkCommand.java:26) > at 
org.apache.jackrabbit.oak.run.Mode.execute(Mode.java:63) > at org.apache.jackrabbit.oak.run.Main.main(Main.java:49) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5116) GCJournal should persist size only when compaction is successful
[ https://issues.apache.org/jira/browse/OAK-5116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5116: -- Fix Version/s: (was: 1.5.14) 1.5.15 > GCJournal should persist size only when compaction is successful > > > Key: OAK-5116 > URL: https://issues.apache.org/jira/browse/OAK-5116 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Alex Parvulescu >Assignee: Alex Parvulescu > Fix For: 1.6, 1.5.15 > > Attachments: OAK-5116.patch > > > In the case where compaction fails, the gc journal persists the current size, > delaying the next compaction cycle and potentially introducing even more work > for the gc process. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3606) Improvements for IndexStatsMBean usage
[ https://issues.apache.org/jira/browse/OAK-3606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-3606: -- Fix Version/s: (was: 1.5.14) 1.5.15 > Improvements for IndexStatsMBean usage > -- > > Key: OAK-3606 > URL: https://issues.apache.org/jira/browse/OAK-3606 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core, lucene >Affects Versions: 1.3.9 >Reporter: Thierry Ygé >Assignee: Manfred Baedke > Fix For: 1.6, 1.5.15 > > Attachments: adding_new_MBean.patch, > new_mbean_interface_and_implementation.patch > > > When running integration tests, it is common to need to wait for the async > indexes to run, so that the test can successfully validate operations that > depend on the search result. > The current IndexStatsMBean implementation cannot return the start time of > the last successful indexing run. It provides a "LastIndexedTime", which is > not sufficient to know whether changes made recently are now indexed. > The idea is to set the start time as the value of a new attribute (i.e. > "StartLastSuccessIndexedTime") on the IndexStatsMBean. > Then create a new MBean that calculates, from all existing IndexStatsMBean > instances (as multiple are possible now), the oldest > "StartLastSuccessIndexedTime". > That will allow integration tests to wait until that oldest > "StartLastSuccessIndexedTime" is greater than the time at which they started > waiting. > Attached is a sample patch containing the necessary changes (for an Oak Core > 1.4.0-SNAPSHOT). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
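The proposed aggregate logic can be sketched as follows. Class and method names below are hypothetical; only the "StartLastSuccessIndexedTime" attribute comes from the proposal.

```java
import java.util.Collection;

// Hypothetical aggregate over all IndexStatsMBean instances: a test may
// wait until the *oldest* start time of a successful indexing run is later
// than the moment the test began waiting.
public class AsyncIndexWaiter {

    // one "StartLastSuccessIndexedTime" value per async indexer lane,
    // in epoch millis
    public static long oldestStartLastSuccessIndexedTime(Collection<Long> times) {
        return times.stream().mapToLong(Long::longValue).min()
                .orElse(Long.MIN_VALUE);
    }

    public static boolean changesBeforeAreIndexed(Collection<Long> times,
                                                  long waitStartMillis) {
        // every lane started a successful run after we began waiting
        return oldestStartLastSuccessIndexedTime(times) > waitStartMillis;
    }
}
```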
[jira] [Updated] (OAK-5017) Standby test failures
[ https://issues.apache.org/jira/browse/OAK-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5017: -- Fix Version/s: (was: 1.5.14) 1.5.15 > Standby test failures > - > > Key: OAK-5017 > URL: https://issues.apache.org/jira/browse/OAK-5017 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Reporter: Michael Dürig >Assignee: Timothee Maret > Labels: test-failure > Fix For: 1.6, 1.5.15 > > > I've recently seen a couple of the standby tests fail. E.g. on Jenkins: > https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/1245/ > {noformat} > java.lang.AssertionError: expected: > org.apache.jackrabbit.oak.segment.SegmentNodeState<{ checkpoints = { ... }, > root = { ... } }> but was: > org.apache.jackrabbit.oak.segment.SegmentNodeState<{ checkpoints = { ... }, > root = { ... } }> > at > org.apache.jackrabbit.oak.segment.standby.StandbyTestIT.testSyncLoop(StandbyTestIT.java:122) > {noformat} > {noformat} > java.lang.AssertionError: expected: > org.apache.jackrabbit.oak.segment.SegmentNodeState<{ checkpoints = { ... }, > root = { ... } }> but was: > org.apache.jackrabbit.oak.segment.SegmentNodeState<{ checkpoints = { ... }, > root = { ... } }> > at > org.apache.jackrabbit.oak.segment.standby.StandbyTestIT.testSyncLoop(StandbyTestIT.java:122) > {noformat} > {{org.apache.jackrabbit.oak.segment.standby.ExternalSharedStoreIT.testProxySkippedBytes}}: > {noformat} > java.lang.AssertionError: expected:<{ root = { ... } }> but was:<{ root : { } > }> > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5058) Improve GC estimation strategy based on both absolute size and relative percentage
[ https://issues.apache.org/jira/browse/OAK-5058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5058: -- Fix Version/s: (was: 1.5.14) 1.5.15 > Improve GC estimation strategy based on both absolute size and relative > percentage > -- > > Key: OAK-5058 > URL: https://issues.apache.org/jira/browse/OAK-5058 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Affects Versions: 1.5.12 >Reporter: Andrei Dulceanu >Assignee: Andrei Dulceanu >Priority: Minor > Fix For: 1.6, 1.5.15 > > > A better way of deciding whether GC should run might be to look at the > numbers computed in {{SizeDeltaGcEstimation}} from both an absolute size > and a relative percentage point of view. For example, it would make sense to > run compaction only if both criteria are met: the store has grown by more > than 50% and by more than 10GB. > Since the absolute threshold is already implemented (see > {{SegmentGCOptions.SIZE_DELTA_ESTIMATION_DEFAULT}}), it would be nice to > also add something like {{SegmentGCOptions.SIZE_PERCENTAGE_ESTIMATION_DEFAULT}} > and use it when taking the decision. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
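One reading of the combined criterion can be sketched as below. This is an illustration only, not the {{SizeDeltaGcEstimation}} implementation; the constant names merely mirror the proposed {{SegmentGCOptions}} defaults.

```java
// Illustration of a dual-threshold GC estimation: trigger compaction only
// when the store grew by both an absolute delta AND a relative percentage.
public class GcEstimation {

    static final long SIZE_DELTA_BYTES = 10L * 1024 * 1024 * 1024; // 10 GB
    static final int SIZE_PERCENTAGE = 50;                         // 50 %

    public static boolean shouldRunGc(long previousSize, long currentSize) {
        long delta = currentSize - previousSize;
        boolean absoluteExceeded = delta > SIZE_DELTA_BYTES;
        // delta/previousSize > 50%, written without floating point
        boolean relativeExceeded =
                previousSize > 0 && delta * 100 > previousSize * (long) SIZE_PERCENTAGE;
        return absoluteExceeded && relativeExceeded;
    }
}
```

With both conditions required, a large repository that grows by 15GB but only 15% stays untouched, as does a small one that doubles but only gains a few gigabytes.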
[jira] [Updated] (OAK-5028) Remove DocumentStore.update()
[ https://issues.apache.org/jira/browse/OAK-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5028: -- Fix Version/s: (was: 1.5.14) 1.5.15 > Remove DocumentStore.update() > - > > Key: OAK-5028 > URL: https://issues.apache.org/jira/browse/OAK-5028 > Project: Jackrabbit Oak > Issue Type: Task > Components: core, documentmk, mongomk, rdbmk >Reporter: Marcel Reutegger >Assignee: Marcel Reutegger > Fix For: 1.6, 1.5.15 > > > OAK-3018 removed the single production usage of DocumentStore.update(). I > propose we remove the method to reduce maintenance. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5130) Prevent FileStore wrapping the segment buffer twice for the generation info
[ https://issues.apache.org/jira/browse/OAK-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-5130: -- Fix Version/s: (was: 1.5.14) 1.5.15 > Prevent FileStore wrapping the segment buffer twice for the generation info > --- > > Key: OAK-5130 > URL: https://issues.apache.org/jira/browse/OAK-5130 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Alex Parvulescu > Fix For: 1.6, 1.5.15 > > Attachments: OAK-5130.patch > > > It seems the FileStore#writeSegment method will wrap the buffer a second time > to extract the generation info [0], information which only applies to data > segments and should already be available. > [0] > https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/file/FileStore.java#L621 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4957) SegmentRevisionGC MBean should report more detailed gc status information
[ https://issues.apache.org/jira/browse/OAK-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-4957: -- Fix Version/s: (was: 1.5.14) 1.5.15 > SegmentRevisionGC MBean should report more detailed gc status information > --- > > Key: OAK-4957 > URL: https://issues.apache.org/jira/browse/OAK-4957 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Michael Dürig >Assignee: Andrei Dulceanu > Labels: gc, monitoring > Fix For: 1.6, 1.5.15 > > > Currently the "Status" attribute shows the last log message. This is > useful, but it would also be interesting to expose the real-time status. For > monitoring it would be useful to know exactly which phase we are in, e.g. a > field showing one of the following: > - idle > - estimation > - compaction > - compaction-retry-1 > - compaction-retry-2 > - compaction-forcecompact > - cleanup -- This message was sent by Atlassian JIRA (v6.3.4#6332)
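A minimal sketch of such a real-time status field follows. The class is hypothetical; only the phase names come from the list above.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of a real-time status field for a revision GC MBean: the gc loop
// records the phase it is in, and the MBean attribute reads it atomically.
public class GcStatusTracker {

    public enum Phase {
        IDLE, ESTIMATION, COMPACTION,
        COMPACTION_RETRY_1, COMPACTION_RETRY_2,
        COMPACTION_FORCECOMPACT, CLEANUP
    }

    private final AtomicReference<Phase> phase =
            new AtomicReference<>(Phase.IDLE);

    // the gc loop calls this as it moves through its phases
    public void enter(Phase next) {
        phase.set(next);
    }

    // exposed through the MBean, rendered in the listed dashed form
    public String getStatus() {
        return phase.get().name().toLowerCase().replace('_', '-');
    }
}
```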
[jira] [Updated] (OAK-4619) Unify RecordCacheStats and CacheStats
[ https://issues.apache.org/jira/browse/OAK-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-4619: -- Fix Version/s: (was: 1.5.14) 1.5.15 1.6 > Unify RecordCacheStats and CacheStats > - > > Key: OAK-4619 > URL: https://issues.apache.org/jira/browse/OAK-4619 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core, segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig >Priority: Minor > Labels: technical_debt > Fix For: 1.6, 1.5.15 > > > There is {{org.apache.jackrabbit.oak.cache.CacheStats}} in {{oak-core}} and > {{org.apache.jackrabbit.oak.segment.RecordCacheStats}} in > {{oak-segment-tar}}. Both expose quite similar functionality. We should try > to unify them as much as possible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-5134) temporarily allow prefiltering test mode to be configured via an osgi config
Stefan Egli created OAK-5134: Summary: temporarily allow prefiltering test mode to be configured via an osgi config Key: OAK-5134 URL: https://issues.apache.org/jira/browse/OAK-5134 Project: Jackrabbit Oak Issue Type: Task Components: jcr Affects Versions: 1.5.14 Reporter: Stefan Egli Assignee: Stefan Egli Priority: Trivial Fix For: 1.5.15 The prefiltering feature (of ChangeProcessor) currently has a system property "{{oak.observation.prefilteringTestMode}}" which allows turning a so-called _test mode_ on or off. In the test mode prefiltering is evaluated, but its result is not applied. Instead, the prefiltering result (include vs. exclude) is compared with whether any event is actually delivered to the listener. If an event is delivered and the prefiltering result is _exclude_, a log.warn is issued. While the system property mechanism is generally slightly suboptimal, in this case it makes testing in a broader (AEM) context more difficult. It would be useful to temporarily (i.e. until 1.6) have an OSGi config to turn this feature on or off. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
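A possible resolution order while both mechanisms coexist is sketched below. The helper class is hypothetical; only the system property name comes from the issue. An OSGi config value, when present, overrides the legacy system property.

```java
import java.util.Map;

// Hypothetical lookup for the prefiltering test mode flag: the OSGi config
// wins when set, the old system property remains the fallback.
public class PrefilterTestModeConfig {

    public static final String PROP = "oak.observation.prefilteringTestMode";

    public static boolean isTestMode(Map<String, ?> osgiConfig) {
        Object configured = osgiConfig.get(PROP);
        if (configured != null) {
            return Boolean.parseBoolean(String.valueOf(configured));
        }
        return Boolean.getBoolean(PROP); // legacy system property fallback
    }
}
```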
[jira] [Updated] (OAK-5098) improve DocumentNodeStoreService robustness for RDB configs
[ https://issues.apache.org/jira/browse/OAK-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-5098: Labels: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4 patch-available resilience (was: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4 resilience) > improve DocumentNodeStoreService robustness for RDB configs > --- > > Key: OAK-5098 > URL: https://issues.apache.org/jira/browse/OAK-5098 > Project: Jackrabbit Oak > Issue Type: Task > Components: documentmk, rdbmk >Reporter: Julian Reschke >Assignee: Julian Reschke >Priority: Minor > Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, > patch-available, resilience > Attachments: OAK-5098.diff > > > It appears that very strange things can happen when the system isn't > configured properly, for instance when multiple DataSources with the same > name are configured. > TODO: > - log.info unbindDataSource() calls > - prevent registrations of multiple node stores when bindDataSource() is > called multiple times > - in unbindDataSource(), check that the datasource is actually the one that > was bound before -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5098) improve DocumentNodeStoreService robustness for RDB configs
[ https://issues.apache.org/jira/browse/OAK-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-5098: Attachment: OAK-5098.diff Proposed patch. [~chetanm] - could you do a sanity check? > improve DocumentNodeStoreService robustness for RDB configs > --- > > Key: OAK-5098 > URL: https://issues.apache.org/jira/browse/OAK-5098 > Project: Jackrabbit Oak > Issue Type: Task > Components: documentmk, rdbmk >Reporter: Julian Reschke >Assignee: Julian Reschke >Priority: Minor > Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, > patch-available, resilience > Attachments: OAK-5098.diff > > > It appears that very strange things can happen when the system isn't > configured properly, for instance when multiple DataSources with the same > name are configured. > TODO: > - log.info unbindDataSource() calls > - prevent registrations of multiple node stores when bindDataSource() is > called multiple times > - in unbindDataSource(), check that the datasource is actually the one that > was bound before -- This message was sent by Atlassian JIRA (v6.3.4#6332)
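The TODO items can be sketched as a small bind/unbind guard. The class below is a hypothetical, generic illustration, not the actual DocumentNodeStoreService code: a second bind is rejected instead of registering another node store, and unbind only succeeds for the exact instance bound before.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the proposed robustness checks: accept only one bound resource
// (e.g. a DataSource), and unbind only the identical instance.
public class BindingGuard<T> {

    private final AtomicReference<T> bound = new AtomicReference<>();

    // returns false on a duplicate bind; the caller would log a warning
    // instead of registering a second node store
    public boolean bind(T resource) {
        return bound.compareAndSet(null, resource);
    }

    // identity-checked unbind: a foreign instance is ignored
    public boolean unbind(T resource) {
        return bound.compareAndSet(resource, null);
    }
}
```

AtomicReference.compareAndSet compares by reference, which is exactly the "is this the datasource that was bound before" check the issue asks for.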
[jira] [Commented] (OAK-5133) StoreArgument class getter method opens repo in read/write and unsafe MMAP mode
[ https://issues.apache.org/jira/browse/OAK-5133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683509#comment-15683509 ] Tomek Rękawek commented on OAK-5133: Fixed for trunk in [r1770678|https://svn.apache.org/r1770678]. I wasn't able to open the segment-tar store in read-only mode in this method, but disabling MMAP (as I did) should be enough. > StoreArgument class getter method opens repo in read/write and unsafe MMAP > mode > --- > > Key: OAK-5133 > URL: https://issues.apache.org/jira/browse/OAK-5133 > Project: Jackrabbit Oak > Issue Type: Bug > Components: upgrade >Affects Versions: 1.5.13 >Reporter: Arek Kita >Assignee: Tomek Rękawek >Priority: Critical > Fix For: 1.6, 1.5.14 > > > It seems that the getter method {{hasExternalBlobReferences()}} for a few > implementations in the scope of [the following > commit|https://github.com/apache/jackrabbit-oak/commit/1cb749f87611e195515bf5998cbbedb0b9523c13#diff-f61fc3093dd3b539184a71f6dc5ab9b8R105] > introduced a full source-repo access routine that can cause serious lock > issues (especially when the store is opened in MMAP mode on Windows). > Moreover, reinitialisation of {{StoreArguments}} leads to an > [OverlappingFileLockException|https://docs.oracle.com/javase/7/docs/api/java/nio/channels/OverlappingFileLockException.html]. > Such a getter method should only open the repository in read-only mode (using > the {{buildReadOnly()}} method of > [FileStoreBuilder|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/file/FileStore.java#L392]). > Always setting MMAP to false might also be a good idea to avoid buffer > leaks, as the getter should require as few features as possible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-5133) StoreArgument class getter method opens repo in read/write and unsafe MMAP mode
[ https://issues.apache.org/jira/browse/OAK-5133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomek Rękawek resolved OAK-5133. Resolution: Fixed Fix Version/s: 1.5.14 > StoreArgument class getter method opens repo in read/write and unsafe MMAP > mode > --- > > Key: OAK-5133 > URL: https://issues.apache.org/jira/browse/OAK-5133 > Project: Jackrabbit Oak > Issue Type: Bug > Components: upgrade >Affects Versions: 1.5.13 >Reporter: Arek Kita >Assignee: Tomek Rękawek >Priority: Critical > Fix For: 1.6, 1.5.14 > > > It seems that the getter method {{hasExternalBlobReferences()}} for a few > implementations in the scope of [the following > commit|https://github.com/apache/jackrabbit-oak/commit/1cb749f87611e195515bf5998cbbedb0b9523c13#diff-f61fc3093dd3b539184a71f6dc5ab9b8R105] > introduced a full source-repo access routine that can cause serious lock > issues (especially when the store is opened in MMAP mode on Windows). > Moreover, reinitialisation of {{StoreArguments}} leads to an > [OverlappingFileLockException|https://docs.oracle.com/javase/7/docs/api/java/nio/channels/OverlappingFileLockException.html]. > Such a getter method should only open the repository in read-only mode (using > the {{buildReadOnly()}} method of > [FileStoreBuilder|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/file/FileStore.java#L392]). > Always setting MMAP to false might also be a good idea to avoid buffer > leaks, as the getter should require as few features as possible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5112) oak-upgrade breaking versionStorage node when started with copy-versions=false
[ https://issues.apache.org/jira/browse/OAK-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomek Rękawek updated OAK-5112: --- Fix Version/s: 1.6 > oak-upgrade breaking versionStorage node when started with copy-versions=false > -- > > Key: OAK-5112 > URL: https://issues.apache.org/jira/browse/OAK-5112 > Project: Jackrabbit Oak > Issue Type: Bug > Components: upgrade >Affects Versions: 1.5.13 >Reporter: Dominik Süß >Assignee: Tomek Rękawek > Fix For: 1.6, 1.5.14 > > Attachments: OAK-5112-test.patch > > > The attempt to sidegrade a repository and only keep the versionStorage of > certain paths failed with a broken /jcr:system/jcr:versionStorage node when > applying the following sequence: > {code} > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=false > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=true --include-paths=/path/requiring/versions > {code} > After the first and second runs the versionStorage node exists but has no > primaryType. Startup works, but fails whenever new nodes are created > underneath jcr:versionStorage. > The issue can be checked by running the following command in the oak-run > console: > {code} > session.store.root.builder().getChildNode('jcr:system').getChildNode('jcr:versionStorage').getProperties() > {code} > //cc [~tomek.rekawek], [~alexparvulescu] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5112) oak-upgrade breaking versionStorage node when started with copy-versions=false
[ https://issues.apache.org/jira/browse/OAK-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-5112: Summary: oak-upgrade breaking versionStorage node when started with copy-versions=false (was: oak-upgrade breaking versionStorage node when started with copy-verisions=false) > oak-upgrade breaking versionStorage node when started with copy-versions=false > -- > > Key: OAK-5112 > URL: https://issues.apache.org/jira/browse/OAK-5112 > Project: Jackrabbit Oak > Issue Type: Bug > Components: upgrade >Affects Versions: 1.5.13 >Reporter: Dominik Süß >Assignee: Tomek Rękawek > Fix For: 1.5.14 > > Attachments: OAK-5112-test.patch > > > The attempt to sidegrade a repository and only keep the versionStorage of > certain paths failed with a broken /jcr:system/jcr:versionStorage node when > applying the following sequence: > {code} > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=false > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=true --include-paths=/path/requiring/versions > {code} > After first and second attempt the versionStorage node exists but has no > primparyType. Startup works but fails when ever new nodes would be created > underneath jcr:versionStorage. > The issue can be checked via oak-run console and running the following command > {code} > session.store.root.builder().getChildNode('jcr:system').getChildNode('jcr:versionStorage').getProperties() > {code} > //cc [~tomek.rekawek], [~alexparvulescu] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (OAK-5112) oak-upgrade breaking versionStorage node when started with copy-verisions=false
[ https://issues.apache.org/jira/browse/OAK-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683466#comment-15683466 ] Tomek Rękawek edited comment on OAK-5112 at 11/21/16 12:37 PM: --- Fixed for trunk in [r1770676|https://svn.apache.org/r1770676]. [~dsuess], could you confirm it? was (Author: tomek.rekawek): Fixed for trunk in [r1770676|https://svn.apache.org/r1770676]. > oak-upgrade breaking versionStorage node when started with > copy-verisions=false > --- > > Key: OAK-5112 > URL: https://issues.apache.org/jira/browse/OAK-5112 > Project: Jackrabbit Oak > Issue Type: Bug > Components: upgrade >Affects Versions: 1.5.13 >Reporter: Dominik Süß >Assignee: Tomek Rękawek > Fix For: 1.5.14 > > Attachments: OAK-5112-test.patch > > > The attempt to sidegrade a repository and only keep the versionStorage of > certain paths failed with a broken /jcr:system/jcr:versionStorage node when > applying the following sequence: > {code} > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=false > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=true --include-paths=/path/requiring/versions > {code} > After the first and second runs the versionStorage node exists but has no > primaryType. Startup works, but fails whenever new nodes are created > underneath jcr:versionStorage. > The issue can be checked by running the following command in the oak-run > console: > {code} > session.store.root.builder().getChildNode('jcr:system').getChildNode('jcr:versionStorage').getProperties() > {code} > //cc [~tomek.rekawek], [~alexparvulescu] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-5112) oak-upgrade breaking versionStorage node when started with copy-verisions=false
[ https://issues.apache.org/jira/browse/OAK-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomek Rękawek resolved OAK-5112. Resolution: Fixed Fix Version/s: 1.5.14 > oak-upgrade breaking versionStorage node when started with > copy-verisions=false > --- > > Key: OAK-5112 > URL: https://issues.apache.org/jira/browse/OAK-5112 > Project: Jackrabbit Oak > Issue Type: Bug > Components: upgrade >Affects Versions: 1.5.13 >Reporter: Dominik Süß >Assignee: Tomek Rękawek > Fix For: 1.5.14 > > Attachments: OAK-5112-test.patch > > > The attempt to sidegrade a repository and only keep the versionStorage of > certain paths failed with a broken /jcr:system/jcr:versionStorage node when > applying the following sequence: > {code} > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=false > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=true --include-paths=/path/requiring/versions > {code} > After the first and second runs the versionStorage node exists but has no > primaryType. Startup works, but fails whenever new nodes are created > underneath jcr:versionStorage. > The issue can be checked by running the following command in the oak-run > console: > {code} > session.store.root.builder().getChildNode('jcr:system').getChildNode('jcr:versionStorage').getProperties() > {code} > //cc [~tomek.rekawek], [~alexparvulescu] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-5112) oak-upgrade breaking versionStorage node when started with copy-verisions=false
[ https://issues.apache.org/jira/browse/OAK-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683466#comment-15683466 ] Tomek Rękawek commented on OAK-5112: Fixed for trunk in [r1770676|https://svn.apache.org/r1770676]. > oak-upgrade breaking versionStorage node when started with > copy-verisions=false > --- > > Key: OAK-5112 > URL: https://issues.apache.org/jira/browse/OAK-5112 > Project: Jackrabbit Oak > Issue Type: Bug > Components: upgrade >Affects Versions: 1.5.13 >Reporter: Dominik Süß >Assignee: Tomek Rękawek > Attachments: OAK-5112-test.patch > > > The attempt to sidegrade a repository and only keep the versionStorage of > certain paths failed with a broken /jcr:system/jcr:versionStorage node when > applying the following sequence: > {code} > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=false > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=true --include-paths=/path/requiring/versions > {code} > After the first and second runs the versionStorage node exists but has no > primaryType. Startup works, but fails whenever new nodes are created > underneath jcr:versionStorage. > The issue can be checked by running the following command in the oak-run > console: > {code} > session.store.root.builder().getChildNode('jcr:system').getChildNode('jcr:versionStorage').getProperties() > {code} > //cc [~tomek.rekawek], [~alexparvulescu] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (OAK-5133) StoreArgument class getter method opens repo in read/write and unsafe MMAP mode
[ https://issues.apache.org/jira/browse/OAK-5133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomek Rękawek reassigned OAK-5133: -- Assignee: Tomek Rękawek > StoreArgument class getter method opens repo in read/write and unsafe MMAP > mode > --- > > Key: OAK-5133 > URL: https://issues.apache.org/jira/browse/OAK-5133 > Project: Jackrabbit Oak > Issue Type: Bug > Components: upgrade >Affects Versions: 1.5.13 >Reporter: Arek Kita >Assignee: Tomek Rękawek >Priority: Critical > Fix For: 1.6 > > > It seems that the getter method {{hasExternalBlobReferences()}} for a few > implementations in scope of [the following > commit|https://github.com/apache/jackrabbit-oak/commit/1cb749f87611e195515bf5998cbbedb0b9523c13#diff-f61fc3093dd3b539184a71f6dc5ab9b8R105] > introduced a full source repo access routine that can cause serious lock > issues (especially when the store is opened in MMAP mode on Windows). > Moreover, reinitialisation of {{StoreArguments}} leads to > [OverlappingFileLockException|https://docs.oracle.com/javase/7/docs/api/java/nio/channels/OverlappingFileLockException.html]. > Such a getter method should only open the repository in read-only mode (using > the {{buildReadOnly()}} method of > [FileStoreBuilder|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/file/FileStore.java#L392]). > Always setting MMAP to false might also be a good idea to avoid > buffer leaks, as such a getter should require as few features as possible to stay > compatible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-5133) StoreArgument class getter method opens repo in read/write and unsafe MMAP mode
Arek Kita created OAK-5133: -- Summary: StoreArgument class getter method opens repo in read/write and unsafe MMAP mode Key: OAK-5133 URL: https://issues.apache.org/jira/browse/OAK-5133 Project: Jackrabbit Oak Issue Type: Bug Components: upgrade Affects Versions: 1.5.13 Reporter: Arek Kita Priority: Critical Fix For: 1.6 It seems that the getter method {{hasExternalBlobReferences()}} for a few implementations in scope of [the following commit|https://github.com/apache/jackrabbit-oak/commit/1cb749f87611e195515bf5998cbbedb0b9523c13#diff-f61fc3093dd3b539184a71f6dc5ab9b8R105] introduced a full source repo access routine that can cause serious lock issues (especially when the store is opened in MMAP mode on Windows). Moreover, reinitialisation of {{StoreArguments}} leads to [OverlappingFileLockException|https://docs.oracle.com/javase/7/docs/api/java/nio/channels/OverlappingFileLockException.html]. Such a getter method should only open the repository in read-only mode (using the {{buildReadOnly()}} method of [FileStoreBuilder|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment/src/main/java/org/apache/jackrabbit/oak/plugins/segment/file/FileStore.java#L392]). Always setting MMAP to false might also be a good idea to avoid buffer leaks, as such a getter should require as few features as possible to stay compatible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
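The fix proposed in OAK-5133 is that inspection-only getters open the store read-only with memory mapping off. A minimal sketch of that pattern with a stand-in builder; the builder and store classes here are illustrative assumptions modeled loosely on FileStoreBuilder, not Oak's real API:

```java
// Sketch of the OAK-5133 proposal: a getter that only needs to inspect the
// store should open it read-only and without memory mapping, so it takes no
// exclusive lock and leaks no mapped buffers. StoreBuilder/Store are
// illustrative stand-ins for Oak's FileStoreBuilder/FileStore.
public class ReadOnlyOpenSketch {

    static final class Store {
        final boolean readOnly;
        final boolean memoryMapping;

        Store(boolean readOnly, boolean memoryMapping) {
            this.readOnly = readOnly;
            this.memoryMapping = memoryMapping;
        }
    }

    static final class StoreBuilder {
        private boolean memoryMapping = true; // risky default, per the issue

        StoreBuilder withMemoryMapping(boolean mmap) {
            this.memoryMapping = mmap;
            return this;
        }

        /** Mirrors the intent of FileStoreBuilder#buildReadOnly(). */
        Store buildReadOnly() {
            return new Store(true, memoryMapping);
        }

        Store build() {
            return new Store(false, memoryMapping);
        }
    }

    /** What a hasExternalBlobReferences()-style getter should do. */
    static Store openForInspection() {
        return new StoreBuilder().withMemoryMapping(false).buildReadOnly();
    }

    public static void main(String[] args) {
        Store s = openForInspection();
        System.out.println(s.readOnly + " " + s.memoryMapping); // true false
    }
}
```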
[jira] [Comment Edited] (OAK-5112) oak-upgrade breaking versionStorage node when started with copy-verisions=false
[ https://issues.apache.org/jira/browse/OAK-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682993#comment-15682993 ] Tomek Rękawek edited comment on OAK-5112 at 11/21/16 12:02 PM: --- -[~dsuess], could you check if adding {{\-\-copy\-orphaned\-versions=false}} to both steps helps?- Nevermind, I've confirmed the bug. was (Author: tomek.rekawek): [~dsuess], could you check if adding {{\-\-copy\-orphaned\-versions=false}} to both steps helps? > oak-upgrade breaking versionStorage node when started with > copy-verisions=false > --- > > Key: OAK-5112 > URL: https://issues.apache.org/jira/browse/OAK-5112 > Project: Jackrabbit Oak > Issue Type: Bug > Components: upgrade >Affects Versions: 1.5.13 >Reporter: Dominik Süß >Assignee: Tomek Rękawek > Attachments: OAK-5112-test.patch > > > The attempt to sidegrade a repository and only keep the versionStorage of > certain paths failed with a broken /jcr:system/jcr:versionStorage node when > applying the following sequence: > {code} > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=false > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=true --include-paths=/path/requiring/versions > {code} > After the first and second attempts the versionStorage node exists but has no > primaryType. Startup works but fails whenever new nodes are created > underneath jcr:versionStorage. > The issue can be checked via the oak-run console by running the following command: > {code} > session.store.root.builder().getChildNode('jcr:system').getChildNode('jcr:versionStorage').getProperties() > {code} > //cc [~tomek.rekawek], [~alexparvulescu] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5098) improve DocumentNodeStoreService robustness for RDB configs
[ https://issues.apache.org/jira/browse/OAK-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-5098: Labels: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4 resilience (was: candidate_oak_1_0 candidate_oak_1_2 candidate_oak_1_4) > improve DocumentNodeStoreService robustness for RDB configs > --- > > Key: OAK-5098 > URL: https://issues.apache.org/jira/browse/OAK-5098 > Project: Jackrabbit Oak > Issue Type: Task > Components: documentmk, rdbmk >Reporter: Julian Reschke >Assignee: Julian Reschke >Priority: Minor > Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4, > resilience > > It appears that very strange things can happen when the system isn't > configured properly, for instance when multiple DataSources with the same > name are configured. > TODO: > - log.info unbindDataSource() calls > - prevent registrations of multiple node stores when bindDataSource() is > called multiple times > - in unbindDataSource(), check that the datasource is actually the one that > was bound before -- This message was sent by Atlassian JIRA (v6.3.4#6332)
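The TODO list in OAK-5098 amounts to three defensive checks around DataSource binding. A minimal sketch of that logic, with OSGi and javax.sql.DataSource replaced by plain Java objects; class and method behavior here are illustrative assumptions, not the actual DocumentNodeStoreService code:

```java
// Sketch of the robustness TODOs from OAK-5098: log unbind calls, refuse a
// second node-store registration when bindDataSource() is called twice, and
// only unbind the DataSource that was actually bound. DataSources are
// modeled as plain Objects; the real service binds javax.sql.DataSource
// instances via OSGi.
public class RobustStoreService {

    private Object boundDataSource;
    private boolean storeRegistered;

    /** Ignores a second bind instead of registering a second node store. */
    public synchronized boolean bindDataSource(Object ds) {
        if (storeRegistered) {
            System.out.println("ignoring bindDataSource(): node store already registered");
            return false;
        }
        boundDataSource = ds;
        storeRegistered = true;
        return true;
    }

    /** Only unbinds when the argument is the DataSource bound before. */
    public synchronized boolean unbindDataSource(Object ds) {
        System.out.println("unbindDataSource() called"); // the log.info item in the TODO
        if (ds != boundDataSource) {
            return false; // not the DataSource we bound; leave state alone
        }
        boundDataSource = null;
        storeRegistered = false;
        return true;
    }
}
```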
[jira] [Assigned] (OAK-5098) improve DocumentNodeStoreService robustness for RDB configs
[ https://issues.apache.org/jira/browse/OAK-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke reassigned OAK-5098: --- Assignee: Julian Reschke > improve DocumentNodeStoreService robustness for RDB configs > --- > > Key: OAK-5098 > URL: https://issues.apache.org/jira/browse/OAK-5098 > Project: Jackrabbit Oak > Issue Type: Task > Components: documentmk, rdbmk >Reporter: Julian Reschke >Assignee: Julian Reschke >Priority: Minor > Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4 > > It appears that very strange things can happen when the system isn't > configured properly, for instance when multiple DataSources with the same > name are configured. > TODO: > - log.info unbindDataSource() calls > - prevent registrations of multiple node stores when bindDataSource() is > called multiple times > - in unbindDataSource(), check that the datasource is actually the one that > was bound before -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4912) MongoDB: ReadPreferenceIT.testMongoReadPreferencesForLocalChanges() occasionally fails
[ https://issues.apache.org/jira/browse/OAK-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683219#comment-15683219 ] Julian Reschke commented on OAK-4912: - (it just recurred for me) > MongoDB: ReadPreferenceIT.testMongoReadPreferencesForLocalChanges() > occasionally fails > -- > > Key: OAK-4912 > URL: https://issues.apache.org/jira/browse/OAK-4912 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core, mongomk >Reporter: Julian Reschke >Assignee: Tomek Rękawek >Priority: Minor > > {noformat} > testMongoReadPreferencesForLocalChanges(org.apache.jackrabbit.oak.plugins.document.mongo.ReadPreferenceIT) > Time elapsed: 0.024 sec <<< FAILURE! > java.lang.AssertionError: expected: but was: > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:834) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:144) > at > org.apache.jackrabbit.oak.plugins.document.mongo.ReadPreferenceIT.testMongoReadPreferencesForLocalChanges(ReadPreferenceIT.java:195) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3813) Exception in datastore leads to async index stop indexing new content
[ https://issues.apache.org/jira/browse/OAK-3813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683209#comment-15683209 ] Chetan Mehrotra commented on OAK-3813: -- [~alexander.klimetschek], with OAK-4939 this case should be handled better. If you have any suggestions, please add them to OAK-4939. > Exception in datastore leads to async index stop indexing new content > - > > Key: OAK-3813 > URL: https://issues.apache.org/jira/browse/OAK-3813 > Project: Jackrabbit Oak > Issue Type: Bug > Components: lucene >Affects Versions: 1.2.2 >Reporter: Alexander Klimetschek >Assignee: Chetan Mehrotra >Priority: Critical > Fix For: 1.6 > > > We are using an S3-based datastore that (for some other reasons) > sometimes starts to miss certain blobs and throws an exception; see below. > Unfortunately, it seems that this blocks the indexing of any new content, as > the index will try again and again to index that missing binary and fail at > the same point. > It would be great if the indexing process could be more resilient against > errors like this. (I think the datastore implementation should probably not > propagate that exception to the outside but just log it, but that's a > separate issue). > This is seen with Oak 1.2.2. I had a look at the [latest version on > trunk|https://github.com/apache/jackrabbit-oak/blob/d5da738aa6b43424f84063322987b765aead7813/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/index/AsyncIndexUpdate.java#L427-L431] > but it seems the behavior has not changed since then. 
> {noformat} > 17.12.2015 20:50:26.418 -0500 *ERROR* [pool-7-thread-5] > org.apache.sling.commons.scheduler.impl.QuartzScheduler Exception during job > execution of > org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate@5cc5e2f6 : Error > occurred while obtaining InputStream for blobId > [2832539c16b1a2e5745370ee89e41ab562436c5f#109419] > java.lang.RuntimeException: Error occurred while obtaining InputStream for > blobId [2832539c16b1a2e5745370ee89e41ab562436c5f#109419] > at > org.apache.jackrabbit.oak.plugins.blob.BlobStoreBlob.getNewStream(BlobStoreBlob.java:49) > at > org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:84) > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.loadBlob(OakDirectory.java:216) > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexFile.readBytes(OakDirectory.java:264) > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.readBytes(OakDirectory.java:350) > at > org.apache.jackrabbit.oak.plugins.index.lucene.OakDirectory$OakIndexInput.readByte(OakDirectory.java:356) > at org.apache.lucene.store.DataInput.readInt(DataInput.java:84) > at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:126) > at > org.apache.lucene.codecs.lucene41.Lucene41PostingsReader.(Lucene41PostingsReader.java:75) > at > org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:430) > at > org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:195) > at > org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:244) > at > org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:116) > at org.apache.lucene.index.SegmentReader.(SegmentReader.java:96) > at > org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:141) > at > 
org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:279) > at > org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3191) > at > org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3182) > at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3155) > at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3123) > at > org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:988) > at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:932) > at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:894) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditorContext.closeWriter(LuceneIndexEditorContext.java:169) > at > org.apache.jackrabbit.oak.plugins.index.lucene.LuceneIndexEditor.leave(LuceneIndexEditor.java:190) > at > org.apache.jackrabbit.oak.plugins.index.IndexUpdate.leave(IndexUpdate.java:221) > at > org.apache.jackrabbit.oak.spi.commit.VisibleEditor.leave(VisibleEditor.java:63) > at > org.apache.jackrabbit.oak.spi
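The resilience requested in OAK-3813 is that one unreadable blob should be logged and skipped rather than failing the whole async index cycle. A minimal sketch of that skip-and-continue loop; the indexer and blob-reader types are illustrative stand-ins, not Oak's AsyncIndexUpdate:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the resilience asked for in OAK-3813: a failure to read one
// binary is recorded and skipped instead of aborting the whole async index
// cycle. BlobReader is an illustrative stand-in for the datastore access.
public class ResilientIndexLoop {

    interface BlobReader {
        byte[] read(String blobId);
    }

    /** Indexes every blob it can; collects the ids that failed instead of
     *  rethrowing, so one missing S3 blob cannot stall all indexing. */
    public static List<String> indexAll(List<String> blobIds, BlobReader reader) {
        List<String> failed = new ArrayList<>();
        for (String id : blobIds) {
            try {
                reader.read(id); // the bytes would feed the Lucene writer here
            } catch (RuntimeException e) {
                failed.add(id);  // log the error and move on to the next blob
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        BlobReader reader = id -> {
            if (id.startsWith("missing")) {
                throw new RuntimeException("Error obtaining InputStream for blobId [" + id + "]");
            }
            return new byte[0];
        };
        System.out.println(indexAll(Arrays.asList("a", "missing#1", "b"), reader));
    }
}
```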
[jira] [Issue Comment Deleted] (OAK-3328) checked-in state should only affect properties with OPV!=IGNORE
[ https://issues.apache.org/jira/browse/OAK-3328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marco Piovesana updated OAK-3328: - Comment: was deleted (was: I tried; this is what I did: {code:java} Tree parentTree = TreeFactory.createTree(this.parent.node); Tree nodeTree = TreeFactory.createTree(this.node); this.vMgr.getNodeTypeManager().getDefinition(parentTree, nodeTree); {code} but it fails because the generated _nodeTree_ has an empty value for the field "name". If I call _getDefinition_ using the node name (the string instead of the tree) it does work though. How can I get the node name from the _NodeState_ without casting it to _SegmentNodeState_?) > checked-in state should only affect properties with OPV!=IGNORE > --- > > Key: OAK-3328 > URL: https://issues.apache.org/jira/browse/OAK-3328 > Project: Jackrabbit Oak > Issue Type: Bug > Components: jcr >Reporter: Julian Reschke > > We currently (as of OAK-3310) protect all properties from changes when the > node is checked-in. According to the spec > (http://www.day.com/specs/jcr/2.0/15_Versioning.html#15.2.2%20Read-Only%20on%20Check-In), > this should only be the case when on-parent-version is != "IGNORE". > (It seems this doesn't have TCK coverage) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (OAK-3328) checked-in state should only affect properties with OPV!=IGNORE
[ https://issues.apache.org/jira/browse/OAK-3328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683183#comment-15683183 ] Marco Piovesana edited comment on OAK-3328 at 11/21/16 10:49 AM: - I tried; this is what I did: {code:java} Tree parentTree = TreeFactory.createTree(this.parent.node); Tree nodeTree = TreeFactory.createTree(this.node); this.vMgr.getNodeTypeManager().getDefinition(parentTree, nodeTree); {code} but it fails because the generated _nodeTree_ has an empty value for the field "name". If I call _getDefinition_ using the node name (the string instead of the tree) it does work though. How can I get the node name from the _NodeState_ without casting it to _SegmentNodeState_? was (Author: iosonomarco): I tried; this is what I did: {code:java} Tree parentTree = TreeFactory.createTree(this.parent.node); Tree nodeTree = TreeFactory.createTree(this.node); this.vMgr.getNodeTypeManager().getDefinition(parentTree, nodeTree); {code} but it fails because the generated _nodeTree_ has an empty value for the field "name". If I call _getDefinition_ using the node name it does work though. How can I get the node name from the _NodeState_ without casting it to _SegmentNodeState_? > checked-in state should only affect properties with OPV!=IGNORE > --- > > Key: OAK-3328 > URL: https://issues.apache.org/jira/browse/OAK-3328 > Project: Jackrabbit Oak > Issue Type: Bug > Components: jcr >Reporter: Julian Reschke > > We currently (as of OAK-3310) protect all properties from changes when the > node is checked-in. According to the spec > (http://www.day.com/specs/jcr/2.0/15_Versioning.html#15.2.2%20Read-Only%20on%20Check-In), > this should only be the case when on-parent-version is != "IGNORE". > (It seems this doesn't have TCK coverage) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3328) checked-in state should only affect properties with OPV!=IGNORE
[ https://issues.apache.org/jira/browse/OAK-3328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683183#comment-15683183 ] Marco Piovesana commented on OAK-3328: -- I tried; this is what I did: {code:java} Tree parentTree = TreeFactory.createTree(this.parent.node); Tree nodeTree = TreeFactory.createTree(this.node); this.vMgr.getNodeTypeManager().getDefinition(parentTree, nodeTree); {code} but it fails because the generated _nodeTree_ has an empty value for the field "name". If I call _getDefinition_ using the node name it does work though. How can I get the node name from the _NodeState_ without casting it to _SegmentNodeState_? > checked-in state should only affect properties with OPV!=IGNORE > --- > > Key: OAK-3328 > URL: https://issues.apache.org/jira/browse/OAK-3328 > Project: Jackrabbit Oak > Issue Type: Bug > Components: jcr >Reporter: Julian Reschke > > We currently (as of OAK-3310) protect all properties from changes when the > node is checked-in. According to the spec > (http://www.day.com/specs/jcr/2.0/15_Versioning.html#15.2.2%20Read-Only%20on%20Check-In), > this should only be the case when on-parent-version is != "IGNORE". > (It seems this doesn't have TCK coverage) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-4940) Consider collecting grand-parent changes in ChangeSet
[ https://issues.apache.org/jira/browse/OAK-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Egli resolved OAK-4940. -- Resolution: Fixed Implemented in http://svn.apache.org/viewvc?rev=1770640&view=rev. The ChangeSet now has a {{getAllNodeTypes}} in addition to {{getParentNodeTypes}}. > Consider collecting grand-parent changes in ChangeSet > - > > Key: OAK-4940 > URL: https://issues.apache.org/jira/browse/OAK-4940 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Affects Versions: 1.5.12 >Reporter: Stefan Egli >Assignee: Stefan Egli > Fix For: 1.6, 1.5.14 > > Attachments: OAK-4940.patch > > > At the moment the ChangeSet, which is populated by ChangeCollectorProvider (a > Validator) during a commit, collects changed property names, as well as node > name, node type and path of the parent (whereas _parent_ for a property > change is its node, while for a node change is actually its parent). > For improvements such as SLING-6163 it might be valuable to collect > grand-parent changes (node name, node type and perhaps path) too. We could > extend the ChangeSet with additional, explicit grand-parent sets (i.e. we > should not mix them with the parent changes, as that would lessen the filtering > rate) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
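The point of keeping grand-parent data in separate sets, as OAK-4940 describes, is that filtering on parent node types stays precise while ancestor types are still available. A minimal sketch of that two-set shape; the method names follow the issue, but the implementation is an illustrative assumption, not Oak's ChangeSet:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the distinction OAK-4940 introduces: parent node types stay in
// their own set, while getAllNodeTypes additionally covers ancestors, so a
// listener filtering on parents alone keeps its precision. Method names
// follow the issue; the collector itself is illustrative.
public class ChangeSetSketch {

    private final Set<String> parentNodeTypes = new HashSet<>();
    private final Set<String> allNodeTypes = new HashSet<>();

    void addParentNodeType(String nodeType) {
        parentNodeTypes.add(nodeType);
        allNodeTypes.add(nodeType); // every parent type is also in "all"
    }

    void addAncestorNodeType(String nodeType) {
        allNodeTypes.add(nodeType); // grand-parents only land in "all"
    }

    Set<String> getParentNodeTypes() {
        return parentNodeTypes;
    }

    Set<String> getAllNodeTypes() {
        return allNodeTypes;
    }

    public static void main(String[] args) {
        ChangeSetSketch cs = new ChangeSetSketch();
        cs.addParentNodeType("nt:unstructured");
        cs.addAncestorNodeType("sling:Folder");
        System.out.println(cs.getParentNodeTypes()); // parents only
        System.out.println(cs.getAllNodeTypes());    // parents + ancestors
    }
}
```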
[jira] [Updated] (OAK-5121) review CommitInfo==null in BackgroundObserver with isExternal change
[ https://issues.apache.org/jira/browse/OAK-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Egli updated OAK-5121: - Fix Version/s: (was: 1.5.14) removed from 1.5.14 as it depends on OAK-4898 which is not yet committed. > review CommitInfo==null in BackgroundObserver with isExternal change > > > Key: OAK-5121 > URL: https://issues.apache.org/jira/browse/OAK-5121 > Project: Jackrabbit Oak > Issue Type: Task > Components: core >Affects Versions: 1.5.13 >Reporter: Stefan Egli >Assignee: Stefan Egli > Fix For: 1.6 > > > OAK-4898 changes CommitInfo to be never null. This is the case outside of the > BackgroundObserver - but in the BackgroundObserver itself it is explicitly > set to null when compacting. > Once OAK-4898 is committed this task is about reviewing the implications in > BackgroundObserver wrt compaction and CommitInfo==null -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-5125) Some implementations of CacheValue.getMemory() don't care about integer overflow.
[ https://issues.apache.org/jira/browse/OAK-5125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683084#comment-15683084 ] Marcel Reutegger commented on OAK-5125: --- In addition to protecting against integer overflow in getMemory() the DocumentNodeStore should also ensure a cache entry never reaches that size. I'd say it shouldn't even get close to Integer.MAX_VALUE because such a big cache entry potentially kicks out a lot of other useful cache entries. Created OAK-5132. > Some implementations of CacheValue.getMemory() don't care about integer > overflow. > - > > Key: OAK-5125 > URL: https://issues.apache.org/jira/browse/OAK-5125 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core >Affects Versions: 1.0.34, 1.2.21, 1.5.13, 1.4.10 >Reporter: Manfred Baedke >Assignee: Manfred Baedke >Priority: Blocker > Labels: candidate_oak_1_0, candidate_oak_1_2 > Fix For: 1.6, 1.4.11 > > > The interface method org.apache.jackrabbit.oak.cache.CacheValue.getMemory() > returns an int value, while some implementations measure values larger than > Integer.MAX_VALUE, producing an int overflow. This actually breaks in real > life when testing with huge repos and machines, since it's used to measure > the size of local diffs (e.g. a reindexing may diff the whole repo against an > empty root). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
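The overflow in OAK-5125 happens because {{getMemory()}} returns an {{int}} while the measured size can exceed Integer.MAX_VALUE. A minimal sketch of the defensive pattern (accumulate in a {{long}}, clamp on return); the class and field names are illustrative, not Oak's actual CacheValue implementations:

```java
// Sketch of an overflow-safe CacheValue.getMemory(). All names here are
// illustrative; Oak's real CacheValue is an interface whose implementations
// measure their own memory footprint.
public class OverflowSafeCacheValue {

    private final long[] partSizes; // sizes of the value's constituent parts

    public OverflowSafeCacheValue(long... partSizes) {
        this.partSizes = partSizes;
    }

    /** Accumulates in a long and clamps, so a huge value reports
     *  Integer.MAX_VALUE instead of a negative (overflowed) size. */
    public int getMemory() {
        long total = 0;
        for (long s : partSizes) {
            total += s;
        }
        return clampToInt(total);
    }

    static int clampToInt(long value) {
        return value > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) value;
    }

    public static void main(String[] args) {
        // A 3 GB entry would overflow a naive int sum to a negative number.
        OverflowSafeCacheValue huge = new OverflowSafeCacheValue(2_000_000_000L, 1_000_000_000L);
        System.out.println(huge.getMemory()); // clamped to Integer.MAX_VALUE
    }
}
```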
[jira] [Created] (OAK-5132) Limit diff cache entries in size
Marcel Reutegger created OAK-5132: - Summary: Limit diff cache entries in size Key: OAK-5132 URL: https://issues.apache.org/jira/browse/OAK-5132 Project: Jackrabbit Oak Issue Type: Improvement Components: core, documentmk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Fix For: 1.6 Diff cache entries can become very big when the revision range is large and there were many changes to child nodes. The implementation should only cache diffs up to a reasonable size to avoid OAK-5125. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
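The mitigation OAK-5132 proposes is to simply not cache diffs beyond a size threshold, leaving them to be recomputed on demand. A minimal sketch of that put-side check; the class name, the String-based diff model, and the threshold value are illustrative assumptions, not Oak's DocumentNodeStore cache:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a diff cache that refuses oversized entries, as proposed in
// OAK-5132. Diffs are modeled as Strings keyed by revision range; the
// size limit is an illustrative parameter.
public class SizeLimitedDiffCache {

    private final Map<String, String> entries = new HashMap<>();
    private final int maxEntrySize;

    public SizeLimitedDiffCache(int maxEntrySize) {
        this.maxEntrySize = maxEntrySize;
    }

    /** Caches the diff only when it is within the limit; oversized diffs
     *  are not cached so they cannot evict many smaller, useful entries. */
    public boolean put(String revisionRange, String diff) {
        if (diff.length() > maxEntrySize) {
            return false; // too big: recompute on demand instead
        }
        entries.put(revisionRange, diff);
        return true;
    }

    public String get(String revisionRange) {
        return entries.get(revisionRange);
    }

    public static void main(String[] args) {
        SizeLimitedDiffCache cache = new SizeLimitedDiffCache(8);
        System.out.println(cache.put("r1-r2", "+ /a"));               // small diff: cached
        System.out.println(cache.put("r1-r9", "very long diff text")); // oversized: skipped
    }
}
```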
[jira] [Commented] (OAK-3328) checked-in state should only affect properties with OPV!=IGNORE
[ https://issues.apache.org/jira/browse/OAK-3328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683030#comment-15683030 ] Marcel Reutegger commented on OAK-3328: --- Yes, this is the right place for the change. Have a look at VersionableState.getOPV(). There are two variants, one for child nodes and one for properties. It uses a ReadOnlyNodeTypeManager to get the flag. I'd say something similar should work for the VersionEditor as well. > checked-in state should only affect properties with OPV!=IGNORE > --- > > Key: OAK-3328 > URL: https://issues.apache.org/jira/browse/OAK-3328 > Project: Jackrabbit Oak > Issue Type: Bug > Components: jcr >Reporter: Julian Reschke > > We currently (as of OAK-3310) protect all properties from changes when the > node is checked-in. According to the spec > (http://www.day.com/specs/jcr/2.0/15_Versioning.html#15.2.2%20Read-Only%20on%20Check-In), > this should only be the case when on-parent-version is != "IGNORE". > (It seems this doesn't have TCK coverage) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
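The rule under discussion in OAK-3328 is that a checked-in node only protects properties whose on-parent-version setting is not IGNORE. A minimal sketch of that decision; the OPV lookup is modeled with a map, whereas the real editor would ask a ReadOnlyNodeTypeManager for the property definition, and the class here is illustrative:

```java
import java.util.Map;

// Sketch of the spec rule from OAK-3328: when a node is checked in, only
// properties whose on-parent-version (OPV) setting is not IGNORE are
// protected from changes. The OPV lookup is modeled with a map.
public class CheckedInPolicy {

    enum Opv { COPY, VERSION, IGNORE, INITIALIZE, COMPUTE, ABORT }

    /** True when modifying the property must be rejected on a checked-in node. */
    static boolean isProtectedChange(boolean checkedIn, String property,
                                     Map<String, Opv> opvByProperty) {
        if (!checkedIn) {
            return false; // writable node: nothing is protected
        }
        Opv opv = opvByProperty.getOrDefault(property, Opv.COPY);
        return opv != Opv.IGNORE; // OPV=IGNORE properties stay writable
    }

    public static void main(String[] args) {
        Map<String, Opv> defs = Map.of(
                "jcr:title", Opv.COPY,        // protected while checked in
                "app:lastViewed", Opv.IGNORE); // illustrative OPV=IGNORE property
        System.out.println(isProtectedChange(true, "jcr:title", defs));      // true
        System.out.println(isProtectedChange(true, "app:lastViewed", defs)); // false
    }
}
```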
[jira] [Commented] (OAK-5112) oak-upgrade breaking versionStorage node when started with copy-verisions=false
[ https://issues.apache.org/jira/browse/OAK-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682993#comment-15682993 ] Tomek Rękawek commented on OAK-5112: [~dsuess], could you check if adding {{\-\-copy\-orphaned\-versions=false}} to both steps helps? > oak-upgrade breaking versionStorage node when started with > copy-verisions=false > --- > > Key: OAK-5112 > URL: https://issues.apache.org/jira/browse/OAK-5112 > Project: Jackrabbit Oak > Issue Type: Bug > Components: upgrade >Affects Versions: 1.5.13 >Reporter: Dominik Süß >Assignee: Tomek Rękawek > Attachments: OAK-5112-test.patch > > > The attempt to sidegrade a repository and only keep the versionStorage of > certain paths failed with a broken /jcr:system/jcr:versionStorage node when > applying the following sequence: > {code} > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=false > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=true --include-paths=/path/requiring/versions > {code} > After the first and second attempts the versionStorage node exists but has no > primaryType. Startup works but fails whenever new nodes are created > underneath jcr:versionStorage. > The issue can be checked via the oak-run console by running the following command: > {code} > session.store.root.builder().getChildNode('jcr:system').getChildNode('jcr:versionStorage').getProperties() > {code} > //cc [~tomek.rekawek], [~alexparvulescu] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (OAK-4742) Improve FileStoreStatsMBean
[ https://issues.apache.org/jira/browse/OAK-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682968#comment-15682968 ] Francesco Mari edited comment on OAK-4742 at 11/21/16 9:20 AM: --- Fixed at r1770631. was (Author: frm): Fixet at r1770631. > Improve FileStoreStatsMBean > --- > > Key: OAK-4742 > URL: https://issues.apache.org/jira/browse/OAK-4742 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Michael Dürig >Assignee: Francesco Mari > Labels: monitoring > Fix For: 1.6, 1.5.14 > > Attachments: CommitTime.png, OAK-4742-01.patch > > > We should add further data to that MBean (if feasible): > * Number of commits > * Number of commits queuing (blocked on the commit semaphore) > * Percentiles of commit times (exclude queueing time) > * Percentiles of commit queueing times > * Last gc run / size before gc and after gc / time gc took broken down into > the various phases -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-4742) Improve FileStoreStatsMBean
[ https://issues.apache.org/jira/browse/OAK-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Mari resolved OAK-4742. - Resolution: Fixed Fixet at r1770631. > Improve FileStoreStatsMBean > --- > > Key: OAK-4742 > URL: https://issues.apache.org/jira/browse/OAK-4742 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Michael Dürig >Assignee: Francesco Mari > Labels: monitoring > Fix For: 1.6, 1.5.14 > > Attachments: CommitTime.png, OAK-4742-01.patch > > > We should add further data to that MBean (if feasible): > * Number of commits > * Number of commits queuing (blocked on the commit semaphore) > * Percentiles of commit times (exclude queueing time) > * Percentiles of commit queueing times > * Last gc run / size before gc and after gc / time gc took broken down into > the various phases -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (OAK-5112) oak-upgrade breaking versionStorage node when started with copy-verisions=false
[ https://issues.apache.org/jira/browse/OAK-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomek Rękawek reassigned OAK-5112: -- Assignee: Tomek Rękawek > oak-upgrade breaking versionStorage node when started with > copy-verisions=false > --- > > Key: OAK-5112 > URL: https://issues.apache.org/jira/browse/OAK-5112 > Project: Jackrabbit Oak > Issue Type: Bug > Components: upgrade >Affects Versions: 1.5.13 >Reporter: Dominik Süß >Assignee: Tomek Rękawek > Attachments: OAK-5112-test.patch > > > The attempt to sidegrade a repository and only keep the versionStorage of > certain paths failed with a broken /jcr:system/jcr:versionStorage node when > applying the following sequence: > {code} > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=false > java -jar oak-upgrade-1.6-SNAPSHOT.jar repository repository-temp > --copy-versions=true --include-paths=/path/requiring/versions > {code} > After the first and second attempts the versionStorage node exists but has no > primaryType. Startup works but fails whenever new nodes are created > underneath jcr:versionStorage. > The issue can be checked via the oak-run console by running the following command: > {code} > session.store.root.builder().getChildNode('jcr:system').getChildNode('jcr:versionStorage').getProperties() > {code} > //cc [~tomek.rekawek], [~alexparvulescu] -- This message was sent by Atlassian JIRA (v6.3.4#6332)