[jira] [Created] (OAK-4903) Async uploads in S3 causes issues in a cluster
Amit Jain created OAK-4903: -- Summary: Async uploads in S3 causes issues in a cluster Key: OAK-4903 URL: https://issues.apache.org/jira/browse/OAK-4903 Project: Jackrabbit Oak Issue Type: Improvement Reporter: Amit Jain Assignee: Amit Jain S3DataStore and CachingFDS through the CachingDataStore enable async uploads. This causes problems in clustered setups where uploads are sometimes only visible after a delay. During this window, any request for the corresponding asset/file returns errors. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4796) filter events before adding to ChangeProcessor's queue
[ https://issues.apache.org/jira/browse/OAK-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15554084#comment-15554084 ] Chetan Mehrotra commented on OAK-4796: -- What I meant was to have simpler logic for the path check, i.e. the change set has the set of paths changed, say up to depth 2, and for a given listener you just check whether the path it is interested in (includes) is covered by a path in the change set or not. This check need not be very precise. Other filtering should also be supported. > filter events before adding to ChangeProcessor's queue > -- > > Key: OAK-4796 > URL: https://issues.apache.org/jira/browse/OAK-4796 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: jcr >Affects Versions: 1.5.9 >Reporter: Stefan Egli >Assignee: Stefan Egli > Labels: observation > Fix For: 1.6 > > Attachments: OAK-4796.changeSet.patch, OAK-4796.patch > > > Currently the > [ChangeProcessor.contentChanged|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L335] > is in charge of doing the event diffing and filtering and does so in a > pooled Thread, ie asynchronously, at a later stage independent from the > commit. This has the advantage that the commit is fast, but has the following > potentially negative effects: > # events (in the form of ContentChange Objects) occupy a slot of the queue > even if the listener is not interested in it - any commit lands on any > listener's queue. This reduces the capacity of the queue for 'actual' events > to be delivered. It therefore increases the risk that the queue fills - and > when full has various consequences such as losing the CommitInfo etc. > # each event==ContentChange later on must be evaluated, and for that a diff > must be calculated. Depending on runtime behavior that diff might be > expensive if no longer in the cache (documentMk specifically). 
> As an improvement, this diffing+filtering could be done at an earlier stage > already, nearer to the commit, and in case the filter would ignore the event, > it would not have to be put into the queue at all, thus avoiding occupying a > slot and later potentially slower diffing. > The suggestion is to implement this via the following algorithm: > * During the commit, in a {{Validator}} the listener's filters are evaluated > - in an as-efficient-as-possible manner (Reason for doing it in a Validator > is that this doesn't add overhead as oak already goes through all changes for > other Validators). As a result a _list of potentially affected observers_ is > added to the {{CommitInfo}} (false positives are fine). > ** Note that the above adds cost to the commit and must therefore be > carefully done and measured > ** One potential measure could be to only do filtering when listener's queues > are larger than a certain threshold (eg 10) > * The ChangeProcessor in {{contentChanged}} (in the one created in > [createObserver|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L224]) > then checks the new commitInfo's _potentially affected observers_ list and > if it's not in the list, adds a {{NOOP}} token at the end of the queue. If > there's already a NOOP there, the two are collapsed (this way when a filter > is not affected it would have a NOOP at the end of the queue). If later on a > non-NOOP item is added, the NOOP's {{root}} is used as the {{previousRoot}} > for the newly added {{ContentChange}} obj. > ** To achieve that, the ContentChange obj is extended to not only have the > "to" {{root}} pointer, but also the "from" {{previousRoot}} pointer which > currently is implicitly maintained. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
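[Editor's note] The coarse, imprecise path check suggested in Chetan's comment above could be sketched roughly as follows. The class, method names, and the depth-2 constant are illustrative assumptions for this sketch, not actual Oak API; the point is only that false positives are acceptable while false negatives are not:

```java
import java.util.Set;

// Rough sketch of the coarse path prefilter: the change set records ancestor
// paths of all changes up to a fixed depth (say 2), and a listener counts as
// "possibly affected" when one of its include paths overlaps such a prefix.
// Imprecise by design; false positives are fine, false negatives are not.
public class CoarsePathFilter {

    static final int MAX_DEPTH = 2;

    // Truncate a path to at most MAX_DEPTH segments, e.g. "/a/b/c" -> "/a/b".
    static String truncate(String path) {
        int idx = 0;
        for (int depth = 0; depth < MAX_DEPTH; depth++) {
            idx = path.indexOf('/', idx + 1);
            if (idx < 0) {
                return path;
            }
        }
        return path.substring(0, idx);
    }

    // True if the listener's include path may overlap any changed prefix:
    // either is an ancestor of (or equal to) the other.
    static boolean mayAffect(Set<String> changedPrefixes, String includePath) {
        String prefix = truncate(includePath);
        for (String changed : changedPrefixes) {
            if (changed.equals(prefix)
                    || changed.startsWith(prefix + "/")
                    || prefix.startsWith(changed + "/")) {
                return true;
            }
        }
        return false;
    }
}
```

A listener on {{/content/site/en/page}} would then match a change set containing the prefix {{/content/site}}, while a listener on {{/var}} would be skipped entirely.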
[jira] [Created] (OAK-4902) Blob GC completion time should be logged in millis
Amit Jain created OAK-4902: -- Summary: Blob GC completion time should be logged in millis Key: OAK-4902 URL: https://issues.apache.org/jira/browse/OAK-4902 Project: Jackrabbit Oak Issue Type: Improvement Components: blob Reporter: Amit Jain Assignee: Amit Jain Priority: Minor Fix For: 1.5.12 MarkSweepGarbageCollector logs the completion time with changing units depending on the time taken - ms, s, min, h etc. This makes it difficult to write scripts to grep the completion time. So, in addition, the completion time should also be logged in milliseconds. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
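[Editor's note] The change requested above amounts to always appending a fixed-unit value to the human-readable duration. A minimal sketch of what such a message could look like; the helper and message shape are hypothetical, not the actual MarkSweepGarbageCollector code:

```java
import java.util.concurrent.TimeUnit;

// Sketch of the proposed logging change: keep the human-friendly duration
// with its changing units, but always append the raw millisecond value so
// scripts can grep one fixed unit. Names here are illustrative only.
public class GcDurationLog {

    // Human-readable rendering with varying units, as the collector does today.
    static String humanReadable(long millis) {
        if (millis < 1000) {
            return millis + " ms";
        }
        long seconds = TimeUnit.MILLISECONDS.toSeconds(millis);
        if (seconds < 60) {
            return seconds + " s";
        }
        return TimeUnit.MILLISECONDS.toMinutes(millis) + " min";
    }

    // Proposed message: variable units for humans, fixed ms for scripts.
    static String completionMessage(long millis) {
        return String.format("Blob GC completed in %s (%d ms)",
                humanReadable(millis), millis);
    }
}
```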
[jira] [Updated] (OAK-4901) Improve SizeDeltaGcEstimation logging
[ https://issues.apache.org/jira/browse/OAK-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Valentin Olteanu updated OAK-4901: -- Attachment: OAK-4901.patch [~alex.parvulescu] could you please review the attached patch? Thanks! > Improve SizeDeltaGcEstimation logging > - > > Key: OAK-4901 > URL: https://issues.apache.org/jira/browse/OAK-4901 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Affects Versions: Segment Tar 0.0.14 >Reporter: Valentin Olteanu >Priority: Minor > Attachments: OAK-4901.patch > > > The current message logged after estimation is not very clear for someone > that does not know how it works: > {code} > 06.10.2016 18:53:07.819 *INFO* [TarMK revision gc > [/Users/volteanu/workspace/test/author/crx-quickstart/repository/segmentstore]] > org.apache.jackrabbit.oak.segment.file.FileStore TarMK GC #1: estimation > completed in 7.362 ms (7 ms). Size delta is 0% or 880.3 MB/880.9 MB > (880258560/880892416 bytes), so skipping compaction for now > {code} > I think it should also list the delta in absolute value and the threshold it > is compared against. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-4901) Improve SizeDeltaGcEstimation logging
Valentin Olteanu created OAK-4901: - Summary: Improve SizeDeltaGcEstimation logging Key: OAK-4901 URL: https://issues.apache.org/jira/browse/OAK-4901 Project: Jackrabbit Oak Issue Type: Improvement Components: segment-tar Affects Versions: Segment Tar 0.0.14 Reporter: Valentin Olteanu Priority: Minor The current message logged after estimation is not very clear for someone that does not know how it works: {code} 06.10.2016 18:53:07.819 *INFO* [TarMK revision gc [/Users/volteanu/workspace/test/author/crx-quickstart/repository/segmentstore]] org.apache.jackrabbit.oak.segment.file.FileStore TarMK GC #1: estimation completed in 7.362 ms (7 ms). Size delta is 0% or 880.3 MB/880.9 MB (880258560/880892416 bytes), so skipping compaction for now {code} I think it should also list the delta in absolute value and the threshold it is compared against. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
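[Editor's note] The issue asks for the absolute delta and the comparison threshold to appear in the estimation message. A rough sketch of such a message, using the byte values from the log excerpt above; class and method names are hypothetical, not the actual SizeDeltaGcEstimation code:

```java
// Sketch of a clearer estimation message: report the absolute size delta,
// the threshold it is compared against, and the resulting decision, so the
// reader does not need to know the estimation internals. Illustrative only.
public class GcEstimationLog {

    static String estimationMessage(long previousSize, long currentSize, long thresholdBytes) {
        long delta = Math.abs(currentSize - previousSize);
        boolean run = delta >= thresholdBytes;
        return String.format(
                "estimation completed: size delta %d bytes (current %d, previous %d), "
                        + "threshold %d bytes, %s compaction",
                delta, currentSize, previousSize, thresholdBytes,
                run ? "running" : "skipping");
    }
}
```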
[jira] [Updated] (OAK-4883) Missing BlobGarbageCollection MBean on standby instance
[ https://issues.apache.org/jira/browse/OAK-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothee Maret updated OAK-4883: Assignee: (was: Timothee Maret) > Missing BlobGarbageCollection MBean on standby instance > --- > > Key: OAK-4883 > URL: https://issues.apache.org/jira/browse/OAK-4883 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar, segmentmk >Reporter: Alex Parvulescu > Labels: gc > Attachments: OAK-4883.patch > > > The {{BlobGarbageCollection}} MBean is no longer available on a standby > instance, this affects non-shared datastore setups (on a shared datastore > you'd only need to run blob gc on the primary). > This change was introduced by OAK-4089 (and backported to 1.4 branch with > OAK-4093). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (OAK-4883) Missing BlobGarbageCollection MBean on standby instance
[ https://issues.apache.org/jira/browse/OAK-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551967#comment-15551967 ] Timothee Maret edited comment on OAK-4883 at 10/6/16 1:51 PM: -- Attaching an svn-compatible patch otherwise available at https://github.com/tmaret/jackrabbit-oak/commit/5e5370b81c8409af489ec4dee9064082a350dc48.patch The patch builds a SegmentNodeStore independently of the configuration, however it only registers its service for non-standby instances. Moreover, it keeps the OAK-4089 improvement by only building a {{ChangeDispatcher}} for non-standby instances (as suggested in OAK-4089). The patch only contains the fix for the new segment-tar. The patch uses this {{null}} ref in {{SegmentNodeStore}}. We could improve that by building a custom {{ChangeDispatcher}} (e.g. {{NoopChangeDispatcher}}) and allow passing the {{ChangeDispatcher}} via the builder. However, currently there's no easy way to build a custom ChangeDispatcher without passing a NodeState (which is what we want to avoid). was (Author: marett): Attaching an svn-compatible patch otherwise available at https://github.com/tmaret/jackrabbit-oak/commit/5e5370b81c8409af489ec4dee9064082a350dc48.patch The patch builds a SegmentNodeStore independently of the configuration, however it only registers its service for non-standby instances. Moreover, it keeps the OAK-4089 improvement by only building a {{ChangeDispatcher}} for non-standby instances (as suggested in OAK-4089). The patch only contains the fix for the new segment-tar. 
> Missing BlobGarbageCollection MBean on standby instance > --- > > Key: OAK-4883 > URL: https://issues.apache.org/jira/browse/OAK-4883 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar, segmentmk >Reporter: Alex Parvulescu > Labels: gc > Attachments: OAK-4883.patch > > > The {{BlobGarbageCollection}} MBean is no longer available on a standby > instance, this affects non-shared datastore setups (on a shared datastore > you'd only need to run blob gc on the primary). > This change was introduced by OAK-4089 (and backported to 1.4 branch with > OAK-4093). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
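[Editor's note] The {{NoopChangeDispatcher}} idea from the comment above could look roughly like this. The {{Dispatcher}} interface below is a simplification invented for this sketch; the real {{ChangeDispatcher}} in Oak has a different shape and is built from a NodeState, which is exactly what the comment wants to avoid:

```java
// Very rough illustration of the NoopChangeDispatcher idea: on a standby
// instance no observers need to be notified, so change dispatch can be
// swallowed instead of performed. The Dispatcher interface is hypothetical,
// invented for this sketch only.
public class NoopDispatcherSketch {

    interface Dispatcher {
        void contentChanged(Object root, Object info);
    }

    // Shared no-op instance: standby instances do not dispatch changes.
    static final Dispatcher NOOP = (root, info) -> {
        // intentionally empty
    };

    // Pick the real dispatcher for primaries, the no-op one for standbys.
    static Dispatcher forStandby(boolean standby, Dispatcher real) {
        return standby ? NOOP : real;
    }
}
```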
[jira] [Commented] (OAK-4883) Missing BlobGarbageCollection MBean on standby instance
[ https://issues.apache.org/jira/browse/OAK-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551968#comment-15551968 ] Timothee Maret commented on OAK-4883: - [~alexparvulescu], [~frm] could you please review. > Missing BlobGarbageCollection MBean on standby instance > --- > > Key: OAK-4883 > URL: https://issues.apache.org/jira/browse/OAK-4883 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar, segmentmk >Reporter: Alex Parvulescu >Assignee: Timothee Maret > Labels: gc > Attachments: OAK-4883.patch > > > The {{BlobGarbageCollection}} MBean is no longer available on a standby > instance, this affects non-shared datastore setups (on a shared datastore > you'd only need to run blob gc on the primary). > This change was introduced by OAK-4089 (and backported to 1.4 branch with > OAK-4093). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4883) Missing BlobGarbageCollection MBean on standby instance
[ https://issues.apache.org/jira/browse/OAK-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothee Maret updated OAK-4883: Flags: Patch > Missing BlobGarbageCollection MBean on standby instance > --- > > Key: OAK-4883 > URL: https://issues.apache.org/jira/browse/OAK-4883 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar, segmentmk >Reporter: Alex Parvulescu >Assignee: Timothee Maret > Labels: gc > Attachments: OAK-4883.patch > > > The {{BlobGarbageCollection}} MBean is no longer available on a standby > instance, this affects non-shared datastore setups (on a shared datastore > you'd only need to run blob gc on the primary). > This change was introduced by OAK-4089 (and backported to 1.4 branch with > OAK-4093). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4883) Missing BlobGarbageCollection MBean on standby instance
[ https://issues.apache.org/jira/browse/OAK-4883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothee Maret updated OAK-4883: Attachment: OAK-4883.patch Attaching an svn-compatible patch otherwise available at https://github.com/tmaret/jackrabbit-oak/commit/5e5370b81c8409af489ec4dee9064082a350dc48.patch The patch builds a SegmentNodeStore independently of the configuration, however it only registers its service for non-standby instances. Moreover, it keeps the OAK-4089 improvement by only building a {{ChangeDispatcher}} for non-standby instances (as suggested in OAK-4089). The patch only contains the fix for the new segment-tar. > Missing BlobGarbageCollection MBean on standby instance > --- > > Key: OAK-4883 > URL: https://issues.apache.org/jira/browse/OAK-4883 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar, segmentmk >Reporter: Alex Parvulescu >Assignee: Timothee Maret > Labels: gc > Attachments: OAK-4883.patch > > > The {{BlobGarbageCollection}} MBean is no longer available on a standby > instance, this affects non-shared datastore setups (on a shared datastore > you'd only need to run blob gc on the primary). > This change was introduced by OAK-4089 (and backported to 1.4 branch with > OAK-4093). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-4082) RDBDocumentStore on MySQL may fail when using useServerPrepStmts=true
[ https://issues.apache.org/jira/browse/OAK-4082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke resolved OAK-4082. - Resolution: Fixed Fix Version/s: (was: 1.6) 1.4.9 1.0.35 1.5.12 1.2.20 I believe this is fixed by upgrading the mysql JDBC driver to 5.1.40, see OAK-4885. > RDBDocumentStore on MySQL may fail when using useServerPrepStmts=true > - > > Key: OAK-4082 > URL: https://issues.apache.org/jira/browse/OAK-4082 > Project: Jackrabbit Oak > Issue Type: Bug > Components: rdbmk >Reporter: Tomek Rękawek >Assignee: Julian Reschke > Fix For: 1.2.20, 1.5.12, 1.0.35, 1.4.9 > > Attachments: OAK-4082.demo, OAK-4082.patch > > > Seen when connecting with useServerPrepStmts=true: > {noformat} > 15:45:58 Caused by: > com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Can't create more > than max_prepared_stmt_count statements (current value: 16382) > 15:45:58 at > sun.reflect.GeneratedConstructorAccessor23.newInstance(Unknown Source) > 15:45:58 at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > 15:45:58 at > java.lang.reflect.Constructor.newInstance(Constructor.java:526) > 15:45:58 at com.mysql.jdbc.Util.handleNewInstance(Util.java:400) > 15:45:58 at com.mysql.jdbc.Util.getInstance(Util.java:383) > 15:45:58 at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:980) > 15:45:58 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3847) > 15:45:58 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3783) > 15:45:58 at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2447) > 15:45:58 at > com.mysql.jdbc.ServerPreparedStatement.serverPrepare(ServerPreparedStatement.java:1514) > 15:45:58 at > com.mysql.jdbc.ServerPreparedStatement.(ServerPreparedStatement.java:389) > 15:45:58 at > com.mysql.jdbc.ServerPreparedStatement.prepareBatchedInsertSQL(ServerPreparedStatement.java:2832) > 15:45:58 at > 
com.mysql.jdbc.PreparedStatement.executeBatchedInserts(PreparedStatement.java:1524) > 15:45:58 at > com.mysql.jdbc.PreparedStatement.executeBatch(PreparedStatement.java:1262) > 15:45:58 at > org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStoreJDBC.insert(RDBDocumentStoreJDBC.java:302) > 15:45:58 at > org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStoreJDBC.update(RDBDocumentStoreJDBC.java:446) > 15:45:58 at > org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.bulkUpdate(RDBDocumentStore.java:456) > 15:45:58 ... 21 more > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
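[Editor's note] For context on the failure above: with {{useServerPrepStmts=true}}, Connector/J prepares statements on the MySQL server, and if prepared statements are not reused (or leak, as with the batched-insert path in this trace) the server-side {{max_prepared_stmt_count}} limit is exhausted. The issue itself was resolved by upgrading the driver to 5.1.40; a client-side workaround is to leave the flag at its default. The helper class below is a hypothetical illustration of that setting, not Oak code:

```java
import java.util.Properties;

// Illustration of the Connector/J setting involved in this bug. With
// useServerPrepStmts=false (the driver default) statements are prepared on
// the client, so the server's max_prepared_stmt_count cannot be exhausted.
public class MySqlConnProps {

    static Properties clientSidePrepares() {
        Properties props = new Properties();
        props.setProperty("useServerPrepStmts", "false"); // client-side prepares
        return props;
    }
}
```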
[jira] [Commented] (OAK-4796) filter events before adding to ChangeProcessor's queue
[ https://issues.apache.org/jira/browse/OAK-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551799#comment-15551799 ] Stefan Egli commented on OAK-4796: -- Many thanks for this thorough review, [~chetanm]! I'll dig through them in detail. Agreeing on most points, except perhaps re: {quote}Probably we can keep things simple here and focus on included paths{quote} Not sure what exactly you mean here: we've collected nodeTypes too, so are you suggesting not to filter by nodeType? If yes, why? > filter events before adding to ChangeProcessor's queue > -- > > Key: OAK-4796 > URL: https://issues.apache.org/jira/browse/OAK-4796 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: jcr >Affects Versions: 1.5.9 >Reporter: Stefan Egli >Assignee: Stefan Egli > Labels: observation > Fix For: 1.6 > > Attachments: OAK-4796.changeSet.patch, OAK-4796.patch > > > Currently the > [ChangeProcessor.contentChanged|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L335] > is in charge of doing the event diffing and filtering and does so in a > pooled Thread, ie asynchronously, at a later stage independent from the > commit. This has the advantage that the commit is fast, but has the following > potentially negative effects: > # events (in the form of ContentChange Objects) occupy a slot of the queue > even if the listener is not interested in it - any commit lands on any > listener's queue. This reduces the capacity of the queue for 'actual' events > to be delivered. It therefore increases the risk that the queue fills - and > when full has various consequences such as losing the CommitInfo etc. > # each event==ContentChange later on must be evaluated, and for that a diff > must be calculated. Depending on runtime behavior that diff might be > expensive if no longer in the cache (documentMk specifically). 
> As an improvement, this diffing+filtering could be done at an earlier stage > already, nearer to the commit, and in case the filter would ignore the event, > it would not have to be put into the queue at all, thus avoiding occupying a > slot and later potentially slower diffing. > The suggestion is to implement this via the following algorithm: > * During the commit, in a {{Validator}} the listener's filters are evaluated > - in an as-efficient-as-possible manner (Reason for doing it in a Validator > is that this doesn't add overhead as oak already goes through all changes for > other Validators). As a result a _list of potentially affected observers_ is > added to the {{CommitInfo}} (false positives are fine). > ** Note that the above adds cost to the commit and must therefore be > carefully done and measured > ** One potential measure could be to only do filtering when listener's queues > are larger than a certain threshold (eg 10) > * The ChangeProcessor in {{contentChanged}} (in the one created in > [createObserver|https://github.com/apache/jackrabbit-oak/blob/f4f4e01dd8f708801883260481d37fdcd5868deb/oak-jcr/src/main/java/org/apache/jackrabbit/oak/jcr/observation/ChangeProcessor.java#L224]) > then checks the new commitInfo's _potentially affected observers_ list and > if it's not in the list, adds a {{NOOP}} token at the end of the queue. If > there's already a NOOP there, the two are collapsed (this way when a filter > is not affected it would have a NOOP at the end of the queue). If later on a > non-NOOP item is added, the NOOP's {{root}} is used as the {{previousRoot}} > for the newly added {{ContentChange}} obj. > ** To achieve that, the ContentChange obj is extended to not only have the > "to" {{root}} pointer, but also the "from" {{previousRoot}} pointer which > currently is implicitly maintained. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4835) Provide generic option to interrupt online revision cleanup
[ https://issues.apache.org/jira/browse/OAK-4835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551703#comment-15551703 ] Michael Dürig commented on OAK-4835: As mentioned on OAK-4899, I think it would make sense to expose this functionality through RevisionGCMBean and RepositoryManagementMBean. > Provide generic option to interrupt online revision cleanup > --- > > Key: OAK-4835 > URL: https://issues.apache.org/jira/browse/OAK-4835 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core, documentmk, segmentmk >Reporter: Alex Parvulescu >Assignee: Alex Parvulescu > Labels: compaction, gc > > JMX binding for stopping a running compaction process -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-4899) Include option to stop GC in RevisionGCMBean and RepositoryManagementMBean
[ https://issues.apache.org/jira/browse/OAK-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig resolved OAK-4899. Resolution: Duplicate Duplicate of OAK-4835. > Include option to stop GC in RevisionGCMBean and RepositoryManagementMBean > -- > > Key: OAK-4899 > URL: https://issues.apache.org/jira/browse/OAK-4899 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: management, production, resilience > Fix For: 1.6 > > > With OAK-4765 Oak Segment Tar acquired the capability for stopping a running > revision gc task. This is currently exposed via > {{SegmentRevisionGCMBean.stopCompaction}}. I think it would make sense to expose > this functionality through {{RevisionGCMBean}} and > {{RepositoryManagementMBean}} also/instead. > [~mreutegg], [~alex.parvulescu] WDYT? Could the document node store also > implement this or would we just not support it there? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4899) Include option to stop GC in RevisionGCMBean and RepositoryManagementMBean
[ https://issues.apache.org/jira/browse/OAK-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551699#comment-15551699 ] Michael Dürig commented on OAK-4899: Oh, missed that one... > Include option to stop GC in RevisionGCMBean and RepositoryManagementMBean > -- > > Key: OAK-4899 > URL: https://issues.apache.org/jira/browse/OAK-4899 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: management, production, resilience > Fix For: 1.6 > > > With OAK-4765 Oak Segment Tar acquired the capability for stopping a running > revision gc task. This is currently exposed via > {{SegmentRevisionGCMBean.stopCompaction}}. I think it would make sense to expose > this functionality through {{RevisionGCMBean}} and > {{RepositoryManagementMBean}} also/instead. > [~mreutegg], [~alex.parvulescu] WDYT? Could the document node store also > implement this or would we just not support it there? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4899) Include option to stop GC in RevisionGCMBean and RepositoryManagementMBean
[ https://issues.apache.org/jira/browse/OAK-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551692#comment-15551692 ] Alex Parvulescu commented on OAK-4899: -- I had already created an issue for this: OAK-4835 :) For backporting purposes it makes sense to also provide an impl for {{segmentmk}}. > Include option to stop GC in RevisionGCMBean and RepositoryManagementMBean > -- > > Key: OAK-4899 > URL: https://issues.apache.org/jira/browse/OAK-4899 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: management, production, resilience > Fix For: 1.6 > > > With OAK-4765 Oak Segment Tar acquired the capability for stopping a running > revision gc task. This is currently exposed via > {{SegmentRevisionGCMBean.stopCompaction}}. I think it would make sense to expose > this functionality through {{RevisionGCMBean}} and > {{RepositoryManagementMBean}} also/instead. > [~mreutegg], [~alex.parvulescu] WDYT? Could the document node store also > implement this or would we just not support it there? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
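[Editor's note] The generic "stop a running revision GC" operation discussed in OAK-4899/OAK-4835 boils down to a cancellation flag that the management bean sets and the GC loop polls between phases. The class below is a hypothetical sketch of that pattern, not the actual {{SegmentRevisionGCMBean.stopCompaction}} implementation:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a generic cancel mechanism for revision GC. A JMX operation sets
// the flag; the GC loop checks it at safe points and stops early. All names
// are illustrative, not Oak API.
public class RevisionGcControl {

    private final AtomicBoolean cancelled = new AtomicBoolean();

    // Would be exposed via the management bean: request that a running GC
    // cycle stop at the next safe point. Returns true if this call was the
    // one that set the flag (idempotent otherwise).
    public boolean cancelRevisionGC() {
        return cancelled.compareAndSet(false, true);
    }

    // Polled by the GC loop between phases.
    public boolean isCancelled() {
        return cancelled.get();
    }
}
```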
[jira] [Resolved] (OAK-4823) Upgrade Oak Segment Tar dependency to 0.0.12
[ https://issues.apache.org/jira/browse/OAK-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig resolved OAK-4823. Resolution: Fixed Fixed at http://svn.apache.org/viewvc?rev=1763557&view=rev > Upgrade Oak Segment Tar dependency to 0.0.12 > > > Key: OAK-4823 > URL: https://issues.apache.org/jira/browse/OAK-4823 > Project: Jackrabbit Oak > Issue Type: Task > Components: parent >Reporter: Michael Dürig >Assignee: Michael Dürig > Fix For: 1.5.12 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4522) Improve CommitRateLimiter to optionally block some commits
[ https://issues.apache.org/jira/browse/OAK-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551573#comment-15551573 ] Thomas Mueller commented on OAK-4522: - http://svn.apache.org/r1763550 > Improve CommitRateLimiter to optionally block some commits > -- > > Key: OAK-4522 > URL: https://issues.apache.org/jira/browse/OAK-4522 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: jcr >Reporter: Thomas Mueller >Assignee: Thomas Mueller > Labels: observation > Fix For: 1.5.12 > > Attachments: OAK-4522-b.patch, OAK-4522-c.patch, OAK-4522.patch > > > The CommitRateLimiter of OAK-1659 can delay commits, but doesn't currently > block them, and delays even those commits that are part of handling events. > Because of that, the queue can still get full, and possibly delaying commits > while handling events can make the situation even worse. > In Jackrabbit 2.x, we had a similar feature: JCR-2402. Also related is > JCR-2746. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-4522) Improve CommitRateLimiter to optionally block some commits
[ https://issues.apache.org/jira/browse/OAK-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Mueller resolved OAK-4522. - Resolution: Fixed Fix Version/s: (was: 1.6) 1.5.12 > Improve CommitRateLimiter to optionally block some commits > -- > > Key: OAK-4522 > URL: https://issues.apache.org/jira/browse/OAK-4522 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: jcr >Reporter: Thomas Mueller >Assignee: Thomas Mueller > Labels: observation > Fix For: 1.5.12 > > Attachments: OAK-4522-b.patch, OAK-4522-c.patch, OAK-4522.patch > > > The CommitRateLimiter of OAK-1659 can delay commits, but doesn't currently > block them, and delays even those commits that are part of handling events. > Because of that, the queue can still get full, and possibly delaying commits > while handling events can make the situation even worse. > In Jackrabbit 2.x, we had a similar feature: JCR-2402. Also related is > JCR-2746. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
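[Editor's note] The behaviour OAK-4522 asks for can be summarised as: throttle ordinary commits as observation queues grow, block them entirely above a hard limit, but never throttle commits made while handling events, since those must progress to drain the queue. The sketch below illustrates that policy only; thresholds, names, and the linear back-off are invented for the example and this is not the actual CommitRateLimiter code:

```java
// Simplified illustration of a commit limiter that both delays and blocks:
// small queues -> no delay; medium queues -> growing delay; full queues ->
// block (returned as -1); event-handling threads are always exempt.
public class BlockingLimiterSketch {

    static final int DELAY_THRESHOLD = 10;
    static final int BLOCK_THRESHOLD = 100;

    // Returns the delay in ms to apply to a commit, or -1 to block it.
    static long commitDelay(int queueLength, boolean eventHandlingThread) {
        if (eventHandlingThread) {
            return 0; // never throttle the thread that empties the queue
        }
        if (queueLength >= BLOCK_THRESHOLD) {
            return -1; // block until the queue drains
        }
        if (queueLength >= DELAY_THRESHOLD) {
            return (queueLength - DELAY_THRESHOLD + 1) * 10L; // linear back-off
        }
        return 0;
    }
}
```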
[jira] [Resolved] (OAK-3547) Improve ability of the OakDirectory to recover from unexpected file errors
[ https://issues.apache.org/jira/browse/OAK-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ian Boston resolved OAK-3547. - Resolution: Won't Fix Fix Version/s: (was: 1.6) Will be dealt with under OAK-4805 and other issues. > Improve ability of the OakDirectory to recover from unexpected file errors > -- > > Key: OAK-3547 > URL: https://issues.apache.org/jira/browse/OAK-3547 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene >Affects Versions: 1.4 >Reporter: Ian Boston > > Currently if the OakDirectory finds that a file is missing or in some way > damaged, an exception is thrown, which impacts all queries using that index, > at times making the index unavailable. This improvement aims to make the > OakDirectory recover to a previously ok state by storing which files were > involved in previous states, and giving the code some way of checking if they > are valid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3547) Improve ability of the OakDirectory to recover from unexpected file errors
[ https://issues.apache.org/jira/browse/OAK-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551486#comment-15551486 ] Ian Boston commented on OAK-3547: - The original patch was written such a long time ago, I don't think IndexCopier was present, or at least the deployment the patch was targeting did not have writeOnCopy etc enabled or possibly available. The impl of OakIndexFile is suboptimal for Lucene usage, as it loads chunks of the index into memory as byte[] to perform seek, whereas FSDirectory uses OS level native code to seek, hence it makes no sense to use OakDirectory any more. FSDirectory should be used by whatever means necessary. Might be an idea to delete or deprecate OakDirectory, so it's not used for opening lucene indexes. The patch is in a state where it should not be applied or used. It can't efficiently determine corruption without direct access to the underlying file, which is abstracted by Oak. With the benefit of hindsight, the patch should be in IndexCopier to prevent a bad segments.gen file failing the index. We should close this issue as the patch isn't valid any more. > Improve ability of the OakDirectory to recover from unexpected file errors > -- > > Key: OAK-3547 > URL: https://issues.apache.org/jira/browse/OAK-3547 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene >Affects Versions: 1.4 >Reporter: Ian Boston > Fix For: 1.6 > > > Currently if the OakDirectory finds that a file is missing or in some way > damaged, an exception is thrown, which impacts all queries using that index, > at times making the index unavailable. This improvement aims to make the > OakDirectory recover to a previously ok state by storing which files were > involved in previous states, and giving the code some way of checking if they > are valid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4894) Potential NPE in Commit.apply()
[ https://issues.apache.org/jira/browse/OAK-4894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcel Reutegger updated OAK-4894: -- Labels: (was: candidate_oak_1_4) Fix Version/s: 1.4.9 Merged into 1.4 branch: http://svn.apache.org/r1763545 > Potential NPE in Commit.apply() > --- > > Key: OAK-4894 > URL: https://issues.apache.org/jira/browse/OAK-4894 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core, documentmk >Affects Versions: 1.5.6, 1.4.8 >Reporter: Marcel Reutegger >Assignee: Marcel Reutegger >Priority: Minor > Fix For: 1.6, 1.5.12, 1.4.9 > > > May happen when a branch commit fails and {{Commit#b}} is not initialized. > The code should rather use getBranch(), which is responsible for lazy > initializing the field. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4823) Upgrade Oak Segment Tar dependency to 0.0.12
[ https://issues.apache.org/jira/browse/OAK-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551413#comment-15551413 ] Tomek Rękawek commented on OAK-4823: OAK-4828 is fixed, so the upgrade can be re-applied. > Upgrade Oak Segment Tar dependency to 0.0.12 > > > Key: OAK-4823 > URL: https://issues.apache.org/jira/browse/OAK-4823 > Project: Jackrabbit Oak > Issue Type: Task > Components: parent >Reporter: Michael Dürig >Assignee: Michael Dürig > Fix For: 1.5.12 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2498) Root record references provide too little context for parsing a segment
[ https://issues.apache.org/jira/browse/OAK-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-2498: --- Fix Version/s: (was: Segment Tar 0.0.14) Segment Tar 0.0.16 > Root record references provide too little context for parsing a segment > --- > > Key: OAK-2498 > URL: https://issues.apache.org/jira/browse/OAK-2498 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Reporter: Michael Dürig >Assignee: Francesco Mari > Labels: tools > Fix For: Segment Tar 0.0.16 > > > According to the [documentation | > http://jackrabbit.apache.org/oak/docs/nodestore/segmentmk.html] the root > record references in a segment header provide enough context for parsing all > records within this segment without any external information. > Turns out this is not true: a root record reference may e.g. point to a list > record. The items in that list are record ids of unknown type, so even though > those records might live in the same segment, we can't parse them as we don't > know their type. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4828) oak-ugrade tests fail with segment tar 0.0.12 (on Windows)
[ https://issues.apache.org/jira/browse/OAK-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomek Rękawek updated OAK-4828: --- Fix Version/s: (was: Segment Tar 0.0.16) 1.5.12 1.6 > oak-ugrade tests fail with segment tar 0.0.12 (on Windows) > -- > > Key: OAK-4828 > URL: https://issues.apache.org/jira/browse/OAK-4828 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.10 >Reporter: Julian Reschke > Fix For: 1.6, 1.5.12 > > > When the dependency for segment tar is changed from 0.0.10 to 0.0.12 (see > OAK-4823), test cases fail due to files being open and thus not deleted on > Windows: > {noformat} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.781 sec <<< > FAILURE! > upgradeFrom10(org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest) Time > elapsed: 0.781 sec <<< ERROR! > java.io.IOException: Unable to delete file: > target\UpgradeOldSegmentTest\test-repo-new\segmentstore\data0a.tar > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest.upgradeFrom10(UpgradeOldSegmentTest.java:119) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-4828) oak-ugrade tests fail with segment tar 0.0.12 (on Windows)
[ https://issues.apache.org/jira/browse/OAK-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomek Rękawek resolved OAK-4828. Resolution: Fixed > oak-ugrade tests fail with segment tar 0.0.12 (on Windows) > -- > > Key: OAK-4828 > URL: https://issues.apache.org/jira/browse/OAK-4828 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.10 >Reporter: Julian Reschke > Fix For: Segment Tar 0.0.16 > > > When the dependency for segment tar is changed from 0.0.10 to 0.0.12 (see > OAK-4823), test cases fail due to files being open and thus not deleted on > Windows: > {noformat} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.781 sec <<< > FAILURE! > upgradeFrom10(org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest) Time > elapsed: 0.781 sec <<< ERROR! > java.io.IOException: Unable to delete file: > target\UpgradeOldSegmentTest\test-repo-new\segmentstore\data0a.tar > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest.upgradeFrom10(UpgradeOldSegmentTest.java:119) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-4138) Decouple revision cleanup from the flush thread
[ https://issues.apache.org/jira/browse/OAK-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig resolved OAK-4138. Resolution: Fixed Fixed at http://svn.apache.org/viewvc?rev=1763533&view=rev > Decouple revision cleanup from the flush thread > --- > > Key: OAK-4138 > URL: https://issues.apache.org/jira/browse/OAK-4138 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: resilience > Fix For: Segment Tar 0.0.14 > > > I suggest we decouple revision cleanup from the flush thread. With large > repositories where cleanup can take several minutes to complete it blocks the > flush thread from updating the journal and the persisted head thus resulting > in larger than necessary data loss in case of a crash. > /cc [~alex.parvulescu] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-4685) Avoid concurrent calls to FileStore.cleanup() and FileStore.compact()
[ https://issues.apache.org/jira/browse/OAK-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig resolved OAK-4685. Resolution: Fixed Fixed at http://svn.apache.org/viewvc?rev=1763538&view=rev > Avoid concurrent calls to FileStore.cleanup() and FileStore.compact() > - > > Key: OAK-4685 > URL: https://issues.apache.org/jira/browse/OAK-4685 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar, segmentmk >Affects Versions: 1.0.32, 1.4.6, 1.2.18, Segment Tar 0.0.8 >Reporter: Andrei Dulceanu >Assignee: Michael Dürig > Labels: cleanup, gc > Fix For: Segment Tar 0.0.14 > > > At the moment it is possible to have concurrent calls to > {{FileStore.cleanup}} and to {{FileStore.compact()}}. The former is called > from the latter and also from {{FileStore.flush()}} (this is tracked in > OAK-4138). We should change this status quo and also make the calls to > {{compact()}} and {{cleanup()}} mutually exclusive. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
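Making {{compact()}} and {{cleanup()}} mutually exclusive can be done by guarding both entry points with a single lock. A hedged sketch under assumed names (this is not Oak's actual FileStore code, where the second operation might instead be skipped or queued):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: one shared lock makes compact() and cleanup() mutually
// exclusive, so neither can run while the other is in progress.
public class GcOperations {
    private final ReentrantLock gcLock = new ReentrantLock();
    public int compactions = 0;
    public int cleanups = 0;

    public void compact() {
        gcLock.lock();
        try {
            compactions++; // placeholder for the real compaction work
        } finally {
            gcLock.unlock();
        }
    }

    public void cleanup() {
        gcLock.lock();
        try {
            cleanups++; // placeholder for the real cleanup work
        } finally {
            gcLock.unlock();
        }
    }
}
```

With this shape, a cleanup invoked from flush and a cleanup invoked from compaction can no longer interleave, which is the status quo the issue wants to change.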
[jira] [Resolved] (OAK-4621) External invocation of background operations
[ https://issues.apache.org/jira/browse/OAK-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig resolved OAK-4621. Resolution: Fixed Fixed at http://svn.apache.org/viewvc?rev=1763539&view=rev > External invocation of background operations > > > Key: OAK-4621 > URL: https://issues.apache.org/jira/browse/OAK-4621 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: technical_debt > Fix For: Segment Tar 0.0.14 > > > The background operations (flush, compact, cleanup, etc.) are historically > part of the implementation of the {{FileStore}}. They should better be > scheduled and invoked by an external agent. The code deploying the > {{FileStore}} might have better insights on when and how these background > operations should be invoked. See also OAK-3468. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-4895) FileStore cleanup should not leak out file handles
[ https://issues.apache.org/jira/browse/OAK-4895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig resolved OAK-4895. Resolution: Fixed Fixed at http://svn.apache.org/viewvc?rev=1763540&view=rev > FileStore cleanup should not leak out file handles > -- > > Key: OAK-4895 > URL: https://issues.apache.org/jira/browse/OAK-4895 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: technical_debt > Fix For: Segment Tar 0.0.14 > > > {{FileStore.cleanup()}} currently returns a list of {{File}} instances > relying on the caller to remove those files. This breaks encapsulation as the > file store is the sole owner of these files and only the file store should be > removing them. > I suggest to replace the current cleanup method with one that returns > {{void}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
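The proposed encapsulation change amounts to moving file deletion inside the store. A minimal sketch with illustrative names (string paths stand in for the {{File}} handles the real code manages):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed fix: instead of List<File> cleanup() handing
// reclaimable files to the caller, cleanup() returns void and the store,
// as sole owner of its files, removes them itself. Names are illustrative.
public class FileStoreSketch {
    private final List<String> reclaimable = new ArrayList<>();
    private final List<String> removed = new ArrayList<>();

    public void markReclaimable(String path) {
        reclaimable.add(path);
    }

    // Before: the returned list leaked ownership of the files to callers.
    // After: deletion happens here, keeping file handles encapsulated.
    public void cleanup() {
        removed.addAll(reclaimable); // placeholder for actual file deletion
        reclaimable.clear();
    }

    public List<String> removed() {
        return removed;
    }
}
```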
[jira] [Commented] (OAK-4828) oak-ugrade tests fail with segment tar 0.0.12 (on Windows)
[ https://issues.apache.org/jira/browse/OAK-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551383#comment-15551383 ] Julian Reschke commented on OAK-4828: - Yes, with that, the test does not fail anymore. > oak-ugrade tests fail with segment tar 0.0.12 (on Windows) > -- > > Key: OAK-4828 > URL: https://issues.apache.org/jira/browse/OAK-4828 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.10 >Reporter: Julian Reschke > Fix For: Segment Tar 0.0.16 > > > When the dependency for segment tar is changed from 0.0.10 to 0.0.12 (see > OAK-4823), test cases fail due to files being open and thus not deleted on > Windows: > {noformat} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.781 sec <<< > FAILURE! > upgradeFrom10(org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest) Time > elapsed: 0.781 sec <<< ERROR! > java.io.IOException: Unable to delete file: > target\UpgradeOldSegmentTest\test-repo-new\segmentstore\data0a.tar > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest.upgradeFrom10(UpgradeOldSegmentTest.java:119) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > {noformat} -- This message was sent by 
Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-4900) Enable diff persistent cache by default
Marcel Reutegger created OAK-4900: - Summary: Enable diff persistent cache by default Key: OAK-4900 URL: https://issues.apache.org/jira/browse/OAK-4900 Project: Jackrabbit Oak Issue Type: Improvement Components: core, documentmk Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Minor Fix For: 1.6 The diff persistent cache is important for efficient processing of external changes and should be enabled by default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4828) oak-ugrade tests fail with segment tar 0.0.12 (on Windows)
[ https://issues.apache.org/jira/browse/OAK-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomek Rękawek updated OAK-4828: --- Description: When the dependency for segment tar is changed from 0.0.10 to 0.0.12 (see OAK-4823), test cases fail due to files being open and thus not deleted on Windows: {noformat} Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.781 sec <<< FAILURE! upgradeFrom10(org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest) Time elapsed: 0.781 sec <<< ERROR! java.io.IOException: Unable to delete file: target\UpgradeOldSegmentTest\test-repo-new\segmentstore\data0a.tar at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) at org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest.upgradeFrom10(UpgradeOldSegmentTest.java:119) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) {noformat} was: When the dependency for segment tar is changed from 0.0.10 to 0.0.12 (see OAK-4823), test cases fail due to files being open and thus not deleted on Windows: {noformat} Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.781 sec <<< FAILURE! upgradeFrom10(org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest) Time elapsed: 0.781 sec <<< ERROR! 
java.io.IOException: Unable to delete file: target\UpgradeOldSegmentTest\test-repo-new\segmentstore\data0a.tar at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) at org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest.upgradeFrom10(UpgradeOldSegmentTest.java:119) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > oak-ugrade tests fail with segment tar 0.0.12 (on Windows) > -- > > Key: OAK-4828 > URL: https://issues.apache.org/jira/browse/OAK-4828 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.10 >Reporter: Julian Reschke > Fix For: Segment Tar 0.0.16 > > > When the dependency for segment tar is changed from 0.0.10 to 0.0.12 (see > OAK-4823), test cases fail due to files being open and thus not deleted on > Windows: > {noformat} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.781 sec <<< > FAILURE! > upgradeFrom10(org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest) Time > elapsed: 0.781 sec <<< ERROR! 
> java.io.IOException: Unable to delete file: > target\UpgradeOldSegmentTest\test-repo-new\segmentstore\data0a.tar > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest.upgradeFrom10(UpgradeOldSegmentTest.java:119) > at sun.reflect.NativeMethodAccessor
[jira] [Commented] (OAK-4899) Include option to stop GC in RevisionGCMBean and RepositoryManagementMBean
[ https://issues.apache.org/jira/browse/OAK-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551324#comment-15551324 ] Chetan Mehrotra commented on OAK-4899: -- +1 > Include option to stop GC in RevisionGCMBean and RepositoryManagementMBean > -- > > Key: OAK-4899 > URL: https://issues.apache.org/jira/browse/OAK-4899 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: management, production, resilience > Fix For: 1.6 > > > With OAK-4765 Oak Segment Tar acquired the capability for stopping a running > revision gc task. This is currently exposed via > {{SegmentRevisionGCMBean.stopCompaction}}. I think it would make sense to expose > this functionality through {{RevisionGCMBean}} and > {{RepositoryManagementMBean}} also/instead. > [~mreutegg], [~alex.parvulescu] WDYT? Could the document node store also > implement this or would we just not support it there? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (OAK-3547) Improve ability of the OakDirectory to recover from unexpected file errors
[ https://issues.apache.org/jira/browse/OAK-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551315#comment-15551315 ] Chetan Mehrotra edited comment on OAK-3547 at 10/6/16 8:28 AM: --- Thanks [~ianeboston] for the feature patch. Some feedback below on current approach List of NodeState using generational approach [1] {noformat} Base state - No file added === /{saveDirectoryListing = true} :data :dir l_1475740652371{state = } l_1475740652360{state = } :data === 3 files added === /{saveDirectoryListing = true} :data :dir l_1475740652371{state = } l_1475740652394{state = foo2,foo2,612,5b41ac56a8454e48af2dd7aa740f71a96a740366;foo1,foo1,233,ff0e1a49171201d638848c3f3e9b003596a81d6b;foo0,foo0,73,7feabefe2276000fa7722d6536183a508816d2c7;} l_1475740652360{state = } :data foo2{blobSize = 1047552, jcr:lastModified = 1475740652394, jcr:data = [1 binaries], uniqueKey = 84dba956e1634dc14812db9a7ae6a81c} foo1{blobSize = 1047552, jcr:lastModified = 1475740652393, jcr:data = [1 binaries], uniqueKey = a9505647e35a7f0e0365adbc2531c91f} foo0{blobSize = 1047552, jcr:lastModified = 1475740652392, jcr:data = [1 binaries], uniqueKey = 937a0057a90df312f452aceb96f40c8c} === 3 'bar' files added === /{saveDirectoryListing = true} :data :dir l_1475740652371{state = } l_1475740652407{state = foo2,foo2,612,5b41ac56a8454e48af2dd7aa740f71a96a740366;bar2,bar2,656,2ea3ee49550e65f45d2e8d706e1a3bcef2d4a8b3;foo1,foo1,233,ff0e1a49171201d638848c3f3e9b003596a81d6b;bar1,bar1,953,a296f858ccb93414d6e99ca7a1997fb711faa65d;foo0,foo0,73,7feabefe2276000fa7722d6536183a508816d2c7;bar0,bar0,433,83ad0b2e68f114aa901c9eee0715c4507cd5dfe1;} l_1475740652394{state = foo2,foo2,612,5b41ac56a8454e48af2dd7aa740f71a96a740366;foo1,foo1,233,ff0e1a49171201d638848c3f3e9b003596a81d6b;foo0,foo0,73,7feabefe2276000fa7722d6536183a508816d2c7;} l_1475740652360{state = } :data foo2{blobSize = 1047552, jcr:lastModified = 1475740652394, jcr:data = [1 binaries], uniqueKey = 
84dba956e1634dc14812db9a7ae6a81c} foo1{blobSize = 1047552, jcr:lastModified = 1475740652393, jcr:data = [1 binaries], uniqueKey = a9505647e35a7f0e0365adbc2531c91f} foo0{blobSize = 1047552, jcr:lastModified = 1475740652392, jcr:data = [1 binaries], uniqueKey = 937a0057a90df312f452aceb96f40c8c} bar1{blobSize = 1047552, jcr:lastModified = 1475740652406, jcr:data = [1 binaries], uniqueKey = 5ff21364ce67199cddd14833f0614d73} bar2{blobSize = 1047552, jcr:lastModified = 1475740652406, jcr:data = [1 binaries], uniqueKey = c10e5f46e90e01d927f0aee82980ac6d} bar0{blobSize = 1047552, jcr:lastModified = 1475740652405, jcr:data = [1 binaries], uniqueKey = 0f32d06997a4fec9a90060b863cda454} === {noformat} * Empty {{:data}} created under {{:data}}. Is this required, or possibly due to the bug below? In line #2 same builder should be used {code} private SimpleDirectoryListing(@Nonnull IndexDefinition definition, @Nonnull NodeBuilder builder) { this.definition = definition; this.directoryBuilder = getOrCreateChild(builder, INDEX_DATA_CHILD_NAME); this.fileNames.addAll(getListing()); } {code} * load and save are called every time even if no change is done. This adds empty l_ddd nodes and should be avoided * LISTING_STATE_PROPERTY - ** It holds encoded listing info stored as a single string property. Hopefully this does not grow very large if the directory content is large. Not sure of typical sizes ** You can possibly use a MultivalueProperty here. * Not sure about the below snippet in the {{load}} method. {code} if (loaded >= 0 ) { return (loaded != childNodes.size() - 1); } {code} * {{doGC}} and {{sync}} methods do not have any test coverage h4. Feature Flag It would be more comforting if this feature were driven by a feature flag: the generation logic is used if enabled, otherwise it defaults to {{GenerationalDirectoryListing}}. We can expose a setting in {{LuceneIndexProviderService}} to lock the new feature so as to enable controlled testing. 
I think having a flag to enable it when it is not being used would be easy. What would be tricky is to have it disabled once enabled ... that aspect can possibly be ignored h4. Effect of corruption on writes Currently if corruption occurs the system would automatically fall back to an older version. This is fine for reads, but for writes this would mean data loss unless indexed, as the async indexer would only index newer stuff. We have 2 options here # Let the async indexer continue but provide some indication that the index is corrupt and a reindex is required in some time - This needs to be highlighted in a prominent way (periodic logs, JMX etc) # Let fallback be used for reads (readOnly == true) but let it fail for writes
[jira] [Assigned] (OAK-4873) Avoid running GC too frequently
[ https://issues.apache.org/jira/browse/OAK-4873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig reassigned OAK-4873: -- Assignee: Michael Dürig > Avoid running GC too frequently > --- > > Key: OAK-4873 > URL: https://issues.apache.org/jira/browse/OAK-4873 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Affects Versions: Segment Tar 0.0.16 >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: compaction, gc > Fix For: Segment Tar 0.0.16 > > > As {{RevisionGCMBean.startRevisionGC()}} can be used to manually invoke a gc > cycle, there is the danger of running into a {{SNFE}} when gc is run multiple > times in quick succession (due to the retention time being based on number of > generations). We should come up with a mechanism to prevent this scenario. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
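One way to prevent the back-to-back gc runs described above is a minimum-interval guard in front of the gc entry point. A sketch under assumed names (Oak might equally solve this via generation-count checks; this only illustrates the interval approach):

```java
// Sketch: reject a GC request if the previous accepted run started less
// than minIntervalMillis ago. Names are illustrative, not Oak's API.
public class GcRateLimiter {
    private final long minIntervalMillis;
    private long lastRunMillis = Long.MIN_VALUE / 2; // "never ran"

    public GcRateLimiter(long minIntervalMillis) {
        this.minIntervalMillis = minIntervalMillis;
    }

    /** Returns true if a GC run is allowed now, recording the start time. */
    public synchronized boolean tryStart(long nowMillis) {
        if (nowMillis - lastRunMillis < minIntervalMillis) {
            return false; // too soon after the previous run
        }
        lastRunMillis = nowMillis;
        return true;
    }
}
```

A `RevisionGCMBean.startRevisionGC()` implementation could consult such a guard and report "gc skipped, ran too recently" instead of starting a run that risks an SNFE.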
[jira] [Assigned] (OAK-4742) Improve FileStoreStatsMBean
[ https://issues.apache.org/jira/browse/OAK-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig reassigned OAK-4742: -- Assignee: Michael Dürig > Improve FileStoreStatsMBean > --- > > Key: OAK-4742 > URL: https://issues.apache.org/jira/browse/OAK-4742 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: monitoring > Fix For: Segment Tar 0.0.16 > > > We should add further data to that MBean (if feasible): > * Number of commits > * Number of commits queuing (blocked on the commit semaphore) > * Percentiles of commit times (exclude queueing time) > * Percentiles of commit queueing times > * Last gc run / size before gc and after gc / time gc took broken down into > the various phases -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (OAK-4617) Align SegmentRevisionGC MBean with new generation based GC
[ https://issues.apache.org/jira/browse/OAK-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig reassigned OAK-4617: -- Assignee: Michael Dürig > Align SegmentRevisionGC MBean with new generation based GC > -- > > Key: OAK-4617 > URL: https://issues.apache.org/jira/browse/OAK-4617 > Project: Jackrabbit Oak > Issue Type: Task > Components: segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: production > Fix For: Segment Tar 0.0.16 > > > The {{SegmentRevisionGC}} MBean still dates back to the old {{oak-segment}}. > We need to review its endpoints and only keep those that make sense for > {{oak-segment-tar}}, adapt the others as necessary and add further > functionality as required. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4828) oak-ugrade tests fail with segment tar 0.0.12 (on Windows)
[ https://issues.apache.org/jira/browse/OAK-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551298#comment-15551298 ] Michael Dürig commented on OAK-4828: OAK-4274 is the actual root cause of this issue. > oak-ugrade tests fail with segment tar 0.0.12 (on Windows) > -- > > Key: OAK-4828 > URL: https://issues.apache.org/jira/browse/OAK-4828 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.10 >Reporter: Julian Reschke > Fix For: Segment Tar 0.0.16 > > > When the dependency for segment tar is changed from 0.0.10 to 0.0.12 (see > OAK-4823), test cases fail due to files being open and thus not deleted on > Windows: > {noformat} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.781 sec <<< > FAILURE! > upgradeFrom10(org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest) Time > elapsed: 0.781 sec <<< ERROR! > java.io.IOException: Unable to delete file: > target\UpgradeOldSegmentTest\test-repo-new\segmentstore\data0a.tar > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest.upgradeFrom10(UpgradeOldSegmentTest.java:119) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) -- This message was sent by Atlassian JIRA 
(v6.3.4#6332)
[jira] [Commented] (OAK-4899) Include option to stop GC in RevisionGCMBean and RepositoryManagementMBean
[ https://issues.apache.org/jira/browse/OAK-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551295#comment-15551295 ] Marcel Reutegger commented on OAK-4899: --- It shouldn't be too difficult to also implement this functionality for the DocumentNodeStore. > Include option to stop GC in RevisionGCMBean and RepositoryManagementMBean > -- > > Key: OAK-4899 > URL: https://issues.apache.org/jira/browse/OAK-4899 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: management, production, resilience > Fix For: 1.6 > > > With OAK-4765 Oak Segment Tar acquired the capability for stopping a running > revision gc task. This is currently exposed via > {{SegmentRevisionGCMBean.stopCompaction}}. I think it would make sense to expose > this functionality through {{RevisionGCMBean}} and > {{RepositoryManagementMBean}} also/instead. > [~mreutegg], [~alex.parvulescu] WDYT? Could the document node store also > implement this or would we just not support it there? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4621) External invocation of background operations
[ https://issues.apache.org/jira/browse/OAK-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-4621: --- Fix Version/s: Segment Tar 0.0.14 > External invocation of background operations > > > Key: OAK-4621 > URL: https://issues.apache.org/jira/browse/OAK-4621 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: technical_debt > Fix For: Segment Tar 0.0.14 > > > The background operations (flush, compact, cleanup, etc.) are historically > part of the implementation of the {{FileStore}}. They would be better > scheduled and invoked by an external agent: the code deploying the > {{FileStore}} may have better insight into when and how these background > operations should be invoked. See also OAK-3468. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4895) FileStore cleanup should not leak out file handles
[ https://issues.apache.org/jira/browse/OAK-4895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551258#comment-15551258 ] Francesco Mari commented on OAK-4895: - As soon as OAK-4138 is committed, this should immediately follow! > FileStore cleanup should not leak out file handles > -- > > Key: OAK-4895 > URL: https://issues.apache.org/jira/browse/OAK-4895 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: technical_debt > Fix For: Segment Tar 0.0.14 > > > {{FileStore.cleanup()}} currently returns a list of {{File}} instances, > relying on the caller to remove those files. This breaks encapsulation, as the > file store is the sole owner of these files and only the file store should be > removing them. > I suggest replacing the current cleanup method with one that returns > {{void}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
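[Editorial note] The issue above proposes changing {{FileStore.cleanup()}} to return {{void}} so the store keeps ownership of its files. A minimal sketch of that shape; the class and method names here are illustrative stand-ins, not the actual oak-segment-tar code:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed API: cleanup() deletes the reclaimable
// files itself and returns void, instead of handing a List<File> to the
// caller (which leaks ownership of the files out of the store).
public class FileStoreSketch {
    private final List<File> reclaimable = new ArrayList<>();

    void markReclaimable(File f) {
        reclaimable.add(f);
    }

    // The store is the sole owner of its files, so it removes them itself.
    void cleanup() {
        for (File f : reclaimable) {
            f.delete();
        }
        reclaimable.clear();
    }

    public static void main(String[] args) throws IOException {
        File tmp = Files.createTempFile("data0a", ".tar").toFile();
        FileStoreSketch store = new FileStoreSketch();
        store.markReclaimable(tmp);
        store.cleanup();
        System.out.println("deleted: " + !tmp.exists()); // prints "deleted: true"
    }
}
```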
[jira] [Commented] (OAK-4138) Decouple revision cleanup from the flush thread
[ https://issues.apache.org/jira/browse/OAK-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551254#comment-15551254 ] Francesco Mari commented on OAK-4138: - I went through the commits. The work done so far looks good to me. > Decouple revision cleanup from the flush thread > --- > > Key: OAK-4138 > URL: https://issues.apache.org/jira/browse/OAK-4138 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: resilience > Fix For: Segment Tar 0.0.14 > > > I suggest we decouple revision cleanup from the flush thread. With large > repositories, where cleanup can take several minutes to complete, it blocks the > flush thread from updating the journal and the persisted head, thus resulting > in larger than necessary data loss in case of a crash. > /cc [~alex.parvulescu] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
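[Editorial note] The decoupling suggested in OAK-4138 can be sketched with two independent single-thread executors, so a slow cleanup no longer delays flush (and hence journal/persisted-head updates). This only illustrates the scheduling idea, not the actual FileStore code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DecoupledGcSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService flushThread = Executors.newSingleThreadExecutor();
        ExecutorService cleanupThread = Executors.newSingleThreadExecutor();
        CountDownLatch cleanupStarted = new CountDownLatch(1);

        // Simulate a long-running cleanup on its own thread.
        cleanupThread.execute(() -> {
            cleanupStarted.countDown();
            try {
                Thread.sleep(500); // stands in for minutes of revision cleanup
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        });
        cleanupStarted.await();

        // Flush runs on a separate thread, so it completes while cleanup is
        // still busy: the persisted head keeps getting updated.
        boolean flushed = flushThread.submit(() -> true).get(1, TimeUnit.SECONDS);
        System.out.println("flushed while cleanup running: " + flushed);

        flushThread.shutdown();
        cleanupThread.shutdownNow();
    }
}
```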
[jira] [Commented] (OAK-4617) Align SegmentRevisionGC MBean with new generation based GC
[ https://issues.apache.org/jira/browse/OAK-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551255#comment-15551255 ] Michael Dürig commented on OAK-4617: We also need to consider the outcome of OAK-4899 here, i.e. whether {{SegmentRevisionGCMBean.stopCompaction}} should be moved to those other MBeans, duplicated there, or neither. > Align SegmentRevisionGC MBean with new generation based GC > -- > > Key: OAK-4617 > URL: https://issues.apache.org/jira/browse/OAK-4617 > Project: Jackrabbit Oak > Issue Type: Task > Components: segment-tar >Reporter: Michael Dürig > Labels: production > Fix For: Segment Tar 0.0.16 > > > The {{SegmentRevisionGC}} MBean still dates back to the old {{oak-segment}}. > We need to review its endpoints and only keep those that make sense for > {{oak-segment-tar}}, adapt the others as necessary and add further > functionality as required. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-4899) Include option to stop GC in RevisionGCMBean and RepositoryManagementMBean
Michael Dürig created OAK-4899: -- Summary: Include option to stop GC in RevisionGCMBean and RepositoryManagementMBean Key: OAK-4899 URL: https://issues.apache.org/jira/browse/OAK-4899 Project: Jackrabbit Oak Issue Type: Improvement Components: core Reporter: Michael Dürig Assignee: Michael Dürig Fix For: 1.6 With OAK-4765 Oak Segment Tar acquired the capability for stopping a running revision gc task. This is currently exposed via {{SegmentRevisionGCMBean.stopCompaction}}. I think it would make sense to expose this functionality through {{RevisionGCMBean}} and {{RepositoryManagementMBean}} also/instead. [~mreutegg], [~alex.parvulescu] WDYT? Could the document node store also implement this or would we just not support it there? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
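[Editorial note] A stop operation like the one proposed in OAK-4899 could be backed by a simple cancellation flag that the gc loop polls between units of work. A sketch under that assumption; the method names such as cancelRevisionGC are hypothetical, not the actual MBean signatures:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch only: how a stop operation on a revision-gc MBean could work.
// The real RevisionGCMBean/RepositoryManagementMBean signatures may differ.
public class RevisionGcSketch {
    private final AtomicBoolean cancelled = new AtomicBoolean(false);
    private int cyclesCompleted = 0;

    // Hypothetical MBean operation: request the running gc task to stop.
    public void cancelRevisionGC() {
        cancelled.set(true);
    }

    // The gc loop checks the flag between units of work, so a cancel
    // request takes effect at the next safe point rather than mid-cycle.
    public int runRevisionGC(int cycles) {
        for (int i = 0; i < cycles && !cancelled.get(); i++) {
            cyclesCompleted++; // stands in for one unit of compaction/cleanup
        }
        return cyclesCompleted;
    }

    public static void main(String[] args) {
        RevisionGcSketch gc = new RevisionGcSketch();
        gc.cancelRevisionGC();                    // stop requested up front
        System.out.println(gc.runRevisionGC(10)); // prints 0: no work done
    }
}
```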
[jira] [Comment Edited] (OAK-4861) AsyncIndexUpdate consuming constantly 1 CPU core
[ https://issues.apache.org/jira/browse/OAK-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551155#comment-15551155 ] Elemer Kisch edited comment on OAK-4861 at 10/6/16 7:38 AM: Commented by [~jhoh] in the DC: "As Chetan pointed out to me, that CopyOnWrite has not been enabled on this environment and also the "indexPath" property was missing on /oak:index/lucene, I redid the tests on a different environment (hardening env). Here the closing the IndexWriter takes about 10 sec, both with CopyOnWrite disabled and enabled, with or without the indexPath property being present on /oak:index/lucene. I attach the logfile with the mentioned loggers being set to Trace." Please contact me for the logfiles. was (Author: elemer): Commented by [~jhoh] in the DC: "As Chetan pointed out to me, that CopyOnWrite has not been enabled on this environment and also the "indexPath" property was missing on /oak:index/lucene, I redid the tests on a different environment (hardening env). Here the closing the IndexWriter takes about 10 sec, both with CopyOnWrite disabled and enabled, with or without the indexPath property being present on /oak:index/lucene. I attach the logfile with the mentioned loggers being set to Trace." > AsyncIndexUpdate consuming constantly 1 CPU core > > > Key: OAK-4861 > URL: https://issues.apache.org/jira/browse/OAK-4861 > Project: Jackrabbit Oak > Issue Type: Improvement >Affects Versions: 1.0.31 >Reporter: Jörg Hoh > > The AsyncIndexUpdate thread takes a long time on some of our environments to > update the index for a few nodes: > {noformat} > 28.09.2016 18:23:49.405 *DEBUG* [pool-11-thread-1] > org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate AsyncIndex (async) > update run completed in 3,369 min. 
Indexed 14 nodes > {noformat} > I set the log to DEBUG for investigation and during the time I saw constantly > such times; this instance is a test instance and there was no activity from > the outside during this time. I also can confirm such times from other > environments. > The immediately visible problem is that this instance (and others as well) > constantly consume at least 1 CPU core, no matter what activity happens on > the system. This only happens on the master node of our DocumentNodeStore > clusters. > I did a number of threaddumps during investigation and a common stacktrace > for the AsyncIndexUpdate was like this: > {noformat} > 3XMTHREADINFO "pool-11-thread-1" J9VMThread:0x01E69C00, > j9thread_t:0x7FFFBE101CB0, java/lang/Thread:0x0001077FE938, state:R, > prio=5 > 3XMJAVALTHREAD (java/lang/Thread getId:0x10A, isDaemon:false) > 3XMTHREADINFO1 (native thread ID:0x3C781, native priority:0x5, native > policy:UNKNOWN, vmstate:CW, vm thread flags:0x0001) > 3XMTHREADINFO2 (native stack address range from:0x7FFFC696A000, > to:0x7FFFC69AB000, size:0x41000) > 3XMCPUTIME CPU usage total: 992425.397213554 secs, current > category="Application" > 3XMHEAPALLOC Heap bytes allocated since last GC cycle=735893672 (0x2BDCD8A8) > 3XMTHREADINFO3 Java callstack: > 4XESTACKTRACE at > com/google/common/collect/Lists.newArrayList(Lists.java:128(Compiled Code)) > 4XESTACKTRACE at > org/apache/jackrabbit/oak/plugins/index/lucene/OakDirectory$OakIndexFile.<init>(OakDirectory.java:255(Compiled > Code)) > 4XESTACKTRACE at > org/apache/jackrabbit/oak/plugins/index/lucene/OakDirectory$OakIndexFile.<init>(OakDirectory.java:201(Compiled > Code)) > 4XESTACKTRACE at > org/apache/jackrabbit/oak/plugins/index/lucene/OakDirectory$OakIndexInput.<init>(OakDirectory.java:400(Compiled > Code)) > 4XESTACKTRACE at > org/apache/jackrabbit/oak/plugins/index/lucene/OakDirectory$OakIndexInput.clone(OakDirectory.java:408(Compiled > Code)) > 4XESTACKTRACE at > org/apache/jackrabbit/oak/plugins/index/lucene/OakDirectory$OakIndexInput.clone(OakDirectory.java:384(Compiled > Code)) > 4XESTACKTRACE at > org/apache/lucene/store/Directory$SlicedIndexInput.clone(Directory.java:288(Compiled > Code)) > 4XESTACKTRACE at > org/apache/lucene/store/Directory$SlicedIndexInput.clone(Directory.java:269(Compiled > Code)) > 4XESTACKTRACE at > org/apache/lucene/codecs/BlockTreeTermsReader$FieldReader.<init>(BlockTreeTermsReader.java:481(Compiled > Code)) > 4XESTACKTRACE at > org/apache/lucene/codecs/BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:176(Compiled > Code)) > 4XESTACKTRACE at > org/apache/lucene/codecs/lucene41/Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:437(Compiled > Code)) > 4XESTACKTRACE at > org/apache/lucene/index/SegmentCoreReaders.<init>(SegmentCoreReaders.java:116(Compiled > Code)) > 4XESTACKTRACE at > org/apache/lucene/index/SegmentReader.<init>(SegmentReader.java:96(Compiled > Code)) > 4XESTACKTRACE at > org/apache/lucene/index
[jira] [Updated] (OAK-4861) AsyncIndexUpdate consuming constantly 1 CPU core
[ https://issues.apache.org/jira/browse/OAK-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elemer Kisch updated OAK-4861: -- Attachment: (was: si09ent-cqa-243-async-index-update.log) > AsyncIndexUpdate consuming constantly 1 CPU core > > > Key: OAK-4861 > URL: https://issues.apache.org/jira/browse/OAK-4861 > Project: Jackrabbit Oak > Issue Type: Improvement >Affects Versions: 1.0.31 >Reporter: Jörg Hoh -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4898) Allow for external changes to have a CommitInfo attached
[ https://issues.apache.org/jira/browse/OAK-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551192#comment-15551192 ] Stefan Egli commented on OAK-4898: -- +1 Besides allowing the additional information mentioned, I think we should change the logic so that CommitInfo is never null - even in the overflow case - for the same reason: it would allow passing state between/to Observers. > Allow for external changes to have a CommitInfo attached > > > Key: OAK-4898 > URL: https://issues.apache.org/jira/browse/OAK-4898 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Chetan Mehrotra > Fix For: 1.6 > > > Currently the observation logic relies on the fact that a null CommitInfo > means that the changes are from another cluster node, i.e. external changes. > We should change this semantic and provide a different way to indicate that > changes are external. This would allow a NodeStore implementation to still > pass in a CommitInfo which captures useful information about the commit, like > a brief summary of what changed, which can be used for pre-filtering > (OAK-4796) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
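[Editorial note] The semantics discussed in OAK-4898 (a null CommitInfo means "external") and the proposed alternative (an explicit marker, so CommitInfo is never null and can still carry data) can be sketched as follows. The types are simplified stand-ins, not Oak's actual CommitInfo/Observer API:

```java
import java.util.Collections;
import java.util.Map;

// Simplified stand-in for Oak's CommitInfo; not the real API.
public class CommitInfoSketch {
    private final boolean external;
    private final Map<String, Object> info;

    public CommitInfoSketch(boolean external, Map<String, Object> info) {
        this.external = external;
        this.info = info;
    }

    // Current semantics: observers treat a null CommitInfo as external,
    // so external changes can carry no information at all.
    static boolean isExternalToday(CommitInfoSketch info) {
        return info == null;
    }

    // Proposed semantics: an explicit flag, so even external changes can
    // carry e.g. a changed-paths summary usable for pre-filtering (OAK-4796).
    static boolean isExternalProposed(CommitInfoSketch info) {
        return info.external;
    }

    public static void main(String[] args) {
        CommitInfoSketch ext = new CommitInfoSketch(true,
                Collections.singletonMap("changedPaths", "/content"));
        // External, yet still carries commit metadata.
        System.out.println(isExternalProposed(ext) + " " + ext.info);
    }
}
```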
[jira] [Commented] (OAK-4828) oak-ugrade tests fail with segment tar 0.0.12 (on Windows)
[ https://issues.apache.org/jira/browse/OAK-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551171#comment-15551171 ] Tomek Rękawek commented on OAK-4828: Get the latest trunk (r>=1763502) and in the {{oak-parent/pom.xml}} set {{segment.tar.version}} to {{0.0.12}}. > oak-ugrade tests fail with segment tar 0.0.12 (on Windows) > -- > > Key: OAK-4828 > URL: https://issues.apache.org/jira/browse/OAK-4828 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.10 >Reporter: Julian Reschke > Fix For: Segment Tar 0.0.16 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4828) oak-ugrade tests fail with segment tar 0.0.12 (on Windows)
[ https://issues.apache.org/jira/browse/OAK-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551164#comment-15551164 ] Julian Reschke commented on OAK-4828: - Sure; just tell me precisely what to change... > oak-ugrade tests fail with segment tar 0.0.12 (on Windows) > -- > > Key: OAK-4828 > URL: https://issues.apache.org/jira/browse/OAK-4828 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.10 >Reporter: Julian Reschke > Fix For: Segment Tar 0.0.16 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4828) oak-ugrade tests fail with segment tar 0.0.12 (on Windows)
[ https://issues.apache.org/jira/browse/OAK-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551161#comment-15551161 ] Michael Dürig commented on OAK-4828: From the {{FileChannel.map}} Javadoc: {{A mapping, once established, is not dependent upon the file channel that was used to create it. Closing the channel, in particular, has no effect upon the validity of the mapping.}} Effectively this means that the mapping stays valid until the JVM's gc collects the buffer returned by {{map}}. While *nix-like OSs allow open files to be deleted, Windows doesn't, and this is most likely what we are seeing here. There is not much we can do here, I guess. See also http://bugs.java.com/view_bug.do?bug_id=4715154 > oak-ugrade tests fail with segment tar 0.0.12 (on Windows) > -- > > Key: OAK-4828 > URL: https://issues.apache.org/jira/browse/OAK-4828 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.10 >Reporter: Julian Reschke > Fix For: Segment Tar 0.0.16 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
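[Editorial note] The {{FileChannel.map}} behaviour cited in the comment above can be observed directly: the mapping stays readable after the channel is closed, which is why on Windows the mapped file can only be deleted once the buffer is garbage collected. A small demonstration (the delete behaviour is platform dependent, so it is only attempted, not checked):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class MapAfterCloseDemo {
    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("segment", ".tar");
        Files.write(p, new byte[] { 42 });

        MappedByteBuffer buf;
        try (RandomAccessFile raf = new RandomAccessFile(p.toFile(), "r");
             FileChannel ch = raf.getChannel()) {
            buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, 1);
        }

        // The channel is closed here, yet the mapping remains valid.
        System.out.println(buf.get(0)); // prints 42

        // On *nix this succeeds even while the mapping exists; on Windows
        // the live mapping can make the deletion fail (JDK bug 4715154).
        Files.deleteIfExists(p);
    }
}
```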
[jira] [Updated] (OAK-4861) AsyncIndexUpdate consuming constantly 1 CPU core
[ https://issues.apache.org/jira/browse/OAK-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elemer Kisch updated OAK-4861: -- Attachment: si09ent-cqa-243-async-index-update.log > AsyncIndexUpdate consuming constantly 1 CPU core > > > Key: OAK-4861 > URL: https://issues.apache.org/jira/browse/OAK-4861 > Project: Jackrabbit Oak > Issue Type: Improvement >Affects Versions: 1.0.31 >Reporter: Jörg Hoh > Attachments: si09ent-cqa-243-async-index-update.log -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4861) AsyncIndexUpdate consuming constantly 1 CPU core
[ https://issues.apache.org/jira/browse/OAK-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551155#comment-15551155 ] Elemer Kisch commented on OAK-4861: --- Commented by [~jhoh] in the DC: "As Chetan pointed out to me, that CopyOnWrite has not been enabled on this environment and also the "indexPath" property was missing on /oak:index/lucene, I redid the tests on a different environment (hardening env). Here the closing the IndexWriter takes about 10 sec, both with CopyOnWrite disabled and enabled, with or without the indexPath property being present on /oak:index/lucene. I attach the logfile with the mentioned loggers being set to Trace." > AsyncIndexUpdate consuming constantly 1 CPU core > > > Key: OAK-4861 > URL: https://issues.apache.org/jira/browse/OAK-4861 > Project: Jackrabbit Oak > Issue Type: Improvement >Affects Versions: 1.0.31 >Reporter: Jörg Hoh > Attachments: si09ent-cqa-243-async-index-update.log > > > The AsyncIndexUpdate thread takes a long time on some of our environments to > update the index for a few nodes: > {noformat} > 28.09.2016 18:23:49.405 *DEBUG* [pool-11-thread-1] > org.apache.jackrabbit.oak.plugins.index.AsyncIndexUpdate AsyncIndex (async) > update run completed in 3,369 min. Indexed 14 nodes > {noformat} > I set the log to DEBUG for investigation and during the time I saw constantly > such times; this instance is a test instance and there was no activity from > the outside during this time. I also can confirm such times from other > environments. > The immediately visible problem is that this instance (and others as well) > constantly consume at least 1 CPU core, no matter what activity happens on > the system. This only happens on the master node of our DocumentNodeStore > clusters. 
> I did a number of threaddumps during investigation, and a common stacktrace for the AsyncIndexUpdate was like this:
> {noformat}
> 3XMTHREADINFO "pool-11-thread-1" J9VMThread:0x01E69C00, j9thread_t:0x7FFFBE101CB0, java/lang/Thread:0x0001077FE938, state:R, prio=5
> 3XMJAVALTHREAD (java/lang/Thread getId:0x10A, isDaemon:false)
> 3XMTHREADINFO1 (native thread ID:0x3C781, native priority:0x5, native policy:UNKNOWN, vmstate:CW, vm thread flags:0x0001)
> 3XMTHREADINFO2 (native stack address range from:0x7FFFC696A000, to:0x7FFFC69AB000, size:0x41000)
> 3XMCPUTIME CPU usage total: 992425.397213554 secs, current category="Application"
> 3XMHEAPALLOC Heap bytes allocated since last GC cycle=735893672 (0x2BDCD8A8)
> 3XMTHREADINFO3 Java callstack:
> 4XESTACKTRACE at com/google/common/collect/Lists.newArrayList(Lists.java:128(Compiled Code))
> 4XESTACKTRACE at org/apache/jackrabbit/oak/plugins/index/lucene/OakDirectory$OakIndexFile.<init>(OakDirectory.java:255(Compiled Code))
> 4XESTACKTRACE at org/apache/jackrabbit/oak/plugins/index/lucene/OakDirectory$OakIndexFile.<init>(OakDirectory.java:201(Compiled Code))
> 4XESTACKTRACE at org/apache/jackrabbit/oak/plugins/index/lucene/OakDirectory$OakIndexInput.<init>(OakDirectory.java:400(Compiled Code))
> 4XESTACKTRACE at org/apache/jackrabbit/oak/plugins/index/lucene/OakDirectory$OakIndexInput.clone(OakDirectory.java:408(Compiled Code))
> 4XESTACKTRACE at org/apache/jackrabbit/oak/plugins/index/lucene/OakDirectory$OakIndexInput.clone(OakDirectory.java:384(Compiled Code))
> 4XESTACKTRACE at org/apache/lucene/store/Directory$SlicedIndexInput.clone(Directory.java:288(Compiled Code))
> 4XESTACKTRACE at org/apache/lucene/store/Directory$SlicedIndexInput.clone(Directory.java:269(Compiled Code))
> 4XESTACKTRACE at org/apache/lucene/codecs/BlockTreeTermsReader$FieldReader.<init>(BlockTreeTermsReader.java:481(Compiled Code))
> 4XESTACKTRACE at org/apache/lucene/codecs/BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:176(Compiled Code))
> 4XESTACKTRACE at org/apache/lucene/codecs/lucene41/Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:437(Compiled Code))
> 4XESTACKTRACE at org/apache/lucene/index/SegmentCoreReaders.<init>(SegmentCoreReaders.java:116(Compiled Code))
> 4XESTACKTRACE at org/apache/lucene/index/SegmentReader.<init>(SegmentReader.java:96(Compiled Code))
> 4XESTACKTRACE at org/apache/lucene/index/ReadersAndUpdates.getReader(ReadersAndUpdates.java:141(Compiled Code))
> 4XESTACKTRACE at org/apache/lucene/index/BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:279(Compiled Code))
> 5XESTACKTRACE (entered lock: org/apache/lucene/index/BufferedUpdatesStream@0x0002245B4100, entry count: 1)
> 4XESTACKTRACE at org/apache/lucene/index/IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3191(Compiled Code))
> 5XESTACKTRACE (entered lock: org/apache/lucene/index/IndexWriter@0x0002245B
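The hot frames above hint at why one core stays busy: every OakIndexInput.clone() constructs a new OakIndexFile, and the Lists.newArrayList frame shows that constructor copying the file's block list, so each clone costs O(number of blocks). A minimal sketch of that cost pattern, with hypothetical class and field names rather than Oak's actual ones:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplification of the pattern in the stack trace: the
// "file" keeps its data in a list of blocks, and every construction
// (and therefore every clone) eagerly copies that list.
class IndexFile {
    final List<byte[]> blocks;   // data blocks backing the index file

    IndexFile(List<byte[]> blocks) {
        // Eager copy on every construction -- the Lists.newArrayList
        // frame in the dump. With many blocks this is the CPU burner.
        this.blocks = new ArrayList<>(blocks);
    }

    IndexFile cloneFile() {
        return new IndexFile(blocks);   // copies the whole list again
    }
}

public class CloneCost {
    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            blocks.add(new byte[0]);
        }
        IndexFile file = new IndexFile(blocks);
        for (int i = 0; i < 100; i++) {
            // Lucene clones inputs for each segment/field it opens, so
            // 100 clones here already mean 10 million element copies.
            file = file.cloneFile();
        }
        System.out.println("copied " + file.blocks.size() + " blocks per clone");
    }
}
```

Sharing an immutable block list between clones (each clone only needing its own read position) would make clone() O(1); whether that is the right fix for OakDirectory is for the issue to decide.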
[jira] [Commented] (OAK-4828) oak-ugrade tests fail with segment tar 0.0.12 (on Windows)
[ https://issues.apache.org/jira/browse/OAK-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551119#comment-15551119 ]

Tomek Rękawek commented on OAK-4828:
------------------------------------

[~julian.resc...@gmx.de] - do you mind testing if the current oak-upgrade version works after bumping the segment-tar version?

> oak-ugrade tests fail with segment tar 0.0.12 (on Windows)
> --
>
> Key: OAK-4828
> URL: https://issues.apache.org/jira/browse/OAK-4828
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: segment-tar
> Affects Versions: 1.5.10
> Reporter: Julian Reschke
> Fix For: Segment Tar 0.0.16
>
> When the dependency for segment tar is changed from 0.0.10 to 0.0.12 (see OAK-4823), test cases fail due to files being open and thus not deleted on Windows:
> {noformat}
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.781 sec <<< FAILURE!
> upgradeFrom10(org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest) Time elapsed: 0.781 sec <<< ERROR!
> java.io.IOException: Unable to delete file: target\UpgradeOldSegmentTest\test-repo-new\segmentstore\data0a.tar
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.jackrabbit.oak.upgrade.UpgradeOldSegmentTest.upgradeFrom10(UpgradeOldSegmentTest.java:119)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (OAK-4828) oak-ugrade tests fail with segment tar 0.0.12 (on Windows)
[ https://issues.apache.org/jira/browse/OAK-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15551105#comment-15551105 ]

Tomek Rękawek commented on OAK-4828:
------------------------------------

From what I saw, if you use an MMAPed FileStore on Windows, the files are not released even though the FileStore is closed. That's the reason for the exception. In [r1763502|https://svn.apache.org/r1763502] I'm using a custom version of deleteRecursively that doesn't throw an exception but logs the causing IOException (so the segment-tar version can be bumped). The root cause is still worth investigating.

Reproduction steps:
a) create a FileStore with MMAP enabled
b) close the FileStore
c) remove the repository directory D
d) check if the directory D exists

On Windows either c) or d) fails.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
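The workaround described in the comment above, a recursive delete that logs failures instead of aborting the whole walk, can be sketched as follows. All class and method names here are illustrative, not the actual code committed in r1763502:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// Best-effort recursive delete: if Windows still holds a memory-mapped
// tar file open, log the IOException and continue instead of throwing,
// so one locked file doesn't fail the whole cleanup.
public class BestEffortDelete {

    public static void deleteRecursively(Path root) throws IOException {
        if (!Files.exists(root)) {
            return;
        }
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                tryDelete(file);
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult postVisitDirectory(Path dir, IOException exc) {
                tryDelete(dir);   // delete the directory after its contents
                return FileVisitResult.CONTINUE;
            }

            private void tryDelete(Path path) {
                try {
                    Files.delete(path);
                } catch (IOException e) {
                    // e.g. a still-mmapped data0a.tar on Windows
                    System.err.println("Could not delete " + path + ": " + e);
                }
            }
        });
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("segmentstore");
        Files.createFile(dir.resolve("data0a.tar"));
        deleteRecursively(dir);
        System.out.println("exists after delete: " + Files.exists(dir));
    }
}
```

On platforms that release mappings on close, the directory is gone afterwards; on Windows with a live mapping, the failure is merely logged, which matches the reproduction steps where step c) or d) fails.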