[jira] [Commented] (OAK-3862) Move integration tests in a different Maven module
[ https://issues.apache.org/jira/browse/OAK-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15093484#comment-15093484 ]

Marcel Reutegger commented on OAK-3862:
---------------------------------------

+1

> Move integration tests in a different Maven module
> --------------------------------------------------
>
>          Key: OAK-3862
>          URL: https://issues.apache.org/jira/browse/OAK-3862
>      Project: Jackrabbit Oak
>   Issue Type: Improvement
>     Reporter: Francesco Mari
>     Assignee: Francesco Mari
>      Fix For: 1.4
>
> While moving the Segment Store and related packages into its own bundle, I
> figured out that integration tests contained in {{oak-core}} contribute to a
> cyclic dependency between the (new) {{oak-segment}} bundle and {{oak-core}}.
> The dependency is due to the usage of {{NodeStoreFixture}} to instantiate
> different implementations of {{NodeStore}} in a semi-transparent way.
> Tests depending on {{NodeStoreFixture}} are most likely integration tests. A
> clean solution to this problem would be to move those integration tests into
> a new Maven module, referencing the API and implementation modules as needed.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (OAK-3862) Move integration tests in a different Maven module
[ https://issues.apache.org/jira/browse/OAK-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15093459#comment-15093459 ]

Chetan Mehrotra commented on OAK-3862:
--------------------------------------

bq. The dependency is due to the usage of NodeStoreFixture to instantiate different implementations of NodeStore in a semi-transparent way.

Instead of moving all the other users of NodeStoreFixture to a separate module, maybe we should compose NodeStoreFixture using ServiceLoader.
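The ServiceLoader composition suggested above could look roughly like this (a minimal sketch; {{NodeStoreFixtureProvider}} is a hypothetical SPI name used for illustration, not an existing Oak API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class FixtureLoaderSketch {

    // Hypothetical SPI: each NodeStore implementation module would register
    // an implementation under META-INF/services/...NodeStoreFixtureProvider
    // inside its own jar.
    public interface NodeStoreFixtureProvider {
        String name();
    }

    // Discover all fixture providers on the classpath, so oak-core needs no
    // compile-time dependency on any implementation module.
    public static List<NodeStoreFixtureProvider> loadFixtures() {
        List<NodeStoreFixtureProvider> fixtures = new ArrayList<>();
        for (NodeStoreFixtureProvider provider :
                ServiceLoader.load(NodeStoreFixtureProvider.class)) {
            fixtures.add(provider);
        }
        return fixtures;
    }

    public static void main(String[] args) {
        // With no provider jars on the classpath the list is simply empty;
        // the lookup itself never fails.
        System.out.println(loadFixtures().size());
    }
}
```

This would invert the dependency: oak-core defines the interface, and each store module contributes its fixture at test runtime.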
[jira] [Created] (OAK-3863) [oak-blob-cloud] Incorrect export package
Amit Jain created OAK-3863:
---------------------------

             Summary: [oak-blob-cloud] Incorrect export package
                 Key: OAK-3863
                 URL: https://issues.apache.org/jira/browse/OAK-3863
             Project: Jackrabbit Oak
          Issue Type: Bug
            Reporter: Amit Jain
            Assignee: Amit Jain
             Fix For: 1.2.10, 1.3.14
[jira] [Resolved] (OAK-2643) Migration between different NodeStore implementations should be available as a command in oak-run
[ https://issues.apache.org/jira/browse/OAK-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julian Sedding resolved OAK-2643.
---------------------------------
    Resolution: Duplicate

This was resolved with OAK-2171.

> Migration between different NodeStore implementations should be available as
> a command in oak-run
> -----------------------------------------------------------------------------
>
>          Key: OAK-2643
>          URL: https://issues.apache.org/jira/browse/OAK-2643
>      Project: Jackrabbit Oak
>   Issue Type: Improvement
>   Components: run
>     Reporter: Manfred Baedke
>     Priority: Minor
>  Attachments: OAK-2643.patch
>
> The class org.apache.jackrabbit.oak.upgrade.RepositorySidegrade features
> migration between different NodeStore types and should be usable from the
> command line as a part of oak-run.
[jira] [Commented] (OAK-3844) Better support for versionable nodes without version histories
[ https://issues.apache.org/jira/browse/OAK-3844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092684#comment-15092684 ]

Julian Sedding commented on OAK-3844:
-------------------------------------

Not sure proceeding silently is a good idea. I'll need to think more about this. Any chance we can reproduce it in a test case?

> Better support for versionable nodes without version histories
> --------------------------------------------------------------
>
>              Key: OAK-3844
>              URL: https://issues.apache.org/jira/browse/OAK-3844
>          Project: Jackrabbit Oak
>       Issue Type: Improvement
>       Components: upgrade
> Affects Versions: 1.3.13
>         Reporter: Tomek Rękawek
>         Assignee: Julian Sedding
>          Fix For: 1.4
>      Attachments: OAK-3844.patch
>
> One of the customers reported the following exception thrown during the
> migration:
> {noformat}
> Caused by: java.lang.IllegalStateException: This builder does not exist: 95a5253f-d37b-4e88-a4b4-0721530344fc
>     at com.google.common.base.Preconditions.checkState(Preconditions.java:150)
>     at org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:506)
>     at org.apache.jackrabbit.oak.plugins.memory.MemoryNodeBuilder.setProperty(MemoryNodeBuilder.java:522)
>     at org.apache.jackrabbit.oak.upgrade.version.VersionableEditor.setVersionablePath(VersionableEditor.java:148)
>     at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeStore.merge(SegmentNodeStore.java:226)
>     ...
>     at org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade.copy(RepositoryUpgrade.java:486)
> {noformat}
> It seems that the node with the reported UUID has a primary type inheriting
> from {{mix:versionable}}, but there is no appropriate version history in the
> version storage.
> Obviously this means that there's something wrong with the repository.
> However, I think that the migration process shouldn't fail, but proceed
> silently.
[jira] [Resolved] (OAK-3849) After partial migration versions are not restorable
[ https://issues.apache.org/jira/browse/OAK-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julian Sedding resolved OAK-3849.
---------------------------------
       Resolution: Fixed
    Fix Version/s: (was: 1.4)
                   1.3.14

Thanks for the patch [~tomek.rekawek], I applied it in [r1724130|https://svn.apache.org/r1724130].

> After partial migration versions are not restorable
> ---------------------------------------------------
>
>          Key: OAK-3849
>          URL: https://issues.apache.org/jira/browse/OAK-3849
>      Project: Jackrabbit Oak
>   Issue Type: Bug
>   Components: upgrade
>     Reporter: Tomek Rękawek
>     Assignee: Julian Sedding
>      Fix For: 1.3.14
>  Attachments: OAK-3849.patch
>
> After migrating a content subtree with referenced versions and starting the
> destination repository, the versions are not available. The reason is that
> the new version histories' UUIDs haven't been indexed.
> We should set {{/oak:index/uuid/reindex}} to {{true}} every time we copy
> a version history, so the new identifiers will be indexed during the merge.
[jira] [Comment Edited] (OAK-3836) Convert simple versionable nodes during upgrade
[ https://issues.apache.org/jira/browse/OAK-3836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092597#comment-15092597 ]

Julian Sedding edited comment on OAK-3836 at 1/11/16 8:52 PM:
--------------------------------------------------------------

Thanks for your patch [~tomek.rekawek]. However, I found a much simpler solution. Check out the changes to [JackrabbitNodeState|https://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-upgrade/src/main/java/org/apache/jackrabbit/oak/upgrade/JackrabbitNodeState.java?r1=1724123&r2=1724122&pathrev=1724123] if you're interested.

Fixed in [r1724123|https://svn.apache.org/r1724123]. Fixed CopyVersionHistorySidegradeTest in [r1724126|https://svn.apache.org/r1724126].

was (Author: jsedding):
Thanks for your patch [~tomek.rekawek]. However, I found a much simpler solution. Check out the changes to [JackrabbitNodeState|https://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-upgrade/src/main/java/org/apache/jackrabbit/oak/upgrade/JackrabbitNodeState.java?r1=1724123&r2=1724122&pathrev=1724123] if you're interested.

Fixed in [r1724123|https://svn.apache.org/r1724123].

> Convert simple versionable nodes during upgrade
> -----------------------------------------------
>
>          Key: OAK-3836
>          URL: https://issues.apache.org/jira/browse/OAK-3836
>      Project: Jackrabbit Oak
>   Issue Type: Improvement
>   Components: upgrade
>     Reporter: Tomek Rękawek
>     Assignee: Julian Sedding
>      Fix For: 1.3.14
>  Attachments: OAK-3836.patch
>
> JCR 2 supports two modes of versioning: simple and full. Oak supports only
> full versioning. We should convert the simple-versionable nodes during
> repository upgrade, so the version history of such items is not lost.
[jira] [Resolved] (OAK-3836) Convert simple versionable nodes during upgrade
[ https://issues.apache.org/jira/browse/OAK-3836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julian Sedding resolved OAK-3836.
---------------------------------
       Resolution: Fixed
    Fix Version/s: (was: 1.4)
                   1.3.14

Thanks for your patch [~tomek.rekawek]. However, I found a much simpler solution. Check out the changes to [JackrabbitNodeState|https://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-upgrade/src/main/java/org/apache/jackrabbit/oak/upgrade/JackrabbitNodeState.java?r1=1724123&r2=1724122&pathrev=1724123] if you're interested. Fixed in [r1724123|https://svn.apache.org/r1724123].
[jira] [Commented] (OAK-3537) Move the Segment Store to its own bundle
[ https://issues.apache.org/jira/browse/OAK-3537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092344#comment-15092344 ]

Francesco Mari commented on OAK-3537:
-------------------------------------

[~mduerig], [~alex.parvulescu], I started the work for this issue in [this branch|https://github.com/francescomari/jackrabbit-oak/tree/bundle]. At the moment, the branch doesn't compile because some code related to integration tests is not accessible - see OAK-3862.

> Move the Segment Store to its own bundle
> ----------------------------------------
>
>          Key: OAK-3537
>          URL: https://issues.apache.org/jira/browse/OAK-3537
>      Project: Jackrabbit Oak
>   Issue Type: Improvement
>   Components: segmentmk
>     Reporter: Francesco Mari
>     Assignee: Francesco Mari
>      Fix For: 1.4
>
> The {{SegmentStore}} and its related code should be moved into their own
> bundles to ease the development and the deployment of this functionality.
[jira] [Updated] (OAK-3862) Move integration tests in a different Maven module
[ https://issues.apache.org/jira/browse/OAK-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Francesco Mari updated OAK-3862:
--------------------------------
    Fix Version/s: 1.4
[jira] [Updated] (OAK-3537) Move the Segment Store to its own bundle
[ https://issues.apache.org/jira/browse/OAK-3537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Francesco Mari updated OAK-3537:
--------------------------------
    Summary: Move the Segment Store to its own bundle  (was: Move the SegmentStore subsystem to its own set of bundles)
[jira] [Updated] (OAK-3537) Move the SegmentStore subsystem to its own set of bundles
[ https://issues.apache.org/jira/browse/OAK-3537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Francesco Mari updated OAK-3537:
--------------------------------
    Fix Version/s: 1.4
[jira] [Created] (OAK-3862) Move integration tests in a different Maven module
Francesco Mari created OAK-3862:
--------------------------------

             Summary: Move integration tests in a different Maven module
                 Key: OAK-3862
                 URL: https://issues.apache.org/jira/browse/OAK-3862
             Project: Jackrabbit Oak
          Issue Type: Improvement
            Reporter: Francesco Mari
            Assignee: Francesco Mari

While moving the Segment Store and related packages into its own bundle, I figured out that integration tests contained in {{oak-core}} contribute to a cyclic dependency between the (new) {{oak-segment}} bundle and {{oak-core}}. The dependency is due to the usage of {{NodeStoreFixture}} to instantiate different implementations of {{NodeStore}} in a semi-transparent way.

Tests depending on {{NodeStoreFixture}} are most likely integration tests. A clean solution to this problem would be to move those integration tests into a new Maven module, referencing the API and implementation modules as needed.
[jira] [Commented] (OAK-3842) Adjust package export declarations
[ https://issues.apache.org/jira/browse/OAK-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092248#comment-15092248 ]

Tobias Bocanegra commented on OAK-3842:
---------------------------------------

yes, remove the export. thanks.

> Adjust package export declarations
> ----------------------------------
>
>          Key: OAK-3842
>          URL: https://issues.apache.org/jira/browse/OAK-3842
>      Project: Jackrabbit Oak
>   Issue Type: Task
>     Reporter: Michael Dürig
>     Assignee: Michael Dürig
>     Priority: Critical
>       Labels: api, modularization, technical_debt
>      Fix For: 1.4
>
> We need to adjust the package export declarations such that they become
> manageable with our branch / release model.
> See http://markmail.org/thread/5g3viq5pwtdryapr for discussion.
> I propose to remove package export declarations from all packages that we
> don't consider public API / SPI beyond Oak itself. This would allow us to
> evolve Oak internal stuff (e.g. things used across Oak modules) freely
> without having to worry about merges to branches messing up semantic
> versioning. OTOH it would force us to keep externally facing public API / SPI
> reasonably stable also across the branches. Furthermore such an approach
> would send the right signal to Oak API / SPI consumers regarding the
> stability assumptions they can make.
> An external API / SPI having a (transitive) dependency on internals might be
> troublesome. In doubt I would remove the export version here until we can
> make reasonable guarantees (either through decoupling the code or stabilising
> the dependencies).
> I would start digging through the export versions and prepare an initial
> proposal for further discussion.
> /cc [~frm], [~chetanm], [~mmarth]
[jira] [Commented] (OAK-3825) Including Resource name to suggestions
[ https://issues.apache.org/jira/browse/OAK-3825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092224#comment-15092224 ]

Amit Gupta commented on OAK-3825:
---------------------------------

[~mmarth] you are right, it depends on the use case. In some use cases, e.g. digital asset management, users are accustomed to naming conventions for assets (from file-system days) and bring those same files into JCR. Asset names (node names) are sometimes meaningful, so suggesting them is desirable. As this would be based on the index definition, I think it would be configurable per system.

> Including Resource name to suggestions
> --------------------------------------
>
>          Key: OAK-3825
>          URL: https://issues.apache.org/jira/browse/OAK-3825
>      Project: Jackrabbit Oak
>   Issue Type: Improvement
>   Components: lucene
>     Reporter: Ankit Agarwal
>     Assignee: Vikas Saurabh
>      Fix For: 1.4
>
> Currently it is possible to include properties of a resource into
> suggestions.
> There should be a way so that it is possible to include the resource name
> itself into suggestions.
[jira] [Updated] (OAK-3645) RDBDocumentStore: server time detection for DB2 fails due to timezone/dst differences
[ https://issues.apache.org/jira/browse/OAK-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julian Reschke updated OAK-3645:
--------------------------------
    Attachment: OAK-3645-jr.patch

Slightly modified patch.

> RDBDocumentStore: server time detection for DB2 fails due to timezone/dst
> differences
> -------------------------------------------------------------------------
>
>              Key: OAK-3645
>              URL: https://issues.apache.org/jira/browse/OAK-3645
>          Project: Jackrabbit Oak
>       Issue Type: Technical task
>       Components: rdbmk
> Affects Versions: 1.3.10, 1.2.8, 1.0.24
>         Reporter: Julian Reschke
>         Assignee: Tomek Rękawek
>      Attachments: OAK-3645-jr.patch, OAK-3645.patch
>
> We use {{CURRENT_TIMESTAMP(4)}} to ask the DB for its system time.
> Apparently, at least with DB2, this might return a value that is off by a
> multiple of one hour (3600 * 1000 ms) depending on whether the OAK instance
> and the DB run in different timezones.
> Known to work: both on the same machine.
> Known to fail: OAK in CET, DB2 in UTC, in which case we're getting a
> timestamp one hour in the past.
> At this time it's not clear whether the same problem occurs for other
> databases.
[jira] [Resolved] (OAK-3577) NameValidator diagnostics could be more helpful
[ https://issues.apache.org/jira/browse/OAK-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julian Reschke resolved OAK-3577.
---------------------------------
       Resolution: Fixed
    Fix Version/s: 1.3.14
                   1.2.10
                   1.0.26

trunk: http://svn.apache.org/r1724057
1.2: http://svn.apache.org/r1724065
1.0: http://svn.apache.org/r1724068

> NameValidator diagnostics could be more helpful
> -----------------------------------------------
>
>              Key: OAK-3577
>              URL: https://issues.apache.org/jira/browse/OAK-3577
>          Project: Jackrabbit Oak
>       Issue Type: Improvement
>       Components: core
> Affects Versions: 1.2.9, 1.0.25, 1.3.13
>         Reporter: Julian Reschke
>         Assignee: Julian Reschke
>         Priority: Minor
>          Fix For: 1.0.26, 1.2.10, 1.3.14
>      Attachments: OAK-3577.patch
>
> When reporting an invalid name, the log isn't always helpful, as trailing
> whitespace or non-ASCII whitespace characters are hard to spot. It would be
> good to append a variant of the name that has all non-printing ASCII
> characters plus whitespace escaped.
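The kind of escaping OAK-3577 asks for could look roughly like the following (a sketch under stated assumptions, not the attached patch; the method name is illustrative):

```java
public class NameEscapeSketch {

    // Produce a diagnostic variant of a name in which whitespace and
    // non-printable ASCII characters are made visible as \uXXXX escapes,
    // so that e.g. a trailing tab or a non-breaking space can be spotted
    // in the log output.
    static String escapeForDiagnostics(String name) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            if (c < 0x20 || c == 0x7f
                    || Character.isWhitespace(c) || Character.isSpaceChar(c)) {
                sb.append(String.format("\\u%04x", (int) c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A trailing tab becomes visible instead of vanishing in the log.
        System.out.println(escapeForDiagnostics("name\t"));
    }
}
```

The validator could append this escaped variant to the existing error message whenever the raw name contains characters the check above flags.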
[jira] [Commented] (OAK-3645) RDBDocumentStore: server time detection for DB2 fails due to timezone/dst differences
[ https://issues.apache.org/jira/browse/OAK-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092165#comment-15092165 ]

Julian Reschke commented on OAK-3645:
-------------------------------------

The patch works for me, except that Oracle and Derby complained about the trailing semicolon. I removed it throughout (for all other DBs as well), and this seems to work.

One remaining concern is that we don't have a default implementation for unknown databases (or we have one, but it won't work). It probably would be best to disable the check in that case. Will post an updated patch.
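The server-time check under discussion boils down to comparing the database clock with the local clock and deciding whether the difference is acceptable. A self-contained sketch of that comparison logic (the tolerance value and method names are illustrative assumptions, not Oak's actual implementation):

```java
import java.util.concurrent.TimeUnit;

public class ServerTimeCheckSketch {

    // Maximum tolerated difference between DB server time and local time
    // (illustrative value).
    static final long MAX_DIFF_MS = TimeUnit.SECONDS.toMillis(2);

    // Clock difference in milliseconds. A DB reporting local time in another
    // timezone instead of a comparable time shows up here as a difference
    // close to a multiple of one hour (3600 * 1000 ms), as described in the
    // DB2-in-UTC scenario of this issue.
    static long clockDifference(long serverTimeMs, long localTimeMs) {
        return serverTimeMs - localTimeMs;
    }

    static boolean withinTolerance(long serverTimeMs, long localTimeMs) {
        return Math.abs(clockDifference(serverTimeMs, localTimeMs)) <= MAX_DIFF_MS;
    }

    public static void main(String[] args) {
        long local = System.currentTimeMillis();
        long serverOffByOneHour = local - 3_600_000L; // one hour in the past
        System.out.println(withinTolerance(serverOffByOneHour, local));
    }
}
```

Disabling the check for unknown databases, as suggested above, would simply mean skipping this comparison when no vendor-specific time query is available.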
[jira] [Commented] (OAK-3842) Adjust package export declarations
[ https://issues.apache.org/jira/browse/OAK-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092164#comment-15092164 ]

angela commented on OAK-3842:
-----------------------------

if that works, even better... i vaguely remember that i tried it in the past and it didn't work due to oak-jcr, but things might have changed (or i am mistaken).
[jira] [Commented] (OAK-3842) Adjust package export declarations
[ https://issues.apache.org/jira/browse/OAK-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092162#comment-15092162 ]

angela commented on OAK-3842:
-----------------------------

i would rather remove the export if it is not needed, to prevent internals from all of a sudden being exported as new code is added (without anyone being aware of the export).
[jira] [Updated] (OAK-3777) Multiplexing support in default PermissionStore implementation
[ https://issues.apache.org/jira/browse/OAK-3777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

angela updated OAK-3777:
------------------------
    Component/s: core
                 (was: security)

> Multiplexing support in default PermissionStore implementation
> --------------------------------------------------------------
>
>          Key: OAK-3777
>          URL: https://issues.apache.org/jira/browse/OAK-3777
>      Project: Jackrabbit Oak
>   Issue Type: Technical task
>   Components: core
>     Reporter: Chetan Mehrotra
>     Assignee: Chetan Mehrotra
>
> Similar to other parts we need to prototype support for multiplexing in the
> default permission store.
[jira] [Updated] (OAK-3861) MapRecord reduce extra loop in MapEntry creation
[ https://issues.apache.org/jira/browse/OAK-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Parvulescu updated OAK-3861:
---------------------------------
    Attachment: MapRecord.java.patch

[~mduerig] quick look?

> MapRecord reduce extra loop in MapEntry creation
> ------------------------------------------------
>
>          Key: OAK-3861
>          URL: https://issues.apache.org/jira/browse/OAK-3861
>      Project: Jackrabbit Oak
>   Issue Type: Improvement
>   Components: segmentmk
>     Reporter: Alex Parvulescu
>     Priority: Trivial
>  Attachments: MapRecord.java.patch
>
> removes unneeded extra loop and a bunch of temp arrays
[jira] [Assigned] (OAK-3861) MapRecord reduce extra loop in MapEntry creation
[ https://issues.apache.org/jira/browse/OAK-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Parvulescu reassigned OAK-3861:
------------------------------------
    Assignee: Alex Parvulescu
[jira] [Created] (OAK-3861) MapRecord reduce extra loop in MapEntry creation
Alex Parvulescu created OAK-3861:
---------------------------------

             Summary: MapRecord reduce extra loop in MapEntry creation
                 Key: OAK-3861
                 URL: https://issues.apache.org/jira/browse/OAK-3861
             Project: Jackrabbit Oak
          Issue Type: Improvement
          Components: segmentmk
            Reporter: Alex Parvulescu
            Priority: Trivial

removes unneeded extra loop and a bunch of temp arrays
[jira] [Commented] (OAK-3799) Drop module oak-js
[ https://issues.apache.org/jira/browse/OAK-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092029#comment-15092029 ]

Michael Dürig commented on OAK-3799:
------------------------------------

Thanks for noticing. Fixed at http://svn.apache.org/viewvc?rev=1724050&view=rev

> Drop module oak-js
> ------------------
>
>          Key: OAK-3799
>          URL: https://issues.apache.org/jira/browse/OAK-3799
>      Project: Jackrabbit Oak
>   Issue Type: Task
>     Reporter: Michael Dürig
>     Assignee: Michael Dürig
>     Priority: Minor
>       Labels: technical_debt
>      Fix For: 1.3.14
>
> The Oak checkout contains a module {{oak-js}}, which is mostly empty apart
> from a TODO statement. As we didn't work on this and AFAIK do not intend to
> work on this in the near future, I propose to drop the module for now.
[jira] [Commented] (OAK-3799) Drop module oak-js
[ https://issues.apache.org/jira/browse/OAK-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092027#comment-15092027 ]

Chetan Mehrotra commented on OAK-3799:
--------------------------------------

Maybe {{git config --global svn.rmdir true}} is missing from the git profile.
[jira] [Commented] (OAK-2472) Add support for atomic counters on cluster solutions
[ https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092009#comment-15092009 ]

Marcel Reutegger commented on OAK-2472:
---------------------------------------

I also looked at the test and there is a problem with the DocumentNodeStore and how it handles conflicts and collisions. In some cases it also treats (non-conflicting) collisions as conflicts. I think this is the reason why we see so many conflicts and consolidation tasks scheduled during the test. I created OAK-3859 to fix the underlying issue.

> Add support for atomic counters on cluster solutions
> ----------------------------------------------------
>
>              Key: OAK-2472
>              URL: https://issues.apache.org/jira/browse/OAK-2472
>          Project: Jackrabbit Oak
>       Issue Type: Improvement
>       Components: core
> Affects Versions: 1.3.0
>         Reporter: Davide Giannella
>         Assignee: Davide Giannella
>           Labels: scalability
>          Fix For: 1.4
>      Attachments: OAK-2472-failure-1452511772.log.gz,
>                   OAK-2472-success-1452511511.log.gz, atomic-counter.md,
>                   oak-1452185608.log.gz, oak-1452268140.log.gz
>
> As of OAK-2220 we added support for atomic counters in a non-clustered
> situation.
> This ticket is about covering the clustered ones.
[jira] [Commented] (OAK-3799) Drop module oak-js
[ https://issues.apache.org/jira/browse/OAK-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092017#comment-15092017 ]

Alex Parvulescu commented on OAK-3799:
--------------------------------------

hmm, I think there's still some leftover stuff here, svn still reports oak-js exists, even if empty: https://svn.apache.org/repos/asf/jackrabbit/oak/trunk/oak-js/ ... while at the same time github doesn't :) https://github.com/apache/jackrabbit-oak/tree/trunk/oak-js
[jira] [Resolved] (OAK-3860) Remove transitive test dependency on ancient xerces
[ https://issues.apache.org/jira/browse/OAK-3860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig resolved OAK-3860. Resolution: Fixed Fixed http://svn.apache.org/viewvc?rev=1724047&view=rev > Remove transitive test dependency on ancient xerces > > > Key: OAK-3860 > URL: https://issues.apache.org/jira/browse/OAK-3860 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Michael Dürig >Assignee: Michael Dürig > > {{oak-core}} has a transitive dependency on xerces 2.6.2 through > {{junit-addons}}. This causes Java Mission Control to fail creating flight > recordings on test executions. I therefore suggest cutting this transitive > dependency in the pom. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-3860) Remove transitive test dependency on ancient xerces
Michael Dürig created OAK-3860: -- Summary: Remove transitive test dependency on ancient xerces Key: OAK-3860 URL: https://issues.apache.org/jira/browse/OAK-3860 Project: Jackrabbit Oak Issue Type: Improvement Components: core Reporter: Michael Dürig Assignee: Michael Dürig {{oak-core}} has a transitive dependency on xerces 2.6.2 through {{junit-addons}}. This causes Java Mission Control to fail creating flight recordings on test executions. I therefore suggest cutting this transitive dependency in the pom. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3850) Collect and expose Persistent Cache stats
[ https://issues.apache.org/jira/browse/OAK-3850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092002#comment-15092002 ] Teodor Rosu commented on OAK-3850: -- After a quick inspection:
1. In this case the following methods will return zeros: _getTotalLoadTime_, _getAverageLoadPenalty_, _getEvictionCount_, _getElementCount_ (see 2), _getMaxTotalWeight_, _estimateCurrentWeight_.
2. *element count* - NodeCache would need a "get size" operation that would translate to a MultiGenerationMap "get size" operation. The general MultiGenerationMap can contain the same element in multiple generations (CacheMaps), so the size is not accurate in the general case. *cache memory size for a type* - looking at the MVStore, I don't think it's possible to compute the amount of space used by individual Maps.
3. I agree. Impact on performance needs proper investigation. I will look into this and get back with numbers.
> Collect and expose Persistent Cache stats > - > > Key: OAK-3850 > URL: https://issues.apache.org/jira/browse/OAK-3850 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core, documentmk >Reporter: Teodor Rosu > Attachments: OAK-3850-v0.patch > > > Expose persistent cache statistics ( see: [Guava > CacheStats|http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/cache/CacheStats.html] > ) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
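The element-count caveat in point 2 above can be illustrated with a toy model. This is a hypothetical sketch, not the actual Oak MultiGenerationMap: it only shows why summing per-generation sizes overcounts keys that exist in more than one generation.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical simplification of a multi-generation map: the same key may
// live in several generations (CacheMaps), so summing per-generation sizes
// overcounts carried-over entries.
public class MultiGenerationSizeSketch {
    // Naive size: sum of each generation's size (what a cheap "get size" would do).
    public static long naiveSize(List<Map<String, String>> generations) {
        return generations.stream().mapToLong(Map::size).sum();
    }

    // Accurate size: count distinct keys across all generations.
    public static long distinctSize(List<Map<String, String>> generations) {
        Map<String, String> merged = new HashMap<>();
        generations.forEach(merged::putAll);
        return merged.size();
    }

    public static void main(String[] args) {
        Map<String, String> gen0 = new HashMap<>();
        gen0.put("/a", "v1");
        gen0.put("/b", "v1");
        Map<String, String> gen1 = new HashMap<>();
        gen1.put("/a", "v2"); // same key, newer generation
        List<Map<String, String>> generations = List.of(gen0, gen1);
        System.out.println(naiveSize(generations));    // 3: overcounts /a
        System.out.println(distinctSize(generations)); // 2: distinct keys
    }
}
```

Computing the distinct count requires touching every generation, which is exactly why a cheap but accurate element count is hard to provide in the general case.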
[jira] [Created] (OAK-3859) Suspended commit depends on non-conflicting change
Marcel Reutegger created OAK-3859: - Summary: Suspended commit depends on non-conflicting change Key: OAK-3859 URL: https://issues.apache.org/jira/browse/OAK-3859 Project: Jackrabbit Oak Issue Type: Bug Components: core, documentmk Affects Versions: 1.3.6 Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Minor Fix For: 1.4 When a conflict occurs, a commit is suspended until the conflicting revision becomes visible. This feature was introduced with OAK-3042. The implementation does not distinguish between revisions that are conflicting and those that are reported as collisions. The latter just means changes happened after the base revision but may not necessarily be conflicting. E.g. different properties can be changed concurrently. There are actually two problems: - When a commit detects a conflict, it will create collision markers even for changes that are non-conflicting. The commit should only create collision markers for conflicting changes. - A commit with a conflict will suspend until it also sees revisions that are considered a collision but are not actually a conflict. The commit should only suspend until conflicting revisions are visible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
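The collision-versus-conflict distinction OAK-3859 describes can be sketched in a few lines. This is a hypothetical simplification, not DocumentNodeStore code: a collision is any concurrent change to the same document after the commit's base revision, but it is only a conflict when both changes touch the same property.

```java
import java.util.Set;

// Hypothetical sketch: two concurrent writers to the same document have
// collided; whether they *conflict* depends on the properties they changed.
public class CollisionVsConflict {
    public static boolean isConflict(Set<String> ourChangedProps,
                                     Set<String> theirChangedProps) {
        for (String p : ourChangedProps) {
            if (theirChangedProps.contains(p)) {
                return true; // same property changed by both: real conflict
            }
        }
        return false; // disjoint properties: collision, but mergeable
    }

    public static void main(String[] args) {
        // Different properties changed concurrently: collision, no conflict.
        System.out.println(isConflict(Set.of("propA"), Set.of("propB"))); // false
        // Same property changed concurrently: a genuine conflict.
        System.out.println(isConflict(Set.of("propA"), Set.of("propA", "propB"))); // true
    }
}
```

Per the issue, a commit should only be suspended (and collision markers created) for the `true` case, not for every collision.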
[jira] [Commented] (OAK-2472) Add support for atomic counters on cluster solutions
[ https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091981#comment-15091981 ] Davide Giannella commented on OAK-2472: --- [~mreutegg] it seems your suspicion was right. What's happening is: - the first 4 threads start - they get processed by the counter and scheduled - as soon as it starts processing counters from "different cluster nodes" it gets a Collision - DNSB suspends and resumes later on - a new counter is processed from scratch, generating a new scheduled task that will add to the conflicts. Any ideas on possible solutions? > Add support for atomic counters on cluster solutions > > > Key: OAK-2472 > URL: https://issues.apache.org/jira/browse/OAK-2472 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Affects Versions: 1.3.0 >Reporter: Davide Giannella >Assignee: Davide Giannella > Labels: scalability > Fix For: 1.4 > > Attachments: OAK-2472-failure-1452511772.log.gz, > OAK-2472-success-1452511511.log.gz, atomic-counter.md, oak-1452185608.log.gz, > oak-1452268140.log.gz > > > As of OAK-2220 we added support for atomic counters in a non-clustered > situation. > This ticket is about covering the clustered ones. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3577) NameValidator diagnostics could be more helpful
[ https://issues.apache.org/jira/browse/OAK-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3577: Affects Version/s: 1.2.9 1.0.25 1.3.13 > NameValidator diagnostics could be more helpful > --- > > Key: OAK-3577 > URL: https://issues.apache.org/jira/browse/OAK-3577 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Affects Versions: 1.2.9, 1.0.25, 1.3.13 >Reporter: Julian Reschke >Assignee: Julian Reschke >Priority: Minor > Attachments: OAK-3577.patch > > > When reporting an invalid name, the log isn't always helpful, as trailing > whitespace or non-ASCII whitespace characters are hard to spot. It would be > good to append a variant of the name that has all non-printing ASCII > characters plus whitespace escaped. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (OAK-3852) RDBDocumentStore: batched append logic may loose property changes
[ https://issues.apache.org/jira/browse/OAK-3852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3852: Comment: was deleted (was: trunk: http://svn.apache.org/r1723731 1.2: http://svn.apache.org/r1723732 1.0: http://svn.apache.org/r1723734 ) > RDBDocumentStore: batched append logic may loose property changes > - > > Key: OAK-3852 > URL: https://issues.apache.org/jira/browse/OAK-3852 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: rdbmk >Affects Versions: 1.2.9, 1.0.25, 1.3.13 >Reporter: Julian Reschke >Assignee: Julian Reschke > Fix For: 1.0.26, 1.2.10, 1.3.14 > > > When using the "append" logic, we only serialize those parts of the > {{UpdateOp}} referring to non-column properties. However, the update logic > currently does not handle all column properties, such as {{_deletedOnce}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-3852) RDBDocumentStore: batched append logic may loose property changes
[ https://issues.apache.org/jira/browse/OAK-3852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke resolved OAK-3852. - Resolution: Fixed trunk: http://svn.apache.org/r1724026 http://svn.apache.org/r1723731 1.2: http://svn.apache.org/r1724031 http://svn.apache.org/r1723732 1.0: http://svn.apache.org/r1724033 http://svn.apache.org/r1723734 > RDBDocumentStore: batched append logic may loose property changes > - > > Key: OAK-3852 > URL: https://issues.apache.org/jira/browse/OAK-3852 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: rdbmk >Affects Versions: 1.2.9, 1.0.25, 1.3.13 >Reporter: Julian Reschke >Assignee: Julian Reschke > Fix For: 1.0.26, 1.2.10, 1.3.14 > > > When using the "append" logic, we only serialize those parts of the > {{UpdateOp}} referring to non-column properties. However, the update logic > currently does not handle all column properties, such as {{_deletedOnce}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
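The bug class behind OAK-3852 can be illustrated abstractly. This is a hypothetical sketch, not the RDBDocumentStore implementation: if the "append" path serializes only non-column properties, a change to a column-mapped property such as `_deletedOnce` silently falls out of the appended data unless it is handled separately.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical illustration: an update whose column-mapped properties are
// stripped before serialization loses those changes on the append path.
public class AppendSerializationSketch {
    // Properties stored in dedicated DB columns rather than the serialized blob
    // (names taken from the issue; the split itself is a simplification).
    static final Set<String> COLUMN_PROPERTIES = Set.of("_modified", "_deletedOnce");

    // Returns only the changes that survive a non-column-only serialization.
    public static Map<String, Object> serializeForAppend(Map<String, Object> update) {
        Map<String, Object> out = new HashMap<>(update);
        out.keySet().removeAll(COLUMN_PROPERTIES);
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> update = new HashMap<>();
        update.put("prop", "x");
        update.put("_deletedOnce", Boolean.TRUE);
        // The _deletedOnce change is gone from the appended data; unless the
        // update logic also writes the column, the change is lost.
        System.out.println(serializeForAppend(update).containsKey("_deletedOnce")); // false
    }
}
```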
[jira] [Updated] (OAK-3826) Lucene index augmentation doesn't work in Osgi environment
[ https://issues.apache.org/jira/browse/OAK-3826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikas Saurabh updated OAK-3826: --- Attachment: OAK-3826-v2.patch Attaching [^OAK-3826-v2.patch]. Since OAK-3815 won't be fixed, the patch moves bind/unbind into {{IndexAugmentFactory}} and removes {{Tracker}} usage. It also adjusts tests accordingly. The patch uses OsgiMock to validate Osgi based registration, but unfortunately I couldn't find a way to call unbind via scr annotations. [~chetanm], can you please take a look? > Lucene index augmentation doesn't work in Osgi environment > -- > > Key: OAK-3826 > URL: https://issues.apache.org/jira/browse/OAK-3826 > Project: Jackrabbit Oak > Issue Type: Bug > Components: lucene >Reporter: Vikas Saurabh >Assignee: Vikas Saurabh >Priority: Minor > Fix For: 1.4 > > Attachments: OAK-3826-v2.patch, OAK-3826.patch > > > OAK-3576 introduced a way to hook SPI to provide extra fields and query terms > for a lucene index. > In Osgi world, due to OAK-3815, {{LuceneIndexProviderService}} registered > references to SPI and pinged {{IndexAugmentFactory}} to update its map. But, > it seems bind/unbind methods get called ahead of time as compared to the > information Tracker contains. This leads to a wrong set of services captured by > {{IndexAugmentFactory}} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3645) RDBDocumentStore: server time detection for DB2 fails due to timezone/dst differences
[ https://issues.apache.org/jira/browse/OAK-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091911#comment-15091911 ] Tomek Rękawek commented on OAK-3645: I attached the draft patch. I was able to test the expressions pasted in the previous comment directly on all databases. I also tested the refactored determineServerTimeDifferenceMillis() with H2. Tomorrow I'll try to run the modified Oak on all other RDBMSes. > RDBDocumentStore: server time detection for DB2 fails due to timezone/dst > differences > - > > Key: OAK-3645 > URL: https://issues.apache.org/jira/browse/OAK-3645 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: rdbmk >Affects Versions: 1.3.10, 1.2.8, 1.0.24 >Reporter: Julian Reschke >Assignee: Tomek Rękawek > Attachments: OAK-3645.patch > > > We use {{CURRENT_TIMESTAMP(4)}} to ask the DB for its system time. > Apparently, at least with DB2, this might return a value that is off by a > multiple of one hour (3600 * 1000ms) depending on whether the OAK instance > and the DB run in different timezones. > Known to work: both on the same machine. > Known to fail: OAK in CET, DB2 in UTC, in which case we're getting a > timestamp one hour in the past. > At this time it's not clear whether the same problem occurs for other > databases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3645) RDBDocumentStore: server time detection for DB2 fails due to timezone/dst differences
[ https://issues.apache.org/jira/browse/OAK-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomek Rękawek updated OAK-3645: --- Attachment: OAK-3645.patch > RDBDocumentStore: server time detection for DB2 fails due to timezone/dst > differences > - > > Key: OAK-3645 > URL: https://issues.apache.org/jira/browse/OAK-3645 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: rdbmk >Affects Versions: 1.3.10, 1.2.8, 1.0.24 >Reporter: Julian Reschke >Assignee: Tomek Rękawek > Attachments: OAK-3645.patch > > > We use {{CURRENT_TIMESTAMP(4)}} to ask the DB for it's system time. > Apparently, at least with DB2, this might return a value that is off by a > multiple of one hour (3600 * 1000ms) depending on whether the OAK instance > and the DB run in different timezones. > Known to work: both on the same machine. > Known to fail: OAK in CET, DB2 in UTC, in which case we're getting a > timestamp one hour in the past. > At this time it's not clear whether the same problem occurs for other > databases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (OAK-3645) RDBDocumentStore: server time detection for DB2 fails due to timezone/dst differences
[ https://issues.apache.org/jira/browse/OAK-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091768#comment-15091768 ] Tomek Rękawek edited comment on OAK-3645 at 1/11/16 1:16 PM: - I'll try to prepare expressions that return the current unix epoch for all database engines. So far:
{noformat}
# db2
select cast (days(current_timestamp - current_timezone) - days('1970-01-01') as integer) * 86400 + midnight_seconds(current_timestamp - current_timezone) from sysibm.sysdummy1 (tested)

# postgres
select extract(epoch from now())::integer;

# mysql
select unix_timestamp();

# h2
## init
create alias if not exists unix_timestamp as $$ long unix_timestamp() { return System.currentTimeMillis()/1000L; } $$;
## fetch
select unix_timestamp();

# derby (doesn't support timezones)
values {fn timestampdiff(SQL_TSI_SECOND,timestamp('1970-1-1-00.00.00.00'), current_timestamp)};

# oracle
select (trunc(sys_extract_utc(systimestamp)) - to_date('01/01/1970', 'MM/DD/')) * 24 * 60 * 60 + to_number(to_char(sys_extract_utc(systimestamp), 'S')) from dual;

# sql server
select datediff(second, dateadd(second, datediff(second, getutcdate(), getdate()), '1970-01-01'), getdate());
{noformat}
was (Author: tomek.rekawek): I'll try to prepare expressions that return the current unix epoch for all database engines.
So far: {noformat} db2 select cast (days(current_timestamp - current_timezone) - days('1970-01-01') as integer) * 86400 + midnight_seconds(current_timestamp - current_timezone) from sysibm.sysdummy1 postgres select extract(epoch from now())::integer; mysql select unix_timestamp(); {noformat} > RDBDocumentStore: server time detection for DB2 fails due to timezone/dst > differences > - > > Key: OAK-3645 > URL: https://issues.apache.org/jira/browse/OAK-3645 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: rdbmk >Affects Versions: 1.3.10, 1.2.8, 1.0.24 >Reporter: Julian Reschke >Assignee: Tomek Rękawek > > We use {{CURRENT_TIMESTAMP(4)}} to ask the DB for it's system time. > Apparently, at least with DB2, this might return a value that is off by a > multiple of one hour (3600 * 1000ms) depending on whether the OAK instance > and the DB run in different timezones. > Known to work: both on the same machine. > Known to fail: OAK in CET, DB2 in UTC, in which case we're getting a > timestamp one hour in the past. > At this time it's not clear whether the same problem occurs for other > databases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
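One way the per-database expressions above could be organized is a dialect-to-query lookup plus a trivial difference calculation. This is a hedged sketch only: the class and method names are hypothetical and not the actual RDBDocumentStore API, and the dialect keys assume JDBC `DatabaseMetaData` product names.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper mapping a database product name to the SQL expression
// that returns the current unix epoch (seconds, UTC) on that engine.
public class EpochQueries {
    private static final Map<String, String> QUERIES = new HashMap<>();
    static {
        QUERIES.put("PostgreSQL", "select extract(epoch from now())::integer");
        QUERIES.put("MySQL", "select unix_timestamp()");
        QUERIES.put("Apache Derby",
            "values {fn timestampdiff(SQL_TSI_SECOND,"
            + "timestamp('1970-1-1-00.00.00.00'), current_timestamp)}");
    }

    public static String epochQuery(String dbProductName) {
        return QUERIES.get(dbProductName);
    }

    // Server-vs-local clock difference, given the epoch seconds the query returned.
    public static long serverTimeDiffMillis(long serverEpochSeconds, long localMillis) {
        return serverEpochSeconds * 1000L - localMillis;
    }

    public static void main(String[] args) {
        System.out.println(epochQuery("MySQL")); // select unix_timestamp()
        // A server one hour ahead (the DB2 symptom) shows up as 3600000 ms.
        System.out.println(serverTimeDiffMillis(3_600L, 0L)); // 3600000
    }
}
```

Because the expressions all yield UTC epoch seconds, the computed difference is immune to the timezone/DST offsets that break the `CURRENT_TIMESTAMP(4)` approach.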
[jira] [Updated] (OAK-2714) Test failures on Jenkins
[ https://issues.apache.org/jira/browse/OAK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-2714: --- Description: This issue is for tracking test failures seen at our Jenkins instance that might yet be transient. Once a failure happens too often we should remove it here and create a dedicated issue for it.
|| Test || Builds || Fixture || JVM ||
| org.apache.jackrabbit.oak.plugins.index.lucene.IndexCopierTest.reuseLocalDir | 81 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.OrderedIndexIT.oak2035 | 76, 128 | SEGMENT_MK, DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.plugins.segment.standby.StandbyTestIT.testSyncLoop | 64 | ? | ? |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobStore | 52, 181, 399 | SEGMENT_MK, DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.JsonConfigRepFactoryTest.testRepositoryTar | 41 | ? | ? |
| org.apache.jackrabbit.test.api.observation.PropertyAddedTest.testMultiPropertyAdded | 29 | ? | ? |
| org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT.heavyWrite | 35 | SEGMENT_MK | ? |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.propertyIndexVersusNodeTypeIndex | 90 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.removeSubtreeFilter | 94 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.disjunctPaths | 121, 157, 396 | DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentFileOperationsTest.concurrent | 110, 382 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.MoveRemoveTest.removeExistingNode | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.jcr.RepositoryTest.addEmptyMultiValue | 115 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.stats.ClockTest.testClockDriftFast | 115, 142 | SEGMENT_MK, DOCUMENT_NS | 1.6, 1.8 |
| org.apache.jackrabbit.oak.plugins.index.solr.query.SolrQueryIndexTest | 148, 151, 490, 656 | SEGMENT_MK, DOCUMENT_NS, DOCUMENT_RDB | 1.6, 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testReorder | 155 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.osgi.OSGiIT.bundleStates | 163, 656 | SEGMENT_MK, DOCUMENT_RDB, DOCUMENT_NS | 1.6, 1.7 |
| org.apache.jackrabbit.oak.jcr.query.SuggestTest | 171 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.jcr.nodetype.NodeTypeTest.updateNodeType | 243, 400 | DOCUMENT_RDB | 1.6, 1.8 |
| org.apache.jackrabbit.oak.jcr.query.QueryPlanTest.nodeType | 272 | DOCUMENT_RDB | 1.8 |
| org.apache.jackrabbit.oak.jcr.observation.ObservationTest.testMove | 308 | DOCUMENT_RDB | 1.6 |
| org.apache.jackrabbit.oak.jcr.version.VersionablePathNodeStoreTest.testVersionablePaths | 361 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.plugins.document.DocumentDiscoveryLiteServiceTest | 361, 608 | DOCUMENT_NS, SEGMENT_MK | 1.7, 1.8 |
| org.apache.jackrabbit.oak.jcr.ConcurrentAddIT.addNodesSameParent | 427, 428 | DOCUMENT_NS, SEGMENT_MK | 1.7 |
| Build crashes: malloc(): memory corruption | 477 | DOCUMENT_NS | 1.6 |
| org.apache.jackrabbit.oak.upgrade.cli.SegmentToJdbcTest.validateMigration | 486 | DOCUMENT_NS | 1.7 |
| org.apache.jackrabbit.j2ee.TomcatIT.testTomcat | 489, 493, 597, 648 | DOCUMENT_NS, SEGMENT_MK | 1.7 |
| org.apache.jackrabbit.oak.plugins.index.solr.index.SolrIndexEditorTest | 490, 623, 624, 656 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.plugins.index.solr.server.EmbeddedSolrServerProviderTest.testEmbeddedSolrServerInitialization | 490, 656 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.run.osgi.PropertyIndexReindexingTest.propertyIndexState | 492 | DOCUMENT_NS | 1.6 |
| org.apache.jackrabbit.j2ee.TomcatIT | 589 | SEGMENT_MK | 1.8 |
| org.apache.jackrabbit.oak.run.osgi.DocumentNodeStoreConfigTest.testRDBDocumentStoreRestart | 621 | DOCUMENT_NS | 1.8 |
| org.apache.jackrabbit.oak.plugins.index.solr.util.NodeTypeIndexingUtilsTest.testSynonymsFileCreation | 627 | DOCUMENT_RDB | 1.7 |
| org.apache.jackrabbit.oak.spi.security.authorization.cug.impl.* | 648 | SEGMENT_MK, DOCUMENT_NS | 1.8 |
| org.apache.jackrabbit.oak.remote.http.handler.RemoteServerIT | 643 | DOCUMENT_NS | 1.7, 1.8 |
| org.apache.jackrabbit.oak.plugins.index.solr.util.NodeTypeIndexingUtilsTest | 663 | SEGMENT_MK | 1.7 |
was: This issue is for tracking test failures seen at our Jenkins instance that might yet be transient. Once a failure happens too often we should remove it here and create a dedicated issue for it. || Test || Builds || Fixture |
[jira] [Updated] (OAK-3809) Test failure: FacetTest
[ https://issues.apache.org/jira/browse/OAK-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-3809: --- Description: {{org.apache.jackrabbit.oak.jcr.query.FacetTest}} keeps failing on Jenkins: {noformat} testFacetRetrievalMV(org.apache.jackrabbit.oak.jcr.query.FacetTest) Time elapsed: 5.927 sec <<< FAILURE! junit.framework.ComparisonFailure: expected: but was: at junit.framework.Assert.assertEquals(Assert.java:100) at junit.framework.Assert.assertEquals(Assert.java:107) at junit.framework.TestCase.assertEquals(TestCase.java:269) at org.apache.jackrabbit.oak.jcr.query.FacetTest.testFacetRetrievalMV(FacetTest.java:80) {noformat} Failure seen at builds: 628, 629, 630, 633, 634, 636, 642, 643, 644, 645, 648, 651, 656, 659, 660, 663 See e.g. https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/634/#showFailuresLink was: {{org.apache.jackrabbit.oak.jcr.query.FacetTest}} keeps failing on Jenkins: {noformat} testFacetRetrievalMV(org.apache.jackrabbit.oak.jcr.query.FacetTest) Time elapsed: 5.927 sec <<< FAILURE! junit.framework.ComparisonFailure: expected: but was: at junit.framework.Assert.assertEquals(Assert.java:100) at junit.framework.Assert.assertEquals(Assert.java:107) at junit.framework.TestCase.assertEquals(TestCase.java:269) at org.apache.jackrabbit.oak.jcr.query.FacetTest.testFacetRetrievalMV(FacetTest.java:80) {noformat} Failure seen at builds: 628, 629, 630, 633, 634, 636, 642, 643, 644, 645, 648, 651, 656, 659, 660 See e.g. 
https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/634/#showFailuresLink > Test failure: FacetTest > --- > > Key: OAK-3809 > URL: https://issues.apache.org/jira/browse/OAK-3809 > Project: Jackrabbit Oak > Issue Type: Bug > Components: solr > Environment: > https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/ >Reporter: Michael Dürig >Assignee: Tommaso Teofili > Labels: ci, jenkins, test-failure > Fix For: 1.4 > > > {{org.apache.jackrabbit.oak.jcr.query.FacetTest}} keeps failing on Jenkins: > {noformat} > testFacetRetrievalMV(org.apache.jackrabbit.oak.jcr.query.FacetTest) Time > elapsed: 5.927 sec <<< FAILURE! > junit.framework.ComparisonFailure: expected: (2), aem (1), apache (1), cosmetics (1), furniture (1)], tags:[repository > (2), software (2), aem (1), apache (1), cosmetics (1), furniture (1)], > tags:[repository (2), software (2), aem (1), apache (1), cosmetics (1), > furniture (1)], tags:[repository (2), software (2), aem (1), apache (1), > cosmetics (1), furniture (1)]]> but was: > at junit.framework.Assert.assertEquals(Assert.java:100) > at junit.framework.Assert.assertEquals(Assert.java:107) > at junit.framework.TestCase.assertEquals(TestCase.java:269) > at > org.apache.jackrabbit.oak.jcr.query.FacetTest.testFacetRetrievalMV(FacetTest.java:80) > {noformat} > Failure seen at builds: 628, 629, 630, 633, 634, 636, 642, 643, 644, 645, > 648, 651, 656, 659, 660, 663 > See e.g. > https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/634/#showFailuresLink -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (OAK-3852) RDBDocumentStore: batched append logic may loose property changes
[ https://issues.apache.org/jira/browse/OAK-3852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke reopened OAK-3852: - > RDBDocumentStore: batched append logic may loose property changes > - > > Key: OAK-3852 > URL: https://issues.apache.org/jira/browse/OAK-3852 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: rdbmk >Affects Versions: 1.2.9, 1.0.25, 1.3.13 >Reporter: Julian Reschke >Assignee: Julian Reschke > Fix For: 1.0.26, 1.2.10, 1.3.14 > > > When using the "append" logic, we only serialize those parts of the > {{UpdateOp}} referring to non-column properties. However, the update logic > currently does not handle all column properties, such as {{_deletedOnce}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3847) Provide an easy way to parse/retrieve facets
[ https://issues.apache.org/jira/browse/OAK-3847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091804#comment-15091804 ] Chetan Mehrotra commented on OAK-3847: -- Looks good. +1 > Provide an easy way to parse/retrieve facets > > > Key: OAK-3847 > URL: https://issues.apache.org/jira/browse/OAK-3847 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene, solr >Reporter: Tommaso Teofili >Assignee: Tommaso Teofili > Fix For: 1.3.14 > > > Current facet results are returned within the rep:facet($propertyname) > property of each resulting node. The resulting String [1] is however a bit > annoying to parse as it separates label / value by comma so that if label > contains a similar pattern parsing may even be buggy. > An easier format for facets should be used, eventually together with an > utility class that returns proper objects that client code can consume. > [1] : > https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/test/java/org/apache/jackrabbit/oak/jcr/query/FacetTest.java#L99 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
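The parsing fragility described in the issue can be demonstrated with a hypothetical parser (this is illustrative code, not Oak's): splitting the `rep:facet` string on the comma separator breaks as soon as a facet label itself contains that separator.

```java
// Illustrates the ambiguity of the comma-separated "label (count)" format:
// a naive split cannot tell a separator from a comma inside a label.
public class FacetParsingAmbiguity {
    // Naive split of "label1 (n1), label2 (n2), ..." into individual facets.
    public static String[] naiveSplit(String facetString) {
        return facetString.split(", ");
    }

    public static void main(String[] args) {
        // Two facets, but the first label ("hello, world") contains ", " itself.
        String facets = "hello, world (2), apache (1)";
        System.out.println(naiveSplit(facets).length); // 3, not the expected 2
    }
}
```

A structured result object, as proposed below in the thread, avoids the problem entirely by never round-tripping facets through an ambiguous string.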
[jira] [Comment Edited] (OAK-3847) Provide an easy way to parse/retrieve facets
[ https://issues.apache.org/jira/browse/OAK-3847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091801#comment-15091801 ] Tommaso Teofili edited comment on OAK-3847 at 1/11/16 11:35 AM: after offline talk with [~chetanm] we have tweaked the proposal to have e.g. a {{FacetResult}} class (living in _oak-core_) that will wrap the {{QueryResult}} and will provide access to facets data via POJOs:
{code}
String sql2 = "select [jcr:path], [rep:facet(text)] from [nt:base] "
    + "where contains([text], 'hello OR hallo') order by [jcr:path]";
Query q = qm.createQuery(sql2, Query.JCR_SQL2);
QueryResult queryResult = q.execute();
FacetResult facetResult = new FacetResult(queryResult);
String[] dimensions = facetResult.getDimensions(); // { "text" }
for (String dimension : dimensions) {
    List<Facet> facets = facetResult.getFacets(dimension);
    for (Facet facet : facets) {
        System.out.println(facet.getLabel() + ":" + facet.getCount());
    }
}
{code}
I'll follow up with a structured proposal for the {{FacetResult}} API. was (Author: teofili): after offline talk with [~chetanm] we have tweaked the proposal to have e.g.
a {{FacetResult}} class (living in _oak-core_) that will wrap the {{QueryResult}} and will provide access to facets data via POJOs: {code} String sql2 = "select [jcr:path], [rep:facet(text)] from [nt:base] " + "where contains([text], 'hello OR hallo') order by [jcr:path]"; Query q = qm.createQuery(sql2, Query.JCR_SQL2); QueryResult queryResult = q.execute(); FacetResult facetResult = new FacetResult(queryResult); String[] dimensions = facetResult.getDimensions(); // { "text" } for (String dimension : dimensions) { List facets = facetResult.getFacets(dimension); for (Facet facet : facets) { System.out.println(facet.getLabel() + ":" + facet.getCount()); } } {code} > Provide an easy way to parse/retrieve facets > > > Key: OAK-3847 > URL: https://issues.apache.org/jira/browse/OAK-3847 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene, solr >Reporter: Tommaso Teofili >Assignee: Tommaso Teofili > Fix For: 1.3.14 > > > Current facet results are returned within the rep:facet($propertyname) > property of each resulting node. The resulting String [1] is however a bit > annoying to parse as it separates label / value by comma so that if label > contains a similar pattern parsing may even be buggy. > An easier format for facets should be used, eventually together with an > utility class that returns proper objects that client code can consume. > [1] : > https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/test/java/org/apache/jackrabbit/oak/jcr/query/FacetTest.java#L99 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3847) Provide an easy way to parse/retrieve facets
[ https://issues.apache.org/jira/browse/OAK-3847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091801#comment-15091801 ] Tommaso Teofili commented on OAK-3847: -- after offline talk with [~chetanm] we have tweaked the proposal to have e.g. a {{FacetResult}} class (living in _oak-core_) that will wrap the {{QueryResult}} and will provide access to facets data via POJOs:
{code}
String sql2 = "select [jcr:path], [rep:facet(text)] from [nt:base] "
    + "where contains([text], 'hello OR hallo') order by [jcr:path]";
Query q = qm.createQuery(sql2, Query.JCR_SQL2);
QueryResult queryResult = q.execute();
FacetResult facetResult = new FacetResult(queryResult);
String[] dimensions = facetResult.getDimensions(); // { "text" }
for (String dimension : dimensions) {
    List<Facet> facets = facetResult.getFacets(dimension);
    for (Facet facet : facets) {
        System.out.println(facet.getLabel() + ":" + facet.getCount());
    }
}
{code}
> Provide an easy way to parse/retrieve facets > > > Key: OAK-3847 > URL: https://issues.apache.org/jira/browse/OAK-3847 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene, solr >Reporter: Tommaso Teofili >Assignee: Tommaso Teofili > Fix For: 1.3.14 > > > Current facet results are returned within the rep:facet($propertyname) > property of each resulting node. The resulting String [1] is however a bit > annoying to parse as it separates label / value by comma so that if label > contains a similar pattern parsing may even be buggy. > An easier format for facets should be used, eventually together with an > utility class that returns proper objects that client code can consume. > [1] : > https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/test/java/org/apache/jackrabbit/oak/jcr/query/FacetTest.java#L99 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2472) Add support for atomic counters on cluster solutions
[ https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Davide Giannella updated OAK-2472: -- Attachment: OAK-2472-success-1452511511.log.gz OAK-2472-failure-1452511772.log.gz [~mreutegg] attaching [^OAK-2472-failure-1452511772.log.gz] and [^OAK-2472-success-1452511511.log.gz] which are failure and successfull run. For the same use case: 2 updates on 2 nodes; success is while debugging the IT (not the atomic counter code itself), failure is leaving it running full speed. Still have to look at the logs myself. /cc [~mduerig] > Add support for atomic counters on cluster solutions > > > Key: OAK-2472 > URL: https://issues.apache.org/jira/browse/OAK-2472 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Affects Versions: 1.3.0 >Reporter: Davide Giannella >Assignee: Davide Giannella > Labels: scalability > Fix For: 1.4 > > Attachments: OAK-2472-failure-1452511772.log.gz, > OAK-2472-success-1452511511.log.gz, atomic-counter.md, oak-1452185608.log.gz, > oak-1452268140.log.gz > > > As of OAK-2220 we added support for atomic counters in a non-clustered > situation. > This ticket is about covering the clustered ones. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3645) RDBDocumentStore: server time detection for DB2 fails due to timezone/dst differences
[ https://issues.apache.org/jira/browse/OAK-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091777#comment-15091777 ] Julian Reschke commented on OAK-3645: - +1 You may want to have a look at https://code.google.com/p/h2database/issues/detail?id=211 as well. > RDBDocumentStore: server time detection for DB2 fails due to timezone/dst > differences > - > > Key: OAK-3645 > URL: https://issues.apache.org/jira/browse/OAK-3645 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: rdbmk >Affects Versions: 1.3.10, 1.2.8, 1.0.24 >Reporter: Julian Reschke >Assignee: Tomek Rękawek > > We use {{CURRENT_TIMESTAMP(4)}} to ask the DB for it's system time. > Apparently, at least with DB2, this might return a value that is off by a > multiple of one hour (3600 * 1000ms) depending on whether the OAK instance > and the DB run in different timezones. > Known to work: both on the same machine. > Known to fail: OAK in CET, DB2 in UTC, in which case we're getting a > timestamp one hour in the past. > At this time it's not clear whether the same problem occurs for other > databases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3645) RDBDocumentStore: server time detection for DB2 fails due to timezone/dst differences
[ https://issues.apache.org/jira/browse/OAK-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091768#comment-15091768 ] Tomek Rękawek commented on OAK-3645: I'll try to prepare expressions that return the current unix epoch for all database engines. So far:
{noformat}
db2:
select cast (days(current_timestamp - current_timezone) - days('1970-01-01') as integer) * 86400 + midnight_seconds(current_timestamp - current_timezone) from sysibm.sysdummy1

postgres:
select extract(epoch from now())::integer;

mysql:
select unix_timestamp();
{noformat}
> RDBDocumentStore: server time detection for DB2 fails due to timezone/dst > differences > - > > Key: OAK-3645 > URL: https://issues.apache.org/jira/browse/OAK-3645 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: rdbmk >Affects Versions: 1.3.10, 1.2.8, 1.0.24 >Reporter: Julian Reschke >Assignee: Tomek Rękawek > > We use {{CURRENT_TIMESTAMP(4)}} to ask the DB for its system time. > Apparently, at least with DB2, this might return a value that is off by a > multiple of one hour (3600 * 1000ms) depending on whether the OAK instance > and the DB run in different timezones. > Known to work: both on the same machine. > Known to fail: OAK in CET, DB2 in UTC, in which case we're getting a > timestamp one hour in the past. > At this time it's not clear whether the same problem occurs for other > databases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
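One way to wire the per-engine expressions above into the store is to select the SQL by the JDBC database product name. This is a sketch only; the class and method names are hypothetical and not the actual RDBDocumentStore API:

```java
import java.util.Locale;

// Hypothetical sketch: pick the per-engine SQL expression that returns the
// current unix epoch, keyed on DatabaseMetaData.getDatabaseProductName().
class EpochQuery {

    static String epochSql(String dbProductName) {
        String db = dbProductName.toLowerCase(Locale.ENGLISH);
        if (db.contains("db2")) {
            // normalize to UTC by subtracting current_timezone
            return "select cast (days(current_timestamp - current_timezone) - days('1970-01-01') as integer)"
                    + " * 86400 + midnight_seconds(current_timestamp - current_timezone) from sysibm.sysdummy1";
        } else if (db.contains("postgres")) {
            return "select extract(epoch from now())::integer";
        } else if (db.contains("mysql")) {
            return "select unix_timestamp()";
        }
        throw new IllegalArgumentException("no epoch expression known for: " + dbProductName);
    }
}
```

The epoch value is timezone-independent by definition, which sidesteps the {{CURRENT_TIMESTAMP(4)}} problem entirely.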
[jira] [Commented] (OAK-3850) Collect and expose Persistent Cache stats
[ https://issues.apache.org/jira/browse/OAK-3850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091765#comment-15091765 ] Chetan Mehrotra commented on OAK-3850: -- Thanks Teodor for taking this up! Would be useful. Some comments:
# You can have PersistentCacheStatsMBean extend from CacheStatsMBean. Then it would allow the cache stats to be listed as part of ConsolidatedCacheStatsMBean, which gives insight into all caches in one place
# Would also need
## {{elementCount}} i.e. size in terms of number of entries of the given type in the cache
## Size of memory occupied by the cache for that type in MVStore
# Time taken for read - Would need to see if collecting time for reads adds any significant overhead. For the persistent cache, time collection would be more important for reads done from MVStore i.e. in readIfPresent
[~tmueller] Any other stats we can collect from the underlying MVStore? > Collect and expose Persistent Cache stats > - > > Key: OAK-3850 > URL: https://issues.apache.org/jira/browse/OAK-3850 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core, documentmk >Reporter: Teodor Rosu > Attachments: OAK-3850-v0.patch > > > Expose persistent cache statistics ( see: [Guava > CacheStats|http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/cache/CacheStats.html] > ) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
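The first suggestion can be sketched as an interface hierarchy. Note this is only an illustration of the proposed shape: the real {{CacheStatsMBean}} lives in Oak's JMX API package and has more methods than shown, and the MVStore-specific method names below are invented for the example:

```java
// Trimmed stand-in for Oak's actual CacheStatsMBean (illustrative subset).
interface CacheStatsMBean {
    long getElementCount();
    long getHitCount();
    long getMissCount();
}

// Extending CacheStatsMBean lets ConsolidatedCacheStatsMBean list the
// persistent cache alongside the in-memory caches, while still exposing
// MVStore-specific figures. Method names here are hypothetical.
interface PersistentCacheStatsMBean extends CacheStatsMBean {
    long getUsedSpaceBytes();   // size occupied by this cache type in the MVStore
    long getReadTimeNanos();    // time spent reading from MVStore, i.e. in readIfPresent()
}
```

Any implementation registered under both types would then appear automatically in the consolidated view.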
[jira] [Created] (OAK-3858) Review slow running tests
Francesco Mari created OAK-3858: --- Summary: Review slow running tests Key: OAK-3858 URL: https://issues.apache.org/jira/browse/OAK-3858 Project: Jackrabbit Oak Issue Type: Improvement Reporter: Francesco Mari Some of the tests executed during a normal {{mvn clean test}} run seem very slow compared with the rest of the suite. On my machine, some problematic tests are:
{noformat}
Running org.apache.jackrabbit.oak.spi.blob.FileBlobStoreTest
Tests run: 18, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 10.982 sec
Running org.apache.jackrabbit.oak.plugins.document.BasicDocumentStoreTest
Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.961 sec
Running org.apache.jackrabbit.oak.plugins.document.BulkCreateOrUpdateTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.076 sec
Running org.apache.jackrabbit.oak.plugins.document.ConcurrentDocumentStoreTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.054 sec
Running org.apache.jackrabbit.oak.plugins.document.DocumentDiscoveryLiteServiceTest
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 982.526 sec
Running org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreTest
Tests run: 53, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.132 sec
Running org.apache.jackrabbit.oak.plugins.document.LastRevRecoveryAgentTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.068 sec
Running org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStorePerformanceTest
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 10.006 sec
Running org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStoreTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.017 sec
Running org.apache.jackrabbit.oak.plugins.document.VersionGCWithSplitTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.128 sec
Running org.apache.jackrabbit.oak.security.authentication.ldap.LdapLoginStandaloneTest
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.96 sec
{noformat}
These tests should be analyzed for potential errors or moved to the integration test phase. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
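A conventional way to move such tests to the integration test phase is via the Maven failsafe plugin, which by default picks up classes named {{*IT}} and runs them after the unit test phase. This is a generic sketch, not Oak's actual build configuration, and the plugin version shown is illustrative:

```xml
<!-- Sketch: run *IT classes with maven-failsafe-plugin in the
     integration-test phase, keeping surefire for fast unit tests.
     The version number is illustrative. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.19</version>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Slow tests would then be renamed (or configured via includes) to match the {{*IT}} pattern so that {{mvn clean test}} stays fast.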
[jira] [Commented] (OAK-3842) Adjust package export declarations
[ https://issues.apache.org/jira/browse/OAK-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091749#comment-15091749 ] Michael Dürig commented on OAK-3842: re. {{org.apache.jackrabbit.oak.security.authentication.ldap}}. That package is actually empty. So there is no harm in exporting it I guess. Since it has an {{impl}} sub-package its intention seems to be API. [~tripod], please confirm. > Adjust package export declarations > --- > > Key: OAK-3842 > URL: https://issues.apache.org/jira/browse/OAK-3842 > Project: Jackrabbit Oak > Issue Type: Task >Reporter: Michael Dürig >Assignee: Michael Dürig >Priority: Critical > Labels: api, modularization, technical_debt > Fix For: 1.4 > > > We need to adjust the package export declarations such that they become > manageable with our branch / release model. > See http://markmail.org/thread/5g3viq5pwtdryapr for discussion. > I propose to remove package export declarations from all packages that we > don't consider public API / SPI beyond Oak itself. This would allow us to > evolve Oak internal stuff (e.g. things used across Oak modules) freely > without having to worry about merges to branches messing up semantic > versioning. OTOH it would force us to keep externally facing public API / SPI > reasonably stable also across the branches. Furthermore such an approach > would send the right signal to Oak API / SPI consumers regarding the > stability assumptions they can make. > An external API / SPI having a (transitive) dependency on internals might be > troublesome. In doubt I would remove the export version here until we can > make reasonable guarantees (either through decoupling the code or stabilising > the dependencies). > I would start digging through the export version and prepare an initial > proposal for further discussion. > /cc [~frm], [~chetanm], [~mmarth] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3842) Adjust package export declarations
[ https://issues.apache.org/jira/browse/OAK-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091747#comment-15091747 ] Michael Dürig commented on OAK-3842: re. {{org.apache.jackrabbit.oak.security}}: I would remove the package export then as it is only used by Oak itself and we don't want it to be used externally. > Adjust package export declarations > --- > > Key: OAK-3842 > URL: https://issues.apache.org/jira/browse/OAK-3842 > Project: Jackrabbit Oak > Issue Type: Task >Reporter: Michael Dürig >Assignee: Michael Dürig >Priority: Critical > Labels: api, modularization, technical_debt > Fix For: 1.4 > > > We need to adjust the package export declarations such that they become > manageable with our branch / release model. > See http://markmail.org/thread/5g3viq5pwtdryapr for discussion. > I propose to remove package export declarations from all packages that we > don't consider public API / SPI beyond Oak itself. This would allow us to > evolve Oak internal stuff (e.g. things used across Oak modules) freely > without having to worry about merges to branches messing up semantic > versioning. OTOH it would force us to keep externally facing public API / SPI > reasonably stable also across the branches. Furthermore such an approach > would send the right signal to Oak API / SPI consumers regarding the > stability assumptions they can make. > An external API / SPI having a (transitive) dependency on internals might be > troublesome. In doubt I would remove the export version here until we can > make reasonable guarantees (either through decoupling the code or stabilising > the dependencies). > I would start digging through the export version and prepare an initial > proposal for further discussion. > /cc [~frm], [~chetanm], [~mmarth] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2472) Add support for atomic counters on cluster solutions
[ https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091737#comment-15091737 ] Marcel Reutegger commented on OAK-2472: --- One reason why consolidation tasks are re-created may be due to the retry logic in DocumentNodeStoreBranch. When a commit fails because of a conflict, it is suspended until the conflicting change is visible, then changes are rebased and a new commit is attempted. This also means the commit hook is called again and I assume the atomic counter editor will again create a new consolidation task. You can track conflict handling when you enable debug logging for the classes DocumentNodeStoreBranch and Collision. > Add support for atomic counters on cluster solutions > > > Key: OAK-2472 > URL: https://issues.apache.org/jira/browse/OAK-2472 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Affects Versions: 1.3.0 >Reporter: Davide Giannella >Assignee: Davide Giannella > Labels: scalability > Fix For: 1.4 > > Attachments: atomic-counter.md, oak-1452185608.log.gz, > oak-1452268140.log.gz > > > As of OAK-2220 we added support for atomic counters in a non-clustered > situation. > This ticket is about covering the clustered ones. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
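Assuming a Logback-based setup (as is typical for Oak deployments; adjust if another SLF4J backend is in use), the debug logging Marcel mentions can be enabled with a fragment like this in {{logback.xml}}; the appender configuration is assumed to exist elsewhere in the file:

```xml
<!-- Enable conflict-handling debug output for OAK-2472 analysis.
     Both classes live in org.apache.jackrabbit.oak.plugins.document. -->
<logger name="org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreBranch" level="DEBUG"/>
<logger name="org.apache.jackrabbit.oak.plugins.document.Collision" level="DEBUG"/>
```

With these loggers at DEBUG, the rebase/retry cycle of a suspended commit becomes visible in the log, which should show whether each retry re-creates a consolidation task.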
[jira] [Commented] (OAK-3842) Adjust package export declarations
[ https://issues.apache.org/jira/browse/OAK-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091686#comment-15091686 ] angela commented on OAK-3842: - [~mduerig] as far as the security packages are concerned: everything that contains {{spi}} is public and should be exported. the remaining two non-spi packages: - org.apache.jackrabbit.oak.security.authentication.ldap that looks wrong to me; but i am not the author of that code. [~tripod], can you please explain why this is exported? - org.apache.jackrabbit.oak.security this is only needed because oak-jcr (and maybe other modules) hard-code the {{SecurityProviderImpl}} in the default setup. That implementation has been replaced by Francesco's implementations in all OSGi-based setups; if i could wish this would only be used for test-purposes... but as long as we have the dual-setup mess we probably have to stick with the old/broken implementation. Not sure if/how we can get rid of that export though I would wish to remove it asap. > Adjust package export declarations > --- > > Key: OAK-3842 > URL: https://issues.apache.org/jira/browse/OAK-3842 > Project: Jackrabbit Oak > Issue Type: Task >Reporter: Michael Dürig >Assignee: Michael Dürig >Priority: Critical > Labels: api, modularization, technical_debt > Fix For: 1.4 > > > We need to adjust the package export declarations such that they become > manageable with our branch / release model. > See http://markmail.org/thread/5g3viq5pwtdryapr for discussion. > I propose to remove package export declarations from all packages that we > don't consider public API / SPI beyond Oak itself. This would allow us to > evolve Oak internal stuff (e.g. things used across Oak modules) freely > without having to worry about merges to branches messing up semantic > versioning. OTOH it would force us to keep externally facing public API / SPI > reasonably stable also across the branches. 
Furthermore such an approach > would send the right signal to Oak API / SPI consumers regarding the > stability assumptions they can make. > An external API / SPI having a (transitive) dependency on internals might be > troublesome. In doubt I would remove the export version here until we can > make reasonable guarantees (either through decoupling the code or stabilising > the dependencies). > I would start digging through the export version and prepare an initial > proposal for further discussion. > /cc [~frm], [~chetanm], [~mmarth] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3850) Collect and expose Persistent Cache stats
[ https://issues.apache.org/jira/browse/OAK-3850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teodor Rosu updated OAK-3850: - Attachment: OAK-3850-v0.patch I attached a quick draft just to share the exposed stats and MBean interface I was thinking of. This only covers NodeCache stats for now. [~chetanm] [~mreutegg] Could you please take a quick look? Also could you link this to OAK-3814? I initially thought of exposing it as CacheStatsMBean, but I don't think it's the best choice (no TimeStats, non-trivial methods like getEvictionCount, and others). > Collect and expose Persistent Cache stats > - > > Key: OAK-3850 > URL: https://issues.apache.org/jira/browse/OAK-3850 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core, documentmk >Reporter: Teodor Rosu > Attachments: OAK-3850-v0.patch > > > Expose persistent cache statistics ( see: [Guava > CacheStats|http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/cache/CacheStats.html] > ) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3842) Adjust package export declarations
[ https://issues.apache.org/jira/browse/OAK-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091636#comment-15091636 ] Michael Dürig commented on OAK-3842: Yes I'm planning to do that once we have sorted out the process regarding the remaining packages in this issue. > Adjust package export declarations > --- > > Key: OAK-3842 > URL: https://issues.apache.org/jira/browse/OAK-3842 > Project: Jackrabbit Oak > Issue Type: Task >Reporter: Michael Dürig >Assignee: Michael Dürig >Priority: Critical > Labels: api, modularization, technical_debt > Fix For: 1.4 > > > We need to adjust the package export declarations such that they become > manageable with our branch / release model. > See http://markmail.org/thread/5g3viq5pwtdryapr for discussion. > I propose to remove package export declarations from all packages that we > don't consider public API / SPI beyond Oak itself. This would allow us to > evolve Oak internal stuff (e.g. things used across Oak modules) freely > without having to worry about merges to branches messing up semantic > versioning. OTOH it would force us to keep externally facing public API / SPI > reasonably stable also across the branches. Furthermore such an approach > would send the right signal to Oak API / SPI consumers regarding the > stability assumptions they can make. > An external API / SPI having a (transitive) dependency on internals might be > troublesome. In doubt I would remove the export version here until we can > make reasonable guarantees (either through decoupling the code or stabilising > the dependencies). > I would start digging through the export version and prepare an initial > proposal for further discussion. > /cc [~frm], [~chetanm], [~mmarth] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3842) Adjust package export declarations
[ https://issues.apache.org/jira/browse/OAK-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091618#comment-15091618 ] Konrad Windszus commented on OAK-3842: -- Maybe then it would be a good idea to collect in a separate ticket what should end up in a dedicated API package in the mid term (because the classes are useful for other consumers than Oak itself) and use this ticket only for deciding which parts of the bundle should now be under semantic versioning rules (without any refactoring of classes/packages). > Adjust package export declarations > --- > > Key: OAK-3842 > URL: https://issues.apache.org/jira/browse/OAK-3842 > Project: Jackrabbit Oak > Issue Type: Task >Reporter: Michael Dürig >Assignee: Michael Dürig >Priority: Critical > Labels: api, modularization, technical_debt > Fix For: 1.4 > > > We need to adjust the package export declarations such that they become > manageable with our branch / release model. > See http://markmail.org/thread/5g3viq5pwtdryapr for discussion. > I propose to remove package export declarations from all packages that we > don't consider public API / SPI beyond Oak itself. This would allow us to > evolve Oak internal stuff (e.g. things used across Oak modules) freely > without having to worry about merges to branches messing up semantic > versioning. OTOH it would force us to keep externally facing public API / SPI > reasonably stable also across the branches. Furthermore such an approach > would send the right signal to Oak API / SPI consumers regarding the > stability assumptions they can make. > An external API / SPI having a (transitive) dependency on internals might be > troublesome. In doubt I would remove the export version here until we can > make reasonable guarantees (either through decoupling the code or stabilising > the dependencies). > I would start digging through the export version and prepare an initial > proposal for further discussion. 
> /cc [~frm], [~chetanm], [~mmarth] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3842) Adjust package export declarations
[ https://issues.apache.org/jira/browse/OAK-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091609#comment-15091609 ] Michael Dürig commented on OAK-3842: I'm aware of the {{NodeObserver}} situation. Unfortunately there is no easy way forward AFAICS. This is why I'm proposing to remove the package export declaration from the respective packages until we are in the position to make the associated guarantees. Leaving these packages managed will inflict more pain on consumers as the versions will most likely not be sufficiently stable (because of other changes in the same packages). > Adjust package export declarations > --- > > Key: OAK-3842 > URL: https://issues.apache.org/jira/browse/OAK-3842 > Project: Jackrabbit Oak > Issue Type: Task >Reporter: Michael Dürig >Assignee: Michael Dürig >Priority: Critical > Labels: api, modularization, technical_debt > Fix For: 1.4 > > > We need to adjust the package export declarations such that they become > manageable with our branch / release model. > See http://markmail.org/thread/5g3viq5pwtdryapr for discussion. > I propose to remove package export declarations from all packages that we > don't consider public API / SPI beyond Oak itself. This would allow us to > evolve Oak internal stuff (e.g. things used across Oak modules) freely > without having to worry about merges to branches messing up semantic > versioning. OTOH it would force us to keep externally facing public API / SPI > reasonably stable also across the branches. Furthermore such an approach > would send the right signal to Oak API / SPI consumers regarding the > stability assumptions they can make. > An external API / SPI having a (transitive) dependency on internals might be > troublesome. In doubt I would remove the export version here until we can > make reasonable guarantees (either through decoupling the code or stabilising > the dependencies). 
> I would start digging through the export version and prepare an initial > proposal for further discussion. > /cc [~frm], [~chetanm], [~mmarth] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2472) Add support for atomic counters on cluster solutions
[ https://issues.apache.org/jira/browse/OAK-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091588#comment-15091588 ] Davide Giannella commented on OAK-2472: --- Latest update from Friday evening. If I debug the UT, therefore allowing more time between each step, for a total of 4 updates (2 on each of the 2 cluster nodes), you can see that only 4 updates are issued; they get rescheduled a few times and complete successfully. The same code, under the same conditions but run without any interruption, gets into a sort-of loop, re-creating new tasks over and over. /cc [~mduerig], [~mreutegg] > Add support for atomic counters on cluster solutions > > > Key: OAK-2472 > URL: https://issues.apache.org/jira/browse/OAK-2472 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Affects Versions: 1.3.0 >Reporter: Davide Giannella >Assignee: Davide Giannella > Labels: scalability > Fix For: 1.4 > > Attachments: atomic-counter.md, oak-1452185608.log.gz, > oak-1452268140.log.gz > > > As of OAK-2220 we added support for atomic counters in a non-clustered > situation. > This ticket is about covering the clustered ones. -- This message was sent by Atlassian JIRA (v6.3.4#6332)