[jira] [Updated] (OAK-5923) Document S3 datastore
[ https://issues.apache.org/jira/browse/OAK-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Klimetschek updated OAK-5923:
---------------------------------------
    Description:

The S3 datastore is currently hardly documented.

The [generic blobstore documentation|http://jackrabbit.apache.org/oak/docs/plugins/blobstore.html] is very much focused on the internal class structures, but quite confusing for someone who wants to configure a specific datastore such as file and S3 (the only ones right now). S3 settings are not documented at all; the [config page|http://jackrabbit.apache.org/oak/docs/osgi_config.html#config-blobstore] only mentions the generic maxCachedBinarySize and cacheSizeInMB.

The best bet is the [Adobe AEM product documentation|https://docs.adobe.com/docs/en/aem/6-2/deploy/platform/data-store-config.html], but that is for an older version and a few things have changed since then. Specific items below. Some have been confusing people using oak-blob-cloud 1.5.15:
- "secret" property unclear (new)
- secretKey & accessKey can be omitted to leverage IAM roles (new)
- drop of proactiveCaching property (new)
- AWS bucket/region/etc. settings
- config options (timeout, retries, threads)
- understanding caching behavior and performance optimization
- shared vs. non-shared options
- migrating from a previous version, how to update the config
- requirements on the AWS (account) side

> Document S3 datastore
> ---------------------
>
>                 Key: OAK-5923
>                 URL: https://issues.apache.org/jira/browse/OAK-5923
>             Project: Jackrabbit Oak
>          Issue Type: Documentation
>          Components: blob
>            Reporter: Alexander Klimetschek

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
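For context on the items in the list above, this is roughly the shape of OSGi configuration people are asking for. The fragment below is only an unverified sketch: the PID and the exact property names and semantics should be checked against the oak-blob-cloud version in use, and all values are placeholders.

```
# Assumed PID: org.apache.jackrabbit.oak.plugins.blob.datastore.S3DataStore.config
accessKey="..."            # omit accessKey/secretKey entirely to leverage IAM roles
secretKey="..."
s3Bucket="my-bucket"
s3Region="us-east-1"
connectionTimeout="120000"
socketTimeout="120000"
maxConnections="40"        # connection-pool size
maxErrorRetry="10"         # retries on S3 errors
writeThreads="30"          # async upload threads
path="/mnt/datastore"      # local cache directory
cacheSize="68719476736"    # local cache size in bytes (64 GB here)
secret="..."               # the "secret" property the issue calls unclear
```

A documented example like this, with each property explained, would cover most of the confusion listed in the description.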
[jira] [Commented] (OAK-5923) Document S3 datastore
[ https://issues.apache.org/jira/browse/OAK-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15905907#comment-15905907 ]

Alexander Klimetschek commented on OAK-5923:
--------------------------------------------

/cc [~shgu...@adobe.com]

> Document S3 datastore
> ---------------------
>
>                 Key: OAK-5923
>                 URL: https://issues.apache.org/jira/browse/OAK-5923
>             Project: Jackrabbit Oak
>          Issue Type: Documentation
>          Components: blob
>            Reporter: Alexander Klimetschek
[jira] [Created] (OAK-5923) Document S3 datastore
Alexander Klimetschek created OAK-5923:
---------------------------------------

             Summary: Document S3 datastore
                 Key: OAK-5923
                 URL: https://issues.apache.org/jira/browse/OAK-5923
             Project: Jackrabbit Oak
          Issue Type: Documentation
          Components: blob
            Reporter: Alexander Klimetschek
[jira] [Created] (OAK-5922) Utils.abortingIterable should implement Closeable
Julian Reschke created OAK-5922:
--------------------------------

             Summary: Utils.abortingIterable should implement Closeable
                 Key: OAK-5922
                 URL: https://issues.apache.org/jira/browse/OAK-5922
             Project: Jackrabbit Oak
          Issue Type: Task
          Components: documentmk
            Reporter: Julian Reschke
            Priority: Minor

Wrapping a {{CloseableIterator}} with {{abortingIterable()}} causes it to lose its closeability, which might lead to hard-to-debug problems later on.
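The issue can be illustrated with a small sketch. The names below are illustrative, not Oak's actual Utils API: a wrapper that aborts iteration on a condition while also implementing Closeable, so the wrapped source's resources can still be released.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

// Illustrative sketch (not Oak's Utils.abortingIterable): the wrapper stays
// abortable AND closeable by implementing both Iterable and Closeable,
// delegating close() to the wrapped source when that source is Closeable.
class AbortingCloseableIterable<T> implements Iterable<T>, Closeable {
    private final Iterable<T> source;
    private final Predicate<T> abortWhen;

    AbortingCloseableIterable(Iterable<T> source, Predicate<T> abortWhen) {
        this.source = source;
        this.abortWhen = abortWhen;
    }

    @Override
    public Iterator<T> iterator() {
        final Iterator<T> it = source.iterator();
        return new Iterator<T>() {
            private T buffered;
            private boolean done;

            @Override
            public boolean hasNext() {
                if (done) return false;
                if (buffered != null) return true;
                if (!it.hasNext()) { done = true; return false; }
                T candidate = it.next();
                if (abortWhen.test(candidate)) { done = true; return false; }
                buffered = candidate;
                return true;
            }

            @Override
            public T next() {
                if (!hasNext()) throw new NoSuchElementException();
                T result = buffered;
                buffered = null;
                return result;
            }
        };
    }

    @Override
    public void close() throws IOException {
        // without this, wrapping would silently drop the source's closeability
        if (source instanceof Closeable) {
            ((Closeable) source).close();
        }
    }

    public static void main(String[] args) throws IOException {
        List<Integer> out = new java.util.ArrayList<>();
        try (AbortingCloseableIterable<Integer> a =
                 new AbortingCloseableIterable<>(Arrays.asList(1, 2, 3, 4), x -> x == 3)) {
            for (int x : a) out.add(x);
        }
        System.out.println(out);  // [1, 2] -- aborted before 3
    }
}
```

The point of the sketch is the last method: close() is what gets lost when the wrapper only implements Iterable.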
[jira] [Resolved] (OAK-5834) Remove the deprecated oak-segment module
[ https://issues.apache.org/jira/browse/OAK-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrei Dulceanu resolved OAK-5834.
----------------------------------
    Resolution: Fixed

Fixed at r1786327.

> Remove the deprecated oak-segment module
> ----------------------------------------
>
>                 Key: OAK-5834
>                 URL: https://issues.apache.org/jira/browse/OAK-5834
>             Project: Jackrabbit Oak
>          Issue Type: Task
>          Components: segmentmk
>            Reporter: Michael Dürig
>            Assignee: Andrei Dulceanu
>            Priority: Minor
>              Labels: deprecation, technical_debt
>             Fix For: 1.7.0
>
>         Attachments: OAK-5834-01.patch, OAK-5834-02.patch, OAK-5834-03.patch, OAK-5834-04.patch
>
> The {{oak-segment}} module has been deprecated for 1.6 with OAK-4247. We should remove it entirely now:
> * Remove the module
> * Remove fixtures and ITs pertaining to it
> * Remove references from documentation where not needed any more
>
> An open question is how we should deal with the tooling for {{oak-segment}}. Should we still maintain this in trunk and keep the required classes (which very much might be all) or should we maintain the tooling on the branches? What about new features in tooling?
[jira] [Commented] (OAK-5834) Remove the deprecated oak-segment module
[ https://issues.apache.org/jira/browse/OAK-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904943#comment-15904943 ]

Francesco Mari commented on OAK-5834:
-------------------------------------

LGTM

> Remove the deprecated oak-segment module
> ----------------------------------------
>
>                 Key: OAK-5834
>                 URL: https://issues.apache.org/jira/browse/OAK-5834
>             Project: Jackrabbit Oak
>          Issue Type: Task
>          Components: segmentmk
>            Reporter: Michael Dürig
>            Assignee: Andrei Dulceanu
[jira] [Updated] (OAK-5834) Remove the deprecated oak-segment module
[ https://issues.apache.org/jira/browse/OAK-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrei Dulceanu updated OAK-5834:
---------------------------------
    Attachment: OAK-5834-04.patch

I attached a new patch which includes:
* removal of the dependency between oak-http and oak-segment
* documentation update

Changes can be followed more easily at [1]. This concludes the work needed to solve this issue. [~frm], [~mduerig], can you take a look at it?

[1] https://github.com/dulceanu/jackrabbit-oak/commits/issues/OAK-5834

> Remove the deprecated oak-segment module
> ----------------------------------------
>
>                 Key: OAK-5834
>                 URL: https://issues.apache.org/jira/browse/OAK-5834
>             Project: Jackrabbit Oak
>          Issue Type: Task
>          Components: segmentmk
>            Reporter: Michael Dürig
>            Assignee: Andrei Dulceanu
[jira] [Commented] (OAK-4780) VersionGarbageCollector should be able to run incrementally
[ https://issues.apache.org/jira/browse/OAK-4780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904841#comment-15904841 ]

Stefan Eissing commented on OAK-4780:
-------------------------------------

Result from a run of the current version against a large customer database:

h4. Run Timings

{code}
Started:  2017-03-09 18:39:22.787
Ended:    2017-03-10 06:56:01.721
Duration: 12.28 hours
Stats: VersionGCStats{ignoredGCDueToCheckPoint=true, canceled=true, deletedDocGCCount=2011642 (of which leaf: 687880), updateResurrectedGCCount=25229600, splitDocGCCount=92, intermediateSplitDocGCCount=0, iterationCount=60, timeActive=12,28 h, timeToCollectDeletedDocs=0,000 ns, timeToCheckDeletedDocs=0,000 ns, timeToSortDocIds=0,000 ns, timeTakenToUpdateResurrectedDocs=0,000 ns, timeTakenToDeleteDeletedDocs=0,000 ns, timeTakenToCollectAndDeleteSplitDocs=0,000 ns}
{code}

_(stopwatches are not cumulative over runs atm, so they show 0 for the total)_

The RGC run was initiated by:

{code}
java -jar target/oak-run-1.8-SNAPSHOT.jar revisions mongodb://localhost/ collect
{code}

It performed 60 iterations over almost 12 hours (the database has ~250 million nodes, of which 11% had {{_deletedOnce}} set), starting with:

{code}
18:39:22.789 INFO  start 1. run (avg duration 0.0 sec)
18:39:22.792 DEBUG No lastOldestTimestamp found, querying for the oldest deletedOnce candidate
18:39:37.861 DEBUG lastOldestTimestamp found: 2016-10-20 17:47:49.999
18:39:50.491 DEBUG deletedOnce candidates: 27241631 found, 95000 preferred, scope now [2016-10-20 17:47:49.999, 2016-10-21 05:31:15.835]
...
18:39:54.469 DEBUG successful run using 0.0% of limit, raising recommended interval to 63308 seconds
18:39:54.471 INFO  Revision garbage collection finished in 3,973 s.
VersionGCStats{ignoredGCDueToCheckPoint=false, canceled=false, deletedDocGCCount=0 (of which leaf: 0), updateResurrectedGCCount=9, splitDocGCCount=0, intermediateSplitDocGCCount=0, iterationCount=0, timeActive=31,68 s, timeToCollectDeletedDocs=3,935 s, timeToCheckDeletedDocs=2,953 ms, timeToSortDocIds=0,000 ns, timeTakenToUpdateResurrectedDocs=19,46 ms, timeTakenToDeleteDeletedDocs=0,000 ns, timeTakenToCollectAndDeleteSplitDocs=13,27 ms}
{code}

until the last one:

{code}
06:56:01.717 INFO  start 60. run (avg duration 749.118 sec)
06:56:01.717 DEBUG previous runs recommend a 68783 sec duration, scope now [2017-01-29 21:09:00.734, 2017-01-30 16:15:24.416]
06:56:01.717 WARN  Ignoring RGC run because a valid checkpoint [revision: "r159ebd85640-0-4", clusterId: 4, time: "2017-01-29 21:09:00.736"] exists inside minimal scope [2017-01-29 21:09:00.734, 2017-01-29 21:10:00.734].
{code}

h4. Time Interval Handling

The runs enforced the default collection limit (10), so no disk file sorting was necessary. Because the delete candidates were not evenly distributed, it aborted 15 runs and halved the time interval used. (Those runs did still delete leaf nodes and reset _deletedOnce for visible documents encountered, so they are not "lost".)

The time interval was raised again by a factor of 1.5 whenever the collected nodes (non-leaf) stayed below 66% of the limit. This happened initially on every run, as those runs encountered only nodes where _deletedOnce needed to be reset. Intervals grew from ~9 hours to 13 days, then needed to shrink to clean up the lump of really deleted non-leaf docs, using 13.25 minutes as the shortest scope for a run. In detail:

{code}
18:39:54.469 used 0.0% of limit, rec. interval of 63308 seconds
18:39:54.485 used 0.0% of limit, rec. interval of 94963 seconds
18:39:54.497 used 0.0% of limit, rec. interval of 142444 seconds
18:39:54.509 used 0.0% of limit, rec. interval of 213667 seconds
18:44:01.693 used 0.0% of limit, rec. interval of 320500 seconds
18:44:01.924 used 0.0% of limit, rec. interval of 480750 seconds
18:52:36.998 used 0.0% of limit, rec. interval of 721126 seconds
19:11:03.765 used 0.0% of limit, rec. interval of 1081689 seconds
00:27:51.562 used 0.0% of limit, rec. interval of 1622534 seconds
01:08:25.855 used 0.0% of limit, rec. interval of 2433801 seconds
03:41:52.568 used 0.0% of limit, rec. interval of 3650701 seconds
06:37:03.949 limit exceeded, reducing interval to 762539 seconds
06:41:30.360 used 0.0% of limit, rec. interval of 1143809 seconds
06:42:27.120 limit exceeded, reducing interval to 381269 seconds
06:42:48.380 used 0.0% of limit, rec. interval of 571904 seconds
06:42:51.860 limit exceeded, reducing interval to 190634 seconds
06:43:06.386 limit exceeded, reducing interval to 95317 seconds
06:43:21.085 used 0.0% of limit, rec. interval of 142976 seconds
06:43:23.816 limit exceeded, reducing interval to 71488 seconds
06:43:40.364 used 7.8% of limit, rec. interval of 107232 seconds
06:43:43.166 limit exceeded, reducing interval to 53616 seconds
06:43:59.157 limit exceeded, reducing interval to 13404 seconds
06:43:55.608 limit exceeded,
{code}
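The grow/shrink behavior visible in the log can be sketched as follows. This is an assumed simplification of the incremental RGC interval tuning, not the actual Oak code: widen the collection window by a factor of 1.5 after a run that collects less than 66% of the limit, halve it when the limit is exceeded.

```java
// Assumed simplification of the interval tuning seen in the log above:
// - a run well under the delete limit widens the next scope by 1.5x
//   ("raising recommended interval")
// - a run that exceeds the limit halves it
//   ("limit exceeded, reducing interval")
class GcIntervalTuner {
    private long intervalSecs;
    private final long limit;

    GcIntervalTuner(long initialIntervalSecs, long limit) {
        this.intervalSecs = initialIntervalSecs;
        this.limit = limit;
    }

    /** Feed the number of collected candidates; returns the next run's interval. */
    long onRunFinished(long collected) {
        if (collected > limit) {
            intervalSecs = intervalSecs / 2;
        } else if (collected < limit * 2 / 3) {
            intervalSecs = intervalSecs * 3 / 2;
        }
        return intervalSecs;
    }

    public static void main(String[] args) {
        GcIntervalTuner tuner = new GcIntervalTuner(63308, 100_000);
        System.out.println(tuner.onRunFinished(0));       // 94962 (log shows 94963; rounding differs)
        System.out.println(tuner.onRunFinished(0));       // 142443
        System.out.println(tuner.onRunFinished(120_000)); // limit exceeded: 71221
    }
}
```

A run that lands between 66% and 100% of the limit leaves the interval unchanged, which matches how the intervals in the log stabilize once candidates become dense.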
[jira] [Reopened] (OAK-5921) Make import org.apache.log4j optional
[ https://issues.apache.org/jira/browse/OAK-5921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Parvulescu reopened OAK-5921:
----------------------------------

Apologies, spoke too soon; this is about the other log4j package.

> Make import org.apache.log4j optional
> -------------------------------------
>
>                 Key: OAK-5921
>                 URL: https://issues.apache.org/jira/browse/OAK-5921
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: segment-tar
>    Affects Versions: 1.6.1
>            Reporter: Oliver Lietz
>         Attachments: OAK-5921.patch
>
> {{oak-segment-tar}} imports {{org.apache.log4j}}. I didn't investigate why {{oak-segment-tar}} is importing {{org.apache.log4j}} at all (no direct use), but that import could be optional (did some tests in Sling).
[jira] [Resolved] (OAK-5921) Make import org.apache.log4j optional
[ https://issues.apache.org/jira/browse/OAK-5921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Parvulescu resolved OAK-5921.
----------------------------------
    Resolution: Duplicate

Dupe of OAK-4829; this is done on trunk already, please ask for a backport to 1.6 on that issue.

> Make import org.apache.log4j optional
> -------------------------------------
>
>                 Key: OAK-5921
>                 URL: https://issues.apache.org/jira/browse/OAK-5921
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: segment-tar
>            Reporter: Oliver Lietz
[jira] [Closed] (OAK-5641) Update AWS sdk dependency to latest version in oak-blob-cloud
[ https://issues.apache.org/jira/browse/OAK-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Davide Giannella closed OAK-5641.
---------------------------------

Bulk close 1.4.14

> Update AWS sdk dependency to latest version in oak-blob-cloud
> -------------------------------------------------------------
>
>                 Key: OAK-5641
>                 URL: https://issues.apache.org/jira/browse/OAK-5641
>             Project: Jackrabbit Oak
>          Issue Type: Task
>          Components: blob
>            Reporter: Amit Jain
>            Assignee: Amit Jain
>              Labels: candidate_oak_1_6
>             Fix For: 1.4.14, 1.7.0, 1.8
[jira] [Updated] (OAK-5921) Make import org.apache.log4j optional
[ https://issues.apache.org/jira/browse/OAK-5921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oliver Lietz updated OAK-5921:
------------------------------
    Attachment: OAK-5921.patch

> Make import org.apache.log4j optional
> -------------------------------------
>
>                 Key: OAK-5921
>                 URL: https://issues.apache.org/jira/browse/OAK-5921
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: segment-tar
>            Reporter: Oliver Lietz
[jira] [Created] (OAK-5921) Make import org.apache.log4j optional
Oliver Lietz created OAK-5921:
------------------------------

             Summary: Make import org.apache.log4j optional
                 Key: OAK-5921
                 URL: https://issues.apache.org/jira/browse/OAK-5921
             Project: Jackrabbit Oak
          Issue Type: Improvement
          Components: segment-tar
    Affects Versions: 1.6.1
            Reporter: Oliver Lietz
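For context, making an OSGi import optional is typically a one-line bnd instruction in the bundle's maven-bundle-plugin configuration. The attached OAK-5921.patch is not shown in this thread, so the fragment below is only a hypothetical illustration of the technique, not the actual patch:

```
<!-- Hypothetical maven-bundle-plugin instruction: let the bundle resolve
     even when no org.apache.log4j provider is installed. -->
<Import-Package>
    org.apache.log4j;resolution:=optional,
    *
</Import-Package>
```

With `resolution:=optional`, the framework wires the package if a provider is present and simply skips it otherwise, which matches the "no direct use" observation above.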
[jira] [Assigned] (OAK-5920) Checkpoint migration will fail if the MissingBlobStore is used
[ https://issues.apache.org/jira/browse/OAK-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tomek Rękawek reassigned OAK-5920:
----------------------------------
    Assignee: Tomek Rękawek

> Checkpoint migration will fail if the MissingBlobStore is used
> --------------------------------------------------------------
>
>                 Key: OAK-5920
>                 URL: https://issues.apache.org/jira/browse/OAK-5920
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: upgrade
>    Affects Versions: 1.6.1
>            Reporter: Tomek Rękawek
>            Assignee: Tomek Rękawek
>             Fix For: 1.6.2
>
> It seems that the checkpoint migration doesn't work with the missing blob store:
> {noformat}
> 09.03.2017 07:44:22.777 INFO o.a.j.o.u.RepositorySidegrade: Copying node #2638: /libs/fd/af/components/panel/cq:styleConfig/items/wizardPanel/items/wizardPanelScrollers
> 09.03.2017 07:44:22.813 INFO o.a.j.o.u.RepositorySidegrade: Copying node #2639: /apps/acs-commons/components/content/sharethis-buttons/dialog/items/items
> 09.03.2017 07:44:31.805 INFO o.a.j.o.s.f.FileStore: TarMK closed: /mnt/crx/publish/crx-quickstart/repository-segment-tar-20170309-072655/segmentstore
> 09.03.2017 07:44:32.051 INFO o.a.j.o.p.s.f.FileStore: TarMK closed: /mnt/crx/publish/crx-quickstart/repository/segmentstore
> java.lang.reflect.InvocationTargetException
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at com.adobe.granite.quickstart.base.impl.Main.<init>(Main.java:881)
> 	at com.adobe.granite.quickstart.base.impl.Main.main(Main.java:959)
> Caused by: java.lang.RuntimeException: javax.jcr.RepositoryException: Failed to copy content
> 	at com.google.common.io.Closer.rethrow(Closer.java:149)
> 	at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:81)
> 	at com.adobe.granite.crx2oak.engine.MigrationEngine$2.doMigration(MigrationEngine.java:66)
> 	at com.adobe.granite.crx2oak.engine.MigrationEngine.process(MigrationEngine.java:91)
> 	at com.adobe.granite.crx2oak.pipeline.Pipeline.run(Pipeline.java:103)
> 	at com.adobe.granite.crx2oak.CRX2Oak.run(CRX2Oak.java:66)
> 	at com.adobe.granite.crx2oak.CRX2Oak.main(CRX2Oak.java:51)
> 	... 6 more
> Caused by: javax.jcr.RepositoryException: Failed to copy content
> 	at org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.copy(RepositorySidegrade.java:286)
> 	at org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.copy(RepositorySidegrade.java:242)
> 	at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.sidegrade(OakUpgrade.java:92)
> 	at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:78)
> 	... 11 more
> Caused by: java.lang.UnsupportedOperationException
> 	at org.apache.jackrabbit.oak.upgrade.cli.blob.MissingBlobStore.getInputStream(MissingBlobStore.java:62)
> 	at org.apache.jackrabbit.oak.plugins.blob.BlobStoreBlob.getNewStream(BlobStoreBlob.java:47)
> 	at org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:276)
> 	at org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:86)
> 	at org.apache.jackrabbit.oak.plugins.memory.AbstractBlob$1.openStream(AbstractBlob.java:44)
> 	at com.google.common.io.ByteSource.contentEquals(ByteSource.java:344)
> 	at org.apache.jackrabbit.oak.plugins.memory.AbstractBlob.equal(AbstractBlob.java:67)
> 	at org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.equals(SegmentBlob.java:227)
> 	at com.google.common.base.Objects.equal(Objects.java:60)
> 	at org.apache.jackrabbit.oak.plugins.memory.AbstractPropertyState.equal(AbstractPropertyState.java:53)
> 	at org.apache.jackrabbit.oak.plugins.segment.SegmentPropertyState.equals(SegmentPropertyState.java:242)
> 	at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareProperties(SegmentNodeState.java:617)
> 	at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:511)
> 	at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
> 	at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:551)
> 	at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
> 	at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:551)
> 	at ...
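The root cause in the trace can be reproduced in miniature. The sketch below uses illustrative names, not Oak's classes: blob equality short-circuits when identifiers match, but otherwise falls back to content comparison, which must open streams; a placeholder store that cannot serve streams then throws the same UnsupportedOperationException seen above.

```java
import java.io.IOException;
import java.io.InputStream;

// Illustrative miniature of the failure mode (names are not Oak's):
// content comparison of two blobs requires streams from the blob store,
// which a "missing" placeholder store refuses to provide.
class MissingBlob {
    final String id;
    MissingBlob(String id) { this.id = id; }

    InputStream openStream() {
        // mirrors MissingBlobStore.getInputStream in the stack trace
        throw new UnsupportedOperationException();
    }

    static boolean contentEquals(MissingBlob a, MissingBlob b) {
        if (a.id.equals(b.id)) {
            return true;  // cheap path: same id, no stream needed
        }
        // fallback path: byte-by-byte comparison touches the blob store
        try (InputStream sa = a.openStream(); InputStream sb = b.openStream()) {
            int x, y;
            do {
                x = sa.read();
                y = sb.read();
                if (x != y) return false;
            } while (x != -1);
            return true;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // same id: compares fine without ever opening a stream
        System.out.println(contentEquals(new MissingBlob("a"), new MissingBlob("a")));  // true
        try {
            // differing ids force the stream path the missing store cannot serve
            contentEquals(new MissingBlob("a"), new MissingBlob("b"));
        } catch (UnsupportedOperationException e) {
            System.out.println("UnsupportedOperationException, as in the sidegrade trace");
        }
    }
}
```

This is why the checkpoint diff (compareAgainstBaseState) works only as long as every blob comparison stays on the identifier path.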
[jira] [Updated] (OAK-5920) Checkpoint migration will fail if the MissingBlobStore is used
[ https://issues.apache.org/jira/browse/OAK-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tomek Rękawek updated OAK-5920:
-------------------------------
    Fix Version/s: 1.6.2

> Checkpoint migration will fail if the MissingBlobStore is used
> --------------------------------------------------------------
>
>                 Key: OAK-5920
>                 URL: https://issues.apache.org/jira/browse/OAK-5920
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: upgrade
>    Affects Versions: 1.6.1
>            Reporter: Tomek Rękawek
>            Assignee: Tomek Rękawek
>             Fix For: 1.6.2
[jira] [Updated] (OAK-5920) Checkpoint migration will fail if the MissingBlobStore is used
[ https://issues.apache.org/jira/browse/OAK-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tomek Rękawek updated OAK-5920:
-------------------------------
    Summary: Checkpoint migration will fail if the MissingBlobStore is used  (was: Checkpoint migration shouldn't be enabled if the MissingBlobStore is used)

> Checkpoint migration will fail if the MissingBlobStore is used
> --------------------------------------------------------------
>
>                 Key: OAK-5920
>                 URL: https://issues.apache.org/jira/browse/OAK-5920
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: upgrade
>    Affects Versions: 1.6.1
>            Reporter: Tomek Rękawek
[jira] [Created] (OAK-5920) Checkpoint migration shouldn't be enabled if the MissingBlobStore is used
Tomek Rękawek created OAK-5920:
----------------------------------

             Summary: Checkpoint migration shouldn't be enabled if the MissingBlobStore is used
                 Key: OAK-5920
                 URL: https://issues.apache.org/jira/browse/OAK-5920
             Project: Jackrabbit Oak
          Issue Type: Bug
          Components: upgrade
    Affects Versions: 1.6.1
            Reporter: Tomek Rękawek

It seems that the checkpoint migration doesn't work with the missing blob store:

{noformat}
09.03.2017 07:44:22.777 INFO o.a.j.o.u.RepositorySidegrade: Copying node #2638: /libs/fd/af/components/panel/cq:styleConfig/items/wizardPanel/items/wizardPanelScrollers
09.03.2017 07:44:22.813 INFO o.a.j.o.u.RepositorySidegrade: Copying node #2639: /apps/acs-commons/components/content/sharethis-buttons/dialog/items/items
09.03.2017 07:44:31.805 INFO o.a.j.o.s.f.FileStore: TarMK closed: /mnt/crx/publish/crx-quickstart/repository-segment-tar-20170309-072655/segmentstore
09.03.2017 07:44:32.051 INFO o.a.j.o.p.s.f.FileStore: TarMK closed: /mnt/crx/publish/crx-quickstart/repository/segmentstore
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.adobe.granite.quickstart.base.impl.Main.(Main.java:881)
	at com.adobe.granite.quickstart.base.impl.Main.main(Main.java:959)
Caused by: java.lang.RuntimeException: javax.jcr.RepositoryException: Failed to copy content
	at com.google.common.io.Closer.rethrow(Closer.java:149)
	at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:81)
	at com.adobe.granite.crx2oak.engine.MigrationEngine$2.doMigration(MigrationEngine.java:66)
	at com.adobe.granite.crx2oak.engine.MigrationEngine.process(MigrationEngine.java:91)
	at com.adobe.granite.crx2oak.pipeline.Pipeline.run(Pipeline.java:103)
	at com.adobe.granite.crx2oak.CRX2Oak.run(CRX2Oak.java:66)
	at com.adobe.granite.crx2oak.CRX2Oak.main(CRX2Oak.java:51)
	... 6 more
Caused by: javax.jcr.RepositoryException: Failed to copy content
	at org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.copy(RepositorySidegrade.java:286)
	at org.apache.jackrabbit.oak.upgrade.RepositorySidegrade.copy(RepositorySidegrade.java:242)
	at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.sidegrade(OakUpgrade.java:92)
	at org.apache.jackrabbit.oak.upgrade.cli.OakUpgrade.migrate(OakUpgrade.java:78)
	... 11 more
Caused by: java.lang.UnsupportedOperationException
	at org.apache.jackrabbit.oak.upgrade.cli.blob.MissingBlobStore.getInputStream(MissingBlobStore.java:62)
	at org.apache.jackrabbit.oak.plugins.blob.BlobStoreBlob.getNewStream(BlobStoreBlob.java:47)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:276)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.getNewStream(SegmentBlob.java:86)
	at org.apache.jackrabbit.oak.plugins.memory.AbstractBlob$1.openStream(AbstractBlob.java:44)
	at com.google.common.io.ByteSource.contentEquals(ByteSource.java:344)
	at org.apache.jackrabbit.oak.plugins.memory.AbstractBlob.equal(AbstractBlob.java:67)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentBlob.equals(SegmentBlob.java:227)
	at com.google.common.base.Objects.equal(Objects.java:60)
	at org.apache.jackrabbit.oak.plugins.memory.AbstractPropertyState.equal(AbstractPropertyState.java:53)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentPropertyState.equals(SegmentPropertyState.java:242)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareProperties(SegmentNodeState.java:617)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:511)
	at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:551)
	at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:551)
	at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
	at org.apache.jackrabbit.oak.plugins.segment.MapRecord.compare(MapRecord.java:456)
	at org.apache.jackrabbit.oak.plugins.segment.SegmentNodeState.compareAgainstBaseState(SegmentNodeState.java:604)
	at org.apache.jackrabbit.oak.spi.state.ApplyDiff.childNodeChanged(ApplyDiff.java:87)
	at
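The root cause at the bottom of the trace can be illustrated in miniature. The sketch below uses hypothetical stand-in types, not Oak's real BlobStore/SegmentBlob classes: a "missing" blob store accepts blob IDs but throws UnsupportedOperationException from getInputStream, so any operation that needs the actual bytes, such as the content-equality check the migration triggers via ByteSource.contentEquals, fails and surfaces as "Failed to copy content".

```java
import java.io.InputStream;

// Toy model of the failure above. Hypothetical stand-ins, NOT Oak's classes.
public class MissingBlobStoreSketch {

    public interface BlobStore {
        InputStream getInputStream(String blobId);
    }

    // Stand-in for oak-upgrade's MissingBlobStore: blob IDs resolve, but the
    // bytes are deliberately unavailable.
    public static class MissingBlobStore implements BlobStore {
        @Override
        public InputStream getInputStream(String blobId) {
            throw new UnsupportedOperationException(
                    "blob " + blobId + " is not available");
        }
    }

    // Content equality forces both streams open -- analogous to what
    // SegmentBlob.equals does via Guava's ByteSource.contentEquals.
    public static boolean contentEquals(BlobStore store, String a, String b) {
        try (InputStream left = store.getInputStream(a);
             InputStream right = store.getInputStream(b)) {
            int x;
            int y;
            do {
                x = left.read();
                y = right.read();
                if (x != y) {
                    return false;
                }
            } while (x != -1);
            return true;
        } catch (Exception e) {
            // Mirrors the "Failed to copy content" wrapping in the trace.
            throw new RuntimeException("Failed to copy content", e);
        }
    }

    public static void main(String[] args) {
        try {
            contentEquals(new MissingBlobStore(), "blob-1", "blob-2");
        } catch (RuntimeException e) {
            // The cause is the UnsupportedOperationException from the store.
            System.out.println(e.getMessage() + " <- "
                    + e.getCause().getClass().getSimpleName());
        }
    }
}
```

This is why the updated summary reads "will fail" rather than "shouldn't be enabled": as long as the migration compares blob content, a store that cannot serve streams makes the checkpoint diff blow up partway through.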
[jira] [Updated] (OAK-5919) Discrepancy between addAccessControlEntry and clear on versioned node
[ https://issues.apache.org/jira/browse/OAK-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marco Piovesana updated OAK-5919:
---------------------------------
    Description: 
I was trying to change the permissions on a *versioned* node and I found two things that I don't quite understand:

1) If I try to do an {{AccessControlUtils.addAccessControlEntry(...)}} I get an error if the node is not checked out (which seems consistent with what is written in [JCR-1639|https://issues.apache.org/jira/browse/JCR-1639]). However, I can do an {{AccessControlUtils.clear(...)}} without any error. Why? Aren't they both changing the ACL?

2) When I do {{AccessControlUtils.addAccessControlEntry(...)}} the error I receive is: _OakVersion0001: Cannot add property jcr:mixinTypes on checked in node_. Why should a change to the ACL modify _jcr:mixinTypes_? My node was already versionable.

One last question: if I want to change the permissions without having to check out the node, what do I have to do? (I can do that by setting _jcr:mixinTypes_ to _IGNORE_, but I don't think that is the right way... or is it?)

  was:
I was trying to change the permissions on a *versioned* node and i found two things that don't quite understand: 1) If i try to do an {{AccessControlUtils.addAccessControlEntry(...)}} i got an error if the node in not checked-out (and this seems consistent with what written on [JCR-1639|https://issues.apache.org/jira/browse/JCR-1639]). However I can do an {{AccessControlUtils.clear(...)}} without any error. Why? Aren't they both changing the ACL? 2) When I do {{AccessControlUtils.addAccessControlEntry(...)}} the error that i receive is: _OakVersion0001: Cannot add property jcr:mixinTypes on checked in node_. Why a change on the ACL should change the _jcr:mixinTypes_? My node was already versionable. One last question: if I want to change the permissions without having to check-out the node what I have to do? (I can do that by setting the _jcr:mixinTypes_ to _IGNORE_ but i don't think is the right way... or not?)

> Discrepancy between addAccessControlEntry and clear on versioned node
> ---------------------------------------------------------------------
>
>                 Key: OAK-5919
>                 URL: https://issues.apache.org/jira/browse/OAK-5919
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>    Affects Versions: 1.6.0
>            Reporter: Marco Piovesana
>
> I was trying to change the permissions on a *versioned* node and I found two things that I don't quite understand:
> 1) If I try to do an {{AccessControlUtils.addAccessControlEntry(...)}} I get an error if the node is not checked out (which seems consistent with what is written in [JCR-1639|https://issues.apache.org/jira/browse/JCR-1639]). However, I can do an {{AccessControlUtils.clear(...)}} without any error. Why? Aren't they both changing the ACL?
> 2) When I do {{AccessControlUtils.addAccessControlEntry(...)}} the error I receive is: _OakVersion0001: Cannot add property jcr:mixinTypes on checked in node_. Why should a change to the ACL modify _jcr:mixinTypes_? My node was already versionable.
> One last question: if I want to change the permissions without having to check out the node, what do I have to do? (I can do that by setting _jcr:mixinTypes_ to _IGNORE_, but I don't think that is the right way... or is it?)

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[jira] [Created] (OAK-5919) Discrepancy between addAccessControlEntry and clear on versioned node
Marco Piovesana created OAK-5919:
------------------------------------

             Summary: Discrepancy between addAccessControlEntry and clear on versioned node
                 Key: OAK-5919
                 URL: https://issues.apache.org/jira/browse/OAK-5919
             Project: Jackrabbit Oak
          Issue Type: Bug
    Affects Versions: 1.6.0
            Reporter: Marco Piovesana

I was trying to change the permissions on a *versioned* node and I found two things that I don't quite understand:

1) If I try to do an {{AccessControlUtils.addAccessControlEntry(...)}} I get an error if the node is not checked out (which seems consistent with what is written in [JCR-1639|https://issues.apache.org/jira/browse/JCR-1639]). However, I can do an {{AccessControlUtils.clear(...)}} without any error. Why? Aren't they both changing the ACL?

2) When I do {{AccessControlUtils.addAccessControlEntry(...)}} the error I receive is: _OakVersion0001: Cannot add property jcr:mixinTypes on checked in node_. Why should a change to the ACL modify _jcr:mixinTypes_? My node was already versionable.

One last question: if I want to change the permissions without having to check out the node, what do I have to do? (I can do that by setting _jcr:mixinTypes_ to _IGNORE_, but I don't think that is the right way... or is it?)

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
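On the reporter's last question: the conventional JCR answer is to check the node out, apply the ACL change, and check it back in. The write fails in the first place because a checked-in node rejects item modifications unless the affected item's on-parent-version (OPV) setting is IGNORE, which is also why flipping _jcr:mixinTypes_ to IGNORE sidesteps the error. The following is a self-contained toy model of that rule; the classes are hypothetical, not the real JCR/Oak API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of JCR's checked-in write protection (hypothetical classes, NOT
// the real API): a property write on a checked-in node is rejected unless
// that property's on-parent-version (OPV) setting is IGNORE. This mirrors
// the "Cannot add property jcr:mixinTypes on checked in node" error above.
public class CheckedInNodeSketch {

    public enum Opv { COPY, IGNORE }

    public static class Node {
        private boolean checkedOut;
        private final Map<String, String> properties = new HashMap<>();
        private final Map<String, Opv> opvSettings = new HashMap<>();

        public void declareProperty(String name, Opv setting) {
            opvSettings.put(name, setting);
        }

        public void checkout() { checkedOut = true; }

        public void checkin() { checkedOut = false; }

        public void setProperty(String name, String value) {
            if (!checkedOut
                    && opvSettings.getOrDefault(name, Opv.COPY) != Opv.IGNORE) {
                throw new IllegalStateException(
                        "Cannot add property " + name + " on checked in node");
            }
            properties.put(name, value);
        }

        public String getProperty(String name) {
            return properties.get(name);
        }
    }

    public static void main(String[] args) {
        Node node = new Node();
        node.declareProperty("jcr:mixinTypes", Opv.COPY);

        // Writing while checked in fails, as in the report.
        try {
            node.setProperty("jcr:mixinTypes", "rep:AccessControllable");
        } catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }

        // The usual workaround: check out, write, check back in.
        node.checkout();
        node.setProperty("jcr:mixinTypes", "rep:AccessControllable");
        node.checkin();

        // The reporter's workaround: with OPV=IGNORE the write succeeds even
        // while checked in, at the cost of the value not being versioned.
        node.declareProperty("jcr:mixinTypes", Opv.IGNORE);
        node.setProperty("jcr:mixinTypes", "rep:AccessControllable");
    }
}
```

In the real API the first workaround corresponds to calling {{VersionManager.checkout(path)}} before the ACL change and {{checkin(path)}} afterwards. One plausible (unverified here) explanation for the addAccessControlEntry/clear discrepancy: the first {{addAccessControlEntry(...)}} on a node must also add the access-control mixin to _jcr:mixinTypes_, tripping the checked-in guard, while {{clear(...)}} on a node without an existing policy has nothing to write.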