[jira] [Resolved] (OAK-5095) Improve normalization of configured path in AbstractSharedCachingDataStore
[ https://issues.apache.org/jira/browse/OAK-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amit Jain resolved OAK-5095. Resolution: Fixed On trunk http://svn.apache.org/viewvc?rev=1769246&view=rev > Improve normalization of configured path in AbstractSharedCachingDataStore > -- > > Key: OAK-5095 > URL: https://issues.apache.org/jira/browse/OAK-5095 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: blob >Reporter: Amit Jain >Assignee: Amit Jain >Priority: Minor > Fix For: 1.5.14 > > > Improve normalization of paths to better support relative paths. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-5095) Improve normalization of configured path in AbstractSharedCachingDataStore
Amit Jain created OAK-5095: -- Summary: Improve normalization of configured path in AbstractSharedCachingDataStore Key: OAK-5095 URL: https://issues.apache.org/jira/browse/OAK-5095 Project: Jackrabbit Oak Issue Type: Technical task Components: blob Reporter: Amit Jain Assignee: Amit Jain Priority: Minor Fix For: 1.5.14 Improve normalization of paths to better support relative paths. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
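The normalization improvement OAK-5095 describes could be sketched with {{java.nio.file.Path#normalize}}, which collapses redundant {{.}} and {{..}} segments in a configured path. This is a minimal, hypothetical example, not the actual patch; the class and method names are illustrative:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch (not the actual OAK-5095 change): normalize a
// configured datastore path so that redundant "." and ".." segments
// collapse before the directory is used. Relative paths stay relative.
class PathNormalizer {

    static String normalize(String configuredPath) {
        Path p = Paths.get(configuredPath).normalize();
        return p.toString();
    }
}
```

With this sketch, a configured path like {{repo/./datastore}} and {{repo/sub/../datastore}} both resolve to {{repo/datastore}}.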
[jira] [Updated] (OAK-4837) Improved caching for DataStore
[ https://issues.apache.org/jira/browse/OAK-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amit Jain updated OAK-4837: --- Fix Version/s: (was: 1.5.14) > Improved caching for DataStore > -- > > Key: OAK-4837 > URL: https://issues.apache.org/jira/browse/OAK-4837 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: blob >Reporter: Amit Jain >Assignee: Amit Jain > Labels: performance > Fix For: 1.6 > > Attachments: FileReadBenchmark.png, FileReadBenchmark_0Cache.png, > FileWriteBenchmark.png > > > The current CachingDataStore implementation used with S3DataStore has certain > problems: > * Lack of stats to show hit rates/miss rates for files being requested for > download > * Lack of stats for async uploads > * CachingDataStore starts proactively downloading files in the background > when a call to {{getRecord}} is made. > * Async upload functionality leaks into Backend implementations and LocalCache > classes. > * The call to {{DataStore#getRecord()}} makes multiple calls to the > backend, which is problematic for S3 (i.e. when not being served by the cache) > * There is some functionality which is not used with Oak, like the length cache, > sync/async touch etc., which can be removed and the code simplified. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
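The missing hit-rate/miss-rate stats mentioned in the first bullet could be tracked with a pair of counters. A minimal sketch under stated assumptions; the class and method names are hypothetical and this is not Oak's actual DataStore stats API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the download hit/miss accounting OAK-4837 asks
// for; not the actual Oak implementation.
class CacheStatsSketch {
    private final AtomicLong hits = new AtomicLong();
    private final AtomicLong misses = new AtomicLong();

    void recordHit()  { hits.incrementAndGet(); }
    void recordMiss() { misses.incrementAndGet(); }

    // Fraction of requests served from the local cache; reported as 1.0
    // before any request has been seen (matching the common convention
    // used e.g. by Guava's CacheStats).
    double hitRate() {
        long h = hits.get();
        long total = h + misses.get();
        return total == 0 ? 1.0 : (double) h / total;
    }
}
```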
[jira] [Updated] (OAK-2821) PersistentCache not used for RDBBlobStore
[ https://issues.apache.org/jira/browse/OAK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Marth updated OAK-2821: --- Issue Type: Improvement (was: Bug) > PersistentCache not used for RDBBlobStore > - > > Key: OAK-2821 > URL: https://issues.apache.org/jira/browse/OAK-2821 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: rdbmk >Affects Versions: 1.2.12, 1.0.28, 1.4.0 >Reporter: Julian Reschke >Assignee: Thomas Mueller >Priority: Minor > Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4 > Fix For: 1.8 > > Attachments: OAK-2821.diff > > > DocumentMK is currently inconsistent with respect to the use of the PersistentCache > for BlobStore. It is used for Mongo, but not for RDB. We should be consistent > here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3402) Multiplexing NodeStore support in Oak layer
[ https://issues.apache.org/jira/browse/OAK-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-3402: --- Component/s: (was: segment-tar) (was: documentmk) core > Multiplexing NodeStore support in Oak layer > --- > > Key: OAK-3402 > URL: https://issues.apache.org/jira/browse/OAK-3402 > Project: Jackrabbit Oak > Issue Type: Epic > Components: core >Reporter: Chetan Mehrotra >Assignee: Chetan Mehrotra > Labels: multiplexing > > Supporting a multiplexing repository would have an impact on various places in Oak's > design. There are various sub components in Oak which maintain their own > storage built on top of the NodeStore. E.g. indexes are stored within the > NodeStore, and permissions are also stored within the NodeStore. Adding multiplexing > support would impact such stores in the following ways. > The most basic application of multiplexing support is to support private and > shared storage. Under this an Oak application would have a private store and > a shared store. Content under certain paths would be stored under the private > repo while all other content is stored under the shared repo. > # *Writing* - Any content written via the JCR API passes through some > {{CommitHooks}}. These hooks are responsible for updating the indexes, > permission store etc. Now if any path, say /foo/bar, gets modified, the commit > hooks would need to determine under which path in the NodeStore the > derived data (index entries, permissions etc.) should be stored. For the simple > case of a private and a shared store, where we have two sets of paths, private and > shared, these hooks would need to be aware of that and use different paths in the > NodeStore to store the derived content. The key point to note here is that any such > storage has to differentiate whether the path from which the content is being > derived is a private path or a shared path. > # *Reading* - The reading requirement complements the writing problem. While > performing any JCR operation, Oak might need to invoke the QueryIndex, > PermissionStore etc. These stores in turn would need to perform a read from > their storage area within the NodeStore. For multiplexing support these > components would then need to be aware that their storage can exist in both the > shared and private stores. > h4. Terms Used > # _private repo_ (PR) - Set of paths which are considered private to the > application. Tentative example: /lib, /apps > # _shared repo_ (SR) - Set of paths which are considered shared; different > versions of the application can perform read and write operations on them. > Tentative example: /content, /etc/workflow/instances > # {{PathToStoreMapper}} - Responsible for mapping a path to a store type. For > now it can just answer either PR or SR, but the concept can be generalized. > The aim of this story is to prototype changes in the Oak layer in a fork to assess the > impact on the current implementation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
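The {{PathToStoreMapper}} concept from the "Terms Used" section could be sketched as a simple prefix check. This is a hypothetical illustration of the idea only, using the tentative example paths from the issue; it is not Oak code:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the PathToStoreMapper concept from OAK-3402:
// map a repository path to either the private (PR) or shared (SR) store.
// The prefix list is the "tentative example" from the issue text.
class PathToStoreMapper {
    enum StoreType { PRIVATE, SHARED }

    private static final List<String> PRIVATE_PREFIXES =
            Arrays.asList("/lib", "/apps");

    // Paths at or below a private prefix map to PR; everything else to SR.
    static StoreType map(String path) {
        for (String prefix : PRIVATE_PREFIXES) {
            if (path.equals(prefix) || path.startsWith(prefix + "/")) {
                return StoreType.PRIVATE;
            }
        }
        return StoreType.SHARED;
    }
}
```

A commit hook or query index could consult such a mapper to decide where derived content (index entries, permission entries) belongs.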
[jira] [Resolved] (OAK-5094) NPE when failing to get the remote head
[ https://issues.apache.org/jira/browse/OAK-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Mari resolved OAK-5094. - Resolution: Fixed Fixed at r1769157. > NPE when failing to get the remote head > --- > > Key: OAK-5094 > URL: https://issues.apache.org/jira/browse/OAK-5094 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.12 >Reporter: Timothee Maret >Assignee: Francesco Mari > Fix For: 1.6, 1.5.14 > > Attachments: OAK-5094.patch > > > {{org.apache.jackrabbit.oak.segment.standby.client.StandbyClient#getHead}} > may return {{null}} in case the request fails. This case is not > currently handled and causes > {code} > 09.11.2016 18:57:12.183 *ERROR* [sling-default-44-Registered Service.609] > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed > synchronizing state > . > java.lang.NullPointerException: null > at java.util.regex.Matcher.getTextLength(Matcher.java:1283) > at java.util.regex.Matcher.reset(Matcher.java:309) > at java.util.regex.Matcher.<init>(Matcher.java:229) > at java.util.regex.Pattern.matcher(Pattern.java:1093) > at > org.apache.jackrabbit.oak.segment.RecordId.fromString(RecordId.java:48) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.getHead(StandbyClientSyncExecution.java:81) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.execute(StandbyClientSyncExecution.java:64) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync.run(StandbyClientSync.java:141) > at > org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:118) > at org.quartz.core.JobRunShell.run(JobRunShell.java:202) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by 
Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5094) NPE when failing to get the remote head
[ https://issues.apache.org/jira/browse/OAK-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Mari updated OAK-5094: Fix Version/s: 1.6 > NPE when failing to get the remote head > --- > > Key: OAK-5094 > URL: https://issues.apache.org/jira/browse/OAK-5094 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.12 >Reporter: Timothee Maret >Assignee: Francesco Mari > Fix For: 1.6, 1.5.14 > > Attachments: OAK-5094.patch > > > {{org.apache.jackrabbit.oak.segment.standby.client.StandbyClient#getHead}} > may return {{null}} in case the request fails. This case is not > currently handled and causes > {code} > 09.11.2016 18:57:12.183 *ERROR* [sling-default-44-Registered Service.609] > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed > synchronizing state > . > java.lang.NullPointerException: null > at java.util.regex.Matcher.getTextLength(Matcher.java:1283) > at java.util.regex.Matcher.reset(Matcher.java:309) > at java.util.regex.Matcher.<init>(Matcher.java:229) > at java.util.regex.Pattern.matcher(Pattern.java:1093) > at > org.apache.jackrabbit.oak.segment.RecordId.fromString(RecordId.java:48) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.getHead(StandbyClientSyncExecution.java:81) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.execute(StandbyClientSyncExecution.java:64) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync.run(StandbyClientSync.java:141) > at > org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:118) > at org.quartz.core.JobRunShell.run(JobRunShell.java:202) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA 
(v6.3.4#6332)
[jira] [Assigned] (OAK-5094) NPE when failing to get the remote head
[ https://issues.apache.org/jira/browse/OAK-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Mari reassigned OAK-5094: --- Assignee: Francesco Mari (was: Timothee Maret) > NPE when failing to get the remote head > --- > > Key: OAK-5094 > URL: https://issues.apache.org/jira/browse/OAK-5094 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.12 >Reporter: Timothee Maret >Assignee: Francesco Mari > Fix For: 1.6, 1.5.14 > > Attachments: OAK-5094.patch > > > {{org.apache.jackrabbit.oak.segment.standby.client.StandbyClient#getHead}} > may return {{null}} in case the request fails. This case is not > currently handled and causes > {code} > 09.11.2016 18:57:12.183 *ERROR* [sling-default-44-Registered Service.609] > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed > synchronizing state > . > java.lang.NullPointerException: null > at java.util.regex.Matcher.getTextLength(Matcher.java:1283) > at java.util.regex.Matcher.reset(Matcher.java:309) > at java.util.regex.Matcher.<init>(Matcher.java:229) > at java.util.regex.Pattern.matcher(Pattern.java:1093) > at > org.apache.jackrabbit.oak.segment.RecordId.fromString(RecordId.java:48) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.getHead(StandbyClientSyncExecution.java:81) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.execute(StandbyClientSyncExecution.java:64) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync.run(StandbyClientSync.java:141) > at > org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:118) > at org.quartz.core.JobRunShell.run(JobRunShell.java:202) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message 
was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5094) NPE when failing to get the remote head
[ https://issues.apache.org/jira/browse/OAK-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothee Maret updated OAK-5094: Attachment: OAK-5094.patch Attaching a patch that handles the case (essentially throwing an IllegalStateException instead of an NPE). [~frm] could you have a look? > NPE when failing to get the remote head > --- > > Key: OAK-5094 > URL: https://issues.apache.org/jira/browse/OAK-5094 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.12 >Reporter: Timothee Maret >Assignee: Timothee Maret > Fix For: 1.5.14 > > Attachments: OAK-5094.patch > > > {{org.apache.jackrabbit.oak.segment.standby.client.StandbyClient#getHead}} > may return {{null}} in case the request fails. This case is not > currently handled and causes > {code} > 09.11.2016 18:57:12.183 *ERROR* [sling-default-44-Registered Service.609] > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed > synchronizing state > . > java.lang.NullPointerException: null > at java.util.regex.Matcher.getTextLength(Matcher.java:1283) > at java.util.regex.Matcher.reset(Matcher.java:309) > at java.util.regex.Matcher.<init>(Matcher.java:229) > at java.util.regex.Pattern.matcher(Pattern.java:1093) > at > org.apache.jackrabbit.oak.segment.RecordId.fromString(RecordId.java:48) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.getHead(StandbyClientSyncExecution.java:81) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.execute(StandbyClientSyncExecution.java:64) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync.run(StandbyClientSync.java:141) > at > org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:118) > at org.quartz.core.JobRunShell.run(JobRunShell.java:202) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
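The fix the attached patch describes, essentially failing fast with an IllegalStateException instead of letting a null remote head reach {{RecordId.fromString}}, could look roughly like this. The class and method names here are illustrative, not the actual StandbyClientSyncExecution code:

```java
// Hypothetical sketch of the guard OAK-5094 proposes: reject a null
// remote head with an IllegalStateException rather than letting it
// propagate into RecordId.fromString and surface as an NPE.
class RemoteHeadGuard {

    static String requireHead(String head) {
        if (head == null) {
            // The request for the remote head failed; fail with a
            // descriptive exception instead of a NullPointerException.
            throw new IllegalStateException("Unable to fetch the remote head");
        }
        return head;
    }
}
```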
[jira] [Updated] (OAK-5094) NPE when failing to get the remote head
[ https://issues.apache.org/jira/browse/OAK-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothee Maret updated OAK-5094: Flags: Patch > NPE when failing to get the remote head > --- > > Key: OAK-5094 > URL: https://issues.apache.org/jira/browse/OAK-5094 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.12 >Reporter: Timothee Maret >Assignee: Timothee Maret > Fix For: 1.5.14 > > Attachments: OAK-5094.patch > > > {{org.apache.jackrabbit.oak.segment.standby.client.StandbyClient#getHead}} > may return {{null}} in case the request fails. This case is not > currently handled and causes > {code} > 09.11.2016 18:57:12.183 *ERROR* [sling-default-44-Registered Service.609] > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed > synchronizing state > . > java.lang.NullPointerException: null > at java.util.regex.Matcher.getTextLength(Matcher.java:1283) > at java.util.regex.Matcher.reset(Matcher.java:309) > at java.util.regex.Matcher.<init>(Matcher.java:229) > at java.util.regex.Pattern.matcher(Pattern.java:1093) > at > org.apache.jackrabbit.oak.segment.RecordId.fromString(RecordId.java:48) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.getHead(StandbyClientSyncExecution.java:81) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.execute(StandbyClientSyncExecution.java:64) > at > org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync.run(StandbyClientSync.java:141) > at > org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:118) > at org.quartz.core.JobRunShell.run(JobRunShell.java:202) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (OAK-5094) NPE when failing to get the remote head
Timothee Maret created OAK-5094: --- Summary: NPE when failing to get the remote head Key: OAK-5094 URL: https://issues.apache.org/jira/browse/OAK-5094 Project: Jackrabbit Oak Issue Type: Bug Components: segment-tar Affects Versions: 1.5.12 Reporter: Timothee Maret Assignee: Timothee Maret Fix For: 1.5.14 {{org.apache.jackrabbit.oak.segment.standby.client.StandbyClient#getHead}} may return {{null}} in case the request fails. This case is not currently handled and causes {code} 09.11.2016 18:57:12.183 *ERROR* [sling-default-44-Registered Service.609] org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync Failed synchronizing state . java.lang.NullPointerException: null at java.util.regex.Matcher.getTextLength(Matcher.java:1283) at java.util.regex.Matcher.reset(Matcher.java:309) at java.util.regex.Matcher.<init>(Matcher.java:229) at java.util.regex.Pattern.matcher(Pattern.java:1093) at org.apache.jackrabbit.oak.segment.RecordId.fromString(RecordId.java:48) at org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.getHead(StandbyClientSyncExecution.java:81) at org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSyncExecution.execute(StandbyClientSyncExecution.java:64) at org.apache.jackrabbit.oak.segment.standby.client.StandbyClientSync.run(StandbyClientSync.java:141) at org.apache.sling.commons.scheduler.impl.QuartzJobExecutor.execute(QuartzJobExecutor.java:118) at org.quartz.core.JobRunShell.run(JobRunShell.java:202) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2906) test failures for oak-auth-ldap on Windows
[ https://issues.apache.org/jira/browse/OAK-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15654380#comment-15654380 ] Julian Reschke commented on OAK-2906: - ...so it does not fail regularly on Windows because it's not being run anymore; see OAK-2904. > test failures for oak-auth-ldap on Windows > -- > > Key: OAK-2906 > URL: https://issues.apache.org/jira/browse/OAK-2906 > Project: Jackrabbit Oak > Issue Type: Bug > Components: auth-ldap >Affects Versions: 1.5.13 >Reporter: Julian Reschke > > testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest) > Time elapsed: 0.01 sec <<< ERROR! > java.io.IOException: Unable to delete file: > target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183) > at > org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33) > etc... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-2687) Introduce Dynamic Groups
[ https://issues.apache.org/jira/browse/OAK-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela resolved OAK-2687. - Resolution: Later > Introduce Dynamic Groups > > > Key: OAK-2687 > URL: https://issues.apache.org/jira/browse/OAK-2687 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: core, jcr >Reporter: angela >Assignee: angela > > we may consider extending the jackrabbit user management API by the concept > of dynamic groups that would have the following characteristics: > - the group in the repository is just a marker > - the group members are not stored with the group and are not revealed by > regular membership operations such as 'getMembers', 'getDeclaredMembers', > 'memberOf', 'declaredMemberOf' > - the dynamic group membership is only evaluated upon authentication (e.g. in > the principal provider implementation) based on implementation details both > in the principal provider and the login module. > one example to illustrate the concept of the dynamic groups is the 'Everyone' > principal, of which every principal of the default principal management > implementation is a member. for consistency, this group principal already > requires special treatment in the user management implementation in case > there exists an 'everyone' group (match by principal name only). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3402) Multiplexing NodeStore support in Oak layer
[ https://issues.apache.org/jira/browse/OAK-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-3402: Component/s: segment-tar documentmk > Multiplexing NodeStore support in Oak layer > --- > > Key: OAK-3402 > URL: https://issues.apache.org/jira/browse/OAK-3402 > Project: Jackrabbit Oak > Issue Type: Epic > Components: documentmk, segment-tar >Reporter: Chetan Mehrotra >Assignee: Chetan Mehrotra > Labels: multiplexing > > Supporting a multiplexing repository would have an impact on various places in Oak's > design. There are various sub components in Oak which maintain their own > storage built on top of the NodeStore. E.g. indexes are stored within the > NodeStore, and permissions are also stored within the NodeStore. Adding multiplexing > support would impact such stores in the following ways. > The most basic application of multiplexing support is to support private and > shared storage. Under this an Oak application would have a private store and > a shared store. Content under certain paths would be stored under the private > repo while all other content is stored under the shared repo. > # *Writing* - Any content written via the JCR API passes through some > {{CommitHooks}}. These hooks are responsible for updating the indexes, > permission store etc. Now if any path, say /foo/bar, gets modified, the commit > hooks would need to determine under which path in the NodeStore the > derived data (index entries, permissions etc.) should be stored. For the simple > case of a private and a shared store, where we have two sets of paths, private and > shared, these hooks would need to be aware of that and use different paths in the > NodeStore to store the derived content. The key point to note here is that any such > storage has to differentiate whether the path from which the content is being > derived is a private path or a shared path. > # *Reading* - The reading requirement complements the writing problem. While > performing any JCR operation, Oak might need to invoke the QueryIndex, > PermissionStore etc. These stores in turn would need to perform a read from > their storage area within the NodeStore. For multiplexing support these > components would then need to be aware that their storage can exist in both the > shared and private stores. > h4. Terms Used > # _private repo_ (PR) - Set of paths which are considered private to the > application. Tentative example: /lib, /apps > # _shared repo_ (SR) - Set of paths which are considered shared; different > versions of the application can perform read and write operations on them. > Tentative example: /content, /etc/workflow/instances > # {{PathToStoreMapper}} - Responsible for mapping a path to a store type. For > now it can just answer either PR or SR, but the concept can be generalized. > The aim of this story is to prototype changes in the Oak layer in a fork to assess the > impact on the current implementation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5025) Speed up ACE node name generation
[ https://issues.apache.org/jira/browse/OAK-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-5025: Fix Version/s: 1.6 > Speed up ACE node name generation > - > > Key: OAK-5025 > URL: https://issues.apache.org/jira/browse/OAK-5025 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Affects Versions: 1.5.12 >Reporter: Alex COLLIGNON >Assignee: angela >Priority: Minor > Labels: performance > Fix For: 1.6 > > > Currently, > {{o.a.j.oak.security.authorization.accesscontrol.Util#generateAceName}} is > traversing all the existing ACEs of a certain node in order to generate > continuous numbering (allow0, allow1, allow2). > While that certainly helps to produce human readable names, it represents > quite a performance bottleneck when the number of existing ACEs starts to grow. > Since the naming is a pure implementation detail, my proposal is to keep the > continuous numbering for the first few hundred nodes and then use a random > number to generate unique names in a faster fashion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
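The proposal above (sequential names up to a threshold, then random suffixes) could be sketched as follows. The threshold, naming scheme and class name are illustrative assumptions, not Oak's actual {{Util#generateAceName}} implementation:

```java
import java.util.UUID;

// Hypothetical sketch of the OAK-5025 proposal: keep human-readable
// sequential names (allow0, allow1, ...) for the first few hundred
// entries, then switch to a random suffix so name generation no longer
// needs to scan all existing siblings. Threshold value is illustrative.
class AceNameGenerator {
    private static final int SEQUENTIAL_LIMIT = 100;

    static String generateAceName(boolean isAllow, int existingCount) {
        String base = isAllow ? "allow" : "deny";
        if (existingCount < SEQUENTIAL_LIMIT) {
            // Continuous numbering, cheap while the ACL is small.
            return base + existingCount;
        }
        // Random suffix: unique with high probability, O(1) to generate.
        return base + "_" + UUID.randomUUID().toString().substring(0, 8);
    }
}
```

The trade-off is readability of the node names versus the O(n) sibling scan the issue identifies as the bottleneck.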
[jira] [Resolved] (OAK-3627) Allow to set ACLs on not-yet existing nodes
[ https://issues.apache.org/jira/browse/OAK-3627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela resolved OAK-3627. - Resolution: Later resolving issue. this could be implemented using a custom authorization model (as it actually used to be in jackrabbit 2.x) > Allow to set ACLs on not-yet existing nodes > --- > > Key: OAK-3627 > URL: https://issues.apache.org/jira/browse/OAK-3627 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: core >Reporter: Konrad Windszus > > With Jackrabbit 2 it was possible to set ACLs for nodes which do not exist > yet through Principal-based ACLs > (http://wiki.apache.org/jackrabbit/AccessControl#Principal-based_ACLs). > With Oak this is no longer possible as principal-based ACEs are not > supported > (https://jackrabbit.apache.org/oak/docs/security/accesscontrol/differences.html). > Since there are valid use cases for setting ACEs prior to creating the > corresponding nodes, it would be good if Oak supported this as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-3616) Reconsider Synchronization with ContentSessionImpl.checkLive
[ https://issues.apache.org/jira/browse/OAK-3616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela resolved OAK-3616. - Resolution: Won't Fix > Reconsider Synchronization with ContentSessionImpl.checkLive > > > Key: OAK-3616 > URL: https://issues.apache.org/jira/browse/OAK-3616 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: angela >Priority: Minor > > while running permission related benchmarks directly on Oak (without the > extra oak-jcr layer), i found a considerable difference between > - concurrent read with a single content session > - concurrent read with different content sessions > according to the built-in profiler information the former seemed to be > limited by the synchronized {{ContentSessionImpl.checkLive}} call: > {code} > # CugOakTest C min 10% 50% 90% max > N > Import deep tree: 8432 > All paths: 123545 > Oak-Tar1 15 18 22 25 31 >229 > Oak-Tar2 19 25 29 33 38 >344 > Oak-Tar4 26 33 39 45 51 >513 > Oak-Tar8 70 86 94 102 110 >431 > Oak-Tar 10 65 105 119 131 143 >427 > Oak-Tar 15 92 152 169 186 210 >449 > Oak-Tar 20 148 212 229 250 265 >440 > Oak-Tar 50 283 485 549 602 666 >480 > Profiler: top 20 stack trace(s) of 50647 ms: > 15517/45026 (34%): > at > org.apache.jackrabbit.oak.core.ContentSessionImpl.checkLive(ContentSessionImpl.java:85) > at org.apache.jackrabbit.oak.core.MutableRoot.checkLive(MutableRoot.java:172) > at org.apache.jackrabbit.oak.core.MutableTree.beforeRead(MutableTree.java:333) > at > org.apache.jackrabbit.oak.core.MutableTree.hasProperty(MutableTree.java:133) > at > org.apache.jackrabbit.oak.plugins.tree.TreeLocation$NodeLocation.getChild(TreeLocation.java:166) > at > org.apache.jackrabbit.oak.plugins.tree.TreeLocation.create(TreeLocation.java:62) > at org.apache.jackrabbit.oak.benchmark.CugOakTest.runTest(CugOakTest.java:95) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest.execute(AbstractTest.java:347) > at > 
org.apache.jackrabbit.oak.benchmark.ReadDeepTreeTest.execute(ReadDeepTreeTest.java:35) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest.execute(AbstractTest.java:356) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest.access$000(AbstractTest.java:45) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest$Executor.run(AbstractTest.java:277) > 12279/45026 (27%): > at > org.apache.jackrabbit.oak.core.ContentSessionImpl.checkLive(ContentSessionImpl.java:85) > at org.apache.jackrabbit.oak.core.MutableRoot.checkLive(MutableRoot.java:172) > at org.apache.jackrabbit.oak.core.MutableTree.beforeRead(MutableTree.java:333) > at org.apache.jackrabbit.oak.core.MutableTree.getChild(MutableTree.java:160) > at > org.apache.jackrabbit.oak.plugins.tree.TreeLocation$NodeLocation.getChild(TreeLocation.java:169) > at > org.apache.jackrabbit.oak.plugins.tree.TreeLocation.create(TreeLocation.java:62) > at org.apache.jackrabbit.oak.benchmark.CugOakTest.runTest(CugOakTest.java:95) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest.execute(AbstractTest.java:347) > at > org.apache.jackrabbit.oak.benchmark.ReadDeepTreeTest.execute(ReadDeepTreeTest.java:35) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest.execute(AbstractTest.java:356) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest.access$000(AbstractTest.java:45) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest$Executor.run(AbstractTest.java:277) > 3119/45026 (6%): > at > org.apache.jackrabbit.oak.core.ContentSessionImpl.checkLive(ContentSessionImpl.java:85) > at org.apache.jackrabbit.oak.core.MutableRoot.checkLive(MutableRoot.java:172) > at org.apache.jackrabbit.oak.core.MutableTree.beforeRead(MutableTree.java:333) > at > org.apache.jackrabbit.oak.core.MutableTree.hasProperty(MutableTree.java:133) > at > org.apache.jackrabbit.oak.plugins.tree.TreeLocation$PropertyLocation.exists(TreeLocation.java:222) > at org.apache.jackrabbit.oak.benchmark.CugOakTest.runTest(CugOakTest.java:96) > at > 
org.apache.jackrabbit.oak.benchmark.AbstractTest.execute(AbstractTest.java:347) > at > org.apache.jackrabbit.oak.benchmark.ReadDeepTreeTest.execute(ReadDeepTreeTest.java:35) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest.execute(AbstractTest.java:356) > at > org.apache.jackrabbit.oak.benchmark.AbstractTest.access$000(AbstractTest.java:45) > at >
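One way to remove the monitor contention the profiler output above attributes to the synchronized {{checkLive}} call would be to track liveness with a volatile flag, so concurrent readers never block each other. This is a hypothetical sketch of that general technique, not the actual {{ContentSessionImpl}} code:

```java
// Hypothetical sketch: liveness tracked by a volatile flag instead of a
// synchronized method. Volatile reads don't contend on a monitor, so
// many concurrent readers can call checkLive() without serializing.
class LivenessCheck {
    private volatile boolean live = true;

    void close() {
        live = false;
    }

    // Throws once the session has been closed; lock-free on the read path.
    void checkLive() {
        if (!live) {
            throw new IllegalStateException("This session has been closed");
        }
    }

    boolean isLive() {
        return live;
    }
}
```

Whether this is safe depends on what else the original synchronized block guards; the sketch only covers the plain liveness flag.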
[jira] [Resolved] (OAK-1448) Tool to detect and possibly fix permission store inconsistencies
[ https://issues.apache.org/jira/browse/OAK-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela resolved OAK-1448. - Resolution: Later (to be reconsidered later) > Tool to detect and possibly fix permission store inconsistencies > > > Key: OAK-1448 > URL: https://issues.apache.org/jira/browse/OAK-1448 > Project: Jackrabbit Oak > Issue Type: Sub-task > Components: core >Reporter: Michael Marth >Assignee: angela >Priority: Minor > Labels: production, resilience, tools > Attachments: OAK-1448-2.patch, OAK-1448-3.patch, OAK-1448_.patch > > > I think we should prepare for cases where the permission store (managed as a > tree mirrored to the content tree) goes out of sync with the content tree for > whatever reason. > Ideally, that would be an online tool (maybe exposed via JMX) that goes back > through the MVCC revisions to find the offending commit (so that we have a chance to > reduce the number of such occurrences) and fixes the inconsistency on head. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-1411) Refactor Oak#whiteboard anonymous inner class
[ https://issues.apache.org/jira/browse/OAK-1411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela resolved OAK-1411. - Resolution: Later > Refactor Oak#whiteboard anonymous inner class > - > > Key: OAK-1411 > URL: https://issues.apache.org/jira/browse/OAK-1411 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Tobias Bocanegra >Priority: Minor > > Oak#whiteboard is initialized with an anonymous override - If this code > should be reusable, we should refactor it out into its own class. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4402) Add rm command to the oak-run console tool
[ https://issues.apache.org/jira/browse/OAK-4402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654370#comment-15654370 ] angela commented on OAK-4402: - Resolving as Won't Fix due to the concerns expressed by [~jsedding]. > Add rm command to the oak-run console tool > -- > > Key: OAK-4402 > URL: https://issues.apache.org/jira/browse/OAK-4402 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: run >Affects Versions: 1.0.30, 1.4.2, 1.2.15 >Reporter: Andrew Khoury >Priority: Minor > > It would be great if the oak-run console provided an rm command. I > implemented the command against the oak 1.0.x branch long ago and submitted a > pull request in the github repo. > Here's the pull request: > https://github.com/apache/jackrabbit-oak/pull/24 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-4402) Add rm command to the oak-run console tool
[ https://issues.apache.org/jira/browse/OAK-4402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela resolved OAK-4402. - Resolution: Won't Fix > Add rm command to the oak-run console tool > -- > > Key: OAK-4402 > URL: https://issues.apache.org/jira/browse/OAK-4402 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: run >Affects Versions: 1.0.30, 1.4.2, 1.2.15 >Reporter: Andrew Khoury >Priority: Minor > > It would be great if the oak-run console provided an rm command. I > implemented the command against oak 1.0.x branch long ago and submitted a > pull request in the github repo. > Here's the pull request: > https://github.com/apache/jackrabbit-oak/pull/24 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-2076) Concurrency issues while making users members of same group in clustered env
[ https://issues.apache.org/jira/browse/OAK-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela resolved OAK-2076. - Resolution: Duplicate > Concurrency issues while making users members of same group in clustered env > > > Key: OAK-2076 > URL: https://issues.apache.org/jira/browse/OAK-2076 > Project: Jackrabbit Oak > Issue Type: Bug > Components: mongomk >Reporter: Swapnil Sahai > Labels: concurrency, scalability > > While creating users in bulk in a clustered setup, if we also make them > members of the same group (let's say 'everyone'), it often fails > with Unresolved Conflicts. > I think this is something that can be handled better; a merge could probably > be attempted. > {"changes":[],"error":{"class":"javax.jcr.InvalidItemStateException","message":"OakState0001: > Unresolved conflicts in > /home/groups/learncube/failover1/system/learncube-failover1-everyone/rep:membersList/6"},"status.code":500,"status.message":"","referer":""} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2988) More realistic test case
[ https://issues.apache.org/jira/browse/OAK-2988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-2988: Summary: More realistic test case (was: OAK-RUN: More realistic test case) > More realistic test case > > > Key: OAK-2988 > URL: https://issues.apache.org/jira/browse/OAK-2988 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: bench, run >Affects Versions: 1.2.2, 1.0.15 >Reporter: Philipp Suter > Attachments: ConcurrentReadTest.diff > > > Use a more realistic test scenario for oak-run benchmark tests. > ConcurrentReadTest.java currently only creates one root node while in reality > a repository might have multiple root nodes. > Change is attached as diff. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2155) TokenAuthenticationTest#tokenCreationWithPreAuth test failing repeatedly
[ https://issues.apache.org/jira/browse/OAK-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-2155: Fix Version/s: 1.8 > TokenAuthenticationTest#tokenCreationWithPreAuth test failing repeatedly > > > Key: OAK-2155 > URL: https://issues.apache.org/jira/browse/OAK-2155 > Project: Jackrabbit Oak > Issue Type: Bug > Components: pojosr >Reporter: Amit Jain >Assignee: Chetan Mehrotra > Labels: CI, test > Fix For: 1.8 > > > The test > {{org.apache.jackrabbit.oak.run.osgi.TokenAuthenticationTest#tokenCreationWithPreAuth}} > in the oak-pojosr component is failing repeatedly on the local system, and also > on http://ci.apache.org/builders/oak-trunk-win7. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-1588) Create more tests/validation to LDAP integration
[ https://issues.apache.org/jira/browse/OAK-1588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-1588: Issue Type: Task (was: Bug) > Create more tests/validation to LDAP integration > > > Key: OAK-1588 > URL: https://issues.apache.org/jira/browse/OAK-1588 > Project: Jackrabbit Oak > Issue Type: Task > Components: auth-ldap >Reporter: angela > Labels: test > Fix For: 1.8 > > > this is a follow up issue for the remaining tasks as mentioned by [~tripod] > in OAK-516: > {quote} > [...] needs more testing, validation [...] > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-1588) Create more tests/validation to LDAP integration
[ https://issues.apache.org/jira/browse/OAK-1588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-1588: Assignee: (was: Tobias Bocanegra) > Create more tests/validation to LDAP integration > > > Key: OAK-1588 > URL: https://issues.apache.org/jira/browse/OAK-1588 > Project: Jackrabbit Oak > Issue Type: Bug > Components: auth-ldap >Reporter: angela > Labels: test > Fix For: 1.8 > > > this is a follow up issue for the remaining tasks as mentioned by [~tripod] > in OAK-516: > {quote} > [...] needs more testing, validation [...] > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2906) test failures for oak-auth-ldap on Windows
[ https://issues.apache.org/jira/browse/OAK-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-2906: Affects Version/s: 1.5.13 > test failures for oak-auth-ldap on Windows > -- > > Key: OAK-2906 > URL: https://issues.apache.org/jira/browse/OAK-2906 > Project: Jackrabbit Oak > Issue Type: Bug > Components: auth-ldap >Affects Versions: 1.5.13 >Reporter: Julian Reschke > > testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest) > Time elapsed: 0.01 sec <<< ERROR! > java.io.IOException: Unable to delete file: > target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183) > at > org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33) > etc... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-1588) Create more tests/validation to LDAP integration
[ https://issues.apache.org/jira/browse/OAK-1588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-1588: Fix Version/s: 1.8 > Create more tests/validation to LDAP integration > > > Key: OAK-1588 > URL: https://issues.apache.org/jira/browse/OAK-1588 > Project: Jackrabbit Oak > Issue Type: Bug > Components: auth-ldap >Reporter: angela >Assignee: Tobias Bocanegra > Labels: test > Fix For: 1.8 > > > this is a follow up issue for the remaining tasks as mentioned by [~tripod] > in OAK-516: > {quote} > [...] needs more testing, validation [...] > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2906) test failures for oak-auth-ldap on Windows
[ https://issues.apache.org/jira/browse/OAK-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654344#comment-15654344 ] Julian Reschke commented on OAK-2906: - I haven't seen it in regular testing (incl -PintegrationTesting), but I just tried to run just the individual test and got: {noformat} [INFO] Scanning for projects... [INFO] [INFO] [INFO] Building Oak LDAP Authentication Support 1.6-SNAPSHOT [INFO] [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ oak-auth-ldap --- [INFO] Deleting C:\projects\apache\oak\trunk\oak-auth-ldap\target [INFO] [INFO] --- jacoco-maven-plugin:0.7.1.201405082137:prepare-agent (pre-unit-test) @ oak-auth-ldap --- [INFO] Skipping JaCoCo execution because property jacoco.skip is set. [INFO] test.opts.coverage set to empty [INFO] [INFO] --- maven-remote-resources-plugin:1.4:process (default) @ oak-auth-ldap --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ oak-auth-ldap --- [INFO] Using 'UTF-8' encoding to copy filtered resources. 
[INFO] skip non existing resourceDirectory C:\projects\apache\oak\trunk\oak-auth-ldap\src\main\resources [INFO] Copying 3 resources [INFO] [INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ oak-auth-ldap --- [INFO] Compiling 10 source files to C:\projects\apache\oak\trunk\oak-auth-ldap\target\classes [INFO] [INFO] --- animal-sniffer-maven-plugin:1.15:check (animal-sniffer) @ oak-auth-ldap --- [INFO] Checking unresolved references to org.codehaus.mojo.signature:java17:1.0 [INFO] [INFO] --- maven-scr-plugin:1.16.0:scr (generate-scr-scrdescriptor) @ oak-auth-ldap --- [INFO] Generating 1 MetaType Descriptors in C:\projects\apache\oak\trunk\oak-auth-ldap\target\classes\OSGI-INF\metatype\org.apache.jackrabbit.oak.security.authentication.ldap.impl.LdapProviderConfig.xml [INFO] Writing 1 Service Component Descriptors to C:\projects\apache\oak\trunk\oak-auth-ldap\target\classes\OSGI-INF\org.apache.jackrabbit.oak.security.authentication.ldap.impl.LdapIdentityProvider.xml [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ oak-auth-ldap --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 3 resources [INFO] Copying 3 resources [INFO] [INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ oak-auth-ldap --- [INFO] Compiling 10 source files to C:\projects\apache\oak\trunk\oak-auth-ldap\target\test-classes [INFO] [INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ oak-auth-ldap --- [INFO] Surefire report directory: C:\projects\apache\oak\trunk\oak-auth-ldap\target\surefire-reports --- T E S T S --- Running org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest Tests run: 43, Failures: 0, Errors: 42, Skipped: 0, Time elapsed: 2.522 sec <<< FAILURE! testGetGroupByName(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest) Time elapsed: 0.017 sec <<< ERROR! 
java.io.IOException: Unable to delete file: target\apacheds\cache\9b25a1cf-50f9-4e88-bb2b-9768ac820c39\ou=system.data at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2381) at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1679) at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1575) at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2372) at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1679) at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1575) at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2372) at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1679) at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1575) at org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:317) at org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:188) at org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33) at org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest.before(LdapProviderTest.java:90) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at
[jira] [Updated] (OAK-2988) Improve ConcurrentReadTest
[ https://issues.apache.org/jira/browse/OAK-2988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-2988: Summary: Improve ConcurrentReadTest (was: More realistic test case) > Improve ConcurrentReadTest > -- > > Key: OAK-2988 > URL: https://issues.apache.org/jira/browse/OAK-2988 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: bench, run >Affects Versions: 1.2.2, 1.0.15 >Reporter: Philipp Suter > Attachments: ConcurrentReadTest.diff > > > Use a more realistic test scenario for oak-run benchmark tests. > ConcurrentReadTest.java currently only creates one root node while in reality > a repository might have multiple root nodes. > Change is attached as diff. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2988) Improve ConcurrentReadTest
[ https://issues.apache.org/jira/browse/OAK-2988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-2988: Priority: Minor (was: Major) > Improve ConcurrentReadTest > -- > > Key: OAK-2988 > URL: https://issues.apache.org/jira/browse/OAK-2988 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: bench, run >Affects Versions: 1.2.2, 1.0.15 >Reporter: Philipp Suter >Priority: Minor > Attachments: ConcurrentReadTest.diff > > > Use a more realistic test scenario for oak-run benchmark tests. > ConcurrentReadTest.java currently only creates one root node while in reality > a repository might have multiple root nodes. > Change is attached as diff. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3115) Support memberOf attribute within the user entity to lookup memberships in the LdapIdentityProvider
[ https://issues.apache.org/jira/browse/OAK-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-3115: Fix Version/s: 1.8 > Support memberOf attribute within the user entity to lookup memberships in > the LdapIdentityProvider > --- > > Key: OAK-3115 > URL: https://issues.apache.org/jira/browse/OAK-3115 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: auth-ldap >Affects Versions: 1.3.2 >Reporter: Konrad Windszus > Fix For: 1.8 > > > Some LDAPs (e.g. OpenLDAP via > http://www.openldap.org/doc/admin24/overlays.html), support a reverse lookup > of group memberships (i.e. without an additional search the group membership > can just be determined by looking at a specific attribute like "memberOf"). > It would be good if the {{LdapIdentityProvider}} would support that directly > (instead of executing an expensive search). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-3115) Support memberOf attribute within the user entity to lookup memberships in the LdapIdentityProvider
[ https://issues.apache.org/jira/browse/OAK-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654335#comment-15654335 ] angela commented on OAK-3115: - to reconsider for 1.8 > Support memberOf attribute within the user entity to lookup memberships in > the LdapIdentityProvider > --- > > Key: OAK-3115 > URL: https://issues.apache.org/jira/browse/OAK-3115 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: auth-ldap >Affects Versions: 1.3.2 >Reporter: Konrad Windszus > Fix For: 1.8 > > > Some LDAPs (e.g. OpenLDAP via > http://www.openldap.org/doc/admin24/overlays.html), support a reverse lookup > of group memberships (i.e. without an additional search the group membership > can just be determined by looking at a specific attribute like "memberOf"). > It would be good if the {{LdapIdentityProvider}} would support that directly > (instead of executing an expensive search). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (OAK-4920) Performance: DefaultSyncHandler.listIdentities() search too broad, triggers traversal warning
[ https://issues.apache.org/jira/browse/OAK-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela reassigned OAK-4920: --- Assignee: angela > Performance: DefaultSyncHandler.listIdentities() search too broad, triggers > traversal warning > - > > Key: OAK-4920 > URL: https://issues.apache.org/jira/browse/OAK-4920 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: auth-external >Affects Versions: 1.4.8, 1.5.11 >Reporter: Alexander Klimetschek >Assignee: angela > Fix For: 1.8 > > > DefaultSyncHandler.listIdentities() collects users by [searching for all > nodes under > /home|https://github.com/apache/jackrabbit-oak/blob/b3e62e3467bf6433b5a419c2f371331f33e57820/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/DefaultSyncHandler.java#L143] > – the xpath query executed is > {noformat} > /jcr:root/home//element(*)[@jcr:primaryType] > {noformat} > With a few hundred users this easily gives an oak index traversal warning: > {noformat} > org.apache.jackrabbit.oak.spi.query.Cursors$TraversingCursor Traversed 1000 > nodes with filter Filter(query=select [jcr:path], [jcr:score], * from > [nt:base] as a where [jcr:primaryType] is not null and isdescendantnode(a, > '/home') /* xpath: /jcr:root/home//element(*)[@jcr:primaryType] */, > path=/home//*, property=[jcr:primaryType=[is not null]]); consider creating > an index or changing the query > {noformat} > A few lines later [it actually > reduces|https://github.com/apache/jackrabbit-oak/blob/b3e62e3467bf6433b5a419c2f371331f33e57820/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/DefaultSyncHandler.java#L151] > the result to authorizables which have a {{rep:externalId}}. 
Since OAK-4301 > there is an oak index for {{rep:externalId}}, so the query can be optimized > by searching for anything with {{rep:externalId}} instead: > {code:java} > userManager.findAuthorizables("rep:externalId", null); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4693) Impersonating users can't unlock nodes
[ https://issues.apache.org/jira/browse/OAK-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654326#comment-15654326 ] angela commented on OAK-4693: - the JCR locking feature is completely broken in Oak; adding link to lock related issues. > Impersonating users can't unlock nodes > -- > > Key: OAK-4693 > URL: https://issues.apache.org/jira/browse/OAK-4693 > Project: Jackrabbit Oak > Issue Type: Bug > Components: jcr >Affects Versions: 1.5.8 >Reporter: Zygmunt Wiercioch > > An impersonating user can lock a node, but can't unlock a node. Relaxed > locking was introduced in: https://issues.apache.org/jira/browse/OAK-1329, > but SessionImpl.impersonate() can't pass the attributes to the > RepositoryImpl.login() method. > {code} > return getRepository().login(impCreds, sd.getWorkspaceName()); > {code} > An attempt to unlock a node when impersonating will result in a failure > since "oak.relaxed-locking" is not set. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4920) Performance: DefaultSyncHandler.listIdentities() search too broad, triggers traversal warning
[ https://issues.apache.org/jira/browse/OAK-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-4920: Fix Version/s: 1.8 > Performance: DefaultSyncHandler.listIdentities() search too broad, triggers > traversal warning > - > > Key: OAK-4920 > URL: https://issues.apache.org/jira/browse/OAK-4920 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: auth-external >Affects Versions: 1.4.8, 1.5.11 >Reporter: Alexander Klimetschek >Assignee: angela > Fix For: 1.8 > > > DefaultSyncHandler.listIdentities() collects users by [searching for all > nodes under > /home|https://github.com/apache/jackrabbit-oak/blob/b3e62e3467bf6433b5a419c2f371331f33e57820/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/DefaultSyncHandler.java#L143] > – the xpath query executed is > {noformat} > /jcr:root/home//element(*)[@jcr:primaryType] > {noformat} > With a few hundred users this easily gives an oak index traversal warning: > {noformat} > org.apache.jackrabbit.oak.spi.query.Cursors$TraversingCursor Traversed 1000 > nodes with filter Filter(query=select [jcr:path], [jcr:score], * from > [nt:base] as a where [jcr:primaryType] is not null and isdescendantnode(a, > '/home') /* xpath: /jcr:root/home//element(*)[@jcr:primaryType] */, > path=/home//*, property=[jcr:primaryType=[is not null]]); consider creating > an index or changing the query > {noformat} > A few lines later [it actually > reduces|https://github.com/apache/jackrabbit-oak/blob/b3e62e3467bf6433b5a419c2f371331f33e57820/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/DefaultSyncHandler.java#L151] > the result to authorizables which have a {{rep:externalId}}. 
Since OAK-4301 > there is an oak index for {{rep:externalId}}, so the query can be optimized > by searching for anything with {{rep:externalId}} instead: > {code:java} > userManager.findAuthorizables("rep:externalId", null); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4920) Performance: DefaultSyncHandler.listIdentities() search too broad, triggers traversal warning
[ https://issues.apache.org/jira/browse/OAK-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654319#comment-15654319 ] angela commented on OAK-4920: - there is no guarantee that the index for {{rep:externalId}} exists as introducing the uniqueness constraint was considered quite intrusive (though desirable for security reasons)... but we can look into a better solution. > Performance: DefaultSyncHandler.listIdentities() search too broad, triggers > traversal warning > - > > Key: OAK-4920 > URL: https://issues.apache.org/jira/browse/OAK-4920 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: auth-external >Affects Versions: 1.4.8, 1.5.11 >Reporter: Alexander Klimetschek > > DefaultSyncHandler.listIdentities() collects users by [searching for all > nodes under > /home|https://github.com/apache/jackrabbit-oak/blob/b3e62e3467bf6433b5a419c2f371331f33e57820/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/DefaultSyncHandler.java#L143] > – the xpath query executed is > {noformat} > /jcr:root/home//element(*)[@jcr:primaryType] > {noformat} > With a few hundred users this easily gives an oak index traversal warning: > {noformat} > org.apache.jackrabbit.oak.spi.query.Cursors$TraversingCursor Traversed 1000 > nodes with filter Filter(query=select [jcr:path], [jcr:score], * from > [nt:base] as a where [jcr:primaryType] is not null and isdescendantnode(a, > '/home') /* xpath: /jcr:root/home//element(*)[@jcr:primaryType] */, > path=/home//*, property=[jcr:primaryType=[is not null]]); consider creating > an index or changing the query > {noformat} > A few lines later [it actually > reduces|https://github.com/apache/jackrabbit-oak/blob/b3e62e3467bf6433b5a419c2f371331f33e57820/oak-auth-external/src/main/java/org/apache/jackrabbit/oak/spi/security/authentication/external/impl/DefaultSyncHandler.java#L151] > the result to authorizables which have a {{rep:externalId}}. 
Since OAK-4301 > there is an oak index for {{rep:externalId}}, so the query can be optimized > by searching for anything with {{rep:externalId}} instead: > {code:java} > userManager.findAuthorizables("rep:externalId", null); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-2906) test failures for oak-auth-ldap on Windows
[ https://issues.apache.org/jira/browse/OAK-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15654313#comment-15654313 ] angela commented on OAK-2906: - [~reschke], is this still an issue or could we resolve this one? > test failures for oak-auth-ldap on Windows > -- > > Key: OAK-2906 > URL: https://issues.apache.org/jira/browse/OAK-2906 > Project: Jackrabbit Oak > Issue Type: Bug > Components: auth-ldap >Reporter: Julian Reschke > > testAuthenticateValidateTrueFalse(org.apache.jackrabbit.oak.security.authentication.ldap.LdapProviderTest) > Time elapsed: 0.01 sec <<< ERROR! > java.io.IOException: Unable to delete file: > target\apacheds\cache\5c3940f5-2ddb-4d47-8254-8b2266c1a684\ou=system.data > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270) > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653) > at > org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.doDelete(AbstractServer.java:264) > at > org.apache.jackrabbit.oak.security.authentication.ldap.AbstractServer.setUp(AbstractServer.java:183) > at > org.apache.jackrabbit.oak.security.authentication.ldap.InternalLdapServer.setUp(InternalLdapServer.java:33) > etc... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2437) 'shallow' access to a node and its properties
[ https://issues.apache.org/jira/browse/OAK-2437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-2437: Fix Version/s: 1.8 > 'shallow' access to a node and its properties > -- > > Key: OAK-2437 > URL: https://issues.apache.org/jira/browse/OAK-2437 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Armand Planche >Assignee: angela > Fix For: 1.8 > > > In many cases it would be helpful to be able to restrict an access control > entry as 'shallow', affecting only the corresponding node and its > properties but not the subnodes (and their properties). > With the empty string glob restriction it's possible to restrict to a node > only, but the properties are not included in this case... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-103) JavaScript bindings for Oak
[ https://issues.apache.org/jira/browse/OAK-103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela resolved OAK-103. Resolution: Later > JavaScript bindings for Oak > --- > > Key: OAK-103 > URL: https://issues.apache.org/jira/browse/OAK-103 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: core >Reporter: Jukka Zitting > Labels: javascript > > A lot of content applications nowadays contain significant client-side > functionality written largely in JavaScript and leveraging frameworks like > JQuery or Backbone. JavaScript is also increasingly being used on the server. > To enable easy integration with such applications Oak should come with > first-class JavaScript bindings that work well with the major JavaScript > frameworks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-301) Document Oak
[ https://issues.apache.org/jira/browse/OAK-301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela resolved OAK-301. Resolution: Fixed > Document Oak > > > Key: OAK-301 > URL: https://issues.apache.org/jira/browse/OAK-301 > Project: Jackrabbit Oak > Issue Type: Task > Components: doc >Reporter: Jukka Zitting > Labels: documentation > > To make it easier for new people to get involved, we should have some higher > level documentation than just javadocs and dev@ threads about key parts of > the internal design in Oak. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5079) Diff would not work for bundled nodes when done without journal support
[ https://issues.apache.org/jira/browse/OAK-5079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcel Reutegger updated OAK-5079: -- Attachment: OAK-5079-v2.patch I updated the patch with some minor modifications: - Removed an unused import in a test - Removed duplicate methods in DocumentBundlingTest that are now available in TestUtils - Moved asDocumentNodeState() to TestUtils - BundledDocumentDiffer contains a JavaDoc link to DelegatingDocumentNodeState which cannot be resolved. This will fail JavaDoc generation with Java 8. I replaced it with a simple code link. Other than that, I think the patch looks good. > Diff would not work for bundled nodes when done without journal support > --- > > Key: OAK-5079 > URL: https://issues.apache.org/jira/browse/OAK-5079 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: documentmk >Reporter: Chetan Mehrotra >Assignee: Chetan Mehrotra > Fix For: 1.6 > > Attachments: OAK-5079-v1.diff, OAK-5079-v2.patch > > > DocumentNodeState.diff logic relies on the fact that all child nodes for any > given path are represented as NodeDocument. This would not work if we have > bundled child nodes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4064) Ensure oak-remote runs ITs only with integrationTesting
[ https://issues.apache.org/jira/browse/OAK-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Mari updated OAK-4064: Fix Version/s: (was: 1.5.13) 1.5.14 > Ensure oak-remote runs ITs only with integrationTesting > --- > > Key: OAK-4064 > URL: https://issues.apache.org/jira/browse/OAK-4064 > Project: Jackrabbit Oak > Issue Type: Bug > Components: remoting >Affects Versions: 1.3.16 >Reporter: Davide Giannella >Assignee: Francesco Mari >Priority: Minor > Fix For: 1.6, 1.5.14 > > > See http://markmail.org/thread/i6jwsaitjdk4ue2a -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-4064) Ensure oak-remote runs ITs only with integrationTesting
[ https://issues.apache.org/jira/browse/OAK-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Mari resolved OAK-4064. - Resolution: Fixed Fixed at r1769086. > Ensure oak-remote runs ITs only with integrationTesting > --- > > Key: OAK-4064 > URL: https://issues.apache.org/jira/browse/OAK-4064 > Project: Jackrabbit Oak > Issue Type: Bug > Components: remoting >Affects Versions: 1.3.16 >Reporter: Davide Giannella >Assignee: Francesco Mari >Priority: Minor > Fix For: 1.6, 1.5.13 > > > See http://markmail.org/thread/i6jwsaitjdk4ue2a -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4943) Keep Lucene Commits so that if the Segments file gets corrupted recovery can be attempted.
[ https://issues.apache.org/jira/browse/OAK-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653769#comment-15653769 ] Ian Boston commented on OAK-4943: - The WebConsole to analyse the Segments can be found here https://github.com/ieb/oakui > Keep Lucene Commits so that if the Segments file gets corrupted recovery can > be attempted. > -- > > Key: OAK-4943 > URL: https://issues.apache.org/jira/browse/OAK-4943 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: lucene >Affects Versions: 1.6 >Reporter: Ian Boston >Assignee: Chetan Mehrotra > Fix For: 1.6 > > > Currently, Lucene deletes all previous generations of the segments files as > it uses the default IndexDeletionPolicy. Changing this to an > IndexDeletionPolicy that keeps a number of previous segments files will allow > an operator to find the most recent valid commit when the current segments > file reports corruption. The patch found at > https://github.com/apache/jackrabbit-oak/compare/trunk...ieb:KeepLuceneCommits > keeps 10 previous commits. > A more sophisticated policy could be used to save commits non-linearly over > time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
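The retention idea above can be sketched independently of Lucene. The class below is a hypothetical, self-contained model of a "keep last N commits" policy: commits are represented only by their generation numbers (oldest first), and everything except the newest N generations is selected for deletion. A real implementation would put this logic inside a custom IndexDeletionPolicy#onCommit and call IndexCommit#delete() on the doomed commits; the names here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Model of a keep-last-N deletion policy. Commits are represented by their
// generation numbers, oldest first; the oldest surplus commits are doomed.
class KeepLastNCommits {
    static List<Long> commitsToDelete(List<Long> generations, int keep) {
        List<Long> doomed = new ArrayList<>();
        int surplus = generations.size() - keep;
        for (int i = 0; i < surplus; i++) {
            doomed.add(generations.get(i)); // oldest commits go first
        }
        return doomed;
    }
}
```

With the OAK-4943 patch's setting of 10 kept commits, an operator would still have up to 10 earlier segments generations to fall back to when the newest one reports corruption.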
[jira] [Resolved] (OAK-5093) Failed compaction should return the number of the incomplete generation
[ https://issues.apache.org/jira/browse/OAK-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Mari resolved OAK-5093. - Resolution: Fixed Fixed at r1769081. > Failed compaction should return the number of the incomplete generation > --- > > Key: OAK-5093 > URL: https://issues.apache.org/jira/browse/OAK-5093 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Reporter: Francesco Mari >Assignee: Francesco Mari > Fix For: 1.6, 1.5.14 > > > The {{compact()}} method in {{GarbageCollector}} doesn't always return the > new generation to the caller when the compaction operation fails. This > prevents the caller from reacting to a failed or interrupted compaction - e.g. by > cleaning up the new, invalid segments. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5093) Failed compaction should return the number of the incomplete generation
[ https://issues.apache.org/jira/browse/OAK-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-5093: --- Fix Version/s: 1.5.14 > Failed compaction should return the number of the incomplete generation > --- > > Key: OAK-5093 > URL: https://issues.apache.org/jira/browse/OAK-5093 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Reporter: Francesco Mari >Assignee: Francesco Mari > Fix For: 1.6, 1.5.14 > > > The {{compact()}} method in {{GarbageCollector}} doesn't always return the > new generation to the caller when the compaction operation fails. This > prevents the caller from reacting to a failed or interrupted compaction - e.g. by > cleaning up the new, invalid segments. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
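The contract OAK-5093 asks for can be illustrated with a small sketch. The names below are hypothetical, not Oak's actual GarbageCollector API: the point is only that compaction reports the generation it attempted even on failure, so the caller knows which segments to clean up.

```java
// Sketch: compact() always reports the attempted generation, even on failure,
// so callers can remove the invalid segments of that generation afterwards.
class CompactionSketch {
    static final class Result {
        final boolean success;
        final int generation; // generation attempted; valid even on failure
        Result(boolean success, int generation) {
            this.success = success;
            this.generation = generation;
        }
    }

    static Result compact(int currentGeneration, boolean simulateFailure) {
        int newGeneration = currentGeneration + 1;
        if (simulateFailure) {
            // The caller still learns which generation is incomplete.
            return new Result(false, newGeneration);
        }
        return new Result(true, newGeneration);
    }
}
```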
[jira] [Updated] (OAK-3159) Extend documentation for SegmentNodeStoreService in http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore
[ https://issues.apache.org/jira/browse/OAK-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-3159: --- Issue Type: Technical task (was: Improvement) Parent: OAK-4292 > Extend documentation for SegmentNodeStoreService in > http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore > --- > > Key: OAK-3159 > URL: https://issues.apache.org/jira/browse/OAK-3159 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: doc, segment-tar >Reporter: Konrad Windszus >Assignee: Michael Dürig > Labels: documentation > Fix For: 1.6, 1.5.17 > > > Currently the documentation at > http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore only > documents the properties > # repository.home and > # tarmk.size > All the other properties like customBlobStore and tarmk.mode are not > documented. Please extend that. Also it would be good if the table could be > extended with what type is supported for the individual properties. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3159) Extend documentation for SegmentNodeStoreService in http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore
[ https://issues.apache.org/jira/browse/OAK-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-3159: --- Labels: documentation (was: ) > Extend documentation for SegmentNodeStoreService in > http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore > --- > > Key: OAK-3159 > URL: https://issues.apache.org/jira/browse/OAK-3159 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: doc, segment-tar >Reporter: Konrad Windszus >Assignee: Michael Dürig > Labels: documentation > Fix For: 1.6, 1.5.17 > > > Currently the documentation at > http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore only > documents the properties > # repository.home and > # tarmk.size > All the other properties like customBlobStore and tarmk.mode are not > documented. Please extend that. Also it would be good if the table could be > extended with what type is supported for the individual properties. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (OAK-5009) ExternalToExternalMigrationTest failures on Windows
[ https://issues.apache.org/jira/browse/OAK-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653724#comment-15653724 ] Tomek Rękawek edited comment on OAK-5009 at 11/10/16 10:46 AM: --- I've fixed the issue and re-enabled the tests in r1769078. was (Author: tomek.rekawek): I've fixed the issue and re-enabled the tests. > ExternalToExternalMigrationTest failures on Windows > --- > > Key: OAK-5009 > URL: https://issues.apache.org/jira/browse/OAK-5009 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core, segment-tar >Reporter: Julian Reschke >Assignee: Tomek Rękawek > Labels: test-failure > Fix For: 1.6, 1.5.13 > > > {noformat} > Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 13.484 sec > <<< FAILURE! - in > org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest > blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest) > Time elapsed: 0.463 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483643533-0\segmentstore\data1a.tar > blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest) > Time elapsed: 13.021 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483643996-0\segmentstore\data0a.tar > Running > org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest > Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 12.719 sec > <<< FAILURE! - in > org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest > blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest) > Time elapsed: 0.157 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483657018-0\segmentstore\data1a.tar > blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest) > Time elapsed: 12.561 sec <<< ERROR! 
> java.io.IOException: Unable to delete file: > C:\tmp\1477483657193-0\segmentstore\data0a.tar > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-5009) ExternalToExternalMigrationTest failures on Windows
[ https://issues.apache.org/jira/browse/OAK-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomek Rękawek resolved OAK-5009. Resolution: Fixed > ExternalToExternalMigrationTest failures on Windows > --- > > Key: OAK-5009 > URL: https://issues.apache.org/jira/browse/OAK-5009 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core, segment-tar >Reporter: Julian Reschke >Assignee: Tomek Rękawek > Labels: test-failure > Fix For: 1.6, 1.5.13 > > > {noformat} > Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 13.484 sec > <<< FAILURE! - in > org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest > blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest) > Time elapsed: 0.463 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483643533-0\segmentstore\data1a.tar > blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest) > Time elapsed: 13.021 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483643996-0\segmentstore\data0a.tar > Running > org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest > Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 12.719 sec > <<< FAILURE! - in > org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest > blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest) > Time elapsed: 0.157 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483657018-0\segmentstore\data1a.tar > blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest) > Time elapsed: 12.561 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483657193-0\segmentstore\data0a.tar > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3262) oak-jcr: update test exclusions once JCR-3901 is resolved
[ https://issues.apache.org/jira/browse/OAK-3262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3262: Fix Version/s: (was: 1.6) 1.8 > oak-jcr: update test exclusions once JCR-3901 is resolved > - > > Key: OAK-3262 > URL: https://issues.apache.org/jira/browse/OAK-3262 > Project: Jackrabbit Oak > Issue Type: Sub-task > Components: jcr >Affects Versions: 1.2.3, 1.3.3, 1.0.18 >Reporter: Julian Reschke >Assignee: Julian Reschke >Priority: Minor > Fix For: 1.8 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3993) It should be possible to have UNION over 2 suggestion/spellcheck queries
[ https://issues.apache.org/jira/browse/OAK-3993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-3993: Fix Version/s: (was: 1.6) > It should be possible to have UNION over 2 suggestion/spellcheck queries > > > Key: OAK-3993 > URL: https://issues.apache.org/jira/browse/OAK-3993 > Project: Jackrabbit Oak > Issue Type: Sub-task > Components: lucene >Reporter: Vikas Saurabh >Assignee: Vikas Saurabh >Priority: Minor > > It should be possible to get combined suggestions from 2 suggestion queries. > A useful case for this is to union suggestions from 2 (or more) sub-paths. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-1944) DocumentNodeStoreService: make table prefixes in RDB persistence configurable
[ https://issues.apache.org/jira/browse/OAK-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-1944: Fix Version/s: (was: 1.6) 1.8 > DocumentNodeStoreService: make table prefixes in RDB persistence configurable > - > > Key: OAK-1944 > URL: https://issues.apache.org/jira/browse/OAK-1944 > Project: Jackrabbit Oak > Issue Type: Sub-task > Components: rdbmk >Affects Versions: 1.1.0 >Reporter: Julian Reschke >Assignee: Julian Reschke >Priority: Minor > Fix For: 1.8 > > > It would be good if the table name prefixes could be configured: > - disambiguate from other tables that might be in the database > - easier config of test cases > (for the latter case, we should also support destroying the tables) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-5009) ExternalToExternalMigrationTest failures on Windows
[ https://issues.apache.org/jira/browse/OAK-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653716#comment-15653716 ] Francesco Mari commented on OAK-5009: - I ignored the failing tests at r1769071. > ExternalToExternalMigrationTest failures on Windows > --- > > Key: OAK-5009 > URL: https://issues.apache.org/jira/browse/OAK-5009 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core, segment-tar >Reporter: Julian Reschke >Assignee: Tomek Rękawek > Labels: test-failure > Fix For: 1.6, 1.5.13 > > > {noformat} > Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 13.484 sec > <<< FAILURE! - in > org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest > blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest) > Time elapsed: 0.463 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483643533-0\segmentstore\data1a.tar > blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest) > Time elapsed: 13.021 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483643996-0\segmentstore\data0a.tar > Running > org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest > Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 12.719 sec > <<< FAILURE! - in > org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest > blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest) > Time elapsed: 0.157 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483657018-0\segmentstore\data1a.tar > blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest) > Time elapsed: 12.561 sec <<< ERROR! 
> java.io.IOException: Unable to delete file: > C:\tmp\1477483657193-0\segmentstore\data0a.tar > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3992) For suggestion/spellcheck, index planner should correctly pick an index which is deeper in hierarchy
[ https://issues.apache.org/jira/browse/OAK-3992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-3992: Fix Version/s: (was: 1.6) > For suggestion/spellcheck, index planner should correctly pick an index which > is deeper in hierarchy > - > > Key: OAK-3992 > URL: https://issues.apache.org/jira/browse/OAK-3992 > Project: Jackrabbit Oak > Issue Type: Sub-task > Components: lucene >Reporter: Vikas Saurabh >Assignee: Vikas Saurabh >Priority: Minor > > Currently, if we have index I1 for oak:Unstructured at /oak:index/usc and > another at /some/hierarchy/oak:index/usc also at oak:Unstructured (or a > sub-type), then suggestion/spellcheck for both indices give min cost and > planner can pick any one of those arbitrarily for queries for > ISDESCENDANTNODE([/some/hierarchy/\]) > It'd be useful for the index planner to pick the index which is closer to > descendant-root. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3001) Simplify JournalGarbageCollector using a dedicated timestamp property
[ https://issues.apache.org/jira/browse/OAK-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-3001: Fix Version/s: 1.5.14 > Simplify JournalGarbageCollector using a dedicated timestamp property > - > > Key: OAK-3001 > URL: https://issues.apache.org/jira/browse/OAK-3001 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core, mongomk >Reporter: Stefan Egli >Assignee: Vikas Saurabh >Priority: Critical > Labels: scalability > Fix For: 1.6, 1.5.14 > > > This subtask is about spawning out a > [comment|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585733&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585733] > from [~chetanm] re JournalGC: > {quote} > Further looking at JournalGarbageCollector ... it would be simpler if you > record the journal entry timestamp as an attribute in JournalEntry document > and then you can delete all the entries which are older than some time by a > simple query. This would avoid fetching all the entries to be deleted on the > Oak side > {quote} > and a corresponding > [reply|https://issues.apache.org/jira/browse/OAK-2829?focusedCommentId=14585870&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14585870] > from myself: > {quote} > Re querying by timestamp: that would indeed be simpler. With the current set > of DocumentStore API however, I believe this is not possible. But: > [DocumentStore.query|https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/main/java/org/apache/jackrabbit/oak/plugins/document/DocumentStore.java#L127] > comes quite close: it would probably just require the opposite of that > method too: > {code} > public <T extends Document> List<T> query(Collection<T> collection, > String fromKey, > String toKey, > String indexedProperty, > long endValue, > int limit) { > {code} > .. 
or what about generalizing this method to have both a {{startValue}} and > an {{endValue}} - with {{-1}} indicating when one of them is not used? > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
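The generalized method discussed in the quote above can be sketched in plain Java. This is a hedged model, not Oak's DocumentStore API: documents are represented as property maps and the scan is in-memory, whereas the real store would translate the bounds into a database query. The {{-1}}-means-unbounded convention follows the quoted proposal.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Model of a range query with both a start and an end bound on an indexed
// property; -1 marks an unused bound, mirroring the proposal above.
class RangeQuerySketch {
    static List<Map<String, Long>> query(List<Map<String, Long>> docs,
                                         String indexedProperty,
                                         long startValue,
                                         long endValue,
                                         int limit) {
        List<Map<String, Long>> result = new ArrayList<>();
        for (Map<String, Long> doc : docs) {
            Long v = doc.get(indexedProperty);
            if (v == null) continue;
            if (startValue != -1 && v < startValue) continue; // lower bound
            if (endValue != -1 && v > endValue) continue;     // upper bound
            result.add(doc);
            if (result.size() == limit) break;
        }
        return result;
    }
}
```

For journal GC, setting only the upper bound (e.g. `endValue = now - maxAge`) selects exactly the entries old enough to delete, without fetching them all to the Oak side first.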
[jira] [Updated] (OAK-3983) JournalGarbageCollector: use new DocumentStore remove() method
[ https://issues.apache.org/jira/browse/OAK-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-3983: Fix Version/s: 1.5.14 > JournalGarbageCollector: use new DocumentStore remove() method > -- > > Key: OAK-3983 > URL: https://issues.apache.org/jira/browse/OAK-3983 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: documentmk >Reporter: Julian Reschke >Assignee: Vikas Saurabh > Fix For: 1.6, 1.5.14 > > > As introduced in OAK-3982. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3983) JournalGarbageCollector: use new DocumentStore remove() method
[ https://issues.apache.org/jira/browse/OAK-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-3983: Assignee: Vikas Saurabh > JournalGarbageCollector: use new DocumentStore remove() method > -- > > Key: OAK-3983 > URL: https://issues.apache.org/jira/browse/OAK-3983 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: documentmk >Reporter: Julian Reschke >Assignee: Vikas Saurabh > Fix For: 1.6, 1.5.14 > > > As introduced in OAK-3982. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3985) MongoDocumentStore: implement new conditional remove method
[ https://issues.apache.org/jira/browse/OAK-3985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-3985: Fix Version/s: 1.5.14 > MongoDocumentStore: implement new conditional remove method > --- > > Key: OAK-3985 > URL: https://issues.apache.org/jira/browse/OAK-3985 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: mongomk >Reporter: Julian Reschke >Assignee: Vikas Saurabh > Fix For: 1.6, 1.5.14 > > > As introduced in OAK-3982. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3985) MongoDocumentStore: implement new conditional remove method
[ https://issues.apache.org/jira/browse/OAK-3985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-3985: Assignee: Vikas Saurabh > MongoDocumentStore: implement new conditional remove method > --- > > Key: OAK-3985 > URL: https://issues.apache.org/jira/browse/OAK-3985 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: mongomk >Reporter: Julian Reschke >Assignee: Vikas Saurabh > Fix For: 1.6, 1.5.14 > > > As introduced in OAK-3982. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3482) Lock object spec compliance issue
[ https://issues.apache.org/jira/browse/OAK-3482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-3482: Fix Version/s: (was: 1.6) 1.8 > Lock object spec compliance issue > - > > Key: OAK-3482 > URL: https://issues.apache.org/jira/browse/OAK-3482 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: jcr >Affects Versions: 1.0.21, 1.2.6, 1.3.7 >Reporter: Julian Reschke > Fix For: 1.8 > > > The {{Lock}} object does not have any knowledge about whether a locked node > has been locked session-scoped or open-scoped. Consequently, > {{getLockToken()}} for a session-scoped lock can return the lock token > although it should not. This causes the test failure in > {{org.apache.jackrabbit.test.api.lock.LockTest#testNodeLocked}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4234) NodeDelegate: checking for locks should not require read access to system lock properties
[ https://issues.apache.org/jira/browse/OAK-4234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-4234: Fix Version/s: (was: 1.6) 1.8 > NodeDelegate: checking for locks should not require read access to system > lock properties > - > > Key: OAK-4234 > URL: https://issues.apache.org/jira/browse/OAK-4234 > Project: Jackrabbit Oak > Issue Type: Technical task > Components: jcr >Reporter: Julian Reschke >Priority: Minor > Fix For: 1.8 > > > {{isLocked()}} currently will return false when the caller does not have read > access to the lock properties of the node. > For shallow nodes this might be harmless, but for deep locks it's not (so in > that case, the information is only accurate when the caller has read access > to all ancestors' system properties). > Furthermore, checking for locks might be done frequently, and could be faster > when unnecessary checks would be skipped. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2821) PersistentCache not used for RDBBlobStore
[ https://issues.apache.org/jira/browse/OAK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-2821: Assignee: Thomas Mueller > PersistentCache not used for RDBBlobStore > - > > Key: OAK-2821 > URL: https://issues.apache.org/jira/browse/OAK-2821 > Project: Jackrabbit Oak > Issue Type: Bug > Components: rdbmk >Affects Versions: 1.2.12, 1.0.28, 1.4.0 >Reporter: Julian Reschke >Assignee: Thomas Mueller >Priority: Minor > Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4 > Fix For: 1.8 > > Attachments: OAK-2821.diff > > > DocumentMK is currently inconsistent wrt the use of the PersistentCache > for BlobStore. It is used for Mongo, but not for RDB. We should be consistent > here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2821) PersistentCache not used for RDBBlobStore
[ https://issues.apache.org/jira/browse/OAK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-2821: Priority: Minor (was: Major) > PersistentCache not used for RDBBlobStore > - > > Key: OAK-2821 > URL: https://issues.apache.org/jira/browse/OAK-2821 > Project: Jackrabbit Oak > Issue Type: Bug > Components: rdbmk >Affects Versions: 1.2.12, 1.0.28, 1.4.0 >Reporter: Julian Reschke >Priority: Minor > Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4 > Fix For: 1.8 > > Attachments: OAK-2821.diff > > > DocumentMK is currently inconsistent wrt the use of the PersistentCache > for BlobStore. It is used for Mongo, but not for RDB. We should be consistent > here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2821) PersistentCache not used for RDBBlobStore
[ https://issues.apache.org/jira/browse/OAK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-2821: Fix Version/s: (was: 1.6) 1.8 > PersistentCache not used for RDBBlobStore > - > > Key: OAK-2821 > URL: https://issues.apache.org/jira/browse/OAK-2821 > Project: Jackrabbit Oak > Issue Type: Bug > Components: rdbmk >Affects Versions: 1.2.12, 1.0.28, 1.4.0 >Reporter: Julian Reschke > Labels: candidate_oak_1_0, candidate_oak_1_2, candidate_oak_1_4 > Fix For: 1.8 > > Attachments: OAK-2821.diff > > > DocumentMK is currently inconsistent wrt the use of the PersistentCache > for BlobStore. It is used for Mongo, but not for RDB. We should be consistent > here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2110) potential performance issues with VersionGarbageCollector
[ https://issues.apache.org/jira/browse/OAK-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-2110: Fix Version/s: (was: 1.5.13) > potential performance issues with VersionGarbageCollector > - > > Key: OAK-2110 > URL: https://issues.apache.org/jira/browse/OAK-2110 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: doc, mongomk, rdbmk >Reporter: Julian Reschke > Fix For: 1.6 > > > This one currently special-cases Mongo. For other persistences, it > - fetches *all* documents > - filters by SD_TYPE > - filters by lastmod of versions > - deletes what remains > This is not only inefficient but also fails with OutOfMemory for any larger > repo. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-2110) potential performance issues with VersionGarbageCollector
[ https://issues.apache.org/jira/browse/OAK-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela resolved OAK-2110. - Resolution: Fixed Fix Version/s: 1.5.13 > potential performance issues with VersionGarbageCollector > - > > Key: OAK-2110 > URL: https://issues.apache.org/jira/browse/OAK-2110 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: doc, mongomk, rdbmk >Reporter: Julian Reschke > Fix For: 1.6, 1.5.13 > > > This one currently special-cases Mongo. For other persistences, it > - fetches *all* documents > - filters by SD_TYPE > - filters by lastmod of versions > - deletes what remains > This is not only inefficient but also fails with OutOfMemory for any larger > repo. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5069) Backup fails when called from RepositoryManagementMBean#startBackup
[ https://issues.apache.org/jira/browse/OAK-5069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrei Dulceanu updated OAK-5069: - Attachment: OAK-5069-01.patch I created a new property, {{repository.backup.dir}} in {{SegmentNodeStoreService}} with default value {{segmentstore-backup}}, as discussed. One thing noted while testing the fix: this is not an incremental backup, but I don't know if it was supposed to be. [~mduerig], WDYT? [~frm] Could you take a look at the patch, please? > Backup fails when called from RepositoryManagementMBean#startBackup > --- > > Key: OAK-5069 > URL: https://issues.apache.org/jira/browse/OAK-5069 > Project: Jackrabbit Oak > Issue Type: Bug > Components: segment-tar >Affects Versions: 1.5.12 >Reporter: Andrei Dulceanu >Assignee: Andrei Dulceanu > Fix For: 1.6, 1.5.14 > > Attachments: OAK-5069-01.patch > > > When calling {{RepositoryManagementMBean.startBackup}}, the operation fails > with the following stacktrace: > {code:java} > 04.11.2016 13:12:56.733 *ERROR* [qtp2039314079-250] > org.apache.jackrabbit.oak.management.ManagementOperation Backup failed > java.lang.IllegalStateException: /repository/segmentstore is in use by > another store. 
> at > org.apache.jackrabbit.oak.segment.file.FileStore.<init>(FileStore.java:168) > at > org.apache.jackrabbit.oak.segment.file.FileStoreBuilder.build(FileStoreBuilder.java:304) > at > org.apache.jackrabbit.oak.backup.impl.FileStoreBackupImpl.backup(FileStoreBackupImpl.java:65) > at > org.apache.jackrabbit.oak.backup.impl.FileStoreBackupRestoreImpl$1.call(FileStoreBackupRestoreImpl.java:102) > at > org.apache.jackrabbit.oak.backup.impl.FileStoreBackupRestoreImpl$1.call(FileStoreBackupRestoreImpl.java:97) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.nio.channels.OverlappingFileLockException: null > at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255) > at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152) > at sun.nio.ch.FileChannelImpl.lock(FileChannelImpl.java:1062) > at java.nio.channels.FileChannel.lock(FileChannel.java:1053) > at > org.apache.jackrabbit.oak.segment.file.FileStore.<init>(FileStore.java:166) > ... 8 common frames omitted > {code} > /cc [~mduerig] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
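The root cause in the stack trace above can be reproduced with the standard java.nio API: within a single JVM, acquiring a second lock on an already-locked file throws OverlappingFileLockException. The sketch below is illustrative only; the actual fix is to back up into a separate directory (the new {{repository.backup.dir}} property) instead of reopening the locked segment store.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Demonstrates the failure mode: two channels in the same JVM cannot both
// lock the same file; the second lock() call fails immediately.
class LockDemo {
    static boolean secondLockFails() {
        try {
            Path file = Files.createTempFile("oak-lock-demo", ".lck");
            try (FileChannel a = FileChannel.open(file, StandardOpenOption.WRITE);
                 FileChannel b = FileChannel.open(file, StandardOpenOption.WRITE)) {
                FileLock first = a.lock(); // first lock succeeds
                try {
                    b.lock(); // same JVM, same file: must fail
                    return false;
                } catch (OverlappingFileLockException expected) {
                    return true;
                } finally {
                    first.release();
                }
            } finally {
                Files.deleteIfExists(file);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```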
[jira] [Comment Edited] (OAK-4080) Fulltext queries: compatibility for "contains" with special characters
[ https://issues.apache.org/jira/browse/OAK-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653685#comment-15653685 ] Thomas Mueller edited comment on OAK-4080 at 11/10/16 10:28 AM: [~teofili] do you know where documentation about supported / unsupported characters, and escaping of those, for Lucene can be found? was (Author: tmueller): [~teofili] do you know where documentation about supported / unsupported characters for Lucene can be found? > Fulltext queries: compatibility for "contains" with special characters > -- > > Key: OAK-4080 > URL: https://issues.apache.org/jira/browse/OAK-4080 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: query >Reporter: Thomas Mueller >Assignee: Thomas Mueller > Fix For: 1.8 > > > Currently, "jcr:contains" with special characters doesn't work the same when > using a Lucene fulltext index compatVersion 2 compared to using compatVersion > 1 or Jackrabbit 2.x. > This needs to be documented. Also, it might make sense to provide a > compatibility flag, so that behavior is the same as with the old versions, at > least as much as possible, even though new features would not be supported. > The one example I know is using "\*" as in: > {noformat} > SELECT * FROM [nt:base] AS c WHERE CONTAINS(c.[test], 'athxv:!*') > {noformat} > With compatVersion 2, the "\*" needs to be escaped as follows, in order to > match properties with exactly this text: > {noformat} > SELECT * FROM [nt:base] AS c WHERE CONTAINS(c.[test], 'athxv:!\*') > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (OAK-4080) Fulltext queries: compatibility for "contains" with special characters
[ https://issues.apache.org/jira/browse/OAK-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653685#comment-15653685 ] Thomas Mueller commented on OAK-4080: - [~teofili] do you know where documentation about supported / unsupported characters for Lucene can be found? > Fulltext queries: compatibility for "contains" with special characters > -- > > Key: OAK-4080 > URL: https://issues.apache.org/jira/browse/OAK-4080 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: query >Reporter: Thomas Mueller >Assignee: Thomas Mueller > Fix For: 1.8 > > > Currently, "jcr:contains" with special characters doesn't work the same when > using a Lucene fulltext index compatVersion 2 compared to using compatVersion > 1 or Jackrabbit 2.x. > This needs to be documented. Also, it might make sense to provide a > compatibility flag, so that behavior is the same as with the old versions, at > least as much as possible, even though new features would not be supported. > The one example I know is using "\*" as in: > {noformat} > SELECT * FROM [nt:base] AS c WHERE CONTAINS(c.[test], 'athxv:!*') > {noformat} > With compatVersion 2, the "\*" needs to be escaped as follows, in order to > match properties with exactly this text: > {noformat} > SELECT * FROM [nt:base] AS c WHERE CONTAINS(c.[test], 'athxv:!\*') > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
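With compatVersion 2 the fulltext term reaches Lucene's query parser, which reserves characters such as "*" and "?" for wildcards, hence the backslash escaping shown in the issue. The helper below is purely hypothetical (it is neither an Oak nor a Lucene API); it only sketches the kind of escaping the example queries rely on:

```java
public class EscapeDemo {

    // Hypothetical helper, not an Oak or Lucene API: backslash-escape a few
    // characters the Lucene query parser treats specially, so that a literal
    // "*" is matched as text rather than interpreted as a wildcard.
    static String escapeForContains(String term) {
        StringBuilder sb = new StringBuilder();
        for (char ch : term.toCharArray()) {
            if (ch == '\\' || ch == '*' || ch == '?' || ch == '"') {
                sb.append('\\');
            }
            sb.append(ch);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapeForContains("athxv:!*")); // athxv:!\*
    }
}
```

The escaped form corresponds to the second query in the issue, which matches properties containing exactly the text "athxv:!*".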
[jira] [Updated] (OAK-4647) Multiplexing support in PropertyIndexStats MBean
[ https://issues.apache.org/jira/browse/OAK-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-4647: Fix Version/s: (was: 1.6) 1.8 > Multiplexing support in PropertyIndexStats MBean > > > Key: OAK-4647 > URL: https://issues.apache.org/jira/browse/OAK-4647 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Chetan Mehrotra >Priority: Minor > Fix For: 1.8 > > > {{PropertyIndexStats}} MBean added in OAK-4144 allows introspecting property > index content. This needs to be adapted to support updated storage format > when multiplexing is enabled -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (OAK-4927) FileStore compaction should account for multiple valid checkpoints
[ https://issues.apache.org/jira/browse/OAK-4927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela resolved OAK-4927. - Resolution: Won't Fix Fix Version/s: (was: 1.6) > FileStore compaction should account for multiple valid checkpoints > -- > > Key: OAK-4927 > URL: https://issues.apache.org/jira/browse/OAK-4927 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segmentmk >Affects Versions: 1.4 >Reporter: Chetan Mehrotra >Priority: Minor > Labels: candidate_oak_1_4 > > With Oak 1.4 we have setups which configure multiple async indexers, which > leads to multiple checkpoints being present in the system. OAK-4043 addressed > that in oak-run. However, currently > {{org.apache.jackrabbit.oak.plugins.segment.file.FileStore#compact}} logs a > warning if more than one checkpoint is found. > {noformat} > 22:34:21.797 [main] WARN o.a.j.o.p.segment.file.FileStore - TarMK GC #0: > compaction found 2 checkpoints, you might need to run checkpoint cleanup > {noformat} > This warning should be fixed -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4957) SegmentRevisionGC MBean should report more detailed gc status information
[ https://issues.apache.org/jira/browse/OAK-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-4957: Assignee: Andrei Dulceanu > SegmentRevisionGC MBean should report more detailed gc status information > --- > > Key: OAK-4957 > URL: https://issues.apache.org/jira/browse/OAK-4957 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: segment-tar >Reporter: Michael Dürig >Assignee: Andrei Dulceanu > Labels: gc, monitoring > Fix For: 1.6, 1.5.14 > > > Regarding this, the current "Status" is showing the last log info. This is > useful, but it would also be interesting to expose the real-time status. For > monitoring it would be useful to know exactly in which phase we are, e.g. a > field showing one of the following: > - idle > - estimation > - compaction > - compaction-retry-1 > - compaction-retry-2 > - compaction-forcecompact > - cleanup -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-5062) Test failure in DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobDataSource
[ https://issues.apache.org/jira/browse/OAK-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-5062: Assignee: Thomas Mueller > Test failure in > DocumentNodeStoreConfigTest.testRDBDocumentStore_CustomBlobDataSource > - > > Key: OAK-5062 > URL: https://issues.apache.org/jira/browse/OAK-5062 > Project: Jackrabbit Oak > Issue Type: Test > Components: documentmk >Reporter: Chetan Mehrotra >Assignee: Thomas Mueller >Priority: Minor > Labels: test > Fix For: 1.6 > > Attachments: unit-tests.log > > > Saw a test failure > [here|https://builds.apache.org/job/Apache%20Jackrabbit%20Oak%20matrix/1262/jdk=JDK%201.7%20(latest),label=Ubuntu,nsfixtures=DOCUMENT_NS,profile=unittesting/testReport/junit/org.apache.jackrabbit.oak.run.osgi/DocumentNodeStoreConfigTest/testRDBDocumentStore_CustomBlobDataSource/] > In the logs following can be seen > {noformat} > Caused by: org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement > "create alias if not exists unix_timestamp as [*]$$ long unix_timestamp() { > return System.currentTimeMillis()/1000L; }"; SQL statement: > create alias if not exists unix_timestamp as $$ long unix_timestamp() { > return System.currentTimeMillis()/1000L; } $$; [42000-193] > at org.h2.message.DbException.getJdbcSQLException(DbException.java:345) > ~[h2-1.4.193.jar:1.4.193] > at org.h2.message.DbException.get(DbException.java:179) > ~[h2-1.4.193.jar:1.4.193] > at org.h2.message.DbException.get(DbException.java:155) > ~[h2-1.4.193.jar:1.4.193] > at org.h2.message.DbException.getSyntaxError(DbException.java:191) > ~[h2-1.4.193.jar:1.4.193] > at org.h2.command.Parser.getSyntaxError(Parser.java:530) > ~[h2-1.4.193.jar:1.4.193] > at org.h2.command.Parser.checkRunOver(Parser.java:3694) > ~[h2-1.4.193.jar:1.4.193] > at org.h2.command.Parser.initialize(Parser.java:3559) > ~[h2-1.4.193.jar:1.4.193] > at org.h2.command.Parser.parse(Parser.java:304) > ~[h2-1.4.193.jar:1.4.193] > at org.h2.command.Parser.parse(Parser.java:293) > 
~[h2-1.4.193.jar:1.4.193] > at > org.h2.command.CommandContainer.recompileIfRequired(CommandContainer.java:73) > ~[h2-1.4.193.jar:1.4.193] > at org.h2.command.CommandContainer.update(CommandContainer.java:93) > ~[h2-1.4.193.jar:1.4.193] > at org.h2.command.Command.executeUpdate(Command.java:258) > ~[h2-1.4.193.jar:1.4.193] > at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:184) > ~[h2-1.4.193.jar:1.4.193] > at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:158) > ~[h2-1.4.193.jar:1.4.193] > at > org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.initialize(RDBDocumentStore.java:802) > ~[oak-core-1.6-SNAPSHOT.jar:1.6-SNAPSHOT] > at > org.apache.jackrabbit.oak.plugins.document.rdb.RDBDocumentStore.<init>(RDBDocumentStore.java:209) > ~[oak-core-1.6-SNAPSHOT.jar:1.6-SNAPSHOT] > ... 26 common frames omitted > 03.11.2016 14:52:10.662 *ERROR* [CM Event Dispatcher (Fire > ConfigurationEvent: > pid=org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService)] > org.apache.jackrabbit.oak-core > [org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService(23)] > Failed creating the component instance; see log for reason > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4452) Consistently use the term segment-tar
[ https://issues.apache.org/jira/browse/OAK-4452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-4452: Assignee: Michael Dürig > Consistently use the term segment-tar > - > > Key: OAK-4452 > URL: https://issues.apache.org/jira/browse/OAK-4452 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: doc, segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig >Priority: Minor > Labels: documentation, production > Fix For: 1.6, 1.5.16 > > > We should make an effort to consistently use the term "segment-tar" instead > of "SegmentMK", "TarMK", etc. in logging, exceptions, labels, descriptions, > documentation etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4292) Document Oak segment-tar
[ https://issues.apache.org/jira/browse/OAK-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-4292: Assignee: Michael Dürig > Document Oak segment-tar > > > Key: OAK-4292 > URL: https://issues.apache.org/jira/browse/OAK-4292 > Project: Jackrabbit Oak > Issue Type: Task > Components: doc, segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: documentation, gc > Fix For: 1.6, 1.5.17 > > > Document Oak Segment Tar. Specifically: > * New and changed configuration and monitoring options > * Changes in gc (OAK-3348 et al.) > * Changes in segment / tar format (OAK-3348) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3159) Extend documentation for SegmentNodeStoreService in http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore
[ https://issues.apache.org/jira/browse/OAK-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-3159: --- Component/s: segment-tar > Extend documentation for SegmentNodeStoreService in > http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore > --- > > Key: OAK-3159 > URL: https://issues.apache.org/jira/browse/OAK-3159 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: doc, segment-tar >Reporter: Konrad Windszus >Assignee: Michael Dürig > Fix For: 1.6, 1.5.17 > > > Currently the documentation at > http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore only > documents the properties > # repository.home and > # tarmk.size > All the other properties like customBlobStore, tarmk.mode, are not > documented. Please extend that. Also it would be good, if the table could be > extended with what type is supported for the individual properties. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (OAK-4080) Fulltext queries: compatibility for "contains" with special characters
[ https://issues.apache.org/jira/browse/OAK-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Mueller reassigned OAK-4080: --- Assignee: Thomas Mueller > Fulltext queries: compatibility for "contains" with special characters > -- > > Key: OAK-4080 > URL: https://issues.apache.org/jira/browse/OAK-4080 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: query >Reporter: Thomas Mueller >Assignee: Thomas Mueller > Fix For: 1.8 > > > Currently, "jcr:contains" with special characters doesn't work the same when > using a Lucene fulltext index compatVersion 2 compared to using compatVersion > 1 or Jackrabbit 2.x. > This needs to be documented. Also, it might make sense to provide a > compatibility flag, so that behavior is the same as with the old versions, at > least as much as possible, even though new features would not be supported. > The one example I know is using "\*" as in: > {noformat} > SELECT * FROM [nt:base] AS c WHERE CONTAINS(c.[test], 'athxv:!*') > {noformat} > With compatVersion 2, the "\*" needs to be escaped as follows, in order to > match properties with exactly this text: > {noformat} > SELECT * FROM [nt:base] AS c WHERE CONTAINS(c.[test], 'athxv:!\*') > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3159) Extend documentation for SegmentNodeStoreService in http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore
[ https://issues.apache.org/jira/browse/OAK-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig updated OAK-3159: --- Fix Version/s: 1.5.17 > Extend documentation for SegmentNodeStoreService in > http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore > --- > > Key: OAK-3159 > URL: https://issues.apache.org/jira/browse/OAK-3159 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: doc, segment-tar >Reporter: Konrad Windszus >Assignee: Michael Dürig > Fix For: 1.6, 1.5.17 > > > Currently the documentation at > http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore only > documents the properties > # repository.home and > # tarmk.size > All the other properties like customBlobStore, tarmk.mode, are not > documented. Please extend that. Also it would be good, if the table could be > extended with what type is supported for the individual properties. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4030) DocumentNodeStore: required server time accuracy
[ https://issues.apache.org/jira/browse/OAK-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-4030: Assignee: Marcel Reutegger > DocumentNodeStore: required server time accuracy > > > Key: OAK-4030 > URL: https://issues.apache.org/jira/browse/OAK-4030 > Project: Jackrabbit Oak > Issue Type: Documentation > Components: documentmk >Reporter: Julian Reschke >Assignee: Marcel Reutegger >Priority: Minor > Labels: documentation > Fix For: 1.6 > > > The DocumentNodeStore currently requires that the local time and the > persistence time differ at most 2 seconds. > I recently tried to run a cluster with two Windows machines, and despite them > being configured to use the same NTP service, they were still 4..5 s off. > https://blogs.technet.microsoft.com/askds/2007/10/23/high-accuracy-w32time-requirements/ > seems to confirm that by default, Windows can't provide the required > accuracy. > One workaround seems to be to install custom ntp clients; but do we really > want to require this? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4309) Align property labels and descriptions in SegmentNodeStoreService
[ https://issues.apache.org/jira/browse/OAK-4309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-4309: Assignee: Michael Dürig > Align property labels and descriptions in SegmentNodeStoreService > - > > Key: OAK-4309 > URL: https://issues.apache.org/jira/browse/OAK-4309 > Project: Jackrabbit Oak > Issue Type: Task > Components: segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: production > Fix For: 1.6, 1.5.16 > > > We need to align / improve the labels and descriptions in > {{SegmentNodeStoreService}} to match their actual purpose. At the same time I > would opt for changing "compaction" to "revision gc" in all places where it > is used synonymously for the latter. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4074) LengthCachingDataStore should be enabled by default in oak-upgrade
[ https://issues.apache.org/jira/browse/OAK-4074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-4074: Fix Version/s: (was: 1.6) 1.8 > LengthCachingDataStore should be enabled by default in oak-upgrade > -- > > Key: OAK-4074 > URL: https://issues.apache.org/jira/browse/OAK-4074 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: upgrade >Reporter: Tomek Rękawek > Fix For: 1.8 > > > OAK-2882 introduces {{LengthCachingDataStore}} which may increase the > performance of repeated upgrades (OAK-2619), especially if a slow blob store > is used (eg. s3). Let's have it enabled by default. Also, let's add a new CLI > parameter to change the default location of the {{mappingFilePath}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-3159) Extend documentation for SegmentNodeStoreService in http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore
[ https://issues.apache.org/jira/browse/OAK-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-3159: Assignee: Michael Dürig > Extend documentation for SegmentNodeStoreService in > http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore > --- > > Key: OAK-3159 > URL: https://issues.apache.org/jira/browse/OAK-3159 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: doc >Reporter: Konrad Windszus >Assignee: Michael Dürig > Fix For: 1.6 > > > Currently the documentation at > http://jackrabbit.apache.org/oak/docs/osgi_config.html#SegmentNodeStore only > documents the properties > # repository.home and > # tarmk.size > All the other properties like customBlobStore, tarmk.mode, are not > documented. Please extend that. Also it would be good, if the table could be > extended with what type is supported for the individual properties. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4839) Allow to register DocumentNodeStore as a NodeStoreProvider
[ https://issues.apache.org/jira/browse/OAK-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-4839: Assignee: Tomek Rękawek > Allow to register DocumentNodeStore as a NodeStoreProvider > -- > > Key: OAK-4839 > URL: https://issues.apache.org/jira/browse/OAK-4839 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: documentmk >Reporter: Tomek Rękawek >Assignee: Tomek Rękawek >Priority: Minor > Fix For: 1.6 > > Attachments: OAK-4839.patch > > > NodeStoreProvider is a service that provides access to the NodeStore, but is > meant to be used together with other NodeStores on the same JVM. > SegmentNodeStore can be already registered as NodeStoreProvider. Let's add > similar feature to the DocumentNodeStoreService. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4080) Fulltext queries: compatibility for "contains" with special characters
[ https://issues.apache.org/jira/browse/OAK-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-4080: Fix Version/s: (was: 1.6) 1.8 > Fulltext queries: compatibility for "contains" with special characters > -- > > Key: OAK-4080 > URL: https://issues.apache.org/jira/browse/OAK-4080 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: query >Reporter: Thomas Mueller > Fix For: 1.8 > > > Currently, "jcr:contains" with special characters doesn't work the same when > using a Lucene fulltext index compatVersion 2 compared to using compatVersion > 1 or Jackrabbit 2.x. > This needs to be documented. Also, it might make sense to provide a > compatibility flag, so that behavior is the same as with the old versions, at > least as much as possible, even though new features would not be supported. > The one example I know is using "\*" as in: > {noformat} > SELECT * FROM [nt:base] AS c WHERE CONTAINS(c.[test], 'athxv:!*') > {noformat} > With compatVersion 2, the "\*" needs to be escaped as follows, in order to > match properties with exactly this text: > {noformat} > SELECT * FROM [nt:base] AS c WHERE CONTAINS(c.[test], 'athxv:!\*') > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4814) Add orderby support for nodename index
[ https://issues.apache.org/jira/browse/OAK-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] angela updated OAK-4814: Fix Version/s: (was: 1.6) 1.8 > Add orderby support for nodename index > -- > > Key: OAK-4814 > URL: https://issues.apache.org/jira/browse/OAK-4814 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: query >Affects Versions: 1.5.10 >Reporter: Ankush Malhotra > Fix For: 1.8 > > > In OAK-1752 you have implemented the index support for :nodeName. The JCR > Query explain tool shows that it is used for conditions like equals. > But it is not used for ORDER BY name() . > Is name() supported in order by clause? If yes then we would need to add > support for that in oak-lucene -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (OAK-3976) journal should support large(r) entries
[ https://issues.apache.org/jira/browse/OAK-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Egli reassigned OAK-3976: Assignee: Stefan Egli > journal should support large(r) entries > --- > > Key: OAK-3976 > URL: https://issues.apache.org/jira/browse/OAK-3976 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: documentmk >Affects Versions: 1.3.14 >Reporter: Stefan Egli >Assignee: Stefan Egli > Fix For: 1.6, 1.5.15 > > > Journal entries are created during the background write. Normally this > happens every second. If for some reason there is a large delay between two > background writes, the number of pending changes can also accumulate, which > can result in (arbitrarily) large single journal entries (i.e. with a large > {{_c}} property). > This can cause multiple problems down the road: > * journal gc at this point loads 450 entries - and if some are large this can > result in a very large memory consumption during gc (which can cause severe > stability problems for the VM, if not OOM etc). This should be fixed with > OAK-3001 (where we only get the id, thus do not care how big {{_c}} is) > * before OAK-3001 is done (which is currently scheduled after 1.4) what we > can do is reduce the delete batch size (OAK-3975) > * background reads however also read the journal entries and even if > OAK-3001/OAK-3975 are implemented the background read can still cause large > memory consumption. So we need to improve this one way or another. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (OAK-5009) ExternalToExternalMigrationTest failures on Windows
[ https://issues.apache.org/jira/browse/OAK-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomek Rękawek reassigned OAK-5009: -- Assignee: Tomek Rękawek (was: Thomas Mueller) > ExternalToExternalMigrationTest failures on Windows > --- > > Key: OAK-5009 > URL: https://issues.apache.org/jira/browse/OAK-5009 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core, segment-tar >Reporter: Julian Reschke >Assignee: Tomek Rękawek > Labels: test-failure > Fix For: 1.6, 1.5.13 > > > {noformat} > Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 13.484 sec > <<< FAILURE! - in > org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest > blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest) > Time elapsed: 0.463 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483643533-0\segmentstore\data1a.tar > blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest) > Time elapsed: 13.021 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483643996-0\segmentstore\data0a.tar > Running > org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest > Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 12.719 sec > <<< FAILURE! - in > org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest > blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest) > Time elapsed: 0.157 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483657018-0\segmentstore\data1a.tar > blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest) > Time elapsed: 12.561 sec <<< ERROR! 
> java.io.IOException: Unable to delete file: > C:\tmp\1477483657193-0\segmentstore\data0a.tar > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (OAK-5009) ExternalToExternalMigrationTest failures on Windows
[ https://issues.apache.org/jira/browse/OAK-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Mari reopened OAK-5009: - Reopening this issue as the fix seems to introduce test failures on oak-segment-tar. Starting from r1768995 the following failures occur when running unit tests. {noformat} Running org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.174 sec <<< FAILURE! - in org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest) Time elapsed: 0.102 sec <<< ERROR! java.lang.NullPointerException blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest) Time elapsed: 0.068 sec <<< ERROR! java.lang.NullPointerException Running org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.134 sec <<< FAILURE! - in org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest) Time elapsed: 0.045 sec <<< ERROR! java.lang.NullPointerException blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest) Time elapsed: 0.082 sec <<< ERROR! java.lang.NullPointerException {noformat} > ExternalToExternalMigrationTest failures on Windows > --- > > Key: OAK-5009 > URL: https://issues.apache.org/jira/browse/OAK-5009 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core, segment-tar >Reporter: Julian Reschke >Assignee: Thomas Mueller > Labels: test-failure > Fix For: 1.6, 1.5.13 > > > {noformat} > Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 13.484 sec > <<< FAILURE! 
- in > org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest > blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest) > Time elapsed: 0.463 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483643533-0\segmentstore\data1a.tar > blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.ExternalToExternalMigrationTest) > Time elapsed: 13.021 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483643996-0\segmentstore\data0a.tar > Running > org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest > Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 12.719 sec > <<< FAILURE! - in > org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest > blobsCanBeReadAfterSwitchingBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest) > Time elapsed: 0.157 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483657018-0\segmentstore\data1a.tar > blobsExistsOnTheNewBlobStore(org.apache.jackrabbit.oak.segment.migration.SegmentToExternalMigrationTest) > Time elapsed: 12.561 sec <<< ERROR! > java.io.IOException: Unable to delete file: > C:\tmp\1477483657193-0\segmentstore\data0a.tar > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (OAK-4619) Unify RecordCacheStats and CacheStats
[ https://issues.apache.org/jira/browse/OAK-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig reassigned OAK-4619: -- Assignee: Michael Dürig > Unify RecordCacheStats and CacheStats > - > > Key: OAK-4619 > URL: https://issues.apache.org/jira/browse/OAK-4619 > Project: Jackrabbit Oak > Issue Type: Task > Components: core, segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig >Priority: Minor > Labels: technical_debt > Fix For: 1.5.14 > > > There is {{org.apache.jackrabbit.oak.cache.CacheStats}} in {{oak-core}} and > {{org.apache.jackrabbit.oak.segment.RecordCacheStats}} in > {{oak-segment-tar}}. Both exposing quite similar functionality. We should try > to unify them as much as possible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2072) Lucene: inconsistent usage of the config option "persistence"
[ https://issues.apache.org/jira/browse/OAK-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-2072: Fix Version/s: 1.6 > Lucene: inconsistent usage of the config option "persistence" > - > > Key: OAK-2072 > URL: https://issues.apache.org/jira/browse/OAK-2072 > Project: Jackrabbit Oak > Issue Type: Bug > Components: query >Reporter: Thomas Mueller >Assignee: Thomas Mueller >Priority: Minor > Fix For: 1.6, 1.5.13 > > > The Lucene index reader uses the configuration property "persistence", but > the editor (the component updating the index) does not. That leads to very > strange behavior if the property is missing, but the property "file" is set: > the reader would try to read from the file system, but those files are not > updated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4788) Fulltext parser sorts and unique-s parsed terms
[ https://issues.apache.org/jira/browse/OAK-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-4788: Fix Version/s: 1.6 > Fulltext parser sorts and unique-s parsed terms > --- > > Key: OAK-4788 > URL: https://issues.apache.org/jira/browse/OAK-4788 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: query >Reporter: Vikas Saurabh >Assignee: Thomas Mueller >Priority: Minor > Fix For: 1.6, 1.5.13 > > > Pasting a bit of discussion from OAK-4705: > {quote} > bq. whether it's a good idea to sort entries ("hello - world" becomes "- > hello world") and make them unique ("test test" becomes "test"). > I think the parser shouldn't play with ordering .. but I can see the rationale > that it allows the consumer of parsed output to potentially have forward seeks > in their dictionaries. Otoh, I think making unique or not shouldn't be the > parser's concern at all. > I'd open a new issue to follow up on these aspects. > {quote} > /cc [~tmueller] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
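The effect under discussion (parsed terms coming out sorted and de-duplicated) is what a sorted set produces. The sketch below only mirrors that described behavior on whitespace-split terms; it is not the actual Oak fulltext parser:

```java
import java.util.Arrays;
import java.util.TreeSet;

public class SortUniqueDemo {

    // Illustrative only: reproduces the effect described in the issue,
    // where terms end up sorted and unique after parsing.
    static String normalize(String query) {
        return String.join(" ", new TreeSet<>(Arrays.asList(query.split("\\s+"))));
    }

    public static void main(String[] args) {
        System.out.println(normalize("hello - world")); // - hello world
        System.out.println(normalize("test test"));     // test
    }
}
```

This reproduces both examples from the quoted comment: "hello - world" becomes "- hello world" and "test test" collapses to "test".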
[jira] [Assigned] (OAK-4893) Document conflict handling
[ https://issues.apache.org/jira/browse/OAK-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Dürig reassigned OAK-4893: -- Assignee: Michael Dürig > Document conflict handling > -- > > Key: OAK-4893 > URL: https://issues.apache.org/jira/browse/OAK-4893 > Project: Jackrabbit Oak > Issue Type: Task > Components: doc >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: documentation > Fix For: 1.6 > > > We should add documentation how Oak deals with conflicts. This was once > documented in the Javadocs of {{MicroKernel.rebase()}} but got lost along > with that class. Note that OAK-1553 refines conflict handling but this > refinement has not been implemented in all backends yet. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-2849) Improve revision gc on SegmentMK
[ https://issues.apache.org/jira/browse/OAK-2849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-2849: Fix Version/s: 1.6 > Improve revision gc on SegmentMK > > > Key: OAK-2849 > URL: https://issues.apache.org/jira/browse/OAK-2849 > Project: Jackrabbit Oak > Issue Type: Epic > Components: segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: compaction, gc > Fix For: 1.6, 1.5.13 > > Attachments: SegmentCompactionIT-conflicts.png > > > This is a container issue for the ongoing effort to improve revision gc of > the SegmentMK. > I'm exploring > * ways to make the reference graph as exact as possible and necessary: it > should not contain segments that are not referenceable any more and but must > contain all segments that are referenceable. > * ways to segregate the reference graph reducing dependencies between certain > set of segments as much as possible. > * Reducing the number of in memory references and their impact on gc as much > as possible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (OAK-4925) Don't call @Nonnull TypeEditor.getEffective() from constructor
[ https://issues.apache.org/jira/browse/OAK-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-4925: Fix Version/s: 1.6 > Don't call @Nonnull TypeEditor.getEffective() from constructor > -- > > Key: OAK-4925 > URL: https://issues.apache.org/jira/browse/OAK-4925 > Project: Jackrabbit Oak > Issue Type: Bug > Components: core >Reporter: Michael Dürig >Assignee: Michael Dürig > Fix For: 1.6, 1.5.13 > > > {{TypeEditor.getEffective()}} is declared {{@Nonnull}}. However, when called > from within the constructor, before its underlying field {{effective}} is > initialised, it actually *does* return {{null}}. The fix would be to avoid > calling this method from the constructor.
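The pitfall described in OAK-4925 is a general Java one and can be reproduced in a few lines. This is a minimal illustration with invented names, not the actual TypeEditor code: a method whose contract says "never null" observes the field's default value when invoked from the constructor before the field is assigned.

```java
// Minimal reproduction of the OAK-4925 pitfall: a method declared to never
// return null does return null when called from the constructor, because the
// backing field has not been assigned yet. Names are illustrative only.
public class NonnullFromConstructor {

    static class Editor {
        private final Object effective;
        final Object seenInConstructor;

        Editor() {
            // BUG: getEffective() runs before 'effective' is assigned below,
            // so it observes the field's default value, null.
            seenInConstructor = getEffective();
            this.effective = new Object();
        }

        /** Declared @Nonnull in the real code, yet returns null above. */
        Object getEffective() {
            return effective;
        }
    }

    public static void main(String[] args) {
        Editor e = new Editor();
        System.out.println("seen in constructor: " + e.seenInConstructor);
        System.out.println("after construction:  " + e.getEffective());
    }
}
```

The proposed fix, not calling the method from the constructor, avoids the window in which the object is only partially initialised.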
[jira] [Updated] (OAK-3796) Prevent blob gc and revision gc from running concurrently
[ https://issues.apache.org/jira/browse/OAK-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-3796: Fix Version/s: 1.6 > Prevent blob gc and revision gc from running concurrently > - > > Key: OAK-3796 > URL: https://issues.apache.org/jira/browse/OAK-3796 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core >Reporter: Michael Dürig >Priority: Critical > Labels: datastore, gc, resilience > Fix For: 1.6, 1.5.13 > > > I think we should add a safeguard preventing blob gc and revision gc from > running concurrently. Running those jobs concurrently would only result in > unnecessary contention for IO/CPU and most likely adversely affect the > outcome of both while also impacting overall system performance.
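One possible shape for such a safeguard is a shared try-acquire guard: each gc job attempts to take it and skips its run, rather than blocking, if the other collection is already in progress. The sketch below is hypothetical (invented class and method names, not Oak code) and assumes both jobs run in the same JVM:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the safeguard proposed in OAK-3796: blob gc and
// revision gc share one guard flag; whichever starts second is skipped
// instead of running concurrently. Not actual Oak code.
public class GcGuard {
    private final AtomicBoolean gcRunning = new AtomicBoolean(false);

    /** Runs the job unless another gc holds the guard; returns true if it ran. */
    public boolean runExclusively(String jobName, Runnable job) {
        // compareAndSet atomically claims the guard; failure means the
        // other collection is already running.
        if (!gcRunning.compareAndSet(false, true)) {
            System.out.println(jobName + " skipped: another gc is in progress");
            return false;
        }
        try {
            job.run();
            return true;
        } finally {
            gcRunning.set(false);
        }
    }

    public static void main(String[] args) {
        GcGuard guard = new GcGuard();
        // While revision gc runs, a blob gc attempt is rejected.
        guard.runExclusively("revision gc", () ->
                guard.runExclusively("blob gc", () -> {}));
    }
}
```

Skipping rather than queueing matters here: both collections are periodic, so a skipped run simply happens on the next schedule instead of piling up behind the other job.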
[jira] [Updated] (OAK-4314) BlobReferenceRetriever#collectReferences should allow exceptions
[ https://issues.apache.org/jira/browse/OAK-4314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-4314: Fix Version/s: 1.6 > BlobReferenceRetriever#collectReferences should allow exceptions > > > Key: OAK-4314 > URL: https://issues.apache.org/jira/browse/OAK-4314 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: core, segment-tar >Reporter: Michael Dürig >Assignee: Michael Dürig > Labels: datastore, gc, resilience > Fix For: 1.6, 1.5.13 > > > {{BlobReferenceRetriever#collectReferences}} currently does not allow > implementations to throw an exception. In case anything goes wrong during > reference collection, implementations should be able to indicate this through > an exception so the DSGC can safely abort.
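The proposed change amounts to adding a `throws` clause to the callback-driven interface so the gc driver can abort cleanly. The sketch below uses invented signatures to show the idea; the real Oak interface differs (it takes a `ReferenceCollector`, not a `Consumer`):

```java
import java.io.IOException;
import java.util.function.Consumer;

// Hypothetical sketch of the OAK-4314 proposal: letting collectReferences
// declare an exception so a failing backend aborts DSGC instead of being
// silently swallowed. Signatures are illustrative, not Oak's actual API.
public class BlobReferenceExample {

    interface BlobReferenceRetriever {
        // Declaring the exception lets the caller abort gc cleanly instead of
        // forcing implementations to swallow or wrap backend failures.
        void collectReferences(Consumer<String> callback) throws IOException;
    }

    /** Returns true if reference collection succeeded and gc may proceed. */
    public static boolean runGc(BlobReferenceRetriever retriever) {
        try {
            retriever.collectReferences(ref -> System.out.println("seen: " + ref));
            return true;
        } catch (IOException e) {
            // Safe abort: better to skip this gc cycle than to delete blobs
            // whose references were never fully enumerated.
            System.out.println("gc aborted: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        runGc(callback -> callback.accept("blobId-1"));
        runGc(callback -> { throw new IOException("backend unavailable"); });
    }
}
```

The safety argument is in the catch block: an incomplete reference enumeration must abort the collection, since deleting blobs based on a partial reference set would lose data.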
[jira] [Updated] (OAK-4681) Automatically convert *all* "or" queries to "union" for SQL-2
[ https://issues.apache.org/jira/browse/OAK-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-4681: Fix Version/s: 1.6 > Automatically convert *all* "or" queries to "union" for SQL-2 > - > > Key: OAK-4681 > URL: https://issues.apache.org/jira/browse/OAK-4681 > Project: Jackrabbit Oak > Issue Type: New Feature > Components: query >Reporter: Thomas Mueller >Assignee: Thomas Mueller > Fix For: 1.6, 1.5.13 > > > Currently, in OAK-1617, simple SQL-2 queries that contain "or" are converted > to "union" if the cost is lower. However, more complex queries are not > converted, see AndImpl.java, convertToUnion(), "in this case prefer to be > conservative and don't optimize. This could happen when for example: WHERE (a > OR b) AND (c OR d)." > It is implemented for XPath, and works fine there, so I think it is > reasonable to do that for SQL-2 as well for trunk.
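The "complex" case mentioned above, `WHERE (a OR b) AND (c OR d)`, can always be rewritten by distributing the ORs: it becomes `(a AND c) UNION (a AND d) UNION (b AND c) UNION (b AND d)`, one union branch per combination of disjuncts. The sketch below models conditions as plain strings to show the distribution; it is an invented illustration, not Oak's AndImpl/convertToUnion code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical illustration of the OAK-4681 rewrite: a conjunction of
// disjunctions is distributed into union branches via a cartesian product.
// Conditions are modelled as strings, not Oak's query AST.
public class OrToUnion {

    // Input: a list of OR-groups that are ANDed together, e.g.
    // [[a, b], [c, d]] for (a OR b) AND (c OR d).
    // Output: one AND-expression per union branch.
    public static List<String> toUnionBranches(List<List<String>> andOfOrs) {
        List<String> branches = new ArrayList<>();
        branches.add("");
        for (List<String> ors : andOfOrs) {
            List<String> next = new ArrayList<>();
            for (String prefix : branches) {
                for (String term : ors) {
                    next.add(prefix.isEmpty() ? term : prefix + " AND " + term);
                }
            }
            branches = next;
        }
        return branches;
    }

    public static void main(String[] args) {
        // (a OR b) AND (c OR d)
        List<String> branches = toUnionBranches(Arrays.asList(
                Arrays.asList("a", "b"), Arrays.asList("c", "d")));
        System.out.println(String.join(" UNION ", branches));
    }
}
```

Note the cost trade-off this implies: the number of union branches is the product of the disjunct counts, which is presumably why the existing SQL-2 code chose to stay conservative and only a cost comparison makes the rewrite worthwhile.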
[jira] [Updated] (OAK-1558) Expose FileStoreBackupRestoreMBean for supported NodeStores
[ https://issues.apache.org/jira/browse/OAK-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julian Reschke updated OAK-1558: Fix Version/s: 1.6 > Expose FileStoreBackupRestoreMBean for supported NodeStores > --- > > Key: OAK-1558 > URL: https://issues.apache.org/jira/browse/OAK-1558 > Project: Jackrabbit Oak > Issue Type: Improvement > Components: mongomk, segment-tar >Reporter: Michael Dürig >Assignee: Andrei Dulceanu > Labels: monitoring > Fix For: 1.6, 1.5.13 > > Attachments: OAK-1558-01.patch > > > {{NodeStore}} implementations should expose the > {{FileStoreBackupRestoreMBean}} in order to be interoperable with > {{RepositoryManagementMBean}}. See OAK-1160. -- This message was sent by Atlassian JIRA (v6.3.4#6332)