[jira] [Assigned] (JCRSITE-29) Implement Apache project branding requirements
[ https://issues.apache.org/jira/browse/JCRSITE-29?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting reassigned JCRSITE-29:
------------------------------------

    Assignee: (was: Jukka Zitting)

> Implement Apache project branding requirements
> ----------------------------------------------
>
>                 Key: JCRSITE-29
>                 URL: https://issues.apache.org/jira/browse/JCRSITE-29
>             Project: Jackrabbit Site
>          Issue Type: Improvement
>          Components: site
>            Reporter: Jukka Zitting
>            Priority: Major
>
> We should implement the requirements from
> http://www.apache.org/foundation/marks/pmcs.html latest in Q1 next year.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
[jira] [Assigned] (JCRRMI-7) Use remote callbacks instead of polling for observation over JCR-RMI
[ https://issues.apache.org/jira/browse/JCRRMI-7?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting reassigned JCRRMI-7:
----------------------------------

    Assignee: (was: Jukka Zitting)

> Use remote callbacks instead of polling for observation over JCR-RMI
> --------------------------------------------------------------------
>
>                 Key: JCRRMI-7
>                 URL: https://issues.apache.org/jira/browse/JCRRMI-7
>             Project: Jackrabbit JCR-RMI
>          Issue Type: Improvement
>            Reporter: Jukka Zitting
>            Priority: Major
>
> JCR-RMI currently uses polling to handle remote observation events. Change
> the implementation to use remote callbacks instead.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
[jira] [Assigned] (JCRBENCH-3) Remove the jcr-benchmark dependency to jcr-tests
[ https://issues.apache.org/jira/browse/JCRBENCH-3?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting reassigned JCRBENCH-3:
------------------------------------

    Assignee: (was: Jukka Zitting)

> Remove the jcr-benchmark dependency to jcr-tests
> ------------------------------------------------
>
>                 Key: JCRBENCH-3
>                 URL: https://issues.apache.org/jira/browse/JCRBENCH-3
>             Project: Jackrabbit JCR Benchmark
>          Issue Type: Improvement
>            Reporter: Jukka Zitting
>            Priority: Major
>
> Currently the jackrabbit-jcr-benchmark component is designed as an extension
> of jackrabbit-jcr-tests. This gives the benchmark suite some setup code for
> free, but on the other hand makes it quite difficult to set up and use as an
> ad-hoc benchmark suite.
>
> I'd like to refactor the benchmark suite to consist of a generic test runner
> (with a main method so it can be run from the command line) and a set of
> standalone performance test classes that take an already initialized
> repository and have their own setup and teardown methods. The runner could
> take care of things like timing the test cases, measuring statistics over
> multiple test runs, and producing a report of the results.
>
> TestNG has some support for such use (much more so than JUnit), but it might
> be that we still need to implement at least some of the above features.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
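[Editor's note] The runner design proposed in JCRBENCH-3 above — standalone performance tests with their own setup and teardown, timed over multiple runs, with a statistics report — can be sketched as follows. This is an illustrative sketch only; the names `PerformanceTest` and `BenchmarkRunner` are hypothetical and are not actual Jackrabbit APIs.

```java
// Hypothetical sketch of the JCRBENCH-3 runner idea; class and interface
// names are illustrative, not actual Jackrabbit code.
public class BenchmarkRunner {

    /** A standalone performance test with its own setup and teardown. */
    public interface PerformanceTest {
        void setUp() throws Exception;
        void run() throws Exception;
        void tearDown() throws Exception;
    }

    /** Times the given number of runs and returns per-run durations in milliseconds. */
    public static long[] run(PerformanceTest test, int iterations) throws Exception {
        long[] times = new long[iterations];
        test.setUp();
        try {
            for (int i = 0; i < iterations; i++) {
                long start = System.nanoTime();
                test.run();
                times[i] = (System.nanoTime() - start) / 1000000L;
            }
        } finally {
            test.tearDown();
        }
        return times;
    }

    /** Produces a one-line min/avg/max report over the measured runs. */
    public static String report(long[] times) {
        long min = Long.MAX_VALUE, max = 0, sum = 0;
        for (long t : times) {
            min = Math.min(min, t);
            max = Math.max(max, t);
            sum += t;
        }
        return "min=" + min + "ms avg=" + (sum / times.length) + "ms max=" + max + "ms";
    }
}
```

A real implementation would additionally pass the already initialized repository into each test, as the issue describes.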
[jira] [Assigned] (JCRRMI-6) Streamline the JCR-RMI network interfaces
[ https://issues.apache.org/jira/browse/JCRRMI-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting reassigned JCRRMI-6:
----------------------------------

    Assignee: (was: Jukka Zitting)

> Streamline the JCR-RMI network interfaces
> -----------------------------------------
>
>                 Key: JCRRMI-6
>                 URL: https://issues.apache.org/jira/browse/JCRRMI-6
>             Project: Jackrabbit JCR-RMI
>          Issue Type: Improvement
>            Reporter: Jukka Zitting
>            Priority: Minor
>
> The JCR-RMI network layer makes an excessive amount of remote method calls in
> some use cases. Use caching and other mechanisms to improve performance in
> such cases.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
[jira] [Assigned] (JCRSITE-8) Installation guide
[ https://issues.apache.org/jira/browse/JCRSITE-8?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting reassigned JCRSITE-8:
-----------------------------------

    Assignee: (was: Jukka Zitting)

> Installation guide
> ------------------
>
>                 Key: JCRSITE-8
>                 URL: https://issues.apache.org/jira/browse/JCRSITE-8
>             Project: Jackrabbit Site
>          Issue Type: Improvement
>          Components: site
>            Reporter: Jukka Zitting
>            Priority: Major
>
> The current Jackrabbit installation instructions are spread across a number
> of different documents that each have a specific goal in mind. It would be
> great to have a single installation document that would list the installation
> dependencies and the available configuration options. More specific documents
> could then just refer to this installation guide for more details.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
[jira] [Assigned] (JCRSERVLET-2) Login filters in jackrabbit-servlet
[ https://issues.apache.org/jira/browse/JCRSERVLET-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting reassigned JCRSERVLET-2:
--------------------------------------

    Assignee: (was: Jukka Zitting)

> Login filters in jackrabbit-servlet
> -----------------------------------
>
>                 Key: JCRSERVLET-2
>                 URL: https://issues.apache.org/jira/browse/JCRSERVLET-2
>             Project: Jackrabbit JCR Servlets
>          Issue Type: New Feature
>            Reporter: Jukka Zitting
>            Priority: Minor
>
> It would be nice to have servlet filters that automatically log in to a
> repository when a request comes in, associate the resulting session with the
> request, and log out once the request has been processed. Different filters
> could use different sources of the login credentials (predefined, HTTP Basic,
> etc.).

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
[jira] [Assigned] (JCRSITE-37) Migrate web site from Confluence to svnpubsub
[ https://issues.apache.org/jira/browse/JCRSITE-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting reassigned JCRSITE-37:
------------------------------------

    Assignee: (was: Jukka Zitting)

> Migrate web site from Confluence to svnpubsub
> ---------------------------------------------
>
>                 Key: JCRSITE-37
>                 URL: https://issues.apache.org/jira/browse/JCRSITE-37
>             Project: Jackrabbit Site
>          Issue Type: Task
>          Components: site
>            Reporter: Jukka Zitting
>            Priority: Major
>              Labels: cms, svnpubsub
>
> As discussed earlier (http://markmail.org/message/h5zp3f67g5zftltf) we should
> migrate our web site away from the Confluence wiki to the Apache CMS system
> (http://www.apache.org/dev/cms.html).

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
[jira] [Assigned] (JCRTCK-2) Export test cases fail with Java 5 on Mac OS X
[ https://issues.apache.org/jira/browse/JCRTCK-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting reassigned JCRTCK-2:
----------------------------------

    Assignee: (was: Jukka Zitting)

> Export test cases fail with Java 5 on Mac OS X
> ----------------------------------------------
>
>                 Key: JCRTCK-2
>                 URL: https://issues.apache.org/jira/browse/JCRTCK-2
>             Project: Jackrabbit JCR Tests
>          Issue Type: Bug
>            Reporter: Jukka Zitting
>            Priority: Minor
>         Attachments: ASF.LICENSE.NOT.GRANTED--azydron.vcf, ASF.LICENSE.NOT.GRANTED--azydron.vcf
>
> As reported by Roy during the Jackrabbit 1.3.1 release vote:
>
> I am getting test failures on OS X 10.4.10 (PPC) with java version "1.5.0_07"
> Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_07-164)
> Java HotSpot(TM) Client VM (build 1.5.0_07-87, mixed mode, sharing).
> All failures are because of
>
>   junit.framework.AssertionFailedError: namespace: http://www.jcp.org/jcr/mix/1.0 not exported
>
> More details below.
> Roy
>
> Running org.apache.jackrabbit.test.TestAll
> [Fatal Error] :1:10: Attribute name "is" associated with an element
> type "this" must be followed by the ' = ' character.
> [Fatal Error] :1:10: Attribute name "is" associated with an element
> type "this" must be followed by the ' = ' character.
> [Fatal Error] :-1:-1: Premature end of file.
> [Fatal Error] :-1:-1: Premature end of file.
> Tests run: 1055, Failures: 8, Errors: 0, Skipped: 0, Time elapsed:
> 119.126 sec <<< FAILURE!
>
> Results :
> Failed tests:
>   testExportDocView_handler_session_skipBinary_noRecurse(org.apache.jackrabbit.test.api.ExportDocViewTest)
>   testExportDocView_handler_session_skipBinary_recurse(org.apache.jackrabbit.test.api.ExportDocViewTest)
>   testExportDocView_handler_session_saveBinary_noRecurse(org.apache.jackrabbit.test.api.ExportDocViewTest)
>   testExportDocView_handler_session_saveBinary_recurse(org.apache.jackrabbit.test.api.ExportDocViewTest)
>   testExportDocView_stream_session_skipBinary_recurse(org.apache.jackrabbit.test.api.ExportDocViewTest)
>   testExportDocView_stream_session_skipBinary_noRecurse(org.apache.jackrabbit.test.api.ExportDocViewTest)
>   testExportDocView_stream_session_saveBinary_noRecurse(org.apache.jackrabbit.test.api.ExportDocViewTest)
>   testExportDocView_stream_session_saveBinary_recurse(org.apache.jackrabbit.test.api.ExportDocViewTest)
>
> Tests run: 1248, Failures: 8, Errors: 0, Skipped: 0
>
> -------------------------------------------------------
> Test set: org.apache.jackrabbit.test.TestAll
> -------------------------------------------------------
> Tests run: 1055, Failures: 8, Errors: 0, Skipped: 0, Time elapsed:
> 119.124 sec <<< FAILURE!
> testExportDocView_handler_session_skipBinary_noRecurse(org.apache.jackrabbit.test.api.ExportDocViewTest)  Time elapsed: 0.07 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: namespace: http://www.jcp.org/jcr/mix/1.0 not exported

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
[jira] [Assigned] (JCRRMI-4) RMI: Allow custom socket factories
[ https://issues.apache.org/jira/browse/JCRRMI-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting reassigned JCRRMI-4:
----------------------------------

    Assignee: (was: Jukka Zitting)

> RMI: Allow custom socket factories
> ----------------------------------
>
>                 Key: JCRRMI-4
>                 URL: https://issues.apache.org/jira/browse/JCRRMI-4
>             Project: Jackrabbit JCR-RMI
>          Issue Type: New Feature
>            Reporter: Jukka Zitting
>            Priority: Minor
>
> The current JCR-RMI server classes always use the default RMI socket factory.
> Provide a mechanism for specifying a custom socket factory.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
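[Editor's note] What JCRRMI-4 asks for is the standard RMI extension point for socket factories. A minimal sketch of such a factory follows; the class name `LoopbackSocketFactory` is hypothetical and only plain `java.rmi` and `java.net` APIs are used:

```java
import java.io.IOException;
import java.io.Serializable;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.rmi.server.RMIClientSocketFactory;
import java.rmi.server.RMIServerSocketFactory;

// Hypothetical example of a custom RMI socket factory: this one simply
// binds server sockets to the loopback interface only.
public class LoopbackSocketFactory
        implements RMIClientSocketFactory, RMIServerSocketFactory, Serializable {

    public Socket createSocket(String host, int port) throws IOException {
        // Client side: a plain TCP connection.
        return new Socket(host, port);
    }

    public ServerSocket createServerSocket(int port) throws IOException {
        // Server side: listen on 127.0.0.1 only.
        return new ServerSocket(port, 0, InetAddress.getByName("127.0.0.1"));
    }
}
```

An instance of such a pair of factories can be passed to the standard `UnicastRemoteObject.exportObject(remote, port, clientFactory, serverFactory)` overload, which is the kind of hook the issue proposes exposing from the JCR-RMI server classes.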
[jira] [Commented] (JCRVLT-53) vlt: with many child nodes, NodeNameList.restoreOrder is very slow with Oak
[ https://issues.apache.org/jira/browse/JCRVLT-53?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14054942#comment-14054942 ]

Jukka Zitting commented on JCRVLT-53:
-------------------------------------

I think we should make ~10k orderable child nodes still work reasonably well,
as such scale is still possible also with Jackrabbit 2.x. But I wouldn't worry
too much about scaling to 100k orderable child nodes or higher. At that point
we should just tell the user to make their content unordered instead of us
bending backwards to support that use case.

> vlt: with many child nodes, NodeNameList.restoreOrder is very slow with Oak
> ---------------------------------------------------------------------------
>
>                 Key: JCRVLT-53
>                 URL: https://issues.apache.org/jira/browse/JCRVLT-53
>             Project: Jackrabbit FileVault
>          Issue Type: Improvement
>            Reporter: Thomas Mueller
>            Assignee: Thomas Mueller
>         Attachments: JCR-3793.patch, ReorderTest.java
>
> The method org.apache.jackrabbit.vault.fs.api.NodeNameList.restoreOrder
> re-orders orderable child nodes by using Node.orderBefore. This is very slow
> if there are many child nodes, especially with Oak (minutes for 10'000 nodes,
> while only about 1 second for Jackrabbit 2.x).
> [~tripod], I wonder if a possible solution is to first check whether
> re-ordering is needed? For example using:
> {noformat}
> boolean isOrdered(ArrayList<String> names, Node parent) throws RepositoryException {
>     NodeIterator it1 = parent.getNodes();
>     for (Iterator<String> it2 = names.iterator(); it2.hasNext();) {
>         if (!it1.hasNext() || !it1.nextNode().getName().equals(it2.next())) {
>             return false;
>         }
>     }
>     return !it1.hasNext();
> }
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (JCR-3793) vlt: with many child nodes, NodeNameList.restoreOrder is very slow with Oak
[ https://issues.apache.org/jira/browse/JCR-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051514#comment-14051514 ]

Jukka Zitting commented on JCR-3793:
------------------------------------

bq. My patch still improves performance about 20-fold (for Oak) and 10-fold
(for Jackrabbit 2.x), so I think it's worth it.

+1

> vlt: with many child nodes, NodeNameList.restoreOrder is very slow with Oak
> ---------------------------------------------------------------------------
>
>                 Key: JCR-3793
>                 URL: https://issues.apache.org/jira/browse/JCR-3793
>             Project: Jackrabbit Content Repository
>          Issue Type: Improvement
>            Reporter: Thomas Mueller
>            Assignee: Thomas Mueller
>         Attachments: JCR-3793.patch, ReorderTest.java
>
> The method org.apache.jackrabbit.vault.fs.api.NodeNameList.restoreOrder
> re-orders orderable child nodes by using Node.orderBefore. This is very slow
> if there are many child nodes, especially with Oak (minutes for 10'000 nodes,
> while only about 1 second for Jackrabbit 2.x).
> [~tripod], I wonder if a possible solution is to first check whether
> re-ordering is needed? For example using:
> {noformat}
> boolean isOrdered(ArrayList<String> names, Node parent) throws RepositoryException {
>     NodeIterator it1 = parent.getNodes();
>     for (Iterator<String> it2 = names.iterator(); it2.hasNext();) {
>         if (!it1.hasNext() || !it1.nextNode().getName().equals(it2.next())) {
>             return false;
>         }
>     }
>     return !it1.hasNext();
> }
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (JCR-3793) vlt: with many child nodes, NodeNameList.restoreOrder is very slow with Oak
[ https://issues.apache.org/jira/browse/JCR-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048844#comment-14048844 ]

Jukka Zitting commented on JCR-3793:
------------------------------------

Do we have a good benchmark for this operation? It doesn't sound as if this
should be particularly slow on Oak, so there could simply be a bug or a
missing optimization that we need to apply instead of modifying vlt.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (JCR-3793) vlt: with many child nodes, NodeNameList.restoreOrder is very slow with Oak
[ https://issues.apache.org/jira/browse/JCR-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048973#comment-14048973 ]

Jukka Zitting commented on JCR-3793:
------------------------------------

It looks like we could actually optimize Oak for this case. See OAK-1934.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (JCR-3793) vlt: with many child nodes, NodeNameList.restoreOrder is very slow with Oak
[ https://issues.apache.org/jira/browse/JCR-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049142#comment-14049142 ]

Jukka Zitting commented on JCR-3793:
------------------------------------

I was able to drop the reordering time from 99306 ms to 1965 ms in OAK-1934.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Updated] (JCR-3667) Possible regression with accepted content types when extracting and indexing binary values
[ https://issues.apache.org/jira/browse/JCR-3667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting updated JCR-3667:
-------------------------------

    Fix Version/s:     (was: 2.8)

> Possible regression with accepted content types when extracting and indexing binary values
> ------------------------------------------------------------------------------------------
>
>                 Key: JCR-3667
>                 URL: https://issues.apache.org/jira/browse/JCR-3667
>             Project: Jackrabbit Content Repository
>          Issue Type: Bug
>    Affects Versions: 2.4.4, 2.6.3
>            Reporter: Cédric Damioli
>            Assignee: Jukka Zitting
>              Labels: patch
>
> JCR-3476 introduced a mime-type test before parsing binary values, based on
> Tika's supported parsers. This may lead to incorrect behaviours, with a
> text/xml not being extracted and indexed because the XMLParser does not
> declare text/xml as a supported type.
> The problem here is that there is a regression between 2.4.3 and 2.4.4,
> because the same content was previously well recognized by Tika's Detector
> and then extracted.
> Furthermore, it seems to me inconsistent on one hand to rely on the declared
> content type and on the other hand to delegate the actual type detection to
> Tika. This may lead to cases where the jcr:mimeType value is set to e.g.
> application/pdf but detected and parsed by Tika as text/plain with no error.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Created] (JCR-3775) Avoid lock contention in ISO8601.parse()
Jukka Zitting created JCR-3775:
----------------------------------

             Summary: Avoid lock contention in ISO8601.parse()
                 Key: JCR-3775
                 URL: https://issues.apache.org/jira/browse/JCR-3775
             Project: Jackrabbit Content Repository
          Issue Type: Improvement
          Components: jackrabbit-jcr-commons
            Reporter: Jukka Zitting
            Assignee: Jukka Zitting


The ISO8601.parse() method calls the synchronized TimeZone.getTimeZone()
method, which causes lock contention in concurrent code that frequently
parses ISO8601 strings.

To avoid the synchronization, we could use a static flyweight map of all
known time zones, and only fall back to the getTimeZone() method if some
unknown time zone is encountered.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Resolved] (JCR-3775) Avoid lock contention in ISO8601.parse()
[ https://issues.apache.org/jira/browse/JCR-3775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting resolved JCR-3775.
--------------------------------

       Resolution: Fixed
    Fix Version/s: 2.8

It turns out that an even simpler fix of just keeping a flyweight instance of
the GMT time zone works just as well, since the vast majority of timestamps in
the repository are normalized to GMT.

Fixed in revision 1590123.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (JCR-3775) Avoid lock contention in ISO8601.parse()
[ https://issues.apache.org/jira/browse/JCR-3775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981558#comment-13981558 ]

Jukka Zitting commented on JCR-3775:
------------------------------------

Actually there are common enough cases where non-GMT time zones are used in
timestamps, so I added the originally described map in revision 1590132.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
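[Editor's note] The flyweight-map approach described in JCR-3775 can be sketched roughly as follows. This is an illustration, not the actual jcr-commons code; the class name `TimeZones` is hypothetical, and the sketch assumes callers treat the returned instances as read-only (TimeZone objects are mutable, which is why the JDK's getTimeZone() normally returns fresh copies).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TimeZone;

// Hypothetical sketch of the JCR-3775 idea: a static flyweight map of known
// time zones, so the synchronized TimeZone.getTimeZone() is only called for
// IDs not seen at class load time.
public class TimeZones {

    // Populated once in the static initializer; afterwards the map is only
    // read, so lookups need no locking.
    private static final Map<String, TimeZone> KNOWN = new HashMap<String, TimeZone>();
    static {
        for (String id : TimeZone.getAvailableIDs()) {
            KNOWN.put(id, TimeZone.getTimeZone(id));
        }
    }

    /** Lock-free lookup for known IDs; falls back to getTimeZone() otherwise. */
    public static TimeZone of(String id) {
        TimeZone tz = KNOWN.get(id);
        return tz != null ? tz : TimeZone.getTimeZone(id);
    }
}
```

The trade-off matches the discussion above: the common case (GMT and other well-known zones) avoids the synchronized JDK call entirely, while unknown IDs still go through the slow path.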
[jira] [Updated] (JCR-3676) Make QueryResultImpl#isAccessGranted proctected
[ https://issues.apache.org/jira/browse/JCR-3676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting updated JCR-3676:
-------------------------------

    Fix Version/s: 2.8

> Make QueryResultImpl#isAccessGranted proctected
> -----------------------------------------------
>
>                 Key: JCR-3676
>                 URL: https://issues.apache.org/jira/browse/JCR-3676
>             Project: Jackrabbit Content Repository
>          Issue Type: Improvement
>            Reporter: Ard Schrijvers
>            Assignee: Ard Schrijvers
>             Fix For: 2.6.4, 2.8
>
> Because we mapped our security model to lucene queries, we'd like to
> override the expensive QueryResultImpl#isAccessGranted method. I will make
> the method protected instead of private. If someone has strong objections,
> please let me know.
>
> Regards,
> Ard

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (JCR-3676) Make QueryResultImpl#isAccessGranted proctected
[ https://issues.apache.org/jira/browse/JCR-3676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974142#comment-13974142 ]

Jukka Zitting commented on JCR-3676:
------------------------------------

FTR, this was also done in trunk in revision 1529089.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Updated] (JCR-3496) no search manager configured for this workspace
[ https://issues.apache.org/jira/browse/JCR-3496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting updated JCR-3496:
-------------------------------

    Fix Version/s:     (was: 2.2.9)

> no search manager configured for this workspace
> -----------------------------------------------
>
>                 Key: JCR-3496
>                 URL: https://issues.apache.org/jira/browse/JCR-3496
>             Project: Jackrabbit Content Repository
>          Issue Type: Bug
>    Affects Versions: 2.2.9
>         Environment: debian squeeze, jackrabbit 2.2.9 embedded in a glassfish application with a mysql datasource
>            Reporter: Eric Berryman
>
> Hello!
> I have a JavaEE application that uses jackrabbit 2.2.9 embedded. I use Derby
> for testing, and searching works fine. But, in production, the datasource is
> set to mysql and I get the following error when trying to search:
>
>   javax.jcr.RepositoryException: no search manager configured for this workspace
>
> This is my repository.xml file:
> {noformat}
> <?xml version="1.0"?>
> <!--
>    Licensed to the Apache Software Foundation (ASF) under one or more
>    contributor license agreements.  See the NOTICE file distributed with
>    this work for additional information regarding copyright ownership.
>    The ASF licenses this file to You under the Apache License, Version 2.0
>    (the "License"); you may not use this file except in compliance with
>    the License.  You may obtain a copy of the License at
>
>        http://www.apache.org/licenses/LICENSE-2.0
>
>    Unless required by applicable law or agreed to in writing, software
>    distributed under the License is distributed on an "AS IS" BASIS,
>    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>    See the License for the specific language governing permissions and
>    limitations under the License.
> -->
> <!DOCTYPE Repository PUBLIC "-//The Apache Software Foundation//DTD Jackrabbit 2.0//EN"
>                             "http://jackrabbit.apache.org/dtd/repository-2.0.dtd">
> <!-- Example Repository Configuration File
>      Used by
>      - org.apache.jackrabbit.core.config.RepositoryConfigTest.java
>      -
> -->
> <Repository>
>     <!--
>         virtual file system where the repository stores global state
>         (e.g. registered namespaces, custom node types, etc.)
>     -->
>     <FileSystem class="org.apache.jackrabbit.core.fs.db.DbFileSystem">
>         <param name="driver" value="com.mysql.jdbc.Driver"/>
>         <param name="url" value="url"/>
>         <param name="user" value="user"/>
>         <param name="password" value="password"/>
>         <param name="schema" value="mysql"/>
>         <param name="schemaObjectPrefix" value="J_R_FS_"/>
>     </FileSystem>
>
>     <!-- data store configuration -->
>     <DataStore class="org.apache.jackrabbit.core.data.db.DbDataStore">
>         <param name="driver" value="com.mysql.jdbc.Driver"/>
>         <param name="url" value="url"/>
>         <param name="user" value="user"/>
>         <param name="password" value="password"/>
>         <param name="databaseType" value="mysql"/>
>         <param name="minRecordLength" value="1024"/>
>         <param name="maxConnections" value="3"/>
>         <param name="copyWhenReading" value="true"/>
>         <param name="tablePrefix" value=""/>
>     </DataStore>
>
>     <!-- security configuration -->
>     <Security appName="Jackrabbit">
>         <!--
>             security manager:
>             class: FQN of class implementing the JackrabbitSecurityManager interface
>         -->
>         <SecurityManager class="org.apache.jackrabbit.core.security.simple.SimpleSecurityManager" workspaceName="security">
>             <!--
>                 workspace access:
>                 class: FQN of class implementing the WorkspaceAccessManager interface
>             -->
>             <!-- <WorkspaceAccessManager class="..."/> -->
>             <!-- <param name="config" value="${rep.home}/security.xml"/> -->
>         </SecurityManager>
>
>         <!--
>             access manager:
>             class: FQN of class implementing the AccessManager interface
>         -->
>         <AccessManager class="org.apache.jackrabbit.core.security.simple.SimpleAccessManager">
>             <!-- <param name="config" value="${rep.home}/access.xml"/> -->
>         </AccessManager>
>
>         <LoginModule class="org.apache.jackrabbit.core.security.simple.SimpleLoginModule">
>             <!--
>                 anonymous user name ('anonymous' is the default value)
>                 <param name="anonymousId" value="anonymous"/>
>             -->
>             <!--
>                 administrator user id (default value if param is missing is 'admin')
>                 <param name="adminId" value="admin"/>
>             -->
>         </LoginModule>
>     </Security>
>
>     <!-- location of workspaces root directory and name of default workspace -->
>     <Workspaces rootPath="${rep.home}/workspaces" defaultWorkspace="olog"/>
>
>     <!--
>         workspace configuration template:
>         used to create the initial workspace if there's no workspace yet
[jira] [Resolved] (JCR-3284) Provide jackrabbit standalone jar
[ https://issues.apache.org/jira/browse/JCR-3284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting resolved JCR-3284.
--------------------------------

       Resolution: Won't Fix
    Fix Version/s:     (was: 2.0-alpha11)

Won't Fix as explained above.

> Provide jackrabbit standalone jar
> ---------------------------------
>
>                 Key: JCR-3284
>                 URL: https://issues.apache.org/jira/browse/JCR-3284
>             Project: Jackrabbit Content Repository
>          Issue Type: Improvement
>          Components: jackrabbit-standalone
>         Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 03:44:56-0500)
>                      Maven home: C:\Java\apache-maven-3.0.4\bin\..
>                      Java version: 1.6.0_31, vendor: Sun Microsystems Inc.
>                      Java home: C:\Program Files\Java\jdk1.6.0_31\jre
>                      Default locale: en_US, platform encoding: Cp1252
>                      OS name: windows 7, version: 6.1, arch: amd64, family: windows
>            Reporter: Gary Gregory
>            Assignee: Jukka Zitting
>
> Hello all,
> I would like to use jackrabbit-standalone from Maven for testing our
> upcoming Apache Commons VFS 2.1. If you have a better idea on how to do the
> following, please advise.
> I want to run our VFS WebDAV unit tests using Jackrabbit as a server
> embedded in the test. In previous versions, a developer had to set up a
> WebDAV server manually and run the one test.
> What I started to do is use jackrabbit-standalone 1.6.5 but it does not have
> the JcrUtils class which came in with Jackrabbit 2.0.
> My current code:
> # Create a temp dir
> # Create a TransientRepository pointing to the temp dir
> # Use JcrUtils to import a directory and its subdirectories full of test files (I cannot do this ATM.)
> # Shutdown the TransientRepository
> # Start Jackrabbit with org.apache.jackrabbit.standalone.Main:
> {noformat}
> org.apache.jackrabbit.standalone.Main.main(new String[] {
>     "--port", Integer.toString(SocketPort),
>     "--repo", repoDirectory.toString() });
> {noformat}
> # The tests run
> I would like to use the latest jackrabbit but I am stuck without the
> standalone jar.
> Thoughts?

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Updated] (JCR-3738) CLONE - Deadlock on LOCAL_REVISION table in clustering environment
[ https://issues.apache.org/jira/browse/JCR-3738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting updated JCR-3738:
-------------------------------

    Assignee:     (was: Jukka Zitting)

So far I couldn't find any clear reasons for why this would happen. One
possible lead is that this might have something to do with the way
ConnectionHelper uses TransactionContext.getCurrentThreadId() to share a
connection across threads when they're linked into the same transaction.

There's a workaround in configuring the database backend to time out blocked
statements. That breaks this deadlock, so a partial solution (that doesn't fix
the root cause) might be to try using Statement.setQueryTimeout() in
DatabaseJournal.doLock().

> CLONE - Deadlock on LOCAL_REVISION table in clustering environment
> ------------------------------------------------------------------
>
>                 Key: JCR-3738
>                 URL: https://issues.apache.org/jira/browse/JCR-3738
>             Project: Jackrabbit Content Repository
>          Issue Type: Bug
>          Components: clustering
>    Affects Versions: 2.6.2
>         Environment: CQ5.6.1 with jackrabbit-core 2.6.2 backed off ibm db2 v10.5
>            Reporter: Ankush Malhotra
>            Priority: Critical
>         Attachments: before-lock.zip, db-deadlock-info.txt, stat-cache.log, threaddumps.zip
>
> Original, cloned description:
> When inserting a lot of nodes concurrently (100/200 threads) the system
> hangs generating a deadlock on the LOCAL_REVISION table. There is a thread
> that starts a transaction but the transaction remains open, while another
> thread tries to acquire the lock on the table. This actually happen even if
> there is only a server up but configured in cluster mode.
> I found that in AbstractJournal, we try to write the LOCAL_REVISION even if
> we don't sync any record because they're generated by the same journal of
> the thread running. Removing this unnecessary (to me :-) ) write to the
> LOCAL_REVISION table, remove the deadlock.
> This might not be the exact same case with this issue. See the attached
> thread dumps etc. for full details.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Updated] (JCR-3738) CLONE - Deadlock on LOCAL_REVISION table in clustering environment
[ https://issues.apache.org/jira/browse/JCR-3738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3738: --- Description: Original, cloned description: When inserting a lot of nodes concurrently (100/200 threads) the system hangs generating a deadlock on the LOCAL_REVISION table. There is a thread that starts a transaction but the transaction remains open, while another thread tries to acquire the lock on the table. This actually happen even if there is only a server up but configured in cluster mode. I found that in AbstractJournal, we try to write the LOCAL_REVISION even if we don't sync any record because they're generated by the same journal of the thread running. Removing this unnecessary (to me :-) ) write to the LOCAL_REVISION table, remove the deadlock. This might not be the exact same case with this issue. See the attached thread dumps etc. for full details. was: Original, cloned description: {quote}When inserting a lot of nodes concurrently (100/200 threads) the system hangs generating a deadlock on the LOCAL_REVISION table. There is a thread that starts a transaction but the transaction remains open, while another thread tries to acquire the lock on the table. This actually happen even if there is only a server up but configured in cluster mode. I found that in AbstractJournal, we try to write the LOCAL_REVISION even if we don't sync any record because they're generated by the same journal of the thread running. Removing this unnecessary (to me :-) ) write to the LOCAL_REVISION table, remove the deadlock. {quote} This might not be the exact same case with this issue. See the attached thread dumps etc. for full details. 
CLONE - Deadlock on LOCAL_REVISION table in clustering environment -- Key: JCR-3738 URL: https://issues.apache.org/jira/browse/JCR-3738 Project: Jackrabbit Content Repository Issue Type: Bug Components: clustering Affects Versions: 2.6.2 Environment: CQ5.6.1 with jackrabbit-core 2.6.2 backed by IBM DB2 v10.5 Reporter: Ankush Malhotra Assignee: Jukka Zitting Priority: Critical Attachments: db-deadlock-info.txt, stat-cache.log, threaddumps.zip Original, cloned description: When inserting a lot of nodes concurrently (100/200 threads) the system hangs, generating a deadlock on the LOCAL_REVISION table. One thread starts a transaction that remains open, while another thread tries to acquire the lock on the table. This actually happens even if only a single server is up, as long as it is configured in cluster mode. I found that in AbstractJournal we try to write the LOCAL_REVISION even if we don't sync any records, because they're generated by the same journal as the running thread. Removing this (to me :-) ) unnecessary write to the LOCAL_REVISION table removes the deadlock. This might not be exactly the same case as this issue. See the attached thread dumps etc. for full details. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (JCR-3738) CLONE - Deadlock on LOCAL_REVISION table in clustering environment
[ https://issues.apache.org/jira/browse/JCR-3738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3738: --- Description: Original, cloned description: {quote}When inserting a lot of nodes concurrently (100/200 threads) the system hangs, generating a deadlock on the LOCAL_REVISION table. One thread starts a transaction that remains open, while another thread tries to acquire the lock on the table. This actually happens even if only a single server is up, as long as it is configured in cluster mode. I found that in AbstractJournal we try to write the LOCAL_REVISION even if we don't sync any records, because they're generated by the same journal as the running thread. Removing this (to me :-) ) unnecessary write to the LOCAL_REVISION table removes the deadlock. {quote} This might not be exactly the same case as this issue. See the attached thread dumps etc. for full details. was: When inserting a lot of nodes concurrently (100/200 threads) the system hangs, generating a deadlock on the LOCAL_REVISION table. One thread starts a transaction that remains open, while another thread tries to acquire the lock on the table. This actually happens even if only a single server is up, as long as it is configured in cluster mode. I found that in AbstractJournal we try to write the LOCAL_REVISION even if we don't sync any records, because they're generated by the same journal as the running thread. Removing this (to me :-) ) unnecessary write to the LOCAL_REVISION table removes the deadlock.
Affects Version/s: (was: 2.4.3) 2.6.2 Fix Version/s: (was: 2.5.3) Assignee: Jukka Zitting (was: Bart van der Schans) CLONE - Deadlock on LOCAL_REVISION table in clustering environment -- Key: JCR-3738 URL: https://issues.apache.org/jira/browse/JCR-3738 Project: Jackrabbit Content Repository Issue Type: Bug Components: clustering Affects Versions: 2.6.2 Environment: CQ5.6.1 with jackrabbit-core 2.6.2 backed by IBM DB2 v10.5 Reporter: Ankush Malhotra Assignee: Jukka Zitting Priority: Critical Attachments: db-deadlock-info.txt, stat-cache.log, threaddumps.zip Original, cloned description: {quote}When inserting a lot of nodes concurrently (100/200 threads) the system hangs, generating a deadlock on the LOCAL_REVISION table. One thread starts a transaction that remains open, while another thread tries to acquire the lock on the table. This actually happens even if only a single server is up, as long as it is configured in cluster mode. I found that in AbstractJournal we try to write the LOCAL_REVISION even if we don't sync any records, because they're generated by the same journal as the running thread. Removing this (to me :-) ) unnecessary write to the LOCAL_REVISION table removes the deadlock. {quote} This might not be exactly the same case as this issue. See the attached thread dumps etc. for full details. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (JCR-3738) CLONE - Deadlock on LOCAL_REVISION table in clustering environment
[ https://issues.apache.org/jira/browse/JCR-3738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13916455#comment-13916455 ] Jukka Zitting commented on JCR-3738: The key threads here seem to be (with non-essential stack frames excluded): pool-7-thread-2-Granite Workflow External Process Job Queue(com/adobe/granite/workflow/external/job/etc/workflow/models/dam/update_asset/jcr_content/model) daemon prio=10 tid=0x7f12a46bf800 nid=0x2712 in Object.wait() [0x7f125b4bf000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on 0xb4151370 (a org.apache.jackrabbit.core.state.DefaultISMLocking) at org.apache.jackrabbit.core.state.SharedItemStateManager.acquireWriteLock(SharedItemStateManager.java:1898) at org.apache.jackrabbit.core.state.SharedItemStateManager$Update.begin(SharedItemStateManager.java:579) at org.apache.jackrabbit.core.state.SharedItemStateManager.beginUpdate(SharedItemStateManager.java:1507) at org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:1537) at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:812) pool-7-thread-1 daemon prio=10 tid=0x7f12a46ea800 nid=0x2711 runnable [0x7f12631b1000] java.lang.Thread.State: RUNNABLE at com.ibm.db2.jcc.am.qo.execute(qo.java:2724) - locked 0xba4b7e30 (a com.ibm.db2.jcc.t4.b) at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172) at org.apache.jackrabbit.core.util.db.ConnectionHelper.exec(ConnectionHelper.java:288) at org.apache.jackrabbit.core.journal.DatabaseJournal$DatabaseRevision.set(DatabaseJournal.java:834) - locked 0xb41afb40 (a org.apache.jackrabbit.core.journal.DatabaseJournal$DatabaseRevision) at org.apache.jackrabbit.core.cluster.ClusterNode.setRevision(ClusterNode.java:872) at org.apache.jackrabbit.core.cluster.ClusterNode$WorkspaceUpdateChannel.updateCommitted(ClusterNode.java:703) at 
org.apache.jackrabbit.core.state.SharedItemStateManager$Update.end(SharedItemStateManager.java:845) at org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:1537) at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:812) It looks like the first thread managed to acquire the cluster lock in the database, as that's done before the Java-level acquireWriteLock() call. But the second thread was already inside that critical section (or somehow managed to enter it afterwards), and now isn't able to complete the transaction because the database won't allow it. I'll look into this in more detail next week. CLONE - Deadlock on LOCAL_REVISION table in clustering environment -- Key: JCR-3738 URL: https://issues.apache.org/jira/browse/JCR-3738 Project: Jackrabbit Content Repository Issue Type: Bug Components: clustering Affects Versions: 2.6.2 Environment: CQ5.6.1 with jackrabbit-core 2.6.2 backed by IBM DB2 v10.5 Reporter: Ankush Malhotra Assignee: Jukka Zitting Priority: Critical Attachments: db-deadlock-info.txt, stat-cache.log, threaddumps.zip Original, cloned description: When inserting a lot of nodes concurrently (100/200 threads) the system hangs, generating a deadlock on the LOCAL_REVISION table. One thread starts a transaction that remains open, while another thread tries to acquire the lock on the table. This actually happens even if only a single server is up, as long as it is configured in cluster mode. I found that in AbstractJournal we try to write the LOCAL_REVISION even if we don't sync any records, because they're generated by the same journal as the running thread. Removing this (to me :-) ) unnecessary write to the LOCAL_REVISION table removes the deadlock. This might not be exactly the same case as this issue. See the attached thread dumps etc. for full details. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
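In miniature, the hang analyzed above has the classic shape of two locks taken in opposite orders: one thread holds the database-level LOCAL_REVISION lock and waits for the Java-level DefaultISMLocking write lock, while the other holds the write lock and blocks on a database statement. The sketch below is a simplified model using two ReentrantLocks, not Jackrabbit code; it illustrates how a timeout on the second acquisition — the role Statement.setQueryTimeout() would play for the blocked SQL statement — lets at least one side back out:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderDemo {

    // Returns true if at least one of the two "transactions" had to
    // give up, i.e. the timeout broke the deadlock.
    static boolean demo() {
        final ReentrantLock dbLock = new ReentrantLock();  // stands in for the LOCAL_REVISION row lock
        final ReentrantLock ismLock = new ReentrantLock(); // stands in for the DefaultISMLocking write lock
        final CountDownLatch bothHeld = new CountDownLatch(2);
        final AtomicBoolean workerGotSecond = new AtomicBoolean(true);

        Thread worker = new Thread(new Runnable() {
            public void run() {
                dbLock.lock(); // first lock: "database" side
                try {
                    bothHeld.countDown();
                    bothHeld.await();
                    // Without a timeout this would block forever, because
                    // the other thread holds ismLock and waits for dbLock.
                    boolean got = ismLock.tryLock(200, TimeUnit.MILLISECONDS);
                    workerGotSecond.set(got);
                    if (got) {
                        ismLock.unlock();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    dbLock.unlock();
                }
            }
        });
        worker.start();

        boolean mainGotSecond = true;
        ismLock.lock(); // first lock: "Java" side
        try {
            bothHeld.countDown();
            bothHeld.await();
            mainGotSecond = dbLock.tryLock(200, TimeUnit.MILLISECONDS);
            if (mainGotSecond) {
                dbLock.unlock();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            ismLock.unlock();
        }
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // Both second acquisitions succeeding is impossible while each
        // thread still holds its own first lock.
        return !(workerGotSecond.get() && mainGotSecond);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "true": a timeout let one side back out
    }
}
```

This is why the timeout is only a partial fix: it turns an indefinite hang into a failed operation, but the opposite lock-acquisition order remains.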
[jira] [Commented] (JCR-3735) Efficient copying of binaries in Jackrabbit DataStores
[ https://issues.apache.org/jira/browse/JCR-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13914611#comment-13914611 ] Jukka Zitting commented on JCR-3735: bq. So, what do you suggest as the next step? Given that the benefits of using {{FileChannel}} seem to be fairly limited and in some cases negative, I don't think this feature is worth the extra complexity. Thus I'd resolve this as Won't Fix and take another look at client code for options on avoiding the extra temporary file. Efficient copying of binaries in Jackrabbit DataStores -- Key: JCR-3735 URL: https://issues.apache.org/jira/browse/JCR-3735 Project: Jackrabbit Content Repository Issue Type: Improvement Components: jackrabbit-core Affects Versions: 2.7.4 Reporter: Amit Jain In the DataStore implementations an additional temporary file is created for every binary uploaded. This step is an additional overhead when the upload process itself creates a temporary file. So, the solution proposed is to check if the input stream passed is a FileInputStream and then use the FileChannel object associated with the input stream to copy the file. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (JCR-3735) Efficient copying of binaries in Jackrabbit DataStores
[ https://issues.apache.org/jira/browse/JCR-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13913019#comment-13913019 ] Jukka Zitting commented on JCR-3735: bq. Using FileChannel which should be faster for large files. Do you have numbers to back that statement? A quick benchmark on my laptop (Windows 7, Java 7, 64 bit, SSD) shows {{FileChannel.transferTo()}} to actually be an order of magnitude *slower* than a simple buffered copy from one file to another. Efficient copying of binaries in Jackrabbit DataStores -- Key: JCR-3735 URL: https://issues.apache.org/jira/browse/JCR-3735 Project: Jackrabbit Content Repository Issue Type: Improvement Components: jackrabbit-core Affects Versions: 2.7.4 Reporter: Amit Jain In the DataStore implementations an additional temporary file is created for every binary uploaded. This step is an additional overhead when the upload process itself creates a temporary file. So, the solution proposed is to check if the input stream passed is a FileInputStream and then use the FileChannel object associated with the input stream to copy the file. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
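A micro-benchmark along the lines described in the comment above might be sketched as follows. This is an illustrative stand-alone program, not the benchmark actually run; the 16 MB file size and 8 KB buffer are arbitrary choices, and results vary widely across OS, JDK, and storage:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.channels.FileChannel;
import java.nio.file.Files;

public class CopyBenchmark {

    // Copy using FileChannel.transferTo(), as proposed in the issue.
    // transferTo() may transfer fewer bytes than requested, so loop.
    static void channelCopy(File src, File dst) throws IOException {
        FileInputStream in = new FileInputStream(src);
        FileOutputStream out = new FileOutputStream(dst);
        try {
            FileChannel ic = in.getChannel();
            FileChannel oc = out.getChannel();
            long pos = 0, size = ic.size();
            while (pos < size) {
                pos += ic.transferTo(pos, size - pos, oc);
            }
        } finally {
            in.close();
            out.close();
        }
    }

    // Simple buffered stream copy, the baseline mentioned in the comment.
    static void bufferedCopy(File src, File dst) throws IOException {
        byte[] buffer = new byte[8192];
        InputStream in = new FileInputStream(src);
        OutputStream out = new FileOutputStream(dst);
        try {
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
        } finally {
            in.close();
            out.close();
        }
    }

    public static void main(String[] args) throws IOException {
        File src = File.createTempFile("bench", ".src");
        src.deleteOnExit();
        Files.write(src.toPath(), new byte[16 * 1024 * 1024]); // 16 MB

        File dst1 = File.createTempFile("bench", ".channel");
        File dst2 = File.createTempFile("bench", ".buffered");
        dst1.deleteOnExit();
        dst2.deleteOnExit();

        long t0 = System.nanoTime();
        channelCopy(src, dst1);
        long t1 = System.nanoTime();
        bufferedCopy(src, dst2);
        long t2 = System.nanoTime();

        System.out.println("transferTo: " + (t1 - t0) / 1000000 + " ms");
        System.out.println("buffered:   " + (t2 - t1) / 1000000 + " ms");
    }
}
```

A single timed run like this is at best indicative — JIT warm-up and the OS page cache dominate — which is consistent with the caution expressed in the thread.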
[jira] [Commented] (JCR-3735) Efficient copying of binaries in Jackrabbit DataStores
[ https://issues.apache.org/jira/browse/JCR-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13910370#comment-13910370 ] Jukka Zitting commented on JCR-3735: bq. This step is an additional overhead when the upload process itself creates a temporary file. IMHO the additional overhead is the temporary file created by the upload process before passing the incoming stream to the DataStore. bq. use the FileChannel object associated with the input stream to copy the file We can do that, but the gains will be much smaller than if we could avoid the temporary file entirely. Efficient copying of binaries in Jackrabbit DataStores -- Key: JCR-3735 URL: https://issues.apache.org/jira/browse/JCR-3735 Project: Jackrabbit Content Repository Issue Type: Improvement Components: jackrabbit-core Affects Versions: 2.7.4 Reporter: Amit Jain In the DataStore implementations an additional temporary file is created for every binary uploaded. This step is an additional overhead when the upload process itself creates a temporary file. So, the solution proposed is to check if the input stream passed is a FileInputStream and then use the FileChannel object associated with the input stream to copy the file. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (JCR-3724) Increase the jcr-commons osgi package export versions
[ https://issues.apache.org/jira/browse/JCR-3724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13896410#comment-13896410 ] Jukka Zitting commented on JCR-3724: OSGi semantic versioning requires an update of the minor version number (x.Y.z) for all backwards-compatible API changes, like the ones seen here. Micro version (x.y.Z) updates are used to signal things like bug fixes that don't change the public APIs. Increase the jcr-commons osgi package export versions - Key: JCR-3724 URL: https://issues.apache.org/jira/browse/JCR-3724 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-jcr-commons Affects Versions: 2.7.4 Reporter: Alex Parvulescu Assignee: Alex Parvulescu Priority: Minor Fix For: 2.7.5 As noticed by Tobias, the exported package versions got left behind. - the 'org.apache.jackrabbit.commons.iterator' package will go to 2.3.1 - the 'org.apache.jackrabbit.stats' package will go to 2.7.5 (new package) -- This message was sent by Atlassian JIRA (v6.1.5#6160)
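For illustration, the kind of manifest change implied by the issue would look roughly like this (a hypothetical MANIFEST.MF fragment using the version numbers listed in the issue description, not the actual committed manifest):

```
Export-Package: org.apache.jackrabbit.commons.iterator;version="2.3.1",
 org.apache.jackrabbit.stats;version="2.7.5"
```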
[jira] [Commented] (JCR-3721) Slow and actively called NodeId.toString()
[ https://issues.apache.org/jira/browse/JCR-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13896411#comment-13896411 ] Jukka Zitting commented on JCR-3721: Good point about the id-to-string conversion in the query engine. Even if it doesn't address the mentioned hierarchy constraint limitations, a modest speedup would be nice. I'll look into applying the patch. Slow and actively called NodeId.toString() -- Key: JCR-3721 URL: https://issues.apache.org/jira/browse/JCR-3721 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.6.5, 2.7 Environment: Debian/GNU Linux 7.3 / Oracle JDK 7 / Apache Tomcat 7.0; Windows Server 2008 / IBM WebSphere AppServer 7.0 Reporter: Maxim Zinal Attachments: NodeIdToString.patch I did some Jackrabbit profiling while trying to investigate the low performance of our application. The most interesting thing I found is that the NodeId.toString() method is heavily used for hierarchy-based XPath queries, and it performs really badly. These are the numbers for my test application: - Total CPU time: 879 178 msec - CPU time in NodeId.toString(), including subcalls: 223 705 msec A quick check of the NodeId.toString() implementation shows that it is based on UUID.toString(), which itself is very inefficient in both the Oracle and IBM JDKs. I wrote a quick replacement for this method, and my measurements show that overall performance became significantly better for our case. Hope this helps improve Jackrabbit performance for similar applications. P.S. Another interesting thing I found is that a lot of time is spent inside the log4j Category.getEffectiveLevel() method - I suspect this is caused by numerous log.debug() calls without proper isDebugEnabled() guards. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (JCR-3721) Slow and actively called NodeId.toString()
[ https://issues.apache.org/jira/browse/JCR-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3721: --- Resolution: Fixed Fix Version/s: 2.7.5 Assignee: Jukka Zitting Status: Resolved (was: Patch Available) Patch applied in revision 158. A simple micro-benchmark showed that the duration of a single toString() call went from about 500ns to 100ns on my laptop. Thanks! Revision 159 optimized the call a bit further, bringing the method duration to about 70ns. Resolving as fixed. Slow and actively called NodeId.toString() -- Key: JCR-3721 URL: https://issues.apache.org/jira/browse/JCR-3721 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.6.5, 2.7 Environment: Debian/GNU Linux 7.3 / Oracle JDK 7 / Apache Tomcat 7.0; Windows Server 2008 / IBM WebSphere AppServer 7.0 Reporter: Maxim Zinal Assignee: Jukka Zitting Fix For: 2.7.5 Attachments: NodeIdToString.patch I did some Jackrabbit profiling while trying to investigate the low performance of our application. The most interesting thing I found is that the NodeId.toString() method is heavily used for hierarchy-based XPath queries, and it performs really badly. These are the numbers for my test application: - Total CPU time: 879 178 msec - CPU time in NodeId.toString(), including subcalls: 223 705 msec A quick check of the NodeId.toString() implementation shows that it is based on UUID.toString(), which itself is very inefficient in both the Oracle and IBM JDKs. I wrote a quick replacement for this method, and my measurements show that overall performance became significantly better for our case. Hope this helps improve Jackrabbit performance for similar applications. P.S. Another interesting thing I found is that a lot of time is spent inside the log4j Category.getEffectiveLevel() method - I suspect this is caused by numerous log.debug() calls without proper isDebugEnabled() guards.
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
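A replacement of the kind described — formatting the two long halves of the id directly into a char array instead of going through UUID.toString() — can be sketched as follows. This is an illustrative version, not the patch that was actually committed:

```java
public class FastUuidString {

    private static final char[] HEX = "0123456789abcdef".toCharArray();

    // Formats the 128-bit id (msb, lsb) in the canonical
    // 8-4-4-4-12 UUID layout without intermediate String allocations.
    public static String toUuidString(long msb, long lsb) {
        char[] c = new char[36];
        hex(c, 0, 8, msb >>> 32);   // time_low
        c[8] = '-';
        hex(c, 9, 4, msb >>> 16);   // time_mid
        c[13] = '-';
        hex(c, 14, 4, msb);         // time_hi_and_version
        c[18] = '-';
        hex(c, 19, 4, lsb >>> 48);  // clock_seq
        c[23] = '-';
        hex(c, 24, 12, lsb);        // node
        return new String(c);
    }

    // Writes the lowest 'digits' hex digits of 'value' at c[offset..].
    private static void hex(char[] c, int offset, int digits, long value) {
        for (int i = digits - 1; i >= 0; i--) {
            c[offset + i] = HEX[(int) (value & 0xf)];
            value >>>= 4;
        }
    }

    public static void main(String[] args) {
        java.util.UUID u = java.util.UUID.randomUUID();
        String fast = toUuidString(u.getMostSignificantBits(),
                                   u.getLeastSignificantBits());
        System.out.println(fast.equals(u.toString())); // prints "true"
    }
}
```

The win over the JDK's UUID.toString() of that era comes from avoiding the intermediate Long.toHexString() strings and StringBuilder concatenation.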
[jira] [Updated] (JCR-3721) Slow and actively called NodeId.toString()
[ https://issues.apache.org/jira/browse/JCR-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3721: --- Fix Version/s: 2.6.6 Merged to the 2.6 branch in revision 1566708. Slow and actively called NodeId.toString() -- Key: JCR-3721 URL: https://issues.apache.org/jira/browse/JCR-3721 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.6.5, 2.7 Environment: Debian/GNU Linux 7.3 / Oracle JDK 7 / Apache Tomcat 7.0; Windows Server 2008 / IBM WebSphere AppServer 7.0 Reporter: Maxim Zinal Assignee: Jukka Zitting Fix For: 2.6.6, 2.7.5 Attachments: NodeIdToString.patch I did some Jackrabbit profiling while trying to investigate the low performance of our application. The most interesting thing I found is that the NodeId.toString() method is heavily used for hierarchy-based XPath queries, and it performs really badly. These are the numbers for my test application: - Total CPU time: 879 178 msec - CPU time in NodeId.toString(), including subcalls: 223 705 msec A quick check of the NodeId.toString() implementation shows that it is based on UUID.toString(), which itself is very inefficient in both the Oracle and IBM JDKs. I wrote a quick replacement for this method, and my measurements show that overall performance became significantly better for our case. Hope this helps improve Jackrabbit performance for similar applications. P.S. Another interesting thing I found is that a lot of time is spent inside the log4j Category.getEffectiveLevel() method - I suspect this is caused by numerous log.debug() calls without proper isDebugEnabled() guards. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (JCR-3721) Slow and actively called NodeId.toString()
[ https://issues.apache.org/jira/browse/JCR-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13888990#comment-13888990 ] Jukka Zitting commented on JCR-3721: Can you trace where those NodeId.toString() calls are being made? Normally those calls shouldn't be needed, so instead of optimizing the toString() calls we should be able to avoid them entirely. Slow and actively called NodeId.toString() -- Key: JCR-3721 URL: https://issues.apache.org/jira/browse/JCR-3721 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.6.5, 2.7 Environment: Debian/GNU Linux 7.3 / Oracle JDK 7 / Apache Tomcat 7.0; Windows Server 2008 / IBM WebSphere AppServer 7.0 Reporter: Maxim Zinal Attachments: NodeIdToString.patch I did some Jackrabbit profiling while trying to investigate the low performance of our application. The most interesting thing I found is that the NodeId.toString() method is heavily used for hierarchy-based XPath queries, and it performs really badly. These are the numbers for my test application: - Total CPU time: 879 178 msec - CPU time in NodeId.toString(), including subcalls: 223 705 msec A quick check of the NodeId.toString() implementation shows that it is based on UUID.toString(), which itself is very inefficient in both the Oracle and IBM JDKs. I wrote a quick replacement for this method, and my measurements show that overall performance became significantly better for our case. Hope this helps improve Jackrabbit performance for similar applications. P.S. Another interesting thing I found is that a lot of time is spent inside the log4j Category.getEffectiveLevel() method - I suspect this is caused by numerous log.debug() calls without proper isDebugEnabled() guards. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (JCR-2958) Deprecate JackrabbitRepository#shutdown
[ https://issues.apache.org/jira/browse/JCR-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13866738#comment-13866738 ] Jukka Zitting commented on JCR-2958: +1 Deprecate JackrabbitRepository#shutdown --- Key: JCR-2958 URL: https://issues.apache.org/jira/browse/JCR-2958 Project: Jackrabbit Content Repository Issue Type: Improvement Components: jackrabbit-api Reporter: angela If I am not mistaken, JackrabbitRepository#shutdown can be called by every single session by means of Session.getRepository. I would therefore opt for deprecating the method in favor of RepositoryManager#stop. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Created] (JCR-3705) Extract data store API and implementations from jackrabbit-core
Jukka Zitting created JCR-3705: -- Summary: Extract data store API and implementations from jackrabbit-core Key: JCR-3705 URL: https://issues.apache.org/jira/browse/JCR-3705 Project: Jackrabbit Content Repository Issue Type: Improvement Components: jackrabbit-core Reporter: Jukka Zitting In Oak we'd like to use the Jackrabbit data stores (OAK-805). Doing so would currently require a direct dependency on jackrabbit-core, which is troublesome for various reasons. Since the DataStore interface and its implementations are mostly independent of the rest of the Jackrabbit internals, it should be possible to avoid that dependency by moving the data store bits to some other component. One alternative would be to place them in jackrabbit-jcr-commons, another to create a separate new jackrabbit-data component for this purpose. WDYT? -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (JCR-3692) MoveAtRootTest fails and is not included in test suite
[ https://issues.apache.org/jira/browse/JCR-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3692: --- Fix Version/s: 2.7.3 MoveAtRootTest fails and is not included in test suite -- Key: JCR-3692 URL: https://issues.apache.org/jira/browse/JCR-3692 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Reporter: Bertrand Delacretaz Assignee: angela Priority: Minor Fix For: 2.7.3 Attachments: activate-moveatroot.patch The MoveAtRootTest introduced by JCR-2680 fails when executed against the current jackrabbit-core trunk (mvn clean test -Dtest=MoveAtRootTest), with javax.jcr.RepositoryException: Attempt to remove/move the admin user. The operation that fails is Session.move(/MoveAtRootTest_A, /MoveAtRootTest_B) AFAICS this is caused by the JCR-3686 changes. The same test passes on the http://svn.apache.org/repos/asf/jackrabbit/tags/2.6.4 revision. I'll attach a patch that includes the test in the core test suite. If there's a good reason to forbid such a move, it should be documented and the test changed to reflect the expected behavior. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (JCR-3691) Search index consistency check logs unnecessary warnings for repairable errors
[ https://issues.apache.org/jira/browse/JCR-3691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3691: --- Affects Version/s: 2.6.4 Fix Version/s: 2.6.5 Merged to the 2.6 branch in revision 1546211. Search index consistency check logs unnecessary warnings for repairable errors -- Key: JCR-3691 URL: https://issues.apache.org/jira/browse/JCR-3691 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.6.4, 2.7.1 Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor Fix For: 2.6.5, 2.7.3 When encountering an orphan node in the search index, the consistency check first marks it as a removed node that it can fix, but then also marks it as an unrepairable orphan node. The resulting log entries are a bit confusing, especially if one only logs warnings: [INFO] Removing deleted node from index: 29cac4c6-9306-8392-cd76-1fcd770b2af5 [WARN] Not repairable: Node 29cac4c6-9306-8392-cd76-1fcd770b2af5 has unknown parent: 947f69b1-6143-bf42-700d-055ec5a83cb5 It would be better if the consistency check disabled further sanity checks for nodes that it already has marked for deletion. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (JCR-3692) MoveAtRootTest fails and is not included in test suite
[ https://issues.apache.org/jira/browse/JCR-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3692: --- Affects Version/s: 2.7.2 MoveAtRootTest fails and is not included in test suite -- Key: JCR-3692 URL: https://issues.apache.org/jira/browse/JCR-3692 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.7.2 Reporter: Bertrand Delacretaz Assignee: angela Priority: Minor Fix For: 2.7.3 Attachments: activate-moveatroot.patch The MoveAtRootTest introduced by JCR-2680 fails when executed against the current jackrabbit-core trunk (mvn clean test -Dtest=MoveAtRootTest), with javax.jcr.RepositoryException: Attempt to remove/move the admin user. The operation that fails is Session.move(/MoveAtRootTest_A, /MoveAtRootTest_B) AFAICS this is caused by the JCR-3686 changes. The same test passes on the http://svn.apache.org/repos/asf/jackrabbit/tags/2.6.4 revision. I'll attach a patch that includes the test in the core test suite. If there's a good reason to forbid such a move, it should be documented and the test changed to reflect the expected behavior. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (JCR-3653) SessionState logs nano seconds but writes 'us'
[ https://issues.apache.org/jira/browse/JCR-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3653: --- Affects Version/s: 2.6.4 Fix Version/s: 2.6.5 Merged to the 2.6 branch in revision 1546289. SessionState logs nano seconds but writes 'us' -- Key: JCR-3653 URL: https://issues.apache.org/jira/browse/JCR-3653 Project: Jackrabbit Content Repository Issue Type: Bug Affects Versions: 2.6.4, 2.7 Reporter: Tobias Bocanegra Assignee: Tobias Bocanegra Priority: Minor Fix For: 2.6.5, 2.7.1 either convert to micro seconds or change the log statement -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (JCR-3700) Please delete old releases from mirroring system
[ https://issues.apache.org/jira/browse/JCR-3700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCR-3700. Resolution: Fixed Assignee: Jukka Zitting Done in revision 3721. Please delete old releases from mirroring system Key: JCR-3700 URL: https://issues.apache.org/jira/browse/JCR-3700 Project: Jackrabbit Content Repository Issue Type: Bug Environment: http://www.apache.org/dist/jackrabbit/oak/ Reporter: Sebb Assignee: Jukka Zitting To reduce the load on the ASF mirrors, projects are required to delete old releases [1] Please can you remove all non-current releases? Thanks! [Note that older releases are always available from the ASF archive server] [1] http://www.apache.org/dev/release.html#when-to-archive -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (JCR-3700) Please delete old releases from mirroring system
[ https://issues.apache.org/jira/browse/JCR-3700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3700: --- Issue Type: Task (was: Bug) Please delete old releases from mirroring system Key: JCR-3700 URL: https://issues.apache.org/jira/browse/JCR-3700 Project: Jackrabbit Content Repository Issue Type: Task Environment: http://www.apache.org/dist/jackrabbit/oak/ Reporter: Sebb Assignee: Jukka Zitting To reduce the load on the ASF mirrors, projects are required to delete old releases [1] Please can you remove all non-current releases? Thanks! [Note that older releases are always available from the ASF archive server] [1] http://www.apache.org/dev/release.html#when-to-archive -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (JCR-3691) Search index consistency check logs unnecessary warnings for repairable errors
Jukka Zitting created JCR-3691: -- Summary: Search index consistency check logs unnecessary warnings for repairable errors Key: JCR-3691 URL: https://issues.apache.org/jira/browse/JCR-3691 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.7.1 Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor When encountering an orphan node in the search index, the consistency check first marks it as a removed node that it can fix, but then also marks it as an unrepairable orphan node. The resulting log entries are a bit confusing, especially if one only logs warnings: [INFO] Removing deleted node from index: 29cac4c6-9306-8392-cd76-1fcd770b2af5 [WARN] Not repairable: Node 29cac4c6-9306-8392-cd76-1fcd770b2af5 has unknown parent: 947f69b1-6143-bf42-700d-055ec5a83cb5 It would be better if the consistency check disabled further sanity checks for nodes that it already has marked for deletion. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (JCR-3691) Search index consistency check logs unnecessary warnings for repairable errors
[ https://issues.apache.org/jira/browse/JCR-3691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCR-3691. Resolution: Fixed Fix Version/s: 2.7.3 Fixed in revision 1539030. Search index consistency check logs unnecessary warnings for repairable errors -- Key: JCR-3691 URL: https://issues.apache.org/jira/browse/JCR-3691 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.7.1 Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor Fix For: 2.7.3 When encountering an orphan node in the search index, the consistency check first marks it as a removed node that it can fix, but then also marks it as an unrepairable orphan node. The resulting log entries are a bit confusing, especially if one only logs warnings: [INFO] Removing deleted node from index: 29cac4c6-9306-8392-cd76-1fcd770b2af5 [WARN] Not repairable: Node 29cac4c6-9306-8392-cd76-1fcd770b2af5 has unknown parent: 947f69b1-6143-bf42-700d-055ec5a83cb5 It would be better if the consistency check disabled further sanity checks for nodes that it already has marked for deletion. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (JCR-3364) Moving of nodes requires read access to all parent nodes of the destination node
[ https://issues.apache.org/jira/browse/JCR-3364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCR-3364. Resolution: Fixed Fix Version/s: 2.7.2 Actually the cycle is already detected during persistence. The problem in JCR-3291 was more about the stack overflow caused by a getPath() call on such a cyclic node. In revision 1535539 I adjusted the solution so that instead of creating dummy transient items for the ancestors of the destination node in order to avoid the path cycle, we let the path cycle occur, but detect it early and throw an InvalidItemStateException instead of letting a stack overflow occur. This should address both JCR-3291 and this issue. Moving of nodes requires read access to all parent nodes of the destination node Key: JCR-3364 URL: https://issues.apache.org/jira/browse/JCR-3364 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.2.12, 2.4.2, 2.5 Reporter: Thomas März Assignee: Jukka Zitting Fix For: 2.7.2 Before JCR-3291 was fixed, Session#move(String, String) could move nodes without having read-access to the whole tree. - Deny jcr:read on /home and grant jcr:all on /home/users/usera to usera - Move nodes from /home/users/usera/from to /home/users/usera/to with usera's session - AccessDeniedException is thrown http://article.gmane.org/gmane.comp.apache.jackrabbit.user/18892 -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (JCR-3364) Moving of nodes requires read access to all parent nodes of the destination node
[ https://issues.apache.org/jira/browse/JCR-3364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3364: --- Fix Version/s: 2.6.5 2.4.6 Merged to the 2.6 branch in revision 1535542 and to the 2.4 branch in revision 1535543. Moving of nodes requires read access to all parent nodes of the destination node Key: JCR-3364 URL: https://issues.apache.org/jira/browse/JCR-3364 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.2.12, 2.4.2, 2.5 Reporter: Thomas März Assignee: Jukka Zitting Fix For: 2.4.6, 2.6.5, 2.7.2 Before JCR-3291 was fixed, Session#move(String, String) could move nodes without having read-access to the whole tree. - Deny jcr:read on /home and grant jcr:all on /home/users/usera to usera - Move nodes from /home/users/usera/from to /home/users/usera/to with usera's session - AccessDeniedException is thrown http://article.gmane.org/gmane.comp.apache.jackrabbit.user/18892 -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (JCR-3619) After a move operation all ancestors of the destination path are modified
[ https://issues.apache.org/jira/browse/JCR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCR-3619. Resolution: Duplicate Fixed in JCR-3364. After a move operation all ancestors of the destination path are modified - Key: JCR-3619 URL: https://issues.apache.org/jira/browse/JCR-3619 Project: Jackrabbit Content Repository Issue Type: Bug Reporter: Unico Hommes Assignee: Unico Hommes This is the result of the fix in JCR-3291. We should consider a different solution there. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (JCR-3364) Moving of nodes requires read access to all parent nodes of the destination node
[ https://issues.apache.org/jira/browse/JCR-3364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802177#comment-13802177 ] Jukka Zitting commented on JCR-3364: Instead of handling the cycle detection in transient space as was done in JCR-3291, I think it would be better to postpone the check and run it against the ChangeLog instance in SharedItemStateManager.Update.begin(). I'll give it a look. Moving of nodes requires read access to all parent nodes of the destination node Key: JCR-3364 URL: https://issues.apache.org/jira/browse/JCR-3364 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.2.12, 2.4.2, 2.5 Reporter: Thomas März Before JCR-3291 was fixed, Session#move(String, String) could move nodes without having read-access to the whole tree. - Deny jcr:read on /home and grant jcr:all on /home/users/usera to usera - Move nodes from /home/users/usera/from to /home/users/usera/to with usera's session - AccessDeniedException is thrown http://article.gmane.org/gmane.comp.apache.jackrabbit.user/18892 -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (JCR-3619) After a move operation all ancestors of the destination path are modified
[ https://issues.apache.org/jira/browse/JCR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802305#comment-13802305 ] Jukka Zitting commented on JCR-3619: The solution suggested in JCR-3364 should also address this issue. After a move operation all ancestors of the destination path are modified - Key: JCR-3619 URL: https://issues.apache.org/jira/browse/JCR-3619 Project: Jackrabbit Content Repository Issue Type: Bug Reporter: Unico Hommes Assignee: Unico Hommes This is the result of the fix in JCR-3291. We should consider a different solution there. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Assigned] (JCR-3364) Moving of nodes requires read access to all parent nodes of the destination node
[ https://issues.apache.org/jira/browse/JCR-3364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting reassigned JCR-3364: -- Assignee: Jukka Zitting Moving of nodes requires read access to all parent nodes of the destination node Key: JCR-3364 URL: https://issues.apache.org/jira/browse/JCR-3364 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.2.12, 2.4.2, 2.5 Reporter: Thomas März Assignee: Jukka Zitting Before JCR-3291 was fixed, Session#move(String, String) could move nodes without having read-access to the whole tree. - Deny jcr:read on /home and grant jcr:all on /home/users/usera to usera - Move nodes from /home/users/usera/from to /home/users/usera/to with usera's session - AccessDeniedException is thrown http://article.gmane.org/gmane.comp.apache.jackrabbit.user/18892 -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (JCR-3682) If we get an unexpected exception from the JDBC driver it's possible to create an unreleased VersioningLock
[ https://issues.apache.org/jira/browse/JCR-3682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797864#comment-13797864 ] Jukka Zitting commented on JCR-3682: We should catch every Exception in BundleDbPersistenceManager.readBundle() to prevent that situation. Wouldn't a cleaner approach be to handle the unlocking in a finally block? That way we wouldn't need to worry about unchecked exceptions. If we get an unexpected exception from the JDBC driver it's possible to create an unreleased VersioningLock -- Key: JCR-3682 URL: https://issues.apache.org/jira/browse/JCR-3682 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core, transactions Affects Versions: 2.6.4, 2.7.1 Reporter: Claus Köll Assignee: Claus Köll Fix For: 2.6.5, 2.7.2 Attachments: JCR-3682.patch If we get an unexpected exception from the JDBC driver, the VersioningLock from the versionMgr.getXAResourceEnd() XAResource will never be released, so the repository is locked forever. We should catch every Exception in BundleDbPersistenceManager.readBundle() to prevent that situation. The following stack trace shows the problem:
Caused by: java.lang.ArrayIndexOutOfBoundsException
    at oracle.jdbc.driver.T4CSocketInputStreamWrapper.readLongMSB(T4CSocketInputStreamWrapper.java:201)
    at oracle.jdbc.driver.T4CMAREngine.buffer2Value(T4CMAREngine.java:2374)
    at oracle.jdbc.driver.T4CMAREngine.unmarshalUB4(T4CMAREngine.java:1310)
    at oracle.jdbc.driver.T4CTTIoer.unmarshal(T4CTTIoer.java:257)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:447)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:192)
    at oracle.jdbc.driver.T4C8TTILob.read(T4C8TTILob.java:146)
    at oracle.jdbc.driver.T4CConnection.getBytes(T4CConnection.java:2392)
    at oracle.sql.BLOB.getBytes(BLOB.java:348)
    at oracle.jdbc.driver.OracleBlobInputStream.needBytes(OracleBlobInputStream.java:181)
    at oracle.jdbc.driver.OracleBufferedStream.readInternal(OracleBufferedStream.java:174)
    at oracle.jdbc.driver.OracleBufferedStream.read(OracleBufferedStream.java:143)
    at org.apache.commons.io.input.ProxyInputStream.read(ProxyInputStream.java:75)
    at org.apache.commons.io.input.CountingInputStream.read(CountingInputStream.java:74)
    at java.io.DataInputStream.readFully(DataInputStream.java:189)
    at java.io.DataInputStream.readFully(DataInputStream.java:163)
    at org.apache.jackrabbit.core.persistence.util.BundleReader.readBytes(BundleReader.java:669)
    at org.apache.jackrabbit.core.persistence.util.BundleReader.readName(BundleReader.java:520)
    at org.apache.jackrabbit.core.persistence.util.BundleReader.readQName(BundleReader.java:469)
    at org.apache.jackrabbit.core.persistence.util.BundleReader.readBundleNew(BundleReader.java:194)
    at org.apache.jackrabbit.core.persistence.util.BundleReader.readBundle(BundleReader.java:145)
    at org.apache.jackrabbit.core.persistence.util.BundleBinding.readBundle(BundleBinding.java:152)
    at org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager.readBundle(BundleDbPersistenceManager.java:927)
    at org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager.loadBundle(BundleDbPersistenceManager.java:889)
    at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.getBundleCacheMiss(AbstractBundlePersistenceManager.java:766)
    at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.getBundle(AbstractBundlePersistenceManager.java:749)
    at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.storeInternal(AbstractBundlePersistenceManager.java:633)
    at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.store(AbstractBundlePersistenceManager.java:590)
    at org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager.store(BundleDbPersistenceManager.java:482)
    at org.apache.jackrabbit.core.state.SharedItemStateManager$Update.end(SharedItemStateManager.java:788)
    at org.apache.jackrabbit.core.state.XAItemStateManager.commit(XAItemStateManager.java:181)
    at org.apache.jackrabbit.core.TransactionContext.commit(TransactionContext.java:195)
    at org.apache.jackrabbit.core.XASessionImpl.commit(XASessionImpl.java:326)
    at org.apache.jackrabbit.jca.TransactionBoundXAResource.commit(TransactionBoundXAResource.java:49)
    at com.ibm.ejs.j2c.XATransactionWrapper.commit(XATransactionWrapper.java:490)
-- This message was sent by Atlassian JIRA (v6.1#6144)
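The finally-block approach suggested in the comment can be sketched like this: the versioning lock is released on every exit path, so an unchecked exception from the persistence layer (such as the driver's ArrayIndexOutOfBoundsException above) cannot leave the repository locked. The class and method names are illustrative stand-ins, not Jackrabbit's VersioningLock API.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: run the store operation under the lock; the finally block
// guarantees release even on unchecked exceptions.
public class LockRelease {
    private final ReentrantLock versioningLock = new ReentrantLock();

    public void storeUnderLock(Runnable store) {
        versioningLock.lock();
        try {
            store.run();
        } finally {
            versioningLock.unlock();
        }
    }

    public boolean isLocked() {
        return versioningLock.isLocked();
    }
}
```

Compared with catching every Exception in readBundle(), this makes the release invariant local to the locking code itself.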
[jira] [Updated] (JCR-3654) Error in MembershipCache if a group node contains a MV property
[ https://issues.apache.org/jira/browse/JCR-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3654: --- Affects Version/s: (was: 2.6) (was: 2.4) 2.4.4 2.6.4 Fix Version/s: 2.6.5 Merged to the 2.6 branch in revision 1530356. Error in MembershipCache if a group node contains a MV property -- Key: JCR-3654 URL: https://issues.apache.org/jira/browse/JCR-3654 Project: Jackrabbit Content Repository Issue Type: Bug Components: security Affects Versions: 2.2, 2.3, 2.4.4, 2.5, 2.6.4, 2.7 Reporter: Tobias Bocanegra Assignee: Tobias Bocanegra Fix For: 2.4.5, 2.6.5, 2.7.1 MembershipCache.collectDeclaredMembershipFromTraversal traverses the entire /home/groups tree and checks every property for a reference to the authorizable node. This is very suboptimal, and if a multi-valued property is encountered it even throws an error. Suggestions: * do a smarter traversal using the TraversingItemVisitor. * be careful not to read multi-valued properties unchecked.
Potential error: com.day.crx.security.ldap.LDAPLoginModule
Cause: javax.jcr.ValueFormatException: property /home/groups/a/administrators/jcr:mixinTypes is a multi-valued property, so it's values can only be retrieved as an array
    at org.apache.jackrabbit.core.PropertyImpl.internalGetValue(PropertyImpl.java:483)
    at org.apache.jackrabbit.core.PropertyImpl.getValue(PropertyImpl.java:510)
    at org.apache.jackrabbit.core.PropertyImpl.getString(PropertyImpl.java:520)
    at org.apache.jackrabbit.core.security.user.MembershipCache$1.entering(MembershipCache.java:363)
    at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:160)
    at org.apache.jackrabbit.core.PropertyImpl.accept(PropertyImpl.java:904)
    at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:187)
    at org.apache.jackrabbit.core.NodeImpl.accept(NodeImpl.java:1720)
    at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:191)
    at org.apache.jackrabbit.core.NodeImpl.accept(NodeImpl.java:1720)
    at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:191)
    at org.apache.jackrabbit.core.security.user.MembershipCache.collectDeclaredMembershipFromTraversal(MembershipCache.java:374)
    at org.apache.jackrabbit.core.security.user.MembershipCache.collectDeclaredMembership(MembershipCache.java:200)
    at org.apache.jackrabbit.core.security.user.AuthorizableImpl.collectMembership(AuthorizableImpl.java:358)
    at org.apache.jackrabbit.core.security.user.AuthorizableImpl.declaredMemberOf(AuthorizableImpl.java:89)
    at org.apache.jackrabbit.core.security.user.UserImpl.declaredMemberOf(UserImpl.java:38)
-- This message was sent by Atlassian JIRA (v6.1#6144)
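The "be careful not to read multi-valued properties unchecked" suggestion amounts to testing isMultiple() before calling the single-value accessor. To keep the sketch self-contained, the Property interface below is a stripped-down stand-in for javax.jcr.Property (whose real multi-value accessor returns Value[], not String[]).

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch: read single- and multi-valued properties uniformly instead
// of failing on properties like jcr:mixinTypes.
public class SafeRead {
    interface Property {
        boolean isMultiple();
        String getString();    // throws on multi-valued properties
        String[] getStrings(); // hypothetical multi-value accessor
    }

    static List<String> values(Property p) {
        if (p.isMultiple()) {
            return Arrays.asList(p.getStrings());
        }
        return Collections.singletonList(p.getString());
    }
}
```

With this guard in place, the traversal can still inspect every value for a reference to the authorizable node without triggering a ValueFormatException.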
[jira] [Assigned] (JCR-3678) MembershipCache max size is hard coded to 5000
[ https://issues.apache.org/jira/browse/JCR-3678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting reassigned JCR-3678: -- Assignee: Jukka Zitting MembershipCache max size is hard coded to 5000 -- Key: JCR-3678 URL: https://issues.apache.org/jira/browse/JCR-3678 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core, security Affects Versions: 2.7 Reporter: Andrew Khoury Assignee: Jukka Zitting Priority: Minor The Jackrabbit membership cache is hard-coded to a maximum of 5000: http://svn.apache.org/viewvc/jackrabbit/trunk/jackrabbit-core/src/main/java/org/apache/jackrabbit/core/security/user/MembershipCache.java?revision=1519376&view=markup&pathrev=1519376 This is a request to make the cache size configurable. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (JCR-3678) MembershipCache max size is hard coded to 5000
[ https://issues.apache.org/jira/browse/JCR-3678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCR-3678. Resolution: Fixed Fix Version/s: 2.7.2 2.6.4 2.4.5 In revision 1530005 I added an org.apache.jackrabbit.MembershipCache system property for controlling the cache size. We can turn that into a proper repository configuration option later on if it's needed in more than just a few deployments. Merged to the 2.6 branch in revision 1530009, and to the 2.4 branch in revision 1530011. MembershipCache max size is hard coded to 5000 -- Key: JCR-3678 URL: https://issues.apache.org/jira/browse/JCR-3678 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core, security Affects Versions: 2.7 Reporter: Andrew Khoury Assignee: Jukka Zitting Priority: Minor Fix For: 2.4.5, 2.6.4, 2.7.2 The Jackrabbit membership cache is hard-coded to a maximum of 5000: http://svn.apache.org/viewvc/jackrabbit/trunk/jackrabbit-core/src/main/java/org/apache/jackrabbit/core/security/user/MembershipCache.java?revision=1519376&view=markup&pathrev=1519376 This is a request to make the cache size configurable. -- This message was sent by Atlassian JIRA (v6.1#6144)
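A system-property override like the one described in the resolution can be read with the standard Integer.getInteger helper, which parses the named system property and falls back to the supplied default when the property is unset. The property name comes from the resolution text; the surrounding class is illustrative, not the actual MembershipCache code.

```java
// Sketch of reading the cache-size override at startup.
public class CacheSizeConfig {
    static final int DEFAULT_SIZE = 5000; // the previously hard-coded maximum

    static int membershipCacheSize() {
        // Integer.getInteger returns the parsed system property value,
        // or the default if the property is missing or unparsable.
        return Integer.getInteger(
                "org.apache.jackrabbit.MembershipCache", DEFAULT_SIZE);
    }
}
```

Deployments would then start the JVM with, for example, `-Dorg.apache.jackrabbit.MembershipCache=10000`.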
[jira] [Updated] (JCR-3658) MembershipCache not consistently synchronized
[ https://issues.apache.org/jira/browse/JCR-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3658: --- Affects Version/s: 2.4.3 2.6.3 Fix Version/s: 2.4.4 2.6.4 Marcel merged the change to the 2.4 branch in revision 1519378, and I just did the same for 2.6 in revision 1530020. MembershipCache not consistently synchronized - Key: JCR-3658 URL: https://issues.apache.org/jira/browse/JCR-3658 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core, security Affects Versions: 2.4.3, 2.6.3, 2.7 Reporter: Tobias Bocanegra Assignee: Marcel Reutegger Priority: Minor Fix For: 2.4.4, 2.6.4, 2.7.1 Attachments: current.png, JCR-3658.patch, JCR-3658.patch, JCR-3658-test.patch, patched.png Membership cache access is mostly synchronized on 'this', but the onEvent() handler synchronizes on the internal cache object instead. Suggestion: improve cache access with a read/write lock. -- This message was sent by Atlassian JIRA (v6.1#6144)
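The read/write-lock suggestion can be sketched as follows: lookups take the shared read lock, while onEvent-style invalidation takes the exclusive write lock, so concurrent readers no longer serialize on a single monitor. The cache shape here is illustrative, not MembershipCache's actual fields.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: consistent locking discipline via a single r/w lock.
public class RwLockCache {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<String, String> cache = new HashMap<>();

    String get(String key) {
        lock.readLock().lock(); // many readers may hold this at once
        try {
            return cache.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    void put(String key, String value) {
        lock.writeLock().lock(); // exclusive
        try {
            cache.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }

    // What an observation-event handler would call to invalidate.
    void invalidate() {
        lock.writeLock().lock();
        try {
            cache.clear();
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

The key point is that every access path uses the same lock object, which is exactly what the original code failed to do.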
[jira] [Updated] (JCR-3652) Bundle serialization broken
[ https://issues.apache.org/jira/browse/JCR-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3652: --- Priority: Major (was: Minor) Issue Type: Bug (was: New Feature) Bundle serialization broken --- Key: JCR-3652 URL: https://issues.apache.org/jira/browse/JCR-3652 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Reporter: Thomas Mueller Assignee: Thomas Mueller Fix For: 2.4.4, 2.6.4, 2.7.1 Attachments: JCR-3652-b.patch, JCR-3652.patch, JCR-3652-test-case.patch I have got a strange case where some node bundle is broken, seemingly because a byte is missing. I can't explain the missing byte, but it is reproducible, meaning that writing the bundles again will break them again. There are 11 broken bundles, 10 of them have the size 480 bytes and one is slightly larger. It is always a boolean property value that is missing, always the value for the property jcr:isCheckedOut. As a (temporary) solution, and to help analyze what the problem might be, I will create a patch that does the following: * When serializing a bundle, check if the byte array can be de-serialized. If not, then try again. Starting with the third try, use a slower variant where before and after writing the boolean value the buffer is flushed. I'm aware that ByteArrayOutputStream.flush doesn't do much, but maybe it solves the problem (let's see) if the problem is related to a JVM issue. * If de-serializing a bundle fails, check if it's because of a missing boolean property value. If yes, insert the missing byte. I have also added some log messages (warning / error) to help analyze the problem. -- This message was sent by Atlassian JIRA (v6.1#6144)
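The verify-on-write idea from the description (serialize, immediately try to deserialize, retry on failure) can be sketched with a toy format. The length-prefixed payload below is a made-up stand-in for Jackrabbit's bundle format; only the retry-and-verify control flow is the point.

```java
// Sketch: round-trip verification of serialized bundles.
public class RoundTripCheck {
    static byte[] serialize(byte[] payload) {
        byte[] out = new byte[payload.length + 1];
        out[0] = (byte) payload.length; // toy length prefix
        System.arraycopy(payload, 0, out, 1, payload.length);
        return out;
    }

    static boolean deserializable(byte[] data) {
        // A missing byte shows up as a length prefix that no longer
        // matches the remaining data, so the check fails.
        return data.length > 0 && (data[0] & 0xff) == data.length - 1;
    }

    static byte[] writeVerified(byte[] payload, int maxTries) {
        for (int i = 0; i < maxTries; i++) {
            byte[] data = serialize(payload);
            if (deserializable(data)) {
                return data;
            }
            // The real patch switches to a slower, flushing variant
            // from the third attempt onward.
        }
        throw new IllegalStateException("bundle serialization kept failing");
    }
}
```

This catches a corrupt write before it reaches the persistence manager, which is exactly where the missing-jcr:isCheckedOut-byte bundles were otherwise ending up.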
[jira] [Assigned] (JCR-3667) Possible regression with accepted content types when extracting and indexing binary values
[ https://issues.apache.org/jira/browse/JCR-3667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting reassigned JCR-3667: -- Assignee: Jukka Zitting Possible regression with accepted content types when extracting and indexing binary values -- Key: JCR-3667 URL: https://issues.apache.org/jira/browse/JCR-3667 Project: Jackrabbit Content Repository Issue Type: Bug Affects Versions: 2.4.4, 2.6.3 Reporter: Cédric Damioli Assignee: Jukka Zitting Labels: patch Fix For: 2.7.2 JCR-3476 introduced a mime-type test before parsing binary values, based on Tika's supported parsers. This may lead to incorrect behaviours, with a text/xml value not being extracted and indexed because the XMLParser does not declare text/xml as a supported type. The problem here is that there is a regression between 2.4.3 and 2.4.4, because the same content was previously well recognized by Tika's Detector and then extracted. Furthermore, it seems to me inconsistent on one hand to rely on the declared content type and on the other hand to delegate the actual type detection to Tika? This may lead to cases where the jcr:mimeType value is set to e.g. application/pdf but detected and parsed by Tika as text/plain with no error. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (JCR-3667) Possible regression with accepted content types when extracting and indexing binary values
[ https://issues.apache.org/jira/browse/JCR-3667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3667: --- Fix Version/s: (was: 2.6.4) (was: 2.4.5) Possible regression with accepted content types when extracting and indexing binary values -- Key: JCR-3667 URL: https://issues.apache.org/jira/browse/JCR-3667 Project: Jackrabbit Content Repository Issue Type: Bug Affects Versions: 2.4.4, 2.6.3 Reporter: Cédric Damioli Labels: patch Fix For: 2.7.2 JCR-3476 introduced a mime-type test before parsing binary values, based on Tika's supported parsers. This may lead to incorrect behaviours, with a text/xml value not being extracted and indexed because the XMLParser does not declare text/xml as a supported type. The problem here is that there is a regression between 2.4.3 and 2.4.4, because the same content was previously well recognized by Tika's Detector and then extracted. Furthermore, it seems to me inconsistent on one hand to rely on the declared content type and on the other hand to delegate the actual type detection to Tika? This may lead to cases where the jcr:mimeType value is set to e.g. application/pdf but detected and parsed by Tika as text/plain with no error. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (JCR-3667) Possible regression with accepted content types when extracting and indexing binary values
[ https://issues.apache.org/jira/browse/JCR-3667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788509#comment-13788509 ] Jukka Zitting commented on JCR-3667: OK, I see the problem. We'll probably want to handle the 1.3 to 1.4 upgrade in a separate improvement issue, and come up with a separate solution to this problem. IIUC, the problem is that Tika in this case does not properly normalize the type names, which leads to the mismatch between the detected and supported types. To avoid that problem we could explicitly ask Tika to normalize the type names. Possible regression with accepted content types when extracting and indexing binary values -- Key: JCR-3667 URL: https://issues.apache.org/jira/browse/JCR-3667 Project: Jackrabbit Content Repository Issue Type: Bug Affects Versions: 2.4.4, 2.6.3 Reporter: Cédric Damioli Assignee: Jukka Zitting Labels: patch Fix For: 2.7.2 JCR-3476 introduced a mime-type test before parsing binary values, based on Tika's supported parsers. This may lead to incorrect behaviours, with a text/xml value not being extracted and indexed because the XMLParser does not declare text/xml as a supported type. The problem here is that there is a regression between 2.4.3 and 2.4.4, because the same content was previously well recognized by Tika's Detector and then extracted. Furthermore, it seems to me inconsistent on one hand to rely on the declared content type and on the other hand to delegate the actual type detection to Tika? This may lead to cases where the jcr:mimeType value is set to e.g. application/pdf but detected and parsed by Tika as text/plain with no error. -- This message was sent by Atlassian JIRA (v6.1#6144)
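The normalization idea in the comment amounts to canonicalizing both the declared jcr:mimeType and the parser's supported types before comparing them. The alias table below is a made-up, stdlib-only stand-in for Tika's media-type registry, just to make the mismatch and its fix concrete (text/xml and application/xml are treated as the same type).

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.Set;

// Sketch: compare media types only after normalizing both sides.
public class TypeMatch {
    // Hypothetical alias table; Tika maintains the real one.
    static final Map<String, String> CANONICAL = new HashMap<>();
    static {
        CANONICAL.put("text/xml", "application/xml");
    }

    static String normalize(String type) {
        String lower = type.toLowerCase(Locale.ROOT).trim();
        return CANONICAL.getOrDefault(lower, lower);
    }

    static boolean supported(String declared, Set<String> parserTypes) {
        String wanted = normalize(declared);
        for (String t : parserTypes) {
            if (normalize(t).equals(wanted)) {
                return true;
            }
        }
        return false;
    }
}
```

Without the normalization step, a declared text/xml would be rejected by a parser that only lists application/xml, which is the regression described in this issue.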
[jira] [Updated] (JCR-3652) Bundle serialization broken
[ https://issues.apache.org/jira/browse/JCR-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3652: --- Fix Version/s: (was: 2.4.4) 2.4.5 Bundle serialization broken --- Key: JCR-3652 URL: https://issues.apache.org/jira/browse/JCR-3652 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Reporter: Thomas Mueller Assignee: Thomas Mueller Fix For: 2.4.5, 2.6.4, 2.7.1 Attachments: JCR-3652-b.patch, JCR-3652.patch, JCR-3652-test-case.patch I have got a strange case where some node bundle is broken, seemingly because a byte is missing. I can't explain the missing byte, but it is reproducible, meaning that writing the bundles again will break them again. There are 11 broken bundles, 10 of them have the size 480 bytes and one is slightly larger. It is always a boolean property value that is missing, always the value for the property jcr:isCheckedOut. As a (temporary) solution, and to help analyze what the problem might be, I will create a patch that does the following: * When serializing a bundle, check if the byte array can be de-serialized. If not, then try again. Starting with the third try, use a slower variant where before and after writing the boolean value the buffer is flushed. I'm aware that ByteArrayOutputStream.flush doesn't do much, but maybe it solves the problem (let's see) if the problem is related to a JVM issue. * If de-serializing a bundle fails, check if it's because of a missing boolean property value. If yes, insert the missing byte. I have also added some log messages (warning / error) to help analyze the problem. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (JCR-3658) MembershipCache not consistently synchronized
[ https://issues.apache.org/jira/browse/JCR-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3658: --- Fix Version/s: (was: 2.4.4) 2.4.5 MembershipCache not consistently synchronized - Key: JCR-3658 URL: https://issues.apache.org/jira/browse/JCR-3658 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core, security Affects Versions: 2.4.3, 2.6.3, 2.7 Reporter: Tobias Bocanegra Assignee: Marcel Reutegger Priority: Minor Fix For: 2.4.5, 2.6.4, 2.7.1 Attachments: current.png, JCR-3658.patch, JCR-3658.patch, JCR-3658-test.patch, patched.png Membership cache access is mostly synchronized on 'this', but the onEvent() handler synchronizes on the internal cache object instead. Suggestion: improve cache access with a read/write lock. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (JCR-3677) Invalid SQL2OrderByTest.testOrderByScore test case
Jukka Zitting created JCR-3677: -- Summary: Invalid SQL2OrderByTest.testOrderByScore test case Key: JCR-3677 URL: https://issues.apache.org/jira/browse/JCR-3677 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.7.1 Reporter: Jukka Zitting Priority: Minor The SQL2OrderByTest.testOrderByScore test case makes a query like the following: SELECT * FROM [nt:base] WHERE ISCHILDNODE([/testroot]) ORDER BY [jcr:score] The test then expects that the matching nodes are returned in a specific order. This is wrong on two counts: 1) The score of a search result is defined only for full-text queries. It is meaningless for other queries and undefined by the spec. 2) Even if the score were defined for such queries, the SQL2 syntax for accessing it is SCORE(), not [jcr:score]. Thus I suggest either removing the test case, or making it use full-text search and/or just verifying that the results are ordered according to their scores instead of being in any specific predetermined order. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (JCR-3677) Invalid SQL2OrderByTest.testOrderByScore test case
[ https://issues.apache.org/jira/browse/JCR-3677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCR-3677. Resolution: Fixed Fix Version/s: 2.7.2 Assignee: Jukka Zitting Fixed in revision 1528966 by using SCORE() instead of [jcr:score] and simply verifying that the returned rows have an ascending sequence of scores. Invalid SQL2OrderByTest.testOrderByScore test case -- Key: JCR-3677 URL: https://issues.apache.org/jira/browse/JCR-3677 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.7.1 Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor Fix For: 2.7.2 The SQL2OrderByTest.testOrderByScore test case makes a query like the following: SELECT * FROM [nt:base] WHERE ISCHILDNODE([/testroot]) ORDER BY [jcr:score] The test then expects that the matching nodes are returned in a specific order. This is wrong on two counts: 1) The score of a search result is defined only for full-text queries. It is meaningless for other queries and undefined by the spec. 2) Even if the score were defined for such queries, the SQL2 syntax for accessing it is SCORE(), not [jcr:score]. Thus I suggest either removing the test case, or making it use full-text search and/or just verifying that the results are ordered according to their scores instead of being in any specific predetermined order. -- This message was sent by Atlassian JIRA (v6.1#6144)
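The fixed assertion described in the resolution (verify only that scores ascend, not that rows come back in a fixed order) boils down to a monotonicity check. The double[] below stands in for the SCORE() column of the query result; no JCR repository is involved in this sketch.

```java
// Sketch: order-independent score assertion for the fixed test.
public class ScoreOrderCheck {
    static boolean isAscending(double[] scores) {
        for (int i = 1; i < scores.length; i++) {
            if (scores[i] < scores[i - 1]) {
                return false; // a later row had a lower score
            }
        }
        return true;
    }
}
```

This is weaker than asserting a fixed row order, but it is exactly what an ORDER BY SCORE() clause guarantees, so the test no longer depends on unspecified behaviour.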
[jira] [Resolved] (JCR-3671) Config DTD doesn't allow ProtectedItemImporter
[ https://issues.apache.org/jira/browse/JCR-3671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCR-3671. Resolution: Fixed Fix Version/s: 2.7.2 2.6.4 2.4.5 Fixed in revisions 1526928 and 1526945. Merged to the 2.6 branch in revisions 1526944 and 1526946, and to the 2.4 branch in revision 1526947. I also updated the DTD copies in http://jackrabbit.apache.org/dtd/. Config DTD doesn't allow ProtectedItemImporter -- Key: JCR-3671 URL: https://issues.apache.org/jira/browse/JCR-3671 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.4.4, 2.6.3, 2.7.1 Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor Fix For: 2.4.5, 2.6.4, 2.7.2 The repository configuration parser accepts all of ProtectedItemImporter, ProtectedPropertyImporter and ProtectedNodeImporter as synonyms inside the Import configuration element, but the related DTD only declares the latter two as allowed elements. We should fix the DTD to prevent incorrect warnings. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (JCR-3671) Config DTD doesn't allow ProtectedItemImporter
Jukka Zitting created JCR-3671: -- Summary: Config DTD doesn't allow ProtectedItemImporter Key: JCR-3671 URL: https://issues.apache.org/jira/browse/JCR-3671 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.7.1, 2.6.3, 2.4.4 Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor The repository configuration parser accepts all of {{ProtectedItemImporter}}, {{ProtectedPropertyImporter}} and {{ProtectedNodeImporter}} as synonyms inside the {{Import}} configuration element, but the related DTD only declares the latter two as allowed elements. We should fix the DTD to prevent incorrect warnings. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (JCR-3671) Config DTD doesn't allow ProtectedItemImporter
[ https://issues.apache.org/jira/browse/JCR-3671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3671: --- Description: The repository configuration parser accepts all of ProtectedItemImporter, ProtectedPropertyImporter and ProtectedNodeImporter as synonyms inside the Import configuration element, but the related DTD only declares the latter two as allowed elements. We should fix the DTD to prevent incorrect warnings. (was: The repository configuration parser accepts all of {{ProtectedItemImporter}}, {{ProtectedPropertyImporter}} and {{ProtectedNodeImporter}} as synonyms inside the {{Import}} configuration element, but the related DTD only declares the latter two as allowed elements. We should fix the DTD to prevent incorrect warnings. ) Config DTD doesn't allow ProtectedItemImporter -- Key: JCR-3671 URL: https://issues.apache.org/jira/browse/JCR-3671 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.4.4, 2.6.3, 2.7.1 Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor The repository configuration parser accepts all of ProtectedItemImporter, ProtectedPropertyImporter and ProtectedNodeImporter as synonyms inside the Import configuration element, but the related DTD only declares the latter two as allowed elements. We should fix the DTD to prevent incorrect warnings. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (JCR-3663) FileVault: tweak gitignore file
[ https://issues.apache.org/jira/browse/JCR-3663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13756950#comment-13756950 ] Jukka Zitting commented on JCR-3663: A better solution would be to tweak the build so that everything it produces goes inside the target folder. See the jackrabbit-core POM for an example of how to do that for the derby.log file (search for derby.stream.error.file). FileVault: tweak gitignore file --- Key: JCR-3663 URL: https://issues.apache.org/jira/browse/JCR-3663 Project: Jackrabbit Content Repository Issue Type: Task Components: jackrabbit-jcr-commons Reporter: Robert Munteanu Assignee: Tobias Bocanegra Priority: Trivial Attachments: JCR-3663-1.patch After a full build the bin directory and derby.log files are shown as untracked. I'll attach a trivial patch which fixes this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (JCR-3635) Manually specified jcr:frozenUuid overwriting the one assigned by the VersionManager when versioning node
[ https://issues.apache.org/jira/browse/JCR-3635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3635: --- Resolution: Fixed Fix Version/s: 2.6.4 2.4.5 2.2.14 Status: Resolved (was: Patch Available) I committed the patch and the test case in http://svn.apache.org/r1509101. Thanks! In the code I made a small adjustment to also ignore the jcr:frozenPrimaryType and jcr:frozenMixinTypes properties if they for whatever reason exist in the node being checked in. I also backported the fix to the 2.6, 2.4 and 2.2 maintenance branches, so it'll get shipped along with the next patch releases. Manually specified jcr:frozenUuid overwriting the one assigned by the VersionManager when versioning node - Key: JCR-3635 URL: https://issues.apache.org/jira/browse/JCR-3635 Project: Jackrabbit Content Repository Issue Type: Bug Components: versioning Affects Versions: 2.7 Reporter: Florin Iordache Assignee: Jukka Zitting Fix For: 2.2.14, 2.4.5, 2.6.4, 2.7.1 Attachments: CopyFrozenUuidTest.java, JCR-3635.patch Let's assume we have node N with a manually assigned jcr:frozenUuid property (e.g. taken from an existing frozenNode version of another node). When creating versions of node N, the manually assigned frozenUuid property will overwrite the frozenUuid automatically created by the VersionManager in the versioning process because the jcr:frozenUuid property is not skipped when copying the existing properties from the versioned node to the frozen node in the version subtree. This can potentially cause issues whenever the jcr:frozenUuid is used in the future, since it would basically point to a different versioned node. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
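The adjustment described in the comment, ignoring jcr:frozenUuid as well as jcr:frozenPrimaryType and jcr:frozenMixinTypes when copying properties into the frozen node, amounts to a reserved-name filter. A minimal self-contained sketch (the class, method, and Map-based property model are illustrative, not the actual jackrabbit-core code):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class FrozenPropertyFilter {

    // Properties the VersionManager assigns itself during checkin; any
    // manually set copies on the versioned node must not be carried over.
    private static final Set<String> RESERVED = Set.of(
            "jcr:frozenUuid", "jcr:frozenPrimaryType", "jcr:frozenMixinTypes");

    /** Returns the properties to copy into the frozen node, minus reserved ones. */
    public static Map<String, Object> copyable(Map<String, Object> nodeProperties) {
        Map<String, Object> result = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : nodeProperties.entrySet()) {
            if (!RESERVED.contains(e.getKey())) {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> props = new LinkedHashMap<>();
        props.put("jcr:frozenUuid", "manually-set-uuid"); // must not survive checkin
        props.put("title", "Hello");
        System.out.println(copyable(props).keySet()); // only non-reserved names remain
    }
}
```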
[jira] [Assigned] (JCR-3635) Manually specified jcr:frozenUuid overwriting the one assigned by the VersionManager when versioning node
[ https://issues.apache.org/jira/browse/JCR-3635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting reassigned JCR-3635: -- Assignee: Jukka Zitting Manually specified jcr:frozenUuid overwriting the one assigned by the VersionManager when versioning node - Key: JCR-3635 URL: https://issues.apache.org/jira/browse/JCR-3635 Project: Jackrabbit Content Repository Issue Type: Bug Components: versioning Affects Versions: 2.7 Reporter: Florin Iordache Assignee: Jukka Zitting Fix For: 2.7.1 Attachments: CopyFrozenUuidTest.java, JCR-3635.patch Let's assume we have node N with a manually assigned jcr:frozenUuid property (e.g. taken from an existing frozenNode version of another node). When creating versions of node N, the manually assigned frozenUuid property will overwrite the frozenUuid automatically created by the VersionManager in the versioning process because the jcr:frozenUuid property is not skipped when copying the existing properties from the versioned node to the frozen node in the version subtree. This can potentially cause issues whenever the jcr:frozenUuid is used in the future, since it would basically point to a different versioned node. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (JCR-3635) Manually specified jcr:frozenUuid overwriting the one assigned by the VersionManager when versioning node
[ https://issues.apache.org/jira/browse/JCR-3635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725282#comment-13725282 ] Jukka Zitting commented on JCR-3635: See JCR-517 for a related issue where something similar was discussed earlier. Perhaps we should revisit that discussion, and declare at least some properties like the mentioned jcr:frozenUuid to be reserved, even if they're not declared as protected in the parent node type. Manually specified jcr:frozenUuid overwriting the one assigned by the VersionManager when versioning node - Key: JCR-3635 URL: https://issues.apache.org/jira/browse/JCR-3635 Project: Jackrabbit Content Repository Issue Type: Bug Components: versioning Affects Versions: 2.7 Reporter: Florin Iordache Assignee: Jukka Zitting Fix For: 2.7.1 Attachments: CopyFrozenUuidTest.java, JCR-3635.patch Let's assume we have node N with a manually assigned jcr:frozenUuid property (e.g. taken from an existing frozenNode version of another node). When creating versions of node N, the manually assigned frozenUuid property will overwrite the frozenUuid automatically created by the VersionManager in the versioning process because the jcr:frozenUuid property is not skipped when copying the existing properties from the versioned node to the frozen node in the version subtree. This can potentially cause issues whenever the jcr:frozenUuid is used in the future, since it would basically point to a different versioned node. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (JCR-3634) New method: JackrabbitRepository.login(Credentials, Map<String, Object>)
[ https://issues.apache.org/jira/browse/JCR-3634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723620#comment-13723620 ] Jukka Zitting commented on JCR-3634: I'd rather keep the explicit workspace argument, with null for the default workspace like in the existing login() methods. Also, I'd avoid overloading the credentials attributes for this. If an attributes map is passed to the proposed method, they are taken to apply to the session being created, not to the credentials being passed. The approach of using credential attributes to pass session parameters is IMHO not semantically correct. Something like the mentioned auto-refresh mode has nothing to do with access credentials. And the proposed definition leaves something like login(new GuestCredentials(), Collections.singletonMap(AutoRefresh, true)) undefined, as GuestCredentials does not support attributes. Instead I'd define the method as follows:

/**
 * Equivalent to {@code login(credentials, workspaceName)} except that the returned
 * Session instance contains the given extra session attributes in addition to any
 * included in the given Credentials instance.
 * <p>
 * The attributes are implementation-specific and may affect the behavior of the returned
 * session. Unlike credentials attributes, these separately passed session attributes
 * are guaranteed not to affect the authentication of the client.
 * <p>
 * An implementation that does not support a particular session attribute is expected
 * to ignore it and not make it available through the returned session. A client that
 * depends on specific behavior defined by a particular attribute can check whether
 * the returned session contains that attribute to verify whether the underlying
 * repository implementation supports that feature.
 *
 * @param credentials the credentials of the user
 * @param workspaceName the name of a workspace
 * @param attributes implementation-specific session attributes
 * @return a valid session for the user to access the repository
 * @throws LoginException if authentication or authorization for the specified workspace fails
 * @throws NoSuchWorkspaceException if the specified workspace is not recognized
 * @throws RepositoryException if another error occurs
 */
Session login(Credentials credentials, String workspaceName, Map<String, Object> attributes)
    throws LoginException, NoSuchWorkspaceException, RepositoryException;

Note the last paragraph of the definition, which allows the following naive default implementation:

public Session login(Credentials credentials, String workspaceName, Map<String, Object> attributes)
    throws LoginException, NoSuchWorkspaceException, RepositoryException {
    return login(credentials, workspaceName);
}

New method: JackrabbitRepository.login(Credentials, Map<String, Object>) Key: JCR-3634 URL: https://issues.apache.org/jira/browse/JCR-3634 Project: Jackrabbit Content Repository Issue Type: New Feature Components: jackrabbit-api Affects Versions: 2.7.1 Reporter: Michael Dürig As discussed [1] we need a way for passing session attributes on login without having to fall back to credentials. The latter might not support attributes or might not be present at all when authentication is handled externally. I suggest adding the following method to JackrabbitRepository:

/**
 * Equivalent to <code>login(credentials, workspace)</code> where
 * <ul>
 * <li><code>workspace = attributes.get(ATT_WORKSPACE_NAME)</code>,</li>
 * <li><code>credentials</code> carry all and only the attributes passed
 * through the <code>attributes</code> map.</li>
 * </ul>
 *
 * @param credentials the credentials of the user
 * @param attributes the attributes to put into the session
 * @return a valid session for the user to access the repository.
 * @throws javax.jcr.LoginException if authentication or authorization for the
 *         specified workspace fails.
 * @throws javax.jcr.NoSuchWorkspaceException if the specified workspace is not recognized.
 * @throws javax.jcr.RepositoryException if another error occurs.
 */
Session login(Credentials credentials, Map<String, Object> attributes);

See also OAK-803 for some more background. [1] http://markmail.org/message/lwhpglehee3jgpip -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
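The proposed contract — the new method falling back to the plain two-argument login, and clients probing the returned session for an attribute to detect support — can be sketched with stub types. Session and Repository here are simplified stand-ins for illustration, not the real javax.jcr or JackrabbitRepository interfaces:

```java
import java.util.Collections;
import java.util.Map;

public class LoginSketch {

    /** Simplified stand-in for javax.jcr.Session. */
    interface Session {
        Object getAttribute(String name);
    }

    /** Simplified stand-in for a repository with the proposed method. */
    interface Repository {
        Session login(Object credentials, String workspaceName);

        // Naive default: ignore the attributes entirely. An implementation
        // that supports an attribute overrides this and exposes the attribute
        // through Session.getAttribute(), which is how clients detect support.
        default Session login(Object credentials, String workspaceName,
                              Map<String, Object> attributes) {
            return login(credentials, workspaceName);
        }
    }

    public static void main(String[] args) {
        // A repository that supports no session attributes at all
        Repository repo = (credentials, workspaceName) -> name -> null;
        Session s = repo.login(null, null, Collections.singletonMap("auto-refresh", true));
        // The attribute did not survive, so this repository does not support it
        System.out.println(s.getAttribute("auto-refresh") == null);
    }
}
```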
[jira] [Updated] (JCR-3626) NodeTypeTest.getPrimaryItemName can get ssssslllllloooowwwww
[ https://issues.apache.org/jira/browse/JCR-3626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3626: --- Issue Type: Improvement (was: Task) NodeTypeTest.getPrimaryItemName can get ssssslllllloooowwwww Key: JCR-3626 URL: https://issues.apache.org/jira/browse/JCR-3626 Project: Jackrabbit Content Repository Issue Type: Improvement Components: jackrabbit-jcr-tests Affects Versions: 2.4.4, 2.6.2, 2.7 Reporter: Julian Reschke Assignee: Julian Reschke Priority: Minor Fix For: 2.4.5, 2.6.3, 2.7.1 Attachments: JCR-3626.diff This is because it does a full repository traversal in order to find a node with a primary item. At least when running over WebDAV, it always descends into /jcr:system first, where no such item will be found. And, of course, /jcr:system/jcr:versionStorage keeps growing with each test run. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (JCR-3630) XSS in DirListingExportHandler
[ https://issues.apache.org/jira/browse/JCR-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3630: --- Affects Version/s: 2.2.13 2.4.4 2.6.2 Fix Version/s: 2.7.1 2.6.3 2.4.5 2.2.14 Merged the fix to the 2.6, 2.4 and 2.2 branches. XSS in DirListingExportHandler -- Key: JCR-3630 URL: https://issues.apache.org/jira/browse/JCR-3630 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-jcr-server Affects Versions: 2.2.13, 2.4.4, 2.6.2 Reporter: angela Fix For: 2.2.14, 2.4.5, 2.6.3, 2.7.1 Attachments: jackrabbit_dirlisting_patch.txt lars krapf reported an XSS in the DirListingExportHandler and provided the attached patch. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
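The actual fix is in the attached patch; the general remedy for this class of bug is to HTML-escape untrusted resource and node names before embedding them in the generated directory listing. A generic sketch of such escaping (illustrative, not the DirListingExportHandler code):

```java
public class HtmlEscape {

    /** Minimal HTML escaping for untrusted text embedded in markup. */
    public static String escape(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A crafted resource name can no longer break out of the listing markup
        System.out.println(escape("<script>alert(1)</script>"));
    }
}
```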
[jira] [Updated] (JCR-3228) WebDav/DavEx remoting throws workspace mismatch exceptions when running on port 80
[ https://issues.apache.org/jira/browse/JCR-3228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3228: --- Fix Version/s: (was: 2.6) 2.6.3 WebDav/DavEx remoting throws workspace mismatch exceptions when running on port 80 -- Key: JCR-3228 URL: https://issues.apache.org/jira/browse/JCR-3228 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-spi2dav, jackrabbit-webdav Affects Versions: 2.2.13, 2.4.4, 2.6.2, 2.7 Reporter: Timothee Maret Assignee: Julian Reschke Priority: Minor Fix For: 2.4.5, 2.6.3, 2.7.1 Attachments: JCR-3228.diff When running on port 80, the webdav remoting shows unexpected behavior such as listing incomplete folder content. Moreover the following exception is thrown: The exception I get: java.lang.IllegalArgumentException: Workspace missmatch. [org.apache.jackrabbit.spi2dav.IdURICache.add(IdURICache.java:60), org.apache.jackrabbit.spi2dav.URIResolverImpl.getItemUri(URIResolverImpl.java:129), org.apache.jackrabbit.spi2dav.RepositoryServiceImpl.getItemUri(RepositoryServiceImpl.java:391), org.apache.jackrabbit.spi2davex.RepositoryServiceImpl.getPath(RepositoryServiceImpl.java:149), org.apache.jackrabbit.spi2davex.RepositoryServiceImpl.getPath(RepositoryServiceImpl.java:138), org.apache.jackrabbit.spi2davex.RepositoryServiceImpl.getItemInfos(RepositoryServiceImpl.java:265), org.apache.jackrabbit.jcr2spi.state.WorkspaceItemStateFactory.createNodeState(WorkspaceItemStateFactory.java:93), org.apache.jackrabbit.jcr2spi.state.TransientISFactory.createNodeState(TransientISFactory.java:97), org.apache.jackrabbit.jcr2spi.hierarchy.NodeEntryImpl.doResolve(NodeEntryImpl.java:990), org.apache.jackrabbit.jcr2spi.hierarchy.HierarchyEntryImpl.resolve(HierarchyEntryImpl.java:133), org.apache.jackrabbit.jcr2spi.hierarchy.HierarchyEntryImpl.getItemState(HierarchyEntryImpl.java:252), org.apache.jackrabbit.jcr2spi.hierarchy.NodeEntryImpl.getItemState(NodeEntryImpl.java:71), 
org.apache.jackrabbit.jcr2spi.ItemManagerImpl.getItem(ItemManagerImpl.java:199), org.apache.jackrabbit.jcr2spi.LazyItemIterator.prefetchNext(LazyItemIterator.java:138), org.apache.jackrabbit.jcr2spi.LazyItemIterator.next(LazyItemIterator.java:251), org.apache.jackrabbit.jcr2spi.LazyItemIterator.nextNode(LazyItemIterator.java:154), com.adobe.drive.connector.adep.GetChildrenHandler.execute(GetChildrenHandler.java:121), com.adobe.drive.connector.adep.GetChildrenHandler.execute(GetChildrenHandler.java:43), com.adobe.drive.model.internal.synchronization.AssetSynchronizer.execute(AssetSynchronizer.java:432), com.adobe.drive.model.internal.synchronization.AssetSynchronizer.synchronizeStructure(AssetSynchronizer.java:352), com.adobe.drive.internal.data.manager.DataManager.getChildren(DataManager.java:2602), com.adobe.drive.internal.biz.versioncue.service.call.GetChildren$1.call(GetChildren.java:98), com.adobe.drive.internal.biz.versioncue.service.call.GetChildren$1.call(GetChildren.java:73), com.adobe.drive.model.context.Context.run(Context.java:88), com.adobe.drive.internal.biz.versioncue.service.call.GetChildren.executeItem(GetChildren.java:126), com.adobe.drive.internal.biz.versioncue.service.call.GetChildren.executeItem(GetChildren.java:50), com.adobe.drive.internal.biz.versioncue.service.call.VersionCueCall$1.run(VersionCueCall.java:125), com.adobe.drive.internal.biz.versioncue.service.call.VersionCueCall$1.run(VersionCueCall.java:119), com.adobe.drive.data.internal.persistence.PersistenceRunner.run(PersistenceRunner.java:119), com.adobe.drive.internal.biz.versioncue.service.call.VersionCueCall.execute(VersionCueCall.java:134), com.adobe.drive.internal.biz.versioncue.service.VersionCueService.getChildren(VersionCueService.java:269), com.adobe.drive.ncomm.versioncue.GetChildren.handle(GetChildren.java:59), com.adobe.drive.ncomm.versioncue.VersionCueRequestHandler$1.run(VersionCueRequestHandler.java:185), 
com.adobe.drive.core.internal.jobs.JobHandler$JobWrapper.run(JobHandler.java:270), com.adobe.drive.core.internal.jobs.JobHandler$JobWrapper.run(JobHandler.java:286), java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886), java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908), java.lang.Thread.run(Thread.java:680)] I have tracked this issue and actually the HTTP Host header which is used to identify the webdav server does not contain the port (only the host) when running on port 80, whereas it contains the host:port when running on any other port. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more
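Given the diagnosis above — the Host header omits the port on port 80 while cached workspace URIs include it, so the two no longer compare equal — one defensive approach is to normalize URIs to an explicit port before comparison. A sketch using java.net.URI (illustrative, not the actual IdURICache code):

```java
import java.net.URI;

public class UriNormalizer {

    /** Returns host:port for comparison, filling in the scheme's default port. */
    public static String authority(String uri) {
        URI u = URI.create(uri);
        int port = u.getPort();
        if (port == -1) { // no explicit port in the URI
            port = "https".equalsIgnoreCase(u.getScheme()) ? 443 : 80;
        }
        return u.getHost() + ":" + port;
    }

    public static void main(String[] args) {
        // Both forms now identify the same server
        System.out.println(authority("http://example.com/server/default/"));
        System.out.println(authority("http://example.com:80/server/default/"));
    }
}
```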
[jira] [Created] (JCR-3620) JCA deployment descriptor for Apache Geronimo
Jukka Zitting created JCR-3620: -- Summary: JCA deployment descriptor for Apache Geronimo Key: JCR-3620 URL: https://issues.apache.org/jira/browse/JCR-3620 Project: Jackrabbit Content Repository Issue Type: Improvement Components: jackrabbit-jca Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor We have custom template descriptors for JBoss and Websphere. It would be good to have a ready-made template also for Apache Geronimo. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (JCR-3620) JCA deployment descriptor for Apache Geronimo
[ https://issues.apache.org/jira/browse/JCR-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCR-3620. Resolution: Fixed Fix Version/s: 2.7.1 Added a basic template. JCA deployment descriptor for Apache Geronimo - Key: JCR-3620 URL: https://issues.apache.org/jira/browse/JCR-3620 Project: Jackrabbit Content Repository Issue Type: Improvement Components: jackrabbit-jca Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor Fix For: 2.7.1 We have custom template descriptors for JBoss and Websphere. It would be good to have a ready-made template also for Apache Geronimo. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (JCR-3614) Checkin source code
[ https://issues.apache.org/jira/browse/JCR-3614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696291#comment-13696291 ] Jukka Zitting commented on JCR-3614: Why not directly to the final target location, i.e. JCR-3615? Checkin source code --- Key: JCR-3614 URL: https://issues.apache.org/jira/browse/JCR-3614 Project: Jackrabbit Content Repository Issue Type: Sub-task Components: sandbox Reporter: Tobias Bocanegra Assignee: Tobias Bocanegra Priority: Trivial suggested location: https://svn.apache.org/repos/asf/jackrabbit/sandbox/filevault -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (JCR-3615) Move source code to final place
[ https://issues.apache.org/jira/browse/JCR-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696292#comment-13696292 ] Jukka Zitting commented on JCR-3615: Do we have someone to take care of cutting timely releases from jackrabbit/commons? So far we haven't been too successful with that. Otherwise I suggest putting the code in .../jackrabbit/trunk/filevault and making it part of the normal Jackrabbit release cycle. Move source code to final place --- Key: JCR-3615 URL: https://issues.apache.org/jira/browse/JCR-3615 Project: Jackrabbit Content Repository Issue Type: Sub-task Components: sandbox Reporter: Tobias Bocanegra Assignee: Tobias Bocanegra Priority: Trivial suggested location: https://svn.apache.org/repos/asf/jackrabbit/commons/filevault/trunk -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (JCRSITE-42) Fix Javadoc frame injection vulnerability
Jukka Zitting created JCRSITE-42: Summary: Fix Javadoc frame injection vulnerability Key: JCRSITE-42 URL: https://issues.apache.org/jira/browse/JCRSITE-42 Project: Jackrabbit Site Issue Type: Bug Components: site Reporter: Jukka Zitting Assignee: Jukka Zitting Some of the Jackrabbit javadocs are affected by the javadoc frame injection vulnerability described in http://www.oracle.com/technetwork/topics/security/javacpujun2013-1899847.html. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (JCRSITE-42) Fix Javadoc frame injection vulnerability
[ https://issues.apache.org/jira/browse/JCRSITE-42?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCRSITE-42. -- Resolution: Duplicate Didn't notice that Marcel already took care of this in JCRSITE-41. Resolving as duplicate. Fix Javadoc frame injection vulnerability - Key: JCRSITE-42 URL: https://issues.apache.org/jira/browse/JCRSITE-42 Project: Jackrabbit Site Issue Type: Bug Components: site Reporter: Jukka Zitting Assignee: Jukka Zitting Some of the Jackrabbit javadocs are affected by the javadoc frame injection vulnerability described in http://www.oracle.com/technetwork/topics/security/javacpujun2013-1899847.html. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (JCR-3608) MBeans for tracking event listeners
Jukka Zitting created JCR-3608: -- Summary: MBeans for tracking event listeners Key: JCR-3608 URL: https://issues.apache.org/jira/browse/JCR-3608 Project: Jackrabbit Content Repository Issue Type: New Feature Components: jackrabbit-api, jackrabbit-core, jackrabbit-jcr-commons Reporter: Jukka Zitting Assignee: Jukka Zitting Related to JCR-3186 and OAK-804, it would be useful to have JMX MBeans that expose information about all the registered event listeners. Besides basic details like the registration parameters, the MBeans could track backwards compatibility information like mentioned in OAK-804 and execution statistics like the number of events delivered and the time taken to process them. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (JCR-3604) NodeMixinUtil.getAddableMixinName() can return mixins already inherited by the node
Jukka Zitting created JCR-3604: -- Summary: NodeMixinUtil.getAddableMixinName() can return mixins already inherited by the node Key: JCR-3604 URL: https://issues.apache.org/jira/browse/JCR-3604 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-jcr-tests Affects Versions: 2.7 Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor This is troublesome since an addMixin() with such a mixin type is defined as a no-op, which ends up confusing test cases like NodeAddMixinTest. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (JCR-3604) NodeMixinUtil.getAddableMixinName() can return mixins already inherited by the node
[ https://issues.apache.org/jira/browse/JCR-3604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCR-3604. Resolution: Fixed Fix Version/s: 2.7.1 Fixed in revision 1488687. NodeMixinUtil.getAddableMixinName() can return mixins already inherited by the node --- Key: JCR-3604 URL: https://issues.apache.org/jira/browse/JCR-3604 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-jcr-tests Affects Versions: 2.7 Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor Fix For: 2.7.1 This is troublesome since an addMixin() with such a mixin type is defined as a no-op, which ends up confusing test cases like NodeAddMixinTest. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (JCR-3601) AbstractJCRTest.cleanUpTestRoot() does not properly set testNodeType
[ https://issues.apache.org/jira/browse/JCR-3601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCR-3601. Resolution: Fixed Fix Version/s: 2.7.1 Fixed in revision 1486864. AbstractJCRTest.cleanUpTestRoot() does not properly set testNodeType Key: JCR-3601 URL: https://issues.apache.org/jira/browse/JCR-3601 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-jcr-tests Affects Versions: 2.7 Reporter: Jukka Zitting Assignee: Jukka Zitting Fix For: 2.7.1 The cleanUpTestRoot() method is supposed to leave the test root in a state as if it was just newly created. If the node already exists, this is done by removing all its children. Unfortunately this risks leaving the node type of the test root unchanged, which can lead to issues like the one we're seeing in OAK-802. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (JCR-3601) AbstractJCRTest.cleanUpTestRoot() does not properly set testNodeType
Jukka Zitting created JCR-3601: -- Summary: AbstractJCRTest.cleanUpTestRoot() does not properly set testNodeType Key: JCR-3601 URL: https://issues.apache.org/jira/browse/JCR-3601 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-jcr-tests Affects Versions: 2.7 Reporter: Jukka Zitting Assignee: Jukka Zitting The cleanUpTestRoot() method is supposed to leave the test root in a state as if it was just newly created. If the node already exists, this is done by removing all its children. Unfortunately this risks leaving the node type of the test root unchanged, which can lead to issues like the one we're seeing in OAK-802. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (JCR-3534) Efficient copying of binaries across repositories with the same data store
[ https://issues.apache.org/jira/browse/JCR-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3534: --- Fix Version/s: 2.7.1 2.6.2 Tagging this for the 2.6 branch. I think the solution here is getting stable enough to be backported. "I'm not sure about 3 as with URIs potentially pointing to localhost such a mechanism may become unreliable" Good point. The connection URI might not even be easily available if a data source is used. Efficient copying of binaries across repositories with the same data store -- Key: JCR-3534 URL: https://issues.apache.org/jira/browse/JCR-3534 Project: Jackrabbit Content Repository Issue Type: New Feature Components: jackrabbit-api, jackrabbit-core Affects Versions: 2.6 Reporter: Felix Meschberger Assignee: Tommaso Teofili Fix For: 2.6.2, 2.7.1 Attachments: JCR-3534.2.patch, JCR-3534.3.patch, JCR-3534.4.patch, JCR-3534.6.patch, JCR-3534.7.patch, JCR-3534.patch, JCR-3534.patch we have a couple of use cases, where we would like to leverage the global data store to prevent sending around and copying around large binary data unnecessarily: We have two separate Jackrabbit instances configured to use the same DataStore (for the sake of this discussion assume we have the problems of concurrent access and garbage collection under control). When sending content from one instance to the other instance we don't want to send potentially large binary data (e.g. video files) if not needed. The idea is for the sender to just send the content identity from JackrabbitValue.getContentIdentity(). 
The receiver would then check whether such content already exists and would reuse it if so:

String ci = contentIdentity_from_sender;
try {
    Value v = session.getValueByContentIdentity(ci);
    Property p = targetNode.setProperty(propName, v);
} catch (ItemNotFoundException ie) {
    // unknown or invalid content identity
} catch (RepositoryException re) {
    // some other exception
}

Thus the proposed JackrabbitSession.getValueByContentIdentity(String) method would allow round-tripping the JackrabbitValue.getContentIdentity() value, preventing superfluous copying and moving of binary data. See also the dev@ thread http://jackrabbit.markmail.org/thread/gedk5jsrp6offkhi -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (JCR-3402) getSize() returning too many often -1
[ https://issues.apache.org/jira/browse/JCR-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3402: --- Fix Version/s: 2.6.2 getSize() returning too many often -1 - Key: JCR-3402 URL: https://issues.apache.org/jira/browse/JCR-3402 Project: Jackrabbit Content Repository Issue Type: Improvement Reporter: Cédric Damioli Assignee: Cédric Damioli Fix For: 2.6.2, 2.7 Attachments: QueryResultImpl.patch I've come across the well-known behaviour of query results returning -1 when asked for getSize(). While this is ok for optimization reasons (lazy results fetching), I just discovered that the default resultFetchSize value in lucene queries is Integer.MAX_VALUE, so in all queries I've ever executed, all results were actually fetched before asking for getSize, so IMHO nothing prevents getSize() from returning the real value instead of -1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (JCR-3402) getSize() returning too many often -1
[ https://issues.apache.org/jira/browse/JCR-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662790#comment-13662790 ] Jukka Zitting commented on JCR-3402: Merged to the 2.6 branch in revision 1484685. getSize() returning too many often -1 - Key: JCR-3402 URL: https://issues.apache.org/jira/browse/JCR-3402 Project: Jackrabbit Content Repository Issue Type: Improvement Reporter: Cédric Damioli Assignee: Cédric Damioli Fix For: 2.6.2, 2.7 Attachments: QueryResultImpl.patch I've come across the well-known behaviour of query results returning -1 when asked for getSize(). While this is ok for optimization reasons (lazy results fetching), I just discovered that the default resultFetchSize value in lucene queries is Integer.MAX_VALUE, so in all queries I've ever executed, all results were actually fetched before asking for getSize, so IMHO nothing prevents getSize() from returning the real value instead of -1 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
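Until getSize() reliably returns the real value, client code typically guards against the -1 case by falling back to counting. A generic sketch of that pattern, using a plain Iterator as a stand-in for the real NodeIterator/RowIterator:

```java
import java.util.Iterator;

public class ResultSizer {

    /** Uses the reported size when available, otherwise counts by iteration. */
    public static long size(long reportedSize, Iterator<?> it) {
        if (reportedSize >= 0) {
            return reportedSize; // repository knew the size up front
        }
        long count = 0;
        while (it.hasNext()) { // fallback: exhausts the iterator
            it.next();
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(size(-1, java.util.List.of("a", "b", "c").iterator())); // counts: 3
        System.out.println(size(42, java.util.Collections.emptyIterator())); // trusts report: 42
    }
}
```

Note that the fallback consumes the iterator, so in real code the result would need to be re-executed or buffered before the rows can be used.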
[jira] [Updated] (JCR-3550) Methods for determining type of array of values
[ https://issues.apache.org/jira/browse/JCR-3550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3550: --- Fix Version/s: 2.6.2 Merged to the 2.6 branch in revision 1484695. Methods for determining type of array of values --- Key: JCR-3550 URL: https://issues.apache.org/jira/browse/JCR-3550 Project: Jackrabbit Content Repository Issue Type: New Feature Components: jackrabbit-jcr-commons Reporter: Michael Dürig Assignee: Michael Dürig Fix For: 2.6.2, 2.7 Attachments: JCR-3550.patch I suggest adding a method for determining the type of a homogeneous array of values: public static int getType(Value[] values) throws ValueFormatException {...} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
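One way such a method could work is shown below. Value and ValueFormatException are minimal stand-ins for the javax.jcr types, and the empty-array behaviour (returning UNDEFINED, i.e. 0) is an assumption for illustration, not necessarily what the committed jackrabbit-jcr-commons method does:

```java
public class ValueTypeUtil {

    /** Minimal stand-in for javax.jcr.Value (only the type code matters here). */
    interface Value {
        int getType();
    }

    /** Stand-in for javax.jcr.ValueFormatException. */
    static class ValueFormatException extends Exception {
        ValueFormatException(String msg) { super(msg); }
    }

    /** Returns the common type of a homogeneous array; 0 (UNDEFINED) for empty input. */
    public static int getType(Value[] values) throws ValueFormatException {
        int type = 0; // PropertyType.UNDEFINED
        for (Value v : values) {
            if (type == 0) {
                type = v.getType(); // first value fixes the expected type
            } else if (v.getType() != type) {
                throw new ValueFormatException("Inhomogeneous value types");
            }
        }
        return type;
    }

    public static void main(String[] args) throws ValueFormatException {
        Value s1 = () -> 1, s2 = () -> 1; // two values with the same type code
        System.out.println(getType(new Value[] { s1, s2 })); // prints 1
    }
}
```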
[jira] [Updated] (JCR-3531) Borrow all available RepositoryHelpers
[ https://issues.apache.org/jira/browse/JCR-3531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3531: --- Fix Version/s: 2.6.2 Merged to the 2.6 branch in revision 1484698. Borrow all available RepositoryHelpers -- Key: JCR-3531 URL: https://issues.apache.org/jira/browse/JCR-3531 Project: Jackrabbit Content Repository Issue Type: Improvement Components: jackrabbit-jcr-tests Reporter: Marcel Reutegger Assignee: Marcel Reutegger Priority: Minor Fix For: 2.6.2, 2.7 In order to make the TCK tests reusable in different test setups, I'd like to add a method to the RepositoryHelperPool to borrow all available helpers. E.g. we'd like to use this method in oak-jcr to replace all helpers with a RepositoryStub based on Oak. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
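[Editorial note: a hypothetical sketch of the "borrow all" idea, as a generic pool that can hand out every available helper in one call so a test setup can swap them wholesale. The class and method names are assumptions for illustration, not the actual RepositoryHelperPool API in jackrabbit-jcr-tests.]

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class HelperPool<T> {
    private final BlockingQueue<T> pool = new LinkedBlockingQueue<>();

    /** Returns a helper to the pool. */
    public void give(T helper) { pool.add(helper); }

    /** Borrows every currently available helper in a single call. */
    public List<T> borrowAll() {
        List<T> all = new ArrayList<>();
        pool.drainTo(all); // atomically empties the queue
        return all;
    }

    public static void main(String[] args) {
        HelperPool<String> p = new HelperPool<>();
        p.give("helper1");
        p.give("helper2");
        System.out.println(p.borrowAll().size());
    }
}
```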
[jira] [Updated] (JCR-3543) TCK does not allow a property to be re-bound to a different definition
[ https://issues.apache.org/jira/browse/JCR-3543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting updated JCR-3543: --- Fix Version/s: 2.6.2 Merged to the 2.6 branch in revision 1484705. TCK does not allow a property to be re-bound to a different definition -- Key: JCR-3543 URL: https://issues.apache.org/jira/browse/JCR-3543 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-jcr-tests Affects Versions: 2.6 Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor Fix For: 2.6.2, 2.7 The JCR spec says the following about Node.setProperty: Some repositories may allow P to be dynamically re-bound to a different property definition (based for example, on the new value being of a different type than the original value) while other repositories may not allow such dynamic re-binding. However, the current TCK requires the implementation to keep the definition of the existing property. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (JCR-3534) Efficient copying of binaries across repositories with the same data store
[ https://issues.apache.org/jira/browse/JCR-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCR-3534. Resolution: Fixed Excellent! I'd consider this resolved then. We can track further improvements (e.g. implementing this also for the DbDataStore or using something like Java's KeyStore for the reference key) in followup issues. Efficient copying of binaries across repositories with the same data store -- Key: JCR-3534 URL: https://issues.apache.org/jira/browse/JCR-3534 Project: Jackrabbit Content Repository Issue Type: New Feature Components: jackrabbit-api, jackrabbit-core Affects Versions: 2.6 Reporter: Felix Meschberger Assignee: Tommaso Teofili Fix For: 2.6.2, 2.7.1 Attachments: JCR-3534.26.patch, JCR-3534.2.patch, JCR-3534.3.patch, JCR-3534.4.patch, JCR-3534.6.patch, JCR-3534.7.patch, JCR-3534.patch, JCR-3534.patch we have a couple of use cases, where we would like to leverage the global data store to prevent sending around and copying around large binary data unnecessarily: We have two separate Jackrabbit instances configured to use the same DataStore (for the sake of this discussion assume we have the problems of concurrent access and garbage collection under control). When sending content from one instance to the other instance we don't want to send potentially large binary data (e.g. video files) if not needed. The idea is for the sender to just send the content identity from JackrabbitValue.getContentIdentity(). 
The receiver would then check whether such content already exists and reuse it if so:

    String ci = contentIdentity_from_sender;
    try {
        Value v = session.getValueByContentIdentity(ci);
        Property p = targetNode.setProperty(propName, v);
    } catch (ItemNotFoundException ie) {
        // unknown or invalid content identity
    } catch (RepositoryException re) {
        // some other exception
    }

Thus the proposed JackrabbitSession.getValueByContentIdentity(String) method would allow round-tripping the JackrabbitValue.getContentIdentity() value, preventing superfluous copying and moving of binary data. See also the dev@ thread http://jackrabbit.markmail.org/thread/gedk5jsrp6offkhi -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (JCR-3534) Efficient copying of binaries across repositories with the same data store
[ https://issues.apache.org/jira/browse/JCR-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661918#comment-13661918 ] Jukka Zitting commented on JCR-3534: I committed a somewhat revised version of the patch in http://svn.apache.org/r1484440. Instead of overloading the normal record storage mechanism (and having to deal with interference from things like the garbage collector), I left it open to each DataStore implementation to decide how, or whether, it wants to store the reference key. Angela: we should not have a plain txt referenceKey/secret stored anywhere What would you propose as an alternative?
[jira] [Commented] (JCR-3534) Efficient copying of binaries across repositories with the same data store
[ https://issues.apache.org/jira/browse/JCR-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661937#comment-13661937 ] Jukka Zitting commented on JCR-3534: In http://svn.apache.org/r148 I changed the getIdentifierFromReference method to getRecordFromReference to make sure that a reference can only be used if the referenced binary actually exists.
[jira] [Commented] (JCR-3534) Efficient copying of binaries across repositories with the same data store
[ https://issues.apache.org/jira/browse/JCR-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661970#comment-13661970 ] Jukka Zitting commented on JCR-3534: override the AbstractDataStore#getOrCreateReferenceKey method in DbDataStore Yes. I left that bit undone to keep things simple for now and to highlight that not all implementations need to implement this feature. I'm also not sure whether option 2 or 3 from above would be the best key mechanism for the DbDataStore.
[jira] [Created] (JCR-3598) Oak in Jackrabbit deployment packages
Jukka Zitting created JCR-3598: -- Summary: Oak in Jackrabbit deployment packages Key: JCR-3598 URL: https://issues.apache.org/jira/browse/JCR-3598 Project: Jackrabbit Content Repository Issue Type: New Feature Components: jackrabbit-jca, jackrabbit-standalone, jackrabbit-webapp Reporter: Jukka Zitting Assignee: Jukka Zitting As a first step in integrating Oak into Jackrabbit trunk, I'd like to get Oak 0.7 included in the Jackrabbit war, rar and runnable jar packages. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (JCR-3595) AbstractJournal logging is too verbose
Jukka Zitting created JCR-3595: -- Summary: AbstractJournal logging is too verbose Key: JCR-3595 URL: https://issues.apache.org/jira/browse/JCR-3595 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.7, 2.6.1 Reporter: Jukka Zitting Priority: Minor The AbstractJournal class often logs a lot of INFO messages when syncing up with the journal. Since the information value of these log entries is pretty low for normal use and they occur pretty often, having them at DEBUG level would be more appropriate. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (JCR-3595) AbstractJournal logging is too verbose
[ https://issues.apache.org/jira/browse/JCR-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jukka Zitting resolved JCR-3595. Resolution: Fixed Fix Version/s: 2.7.1 2.6.2 Assignee: Jukka Zitting Fixed in revision 1483286. Backported to 2.6 in revision 1483291. AbstractJournal logging is too verbose -- Key: JCR-3595 URL: https://issues.apache.org/jira/browse/JCR-3595 Project: Jackrabbit Content Repository Issue Type: Bug Components: jackrabbit-core Affects Versions: 2.6.1, 2.7 Reporter: Jukka Zitting Assignee: Jukka Zitting Priority: Minor Fix For: 2.6.2, 2.7.1 The AbstractJournal class often logs a lot of INFO messages when syncing up with the journal. Since the information value of these log entries is pretty low for normal use and they occur pretty often, having them at DEBUG level would be more appropriate. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
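[Editorial note: an illustrative sketch of the fix's idea, demoting chatty per-sync messages from INFO to a debug-level equivalent. It uses java.util.logging's FINE level so the example is self-contained; Jackrabbit itself logs through SLF4J, and the message text here is made up.]

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class JournalLogDemo {
    private static final Logger log = Logger.getLogger("journal");

    static void syncedTo(long revision) {
        // Before the fix this was an INFO message emitted on every journal
        // sync; at FINE it only appears when debug-level logging is enabled.
        log.log(Level.FINE, "Synced to revision {0}", revision);
    }

    public static void main(String[] args) {
        syncedTo(42);
        // FINE is below the default INFO threshold, so the message is dropped.
        System.out.println(log.isLoggable(Level.FINE));
    }
}
```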
[jira] [Commented] (JCR-3534) Efficient copying of binaries across repositories with the same data store
[ https://issues.apache.org/jira/browse/JCR-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659438#comment-13659438 ] Jukka Zitting commented on JCR-3534: the problem is that that value should not be stored as plain text value in a file that is readable for everyone that can write new File Anyone with access to the local file system can read the entire repository.xml, and thus in any case has full access to all content inside the repository. Putting the reference key in some other location and/or encrypting it in some way doesn't make the system any more secure.
[jira] [Commented] (JCR-3534) Efficient copying of binaries across repositories with the same data store
[ https://issues.apache.org/jira/browse/JCR-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13656108#comment-13656108 ] Jukka Zitting commented on JCR-3534: Chatting with Tommaso we realized that getting the reference into DataIdentifiers is a bit tricky, since some code paths instantiate DataIdentifiers directly from a string without consulting the respective DataStore. Instead of changing those places, a simpler alternative turned out to be moving the getReference() method from DataIdentifier to DataRecord, which we implemented in http://svn.apache.org/r1481964.