Re: Jackrabbit JCR over DAV Extensions - Production ready?
hi ampie, On Thu, Oct 23, 2014 at 7:06 AM, Ampie Barnard amp...@gmail.com wrote: Hi All, I am preparing a proposal for the use of Jackrabbit for a fairly simple, isolated document management requirement we have. We are happy that our functional requirements are 100% satisfied by Jackrabbit, for which I thank you all. that's good to hear, thanks! However, we do have a requirement for significant scaling, clustering and remote access. From the documentation it would seem that the JCR over RMI implementation is not ideal for these requirements, which is why we are instead considering JCR over DAV (with extensions). We would however like to confirm that this module was intended for production use. yes, JCR over DAVEx was intended for production use. it's been in use for many years now. cheers stefan Your input would be much appreciated. Kind Regards
Re: Node.getPath() return value
hi jan On Mon, May 19, 2014 at 11:07 PM, Jan Haderka jan.hade...@magnolia-cms.com wrote: Hi, I just wanted to double check that this is actually correct behaviour (as it doesn't seem that way to me). Consider the following code: session.getRootNode().addNode("foo"); session.save(); Node fooNode = session.getNode("/foo"); assertEquals("/foo", fooNode.getPath()); session.move("/foo", "/bar"); Node barNode = session.getNode("/bar"); assertEquals("/bar", barNode.getPath()); ==> this line actually fails, because barNode.getPath() still returns "/foo" I understand that from a repo point of view, the move didn't happen yet as it was not persisted. But I'm working in a single session and in that session I did move, so my "local" view should be consistent. Or am I wrong? Now aside from the weirdness of the above code, there is also a consistency problem: if I remove the save() call and run the code shown below, it actually passes, so getPath() after a move behaves differently depending on whether the node was persisted BEFORE the move. session.getRootNode().addNode("foo"); Node fooNode = session.getNode("/foo"); assertEquals("/foo", fooNode.getPath()); session.move("/foo", "/bar"); Node barNode = session.getNode("/bar"); assertEquals("/bar", barNode.getPath()); This is tested with Jackrabbit 2.6.4. Thx for info/explanation of the behaviour. this is a bug. it's probably related to JCR-3239 and JCR-3368. BTW: the problem only occurs when modifying the root node. it's most likely caused by a bug in CachingHierarchyManager, which caches the root node state. could you please file an issue? cheers stefan Jan Haderka / Magnolia Czech Republic Magnolia Chobot 1578, 76701 Kroměříž, Česká Republika Tel: +420 571 118 715 www.magnolia-cms.cz Attend Magnolia Worldwide User Conference June 24-26th, 2014
Re: jcr-2.0.jar license
On Sun, Sep 8, 2013 at 9:34 AM, Hylton Peimer hyl...@aternity.com wrote: The Jackrabbit maven build brings along the jcr-2.0.jar file. If I understand correctly this is the specification for JCR. What license is this JAR distributed under? see [1] for the license of the JCR 2.0 specification and the jcr-2.0.jar. cheers stefan [1] http://www.day.com/specs/jcr/2.0/license.html Hylton Peimer, RD Team Leader, Aternity Inc. +972-9-7620514 (Office) +972-54-4872406 (Mobile) hylton.pei...@aternity.com www.aternity.com
Re: Reference to same-name sibling lost when retrieving back node from intermediate storing class
the use of same-name siblings (sns) is discouraged since paths containing sns segments become unstable, see [1] and [2]. cheers stefan [1] http://wiki.apache.org/jackrabbit/DavidsModel#Rule_.234:_Beware_of_Same_Name_Siblings. [2] http://www.day.com/specs/jcr/2.0/22_Same-Name_Siblings.html On Mon, Jul 15, 2013 at 12:15 AM, Ulrich for...@gombers.de wrote: I see a similar effect for the node-iterated result of a query; the delivered sibling depends on my actions during the loop. I'm running this code: NodeIterator nodeIterator = queryResult.getNodes(); while (nodeIterator.hasNext()) { Node selectNode = nodeIterator.nextNode(); String nodename = selectNode.getPath(); LOGGER.debug("Nodename1={}", nodename); DoSomething doSomething = new DoSomething(selectNode); } The DoSomething class doesn't change anything; it itself searches the repository tree for more info. If I run DoSomething fully functioning, the list of logged node names is: Nodename1=../jcr:content/content/contentcontainer2cols[4]/itemsLeft/richtextimage_3 Nodename1=../jcr:content/content/contentcontainer2cols[2]/itemsRight/richtextimage_4 Nodename1=../jcr:content/content/contentcontainer2cols[3]/itemsLeft/richtextimage_1 Nodename1=../jcr:content/content/contentcontainer2cols[2]/itemsRight/richtextimage_0 Nodename1=../jcr:content/content/contentcontainer2cols/itemsLeft/richtextimage Nodename1=../jcr:content/content/contentcontainer2cols[3]/itemsRight/richtextimage_2 If I suppress all actions in DoSomething the result is: Nodename1=../jcr:content/content/contentcontainer2cols[4]/itemsLeft/richtextimage_3 Nodename1=../jcr:content/content/contentcontainer2cols[4]/itemsRight/richtextimage_4 Nodename1=../jcr:content/content/contentcontainer2cols[3]/itemsLeft/richtextimage_1 Nodename1=../jcr:content/content/contentcontainer2cols[2]/itemsRight/richtextimage_0 Nodename1=../jcr:content/content/contentcontainer2cols/itemsLeft/richtextimage Nodename1=../jcr:content/content/contentcontainer2cols[3]/itemsRight/richtextimage_2 In the second name the same-name sibling has changed from contentcontainer2cols[2] to contentcontainer2cols[4]. This is quite annoying; from now on I can't work with the retrieved node directly any more; I have to store the names in a list and then iterate over the list: NodeIterator nodeIterator = queryResult.getNodes(); List<String> nodenames = new ArrayList<String>(); while (nodeIterator.hasNext()) { Node selectNode = nodeIterator.nextNode(); String nodename = selectNode.getPath(); LOGGER.debug("Nodename1={}", nodename); nodenames.add(nodename); } for (String nodename : nodenames) { Node selectNode = session.getNode(nodename); DoSomething doSomething = new DoSomething(selectNode); } brgds, Ulrich Ulrich for...@gombers.de wrote on 11 July 2013 at 15:25: When changing the ComparableNode constructor to: public ComparableNode(Node node) throws Exception { this.node = node; this.session = node.getSession(); LOGGER.info("Nodename2=" + getNode().getPath()); buildEffectiveACL(); } I see these messages: Nodename1=/content/sibling[2]/mynode Nodename3=/content/sibling[2]/mynode Nodename2=/content/sibling[2]/mynode Nodename3=/content/sibling/mynode Nodename4=/content/sibling/mynode Ulrich for...@gombers.de wrote on 11 July 2013 at 15:20: While researching nodes I build for each of these nodes a new object with some methods to be tested.
So I have a class ComparableNode: private Session session; private final Node node; public ComparableNode(Node node) throws Exception { this.node = node; this.session = node.getSession(); LOGGER.info("Nodename2=" + node.getPath()); buildEffectiveACL(); } public Node getNode() throws RepositoryException { LOGGER.info("Nodename3=" + node.getPath()); return this.node; } The class is instantiated by: LOGGER.info("Nodename1=" + node.getPath()); ComparableNode cmpNode = new ComparableNode(node); LOGGER.info("Nodename4=" + cmpNode.getNode().getPath()); The node itself is partially addressed through a same-name sibling; by retrieving the node from my ComparableNode I lose the index of the sibling. So when researching: /content/sibling[2]/mynode I get these messages: Nodename1=/content/sibling[2]/mynode Nodename2=/content/sibling[2]/mynode Nodename3=/content/sibling/mynode Nodename4=/content/sibling/mynode As you can see, the node itself is stored to a final variable by the constructor of ComparableNode. But when retrieved, it represents a different node. Any idea what's going on here? brgds, Ulrich
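One way around the shifting sibling indexes described in this thread is to key on node identifiers rather than on paths. A minimal sketch, assuming the `session` and `queryResult` variables from the examples above (JCR 2.0 API):

```java
// Sketch: instead of remembering same-name-sibling paths (whose [n] indexes
// can shift when siblings are added or removed), remember node identifiers,
// which stay stable for the lifetime of the node (Node.getIdentifier()).
List<String> ids = new ArrayList<String>();
for (NodeIterator it = queryResult.getNodes(); it.hasNext(); ) {
    ids.add(it.nextNode().getIdentifier());
}
for (String id : ids) {
    // index shifts cannot affect an identifier-based lookup
    Node n = session.getNodeByIdentifier(id);
    DoSomething doSomething = new DoSomething(n);
}
```

Note that for non-referenceable nodes the identifier is only stable within limits defined by the implementation, but it is not affected by sibling reordering the way an indexed path is.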
Re: Item does not exist anymore
hi ista, On Wed, Jun 12, 2013 at 1:59 AM, Ista Pouss ista...@gmail.com wrote: Hi, I often see this exception: Caused by: javax.jcr.InvalidItemStateException: Item does not exist anymore: 09cbfaf2-afe6-4b91-975c-e945318ba4f0 at org.apache.jackrabbit.core.ItemImpl.itemSanityCheck(ItemImpl.java:116) at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:90) at org.apache.jackrabbit.core.NodeImpl.getNodes(NodeImpl.java:2186) [couic] I use jackrabbit 2.6.1. From my impressions and googling, it seems that there is some timeout on a node reference: I get a node, and some time later (1 hour), I get some child of this node. If so, how can I refresh a node reference? Are there other explanations? there's no such timeout in jackrabbit. what you're seeing (the Item does not exist anymore exception) happens when calling a method on a Node or Property instance that has been persistently removed in the meantime (probably by another session). cheers stefan Thanks.
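Since the removal typically happens in another session, the usual defence is to catch the exception, refresh, and re-resolve. A hedged sketch (the `path` variable and `process()` helper are hypothetical, not from the original mail):

```java
// Sketch: tolerate a node that another session removed between our reads.
try {
    for (NodeIterator it = node.getNodes(); it.hasNext(); ) {
        process(it.nextNode());               // hypothetical helper
    }
} catch (InvalidItemStateException e) {
    session.refresh(false);                   // discard stale transient state
    if (session.nodeExists(path)) {           // the node may be gone for good
        node = session.getNode(path);         // re-resolve a fresh instance
    }
}
```

Long-lived Node references held across hours are exactly the pattern that runs into this; re-resolving by path or identifier shortly before use avoids it.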
Re: no space left on device-exception by Property.getStream()
On Fri, Apr 12, 2013 at 12:21 AM, Ulrich for...@gombers.de wrote: While retrieving lots of data in a loop from several nt:file nodes I always get a 'no space left on device' exception. The code is: Node filenode = node.getNode("jcr:content"); Property jcrdata = filenode.getProperty("jcr:data"); InputStream is = jcrdata.getBinary().getStream(); It seems that the InputStream is buffered somewhere for the current session and that the total buffer size for a session is limited. Is this true and if so, how can I control this size? Or is there a way to free the space? I could probably close my session and open a new one, but I would need to change the logic of my program. Any hint is very welcome. larger binaries are buffered in temp files on read (smaller ones are buffered in-mem). therefore, reading a lot of binaries concurrently will result in a lot of temp files. those temp files will go away once they're not referenced anymore. you're obviously running out of disk space. the following should help: 1. make sure you close the input stream as early as possible 2. if this is a specific job you're running (such as an export) you could try forcing gc cycles in between 3. increase your disk space cheers stefan Ulrich
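Point 1 above can be sketched like this (a hedged fragment; the `node` variable comes from the poster's loop):

```java
// Sketch: release the temp-file-backed stream as early as possible,
// rather than waiting for the JVM's garbage collector to reclaim it.
Node filenode = node.getNode("jcr:content");
Property jcrdata = filenode.getProperty("jcr:data");
Binary bin = jcrdata.getBinary();
InputStream is = bin.getStream();
try {
    // ... consume the stream ...
} finally {
    is.close();     // close the stream promptly
    bin.dispose();  // JCR 2.0: explicitly free resources held by the Binary
}
```

Binary.dispose() (JCR 2.0) tells the implementation that the binary's resources can be released immediately instead of whenever the garbage collector gets around to it.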
Re: no space left on device-exception by Property.getStream()
On Fri, Apr 12, 2013 at 3:29 PM, Ulrich for...@gombers.de wrote: Retrieving data is completely sequential, no concurrent processing at all. I changed the code to session.logout() and session.connect() after every step; this didn't help. So the code works like this: for (String path : pathList) { Session session = ... Node currentNode = session.getNode(path); Node filenode = currentNode.getNode("jcr:content"); Property jcrdata = filenode.getProperty("jcr:data"); InputStream is = jcrdata.getBinary().getStream(); is.close(); session.logout(); } To be honest, this is not the exact code; the logic is spread over two classes - but it shows the effective data flow. Nevertheless - the problem remains. But when I retry the whole sequence later on, I get the same result - this means the buffer has been cleared in the meantime. It looks as if there is a kind of garbage collector yes, it's your jvm's garbage collector. , running asynchronously, not fast enough to avoid the error but done after a while. yes, that's expected behaviour. the jvm's garbage collection runs async with a low priority (unless you're running out of memory of course). I tried to track the storage space with 'df -vk' but couldn't see a problem here. did you check inodes as well ('df -i /')? as i already mentioned: reading a lot of binaries will create a lot of temp files. those temp files will eventually be deleted once the gc determines that they're not used anymore (see [1]). but this can take some time, due to the async nature of java's gc. an example: assume you have 500mb free disk space. now when you're reading 1k binaries from the repository, 1mb in size each, in a loop, you're likely going to see said exception. and the exception's message, 'no space left on device', is pretty clear: you're (temporarily) running out of disk space. did you try forcing gc cycles during your processing?
[1] http://jackrabbit.apache.org/api/2.0/org/apache/jackrabbit/util/TransientFileFactory.html On Monday (I'm not in the office right now) I will insert a Thread.sleep(2) into the workflow above to verify my theory. Best regards, Ulrich
Re: no space left on device-exception by Property.getStream()
On Fri, Apr 12, 2013 at 4:53 PM, Julian Reschke julian.resc...@gmx.de wrote: On 2013-04-12 16:24, Stefan Guggisberg wrote: ... But why don't we purge the transient file once it's not needed anymore? because determining whether it's not needed anymore is the tricky part.
and that's one thing the jvm's garbage collection is pretty good at. Relying on GC appears to be an anti-pattern to me... this so-called anti-pattern has worked pretty well for the past 8 years... ;) Best regards, Julian
Re: Node.getProperty(String arg0) fails with javax.jcr.PathNotFoundException: jcr:data
On Thursday, March 14, 2013, UMAIL wrote: On 14.03.2013 at 17:50, Stefan Guggisberg stefan.guggisb...@gmail.com wrote: On Thursday, March 14, 2013, Ulrich wrote: When running this code: Property jcrdata; for (PropertyIterator pi = fileNode.getProperties(); pi.hasNext(); ) { Property p = pi.nextProperty(); LOGGER.info("Property is: " + p.getName() + "=" + p.getValue().getString()); } jcrdata = fileNode.getProperty("jcr:data"); the for-loop works fine and displays the property jcr:data and its value. But the last line: Property jcrdata = fileNode.getProperty("jcr:data"); fails with javax.jcr.PathNotFoundException: jcr:data. what is the node type of fileNode? i assume it's nt:file. nt:file doesn't declare a jcr:data property. it declares a jcr:content node which commonly is an nt:resource. nt:resource does declare a jcr:data property. i.e. fileNode.getNode("jcr:content").getProperty("jcr:data") cheers stefan fileNode is the child of an nt:file node and has the node name jcr:content. The complete code would have shown this. But this is not the point here. As you can see in the second sample, the property can be processed for this node, so it is proven that it is there (and I know it for sure - I have defined it). There is something very strange with my repository, I think. The program used to run until today. Ok, I changed something, but not here. I have no clue how my changes may have affected the behaviour this way. if you can provide a self-contained test case including your changes i'll have a look. otherwise there's not much i can do. cheers stefan Property jcrdata; for (PropertyIterator pi = fileNode.getProperties(); pi.hasNext(); ) { Property p = pi.nextProperty(); LOGGER.info("Property is: " + p.getName() + "=" + p.getValue().getString()); if (p.getName().equals("jcr:data")) { jcrdata = p; } } jcrdata = fileNode.getProperty("jcr:data"); This code also fails in the last line - this proves that the property is retrieved from the very same node. Ulrich
Re: Node.getProperty(String arg0) fails with javax.jcr.PathNotFoundException: jcr:data
On Thursday, March 14, 2013, Ulrich wrote: When running this code: Property jcrdata; for (PropertyIterator pi = fileNode.getProperties(); pi.hasNext(); ) { Property p = pi.nextProperty(); LOGGER.info("Property is: " + p.getName() + "=" + p.getValue().getString()); } jcrdata = fileNode.getProperty("jcr:data"); the for-loop works fine and displays the property jcr:data and its value. But the last line: Property jcrdata = fileNode.getProperty("jcr:data"); fails with javax.jcr.PathNotFoundException: jcr:data. what is the node type of fileNode? i assume it's nt:file. nt:file doesn't declare a jcr:data property. it declares a jcr:content node which commonly is an nt:resource. nt:resource does declare a jcr:data property. i.e. fileNode.getNode("jcr:content").getProperty("jcr:data") cheers stefan When I change the code to Property jcrdata; for (PropertyIterator pi = fileNode.getProperties(); pi.hasNext(); ) { Property p = pi.nextProperty(); LOGGER.info("Property is: " + p.getName() + "=" + p.getValue().getString()); if (p.getName().equals("jcr:data")) { jcrdata = p; } } the program works fine. I don't see what might go wrong in the first sample. I would rather prefer the first one. Does anyone have an explanation for this? Ulrich
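The jcr:content indirection stefan describes can be sketched like this (the repository path is a placeholder):

```java
// nt:file does not carry jcr:data itself; it has a jcr:content child
// (commonly of type nt:resource), and jcr:data lives on that child.
Node file = session.getNode("/some/path/to/file");   // placeholder path
Node content = file.getNode("jcr:content");          // the nt:resource child
InputStream is = content.getProperty("jcr:data").getBinary().getStream();
```

If getProperty("jcr:data") throws PathNotFoundException, the node being asked is almost always the nt:file node itself rather than its jcr:content child.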
Re: How to use MySQL persistence manager with Jackrabbit standalone
hi lars, On Fri, Mar 1, 2013 at 4:08 PM, Lars Janssen lars-listm...@ukmix.net wrote: Hi all, I've been successfully using Apache Jackrabbit 2.4.3 and now 2.6.0 (standalone server in both cases) using the default configuration, so the repository is stored on the filesystem. How can I make it connect to a MySQL back end instead? I don't need to worry about migration, just set it up as a fresh install. After trying the steps below, Jackrabbit fails to start correctly or populate the DATASTORE table in the database, and I find this error in the logs: ERROR [main] RepositoryImpl.java:366 failed to start Repository: Cannot instantiate persistence manager org.apache.jackrabbit.core.persistence.pool.MySqlPersistenceManager javax.jcr.RepositoryException: Cannot instantiate persistence manager org.apache.jackrabbit.core.persistence.pool.MySqlPersistenceManager [... oodles of backtrace cut ...] could you please provide the full stack trace, or, even better, the complete error log? cheers stefan What I've done so far: I've created the Jackrabbit database/user, which I can connect to no problem: mysql -D jackrabbit -u jackrabbit -h localhost -pjackrabbit I started with a clean slate (empty /var/jackrabbit directory), except the configuration file comes from here: https://raw.github.com/wiki/jackalope/jackalope/files/repository.xml Here's the startup script I'm using: https://github.com/sixty-nine/Jackrabbit-startup-script And here's the java process that runs: java -XX:MaxPermSize=128m -Xmx512M -Xms128M -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port= -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.password.file=/opt/jackrabbit/startup/jmx.user -Dcom.sun.management.jmxremote.access.file=/opt/jackrabbit/startup/jmx.role -jar /opt/jackrabbit/jackrabbit-standalone-2.6.0.jar -h 127.0.0.1 -p 8080 I don't think I get far enough for it to matter, but I'm using MySQL 5.5.28-1. 
I'm having the above problem with both 2.4.3 and 2.6.0. Also: java version 1.6.0_24 OpenJDK Runtime Environment (IcedTea6 1.11.5) (6b24-1.11.5-1) OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode) Plus I subsequently installed the MySQL JDBC library on Debian: apt-get install libmysql-java Thanks, Lars.
Re: How to use MySQL persistence manager with Jackrabbit standalone
] at org.apache.jackrabbit.core.util.db.ConnectionFactory.createDataSource(ConnectionFactory.java:233) ~[jackrabbit-standalone-2.6.0.jar:na] at org.apache.jackrabbit.core.util.db.ConnectionFactory.getDataSource(ConnectionFactory.java:169) ~[jackrabbit-standalone-2.6.0.jar:na] at org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager.getDataSource(BundleDbPersistenceManager.java:569) ~[jackrabbit-standalone-2.6.0.jar:na] at org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager.init(BundleDbPersistenceManager.java:537) ~[jackrabbit-standalone-2.6.0.jar:na] at org.apache.jackrabbit.core.persistence.pool.MySqlPersistenceManager.init(MySqlPersistenceManager.java:51) ~[jackrabbit-standalone-2.6.0.jar:na] at org.apache.jackrabbit.core.RepositoryImpl.createPersistenceManager(RepositoryImpl.java:1349) ~[jackrabbit-standalone-2.6.0.jar:na] ... 23 common frames omitted Caused by: java.lang.ClassNotFoundException: org.gjt.mm.mysql.Driver at java.net.URLClassLoader$1.run(URLClassLoader.java:217) ~[na:1.6.0_24] at java.security.AccessController.doPrivileged(Native Method) ~[na:1.6.0_24] at java.net.URLClassLoader.findClass(URLClassLoader.java:205) ~[na:1.6.0_24] at java.lang.ClassLoader.loadClass(ClassLoader.java:321) ~[na:1.6.0_24] at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) ~[na:1.6.0_24] at java.lang.ClassLoader.loadClass(ClassLoader.java:266) ~[na:1.6.0_24] at java.lang.Class.forName0(Native Method) ~[na:1.6.0_24] at java.lang.Class.forName(Class.java:186) ~[na:1.6.0_24] at org.apache.jackrabbit.core.util.db.ConnectionFactory.getDriverClass(ConnectionFactory.java:260) ~[jackrabbit-standalone-2.6.0.jar:na] ... 29 common frames omitted 2013-03-01 15:35:27.721 INFO [main] RepositoryAccessServlet.java:98 RepositoryAccessServlet initialized. 
2013-03-01 15:35:27.727 INFO [main] AbstractWebdavServlet.java:169 authenticate-header = Basic realm=Jackrabbit Webdav Server 2013-03-01 15:35:27.727 INFO [main] AbstractWebdavServlet.java:174 csrf-protection = null 2013-03-01 15:35:27.727 INFO [main] AbstractWebdavServlet.java:181 createAbsoluteURI = true 2013-03-01 15:35:27.728 INFO [main] SimpleWebdavServlet.java:144 resource-path-prefix = '/repository' 2013-03-01 15:35:27.838 INFO [main] AbstractWebdavServlet.java:169 authenticate-header = Basic realm=Jackrabbit Webdav Server 2013-03-01 15:35:27.838 INFO [main] AbstractWebdavServlet.java:174 csrf-protection = null 2013-03-01 15:35:27.838 INFO [main] AbstractWebdavServlet.java:181 createAbsoluteURI = true Best regards, Lars.
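The root cause in the stack trace above is ClassNotFoundException: org.gjt.mm.mysql.Driver, i.e. the driver class named in repository.xml is not on the server's classpath (org.gjt.mm.mysql.Driver is the legacy pre-Connector/J class name). A hedged sketch of the relevant repository.xml fragment; the URL and credentials are placeholders, and note that `java -jar ...` ignores the CLASSPATH environment variable, so the Connector/J jar may need to be added to the launch classpath explicitly:

```xml
<!-- Sketch: bundle PersistenceManager entry for repository.xml
     (configured per workspace and for the versioning store). -->
<PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.MySqlPersistenceManager">
  <param name="driver" value="com.mysql.jdbc.Driver"/>
  <param name="url" value="jdbc:mysql://localhost:3306/jackrabbit"/>
  <param name="user" value="jackrabbit"/>
  <param name="password" value="jackrabbit"/>
  <param name="schemaObjectPrefix" value="${wsp.name}_"/>
</PersistenceManager>
```

The driver/url/user/password/schemaObjectPrefix parameter names are the standard ones for Jackrabbit's bundle database persistence managers; check the configuration documentation for the version in use.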
Re: Node property type constraints violated when setting strings
hi jessi, On Tue, Nov 6, 2012 at 9:08 PM, Jessi Abrahams jessi.abrah...@oracle.com wrote: Hi Stefan, It looks like you were right that I was extending a node type with residual property definitions - I oversimplified the example I gave so it was not apparent. I've fixed the problematic inheritance and the repository is validating as expected. Thanks for your help. good to hear, and thanks for the feedback. cheers stefan Jessi On 11/02/2012 10:42 AM, Stefan Guggisberg wrote: On Fri, Nov 2, 2012 at 2:57 PM, Jessi Abrahams jessi.abrah...@oracle.com wrote: Hi Michael, Thanks for your response. Based on that spec I would expect that if I tried to assign a value of "abc" then the property would end up being set to false (based on how Java converts strings to booleans). However what I am seeing is the property being assigned a string value of "abc". that would be a major bug. however, i suppose that your custom node type extends a node type containing residual property definitions (such as e.g. nt:unstructured). you can verify the declaring node type by running something like Property prop = node.setProperty("someBooleanProperty", "abc"); System.out.println(PropertyType.nameFromValue(prop.getType())); PropertyDefinition def = prop.getDefinition(); System.out.println(def.getDeclaringNodeType().getName()); System.out.println(PropertyType.nameFromValue(def.getRequiredType())); cheers stefan Thanks Jessi On 11/02/2012 04:55 AM, Michael Dürig wrote: Jessi, The repository will try to convert the string to a boolean. See 3.6.4 Property Type Conversion in JSR 283. Michael On 1.11.12 21:53, Jessi Abrahams wrote: Hi, I'm new to this list so I apologize if this question has been asked (I tried searching the archives) or if this is not the right place to ask.
I have a custom node type with a definition like this: [foo:bar] > mix:lastModified, mix:created, nt:base - someBooleanProperty (BOOLEAN) When I create a node of this type and use any of the setProperty methods on Node, the repository allows me to set string values (such as "abc") for someBooleanProperty, even though, as far as I understand the type definition, only booleans should be allowed. The repository throws a ConstraintViolationException (as I would expect) if I try to set someBooleanProperty to any other incorrect (non-boolean) type - but not strings. It seems like properties can always be set to a string, whether or not it's allowed by the node type definition. Is this expected? It doesn't seem in line with the spec. Thanks Jessi
Re: session.save() persists changes even if the transaction is rolled back
On Wed, Jul 4, 2012 at 12:33 PM, Subscriber subscri...@zfabrik.de wrote: Hi there, we are using Jackrabbit 2.2.5. The workspace is persisted in a MySql database using a JNDI datasource in conjunction with JTA (we are using MySqlPersistenceManager). The javadoc of javax.jcr.Session#save() says: If the save occurs within a transaction, the changes are dispatched but are not persisted until the transaction is committed. So we assume that changes are not persisted if session.save() is called but the transaction is rolled back afterwards (let's say due to an exception). However we observe that changes are persisted already after a session.save() call. Does jackrabbit call commit() by itself? jackrabbit expects to be in full control of the underlying database connection, see [0]. cheers stefan [0] http://jackrabbit.apache.org/jackrabbit-configuration.html#JackrabbitConfiguration-Persistenceconfiguration Thanks in advance for any help and best regards Udo
Re: Special characters in JCR Node names
On Thu, Jun 28, 2012 at 10:18 AM, Julian Reschke julian.resc...@gmx.de wrote: On 2012-06-28 09:57, Stefan Guggisberg wrote: On Wed, Jun 27, 2012 at 10:37 PM, juser jcha...@maned.com wrote: Does anyone know the list of special characters not allowed in Jackrabbit node names? For example, when we use a node name like {}*|[]:/ all the characters except {} are escaped. {m is not accepted as a node name and it returns a Repository exception like Failed to resolve path {m relative to node /pnode/bnode. But m} is accepted as a node name, and doesn't throw any exceptions. Could you please list the special characters disallowed in a Jackrabbit repository? see [0]. if a name starts with a '{' the name is expected to be in the 'Expanded Form'. see [1]. ... That is misleading. { is only special when the name conforms to the expanded name ABNF, so it MUST be followed by a URI and a }. That's why we could introduce the notation in JCR 2.0 in the first place (all previously legal names stay valid). I believe this is simply a bug in Jackrabbit. yes, you're right. i created an issue [0]. cheers stefan [0] https://issues.apache.org/jira/browse/JCR-3366 Best regards, Julian
Re: Special characters in JCR Node names
On Wed, Jun 27, 2012 at 10:37 PM, juser jcha...@maned.com wrote: Does anyone know the list of special characters not allowed in Jackrabbit node names? For example, when we use a node name like {}*|[]:/ all the characters except {} are escaped. {m is not accepted as a node name and it returns a Repository exception like Failed to resolve path {m relative to node /pnode/bnode. But m} is accepted as a node name, and doesn't throw any exceptions. Could you please list the special characters disallowed in a Jackrabbit repository? see [0]. if a name starts with a '{' the name is expected to be in the 'Expanded Form'. see [1]. cheers stefan [0] http://www.day.com/specs/jcr/2.0/3_Repository_Model.html#3.2%20Names [1] http://www.day.com/specs/jcr/2.0/3_Repository_Model.html#3.2.5%20Lexical%20Form%20of%20JCR%20Names Thanks.
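For illustration, a hedged sketch of the two lexical forms from [1]; the namespace URI and the "ex" prefix are placeholders:

```java
// Expanded form: "{uri}localname" - a leading '{' must be followed by a
// full namespace URI and a closing '}' (JCR 2.0, section 3.2.5).
Node a = root.addNode("{http://example.com/ns}myNode");

// Qualified form, assuming the prefix "ex" is registered for that URI:
Node b = root.addNode("ex:otherNode");
```

A bare name like "{m" matches neither form, which is why Jackrabbit rejects it; per Julian's reading of the ABNF it should instead be treated as an ordinary name.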
Re: Varying the node primary type
On Sun, May 6, 2012 at 9:46 AM, Lukas Kahwe Smith m...@pooteeweet.org wrote: On May 6, 2012, at 09:41 , Francisco Carriedo Scher wrote: Hi there, at a given point after creating an nt:file node i need to change the node type to nt:linkedFile. Sometimes such a node will already have its proper children (a node called jcr:content and a property called jcr:data inside this last one) and sometimes not. the JCR specification does not allow changing the primary node type once a node has been created. wrong, as of JCR 2.0 the primary node type of a node can be set [1]. cheers stefan [1] http://www.day.com/maven/jsr170/javadocs/jcr-2.0/javax/jcr/Node.html#setPrimaryType(java.lang.String) not sure if there is an established pattern for this case, but all you could do is remove the old node and create a new node. regards, Lukas Kahwe Smith m...@pooteeweet.org
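For reference, the call pointed to above looks like this in client code — a minimal sketch, assuming an open session; the helper name, the path and the target type are made up for illustration:

```java
import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class RetypeExample {

    // hypothetical helper: change an existing node's primary type (JCR 2.0+)
    static void retype(Session session, String path, String newType)
            throws RepositoryException {
        Node node = session.getNode(path); // e.g. "/content/doc" (made-up path)
        node.setPrimaryType(newType);      // e.g. "nt:linkedFile"
        session.save();
    }
}
```

Note that whether the change succeeds also depends on whether the node's existing children and properties are allowed by the new type's definition.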
Re: Berkeley DB support for Jackrabbit.
hi guillaume, On Fri, Apr 27, 2012 at 11:50 AM, Guillaume Belrose kafe...@gmail.com wrote: Hi all, I am wondering if Berkeley DB Java edition is a database that is or could be supported by Jackrabbit. All information I found on the subject is quite a bit dated (circa 2007). support for berkeley db java edition should be straightforward to implement. jackrabbit's persistence layer abstraction (the PersistenceManager interface) fits nicely with key-value stores. i suggest you have a look at [1] and [2]. it should be relatively easy to extend a berkeley-based implementation from [1]. cheers stefan [1] http://jackrabbit.apache.org/api/2.4/org/apache/jackrabbit/core/persistence/bundle/AbstractBundlePersistenceManager.html [2] http://jackrabbit.apache.org/api/2.4/org/apache/jackrabbit/core/persistence/mem/InMemBundlePersistenceManager.html Many thanks in advance. Guillaume.
Re: Berkeley DB support for Jackrabbit.
On Fri, Apr 27, 2012 at 1:00 PM, Guillaume Belrose kafe...@gmail.com wrote: Thanks Stefan, The API looks quite simple, so I might have a go. By the way, I did a quick Google search, and BDB was supported in previous versions of Jackrabbit. The last trace I could find dates from version 1.1 See http://svn.apache.org/repos/asf/jackrabbit/tags/1.1/contrib/bdb-persistence/ Did it go away because of licenses issues? yes cheers stefan Cheers, Guillaume. On 27 April 2012 11:09, Stefan Guggisberg stefan.guggisb...@gmail.com wrote: hi guillaume, On Fri, Apr 27, 2012 at 11:50 AM, Guillaume Belrose kafe...@gmail.com wrote: Hi all, I am wondering if Berkeley DB Java edition is a database that is or could be supported by Jackrabbit. All information I found on the subject is quite a bit dated (circa 2007). support for berkeley db java edition should be straightforward to implement. jackrabbit's persistence layer abstraction (the PersistenceManager interface) fits nicely with key-value stores. i suggest you have a look at [1] and [2]. it should be relatively easy to extend a berkeley-based implementation from [1]. cheers stefan [1] http://jackrabbit.apache.org/api/2.4/org/apache/jackrabbit/core/persistence/bundle/AbstractBundlePersistenceManager.html [2] http://jackrabbit.apache.org/api/2.4/org/apache/jackrabbit/core/persistence/mem/InMemBundlePersistenceManager.html Many thanks in advance. Guillaume.
Re: failed to store property state
On Mon, Apr 2, 2012 at 9:16 AM, ramesh_meenava...@satyam.com ramesh_meenava...@satyam.com wrote: I am getting the below issue while I try to save a new document in Jackrabbit. It was running fine until now but suddenly started throwing this exception; from now on I cannot save any new document but am still able to read. i guess there are IOExceptions in your log. just a wild guess: out of disk space? cheers stefan I am using Jackrabbit 1.6 version; in the repository.xml I am using XMLPersistenceManager (org.apache.jackrabbit.core.persistence.xml.XMLPersistenceManager) as PM and LocalFileSystem (org.apache.jackrabbit.core.fs.local.LocalFileSystem). Please help me to solve this issue.

org.apache.jackrabbit.core.state.ItemStateException: failed to store property state: 6d79a56e-2c32-47e3-9c2b-a87788777ae6/{http://www.jcp.org/jcr/1.0}data
at org.apache.jackrabbit.core.persistence.xml.XMLPersistenceManager.store(XMLPersistenceManager.java:722)
at org.apache.jackrabbit.core.persistence.AbstractPersistenceManager.store(AbstractPersistenceManager.java:75)
at org.apache.jackrabbit.core.state.SharedItemStateManager$Update.end(SharedItemStateManager.java:729)
at org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:1115)
at org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:351)
at org.apache.jackrabbit.core.state.XAItemStateManager.update(XAItemStateManager.java:354)

-- View this message in context: http://jackrabbit.510166.n4.nabble.com/failed-to-store-property-state-tp4525301p4525301.html Sent from the Jackrabbit - Users mailing list archive at Nabble.com.
Re: Question about in memory repositories
On Thu, Dec 1, 2011 at 3:28 PM, Alvaro Videla alvaro.vid...@liip.ch wrote: Hi, When I try to use the class org.apache.jackrabbit.core.persistence.mem.InMemPersistenceManager for my test workspace I get the following error in the logs: BeanConfig.java:185 org.apache.jackrabbit.core.persistence.mem.InMemPersistenceManager there's obviously a configuration problem. please provide the complete log entry. What's the recommended way to use jackrabbit for testing purposes? I'm trying to avoid doing file I/O for the tests. Also, in which unit is the initialCapacity parameter expressed in the InMemPersistenceManager class? see [0]: "initial capacity of the hash map used to store the data". BTW: InMemPersistenceManager has been deprecated [1]. as of version 2.3.6 there's a direct replacement (InMemBundlePersistenceManager, see [2]) which should be used instead. cheers stefan [0] http://jackrabbit.apache.org/api/2.2/org/apache/jackrabbit/core/persistence/mem/InMemPersistenceManager.html [1] https://issues.apache.org/jira/browse/JCR-2802 [2] http://svn.apache.org/viewvc/jackrabbit/trunk/jackrabbit-core/src/main/java/org/apache/jackrabbit/core/persistence/mem/InMemBundlePersistenceManager.java?view=markup&pathrev=1214844 Regards, Alvaro -- Liip AG // Feldstrasse 133 // CH-8004 Zürich // GPG0x1D3625C7
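For context, a persistence manager is wired into a workspace via repository.xml; a minimal sketch using the replacement class mentioned above (the workspace name is illustrative, and the other workspace elements — file system, search index — are omitted):

```xml
<Workspace name="test">
  <!-- in-memory bundle persistence, for testing only: data is lost on shutdown -->
  <PersistenceManager class="org.apache.jackrabbit.core.persistence.mem.InMemBundlePersistenceManager"/>
</Workspace>
```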
Re: Concurrent modifications to a single node
On 12/22/2011 03:46 PM, Stefan Guggisberg wrote: On Thu, Dec 22, 2011 at 8:39 PM, Mat Lowery mlow...@pentaho.com wrote: My understanding of concurrent modifications is that I can receive an InvalidItemStateException if I have a stale view of the repository when my session is saved or my transaction is committed. However, I can get other exceptions using the code below. Furthermore, I thought Jackrabbit would allow concurrent child node additions to a single node given the child node names were unique. correct So I wasn't even expecting an InvalidItemStateException for the code below. I understand that locking can prevent the InvalidItemStateException but I'm using transactions, and committing the transaction for the sole purpose of exposing a lock is something I'd like to avoid at this time. I'd like to just catch the InvalidItemStateException and alert the user with a friendly message. Why do I get the given exceptions for the given code? I can supply full stacks if necessary; for now, just the class and error message are shown. strange. i ran your test 20 times in a row on my macbook pro (os-x 10.7, java 6) against the current head (svn r1222443). none of the runs failed. cheers stefan

Jackrabbit 1.6.0:
* org.apache.jackrabbit.core.state.ItemStateException: there's already a property state instance with id 243f6e39-7a3e-4d48-b051-9b4198a6a16b/{http://www.jcp.org/jcr/1.0}primaryType
* javax.jcr.InvalidItemStateException: Item cannot be saved because it has been modified externally: node /
* javax.jcr.InvalidItemStateException: node /: the node cannot be saved because it has been modified externally.

Jackrabbit 1.6.5:
* javax.jcr.InvalidItemStateException: node /: the node cannot be saved because it has been modified externally.
* javax.jcr.InvalidItemStateException: Item cannot be saved because it has been modified externally: node /
* org.apache.jackrabbit.core.state.ItemStateException: Child node entry with id 04afd8db-6515-4997-ac4e-83f066e39f53 has been removed, but is not present in the changelog

Jackrabbit 2.3.4:
* java.util.ConcurrentModificationException
at java.util.WeakHashMap$HashIterator.nextEntry(WeakHashMap.java:762)
at java.util.WeakHashMap$KeyIterator.next(WeakHashMap.java:795)
at org.apache.jackrabbit.core.cache.CacheManager.logCacheStats(CacheManager.java:164)

package test;

import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

import javax.jcr.Repository;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

import org.apache.jackrabbit.core.TransientRepository;

public class JackrabbitTest {

    public static void main(final String[] args) throws Exception {
        File dir = File.createTempFile("jackrabbit-test", "");
        dir.delete();
        dir.mkdir();
        System.out.println("created temporary directory: " + dir.getAbsolutePath());
        dir.deleteOnExit();
        final Repository jcrRepo = new TransientRepository(dir);
        final AtomicBoolean passed = new AtomicBoolean(true);
        final AtomicInteger counter = new AtomicInteger(0);
        ExecutorService executor = Executors.newFixedThreadPool(50);
        Runnable runnable = new Runnable() {
            @Override
            public void run() {
                try {
                    Session session = jcrRepo.login(
                            new SimpleCredentials("admin", "admin".toCharArray()));
                    session.getRootNode().addNode("n" + counter.getAndIncrement()); // unique name
                    session.save();
                    session.logout();
                } catch (RepositoryException e) {
                    e.printStackTrace();
                    passed.set(false);
                }
            }
        };
        System.out.println("Running threads");
        for (int i = 0; i < 500; i++) {
            executor.execute(runnable);
        }
        executor.shutdown(); // Disable new tasks from being submitted
        if (!executor.awaitTermination(120, TimeUnit.SECONDS)) {
            System.err.println("timeout");
            System.exit(1);
        }
        if (!passed.get()) {
            System.err.println("one or more threads got an exception");
            System.exit(1);
        } else {
            System.out.println("all threads ran with no exceptions");
            System.exit(0);
        }
    }
}
Re: Concurrent modifications to a single node
On Fri, Dec 23, 2011 at 10:27 PM, Mat Lowery mlow...@pentaho.com wrote: Thanks for your help. FYI: https://issues.apache.org/jira/browse/JCR-3194 excellent, thanks! cheers stefan
Re: Using Jackrabbit/JCR as IDE workspace data backend
hi marcel, On Sun, Sep 25, 2011 at 3:40 PM, Marcel Bruch marcel.br...@gmail.com wrote: Hi, I'm looking for some advice on whether Jackrabbit might be a good choice for my problem. Any comments on this are greatly appreciated. = Short description of the challenge = We've built an Eclipse-based tool that analyzes java source files and stores its analysis results in additional files. The workspace potentially has hundreds of projects and each project may have up to a few thousand files. Say there will be 200 projects and 1000 java source files per project in a single workspace. Then there will be 200*1000 = 200,000 files. On a full workspace build, all these 200k files have to be compiled (by the IDE) and analyzed (by our tool) at once and the analysis results have to be dumped to disk rather fast. But the most common use case is that a single file is changed several times per minute and thus gets frequently analyzed. At the moment, the analysis results are dumped on disk as plain json files; one json file for each java class. Each json file is around 5 to 100kb in size; some files grow up to several megabytes (10mb); these files have a few hundred complex JSON nodes (which might map perfectly to nodes in JCR). = Question = We would like to replace the simple file system approach with a more sophisticated approach, and I wonder whether Jackrabbit may be a suitable backend for this use case. Since we map all our data to JSON already, it looks like Jackrabbit/JCR is a perfect fit for this but I can't say for sure. What's your suggestion? Is Jackrabbit capable of quickly loading and storing json-like data - even if 200k files (nodes + their sub-nodes) have to be updated in a very short time? absolutely. if the data is reasonably structured/organized jackrabbit should be a perfect fit. i suggest leveraging the java package space hierarchy for organizing the data (i.e. org.apache.jackrabbit.core.TransientRepository - /org/apache/jackrabbit/core/TransientRepository). 
for further data modeling recommendations see [0]. cheers stefan [0] http://wiki.apache.org/jackrabbit/DavidsModel Thanks for your suggestions. If you need more details on what operations are performed or how the data looks, I would be glad to take your questions. Marcel -- Eclipse Code Recommenders: w www.eclipse.org/recommenders tw www.twitter.com/marcelbruch g+ www.gplus.to/marcelbruch
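The suggestion above — mapping the Java package space to a JCR path hierarchy — can be sketched as a one-line conversion (the class name is just the example from the mail):

```java
public class PackageToPath {

    // map a fully qualified class name to a JCR-style path, e.g.
    // org.apache.jackrabbit.core.TransientRepository
    //   -> /org/apache/jackrabbit/core/TransientRepository
    static String toPath(String fqcn) {
        return "/" + fqcn.replace('.', '/');
    }

    public static void main(String[] args) {
        System.out.println(toPath("org.apache.jackrabbit.core.TransientRepository"));
    }
}
```

Each path segment then becomes a node, which keeps the tree shallow and well distributed, in line with the flat-hierarchy advice in David's Model.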
Re: javax.jcr.Session Scope?
On Sun, Sep 11, 2011 at 12:50 PM, Yuval Zilberstein yuva...@gmail.com wrote: Hi, Does someone know what is the scope of a javax.jcr.Session, as received from javax.jcr.Repository.login(creds, repo)? http://www.day.com/specs/jcr/2.0/3_Repository_Model.html#3.1.8%20Sessions cheers stefan Thanks Yuval --
Re: Jackrabbit Metadata + Inheritance
On Thu, Sep 1, 2011 at 9:55 AM, Nikolay Georgiev georgiev.niko...@gmail.com wrote: Hi Stefan, what would be the advantage of metadata saved as a mixin over just a node? Can I search mixins through XPath? yes, see [0]. cheers stefan [0] http://www.day.com/specs/jcr/1.0/6.6.3.2_Type_Constraint.html Thanks, Nikolay On 08/31/2011 10:40 AM, Stefan Guggisberg wrote: hi nikolay On Tue, Aug 30, 2011 at 10:29 AM, Nikolay Georgiev georgiev.niko...@gmail.com wrote: Hello, I would like to implement 2 things and maybe you can give me some advice how to do it: 1) I want to add metadata to every node. Possible solution: add a node called metaData and add the metaData as properties belonging to this node. What do you think? Or are there better ways? you could e.g. define a custom mixin node type and assign it to specific nodes by calling Node.addMixin(). 2) Some specific nodes should inherit other nodes, in a way that the metaData in the super node is also metaData in the subnode. How can I implement this? if the 'super node' is always a parent of the 'derived' node you could write some custom code which traverses the parent hierarchy until it finds the 'meta data'. cheers stefan Thank you, Nikolay
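A sketch of such an XPath query — my:metadata stands in for whatever custom mixin name is actually registered; element(*, type) matches nodes by primary or mixin type:

```java
import javax.jcr.NodeIterator;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

public class MixinQueryExample {

    // hypothetical mixin name "my:metadata"; returns all nodes carrying it
    static NodeIterator findByMixin(Session session) throws RepositoryException {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery("//element(*, my:metadata)", Query.XPATH);
        return q.execute().getNodes();
    }
}
```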
Re: Jackrabbit Metadata + Inheritance
hi nikolay On Tue, Aug 30, 2011 at 10:29 AM, Nikolay Georgiev georgiev.niko...@gmail.com wrote: Hello, I would like to implement 2 things and maybe you can give me some advice how to do it: 1) I want to add metadata to every node. Possible solution: add a node called metaData and add the metaData as properties belonging to this node. What do you think? Or are there better ways? you could e.g. define a custom mixin node type and assign it to specific nodes by calling Node.addMixin(). 2) Some specific nodes should inherit other nodes, in a way that the metaData in the super node is also metaData in the subnode. How can I implement this? if the 'super node' is always a parent of the 'derived' node you could write some custom code which traverses the parent hierarchy until it finds the 'meta data'. cheers stefan Thank you, Nikolay
Re: javax.jcr.PathNotFoundException: hello/world
On Mon, Aug 29, 2011 at 2:24 PM, Nikolay Georgiev georgiev.niko...@gmail.com wrote: Hello, I am trying to run the SecondHop.java from http://jackrabbit.apache.org/first-hops.html but I get the exception:

javax.jcr.PathNotFoundException: hello/world
at org.apache.jackrabbit.core.NodeImpl$8.perform(NodeImpl.java:2120)
at org.apache.jackrabbit.core.NodeImpl$8.perform(NodeImpl.java:2114)
at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:188)
at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91)
at org.apache.jackrabbit.core.NodeImpl.getNode(NodeImpl.java:2114)
at com.db.intranet.poc.pocdlcwithjr.App.main(App.java:33)

Do you know what could be the problem and how I could fix it? there's no way of telling what's wrong without knowing what you're doing. ;) could you please provide your source code (App.java)? cheers stefan Thank you! Nikolay
Re: getPrimaryItem on referenceable nodes.
On Fri, Aug 19, 2011 at 12:31 AM, Furst, Carl carl.fu...@mlb.com wrote: I must confess I haven't really explored the jcr on this issue, however, would it be acceptable for a referenceable node (like a frozen node in jackrabbit) to inherit the primary item of the node it references? i'm afraid i can't follow you here. a referenceable node (a node of type mix:referenceable) is the target of a REFERENCE property, i.e. the node *is* (potentially) referenced. a frozen node (a node of type nt:frozenNode) is AFAIU not referenceable. or do you mean a node that has a reference property? cheers stefan This would imply that getPrimaryItem() on a frozen node would get the primary item of the referenced node, if any, instead of a javax.jcr.ItemNotFoundException. Do I have that right? Would this be a better topic for the dev channel? Carl Furst CMS Developer MLB Advanced Media, LLP. ** MLB.com: Where Baseball is Always On
Re: Session.refresh(true) behaviour
On Thu, Aug 18, 2011 at 1:16 PM, David Buchmann david.buchm...@liip.ch wrote: hi, the documentation [1] [2] is not very explicit about how conflicts should be resolved. i wrote some simple test code, see below. is it correct that whenever i delete something on the server or move it somewhere else, the refresh(true) still changes the modified node to be deleted?

Session s = repository.login(sc, workspace);
Node n = s.getNode("/test");
Node n2 = n.addNode("childname", "nt:folder");
s.save();
// delete the node in a separate session
Session s2 = repository.login(sc, workspace);
s2.removeItem("/test/childname"); // or just s2.move("/test/childname", "/xy");
s2.save();
// add a child to the node in the first session
Node n3 = n2.addNode("deepchild");

jackrabbit (core) fails here with an InvalidItemStateException (Item does not exist anymore). that's IMO legitimate and spec-compliant behavior. cheers stefan

// keep local changes
s.refresh(true);
// but our local change (create a node) is lost
System.out.println(n3.getPath());

cheers, david [1] http://www.day.com/specs/jcr/2.0/10_Writing.html#10.11.1%20Refresh [2] http://www.day.com/maven/javax.jcr/javadocs/jcr-2.0/javax/jcr/Session.html#refresh(boolean) -- Liip AG // Agile Web Development // T +41 26 422 25 11 CH-1700 Fribourg // PGP 0xA581808B // www.liip.ch
Re: upsert, same name siblings etc
On Fri, Aug 19, 2011 at 5:14 PM, Lukas Kahwe Smith m...@pooteeweet.org wrote: Hi, So I am seeing behavior in production where I end up with same name siblings. The chances for that are pretty slim since it's inside an import that checks whether data already exists for the given path before starting the import, which takes just a few ms. there's either a bug in your client code or the node in question has been created in the meantime by another session. What is stranger is that locally I cannot get same name siblings ever. Even if I disable the up-front check all I get is an ItemExistsException. Is it because in production I am using MySQL for persistence and locally I am using the File System? Aka, the File System just doesn't support same name siblings? no. you're most likely using different node types on the parent node. whether a node can have SNS child nodes is governed by its node type (i.e. the node type of the parent node). If so it would be great if such a feature difference would be mentioned on: http://wiki.apache.org/jackrabbit/PersistenceManagerFAQ#LocalFileSystem: At any rate I do not want same name siblings. I guess I can create a custom node type to prevent this from happening. But what I wonder is if I can then also somehow ensure that if I try to add a node to a path that already exists, it simply updates the content (with a new revision) instead — kind of a versioned upsert? that's unfortunately not supported. Or will I always have to implement something like this locally by locking the parent to prevent concurrency? that's one possibility, yes. 
alternatively you could also serialize the node creation code or use something like this:

Node parent = ...;
Node child = null;
if (!parent.hasNode("child")) {
    try {
        child = parent.addNode("child");
        session.save();
    } catch (RepositoryException e) {
        // the node might just have been created by another session,
        // try again
        child = parent.getNode("child");
    }
} else {
    child = parent.getNode("child");
}

cheers stefan regards, Lukas Kahwe Smith m...@pooteeweet.org
Re: Recursive Node Typs
On Tue, Aug 9, 2011 at 4:57 PM, Eder, Johann j...@manz.at wrote: Hi Stefan, now I have set up a Jackrabbit 2.2.7 environment (sooner or later we have to upgrade) and tried to import the cnd file (see last mail below) but I got the following exceptions:

org.apache.jackrabbit.commons.cnd.ParseException: Error setting node type name default:li (cnd input stream, line 9)
at org.apache.jackrabbit.commons.cnd.Lexer.fail(Lexer.java:219)
at org.apache.jackrabbit.commons.cnd.CompactNodeTypeDefReader.doNodeTypeName(CompactNodeTypeDefReader.java:268)
at org.apache.jackrabbit.commons.cnd.CompactNodeTypeDefReader.parse(CompactNodeTypeDefReader.java:211)
at org.apache.jackrabbit.commons.cnd.CompactNodeTypeDefReader.<init>(CompactNodeTypeDefReader.java:163)
at org.apache.jackrabbit.commons.cnd.CompactNodeTypeDefReader.<init>(CompactNodeTypeDefReader.java:139)
at org.apache.jackrabbit.commons.cnd.CndImporter.registerNodeTypes(CndImporter.java:141)
at ...
Caused by: javax.jcr.nodetype.ConstraintViolationException: javax.jcr.NamespaceException: default: is not a registered namespace prefix.
at org.apache.jackrabbit.spi.commons.nodetype.NodeTypeTemplateImpl.setName(NodeTypeTemplateImpl.java:119)
at org.apache.jackrabbit.commons.cnd.TemplateBuilderFactory$NodeTypeTemplateBuilder.setName(TemplateBuilderFactory.java:120)
at org.apache.jackrabbit.commons.cnd.CompactNodeTypeDefReader.doNodeTypeName(CompactNodeTypeDefReader.java:266)
... 25 more

Here is the snippet I used to import the cnd:

public static void importNodeTypes(Session s, String filename) throws Exception {
    log.info(" importing nodetypedefs ...");
    FileReader cnd = new FileReader(filename);
    CndImporter.registerNodeTypes(cnd, "cnd input stream",
            s.getWorkspace().getNodeTypeManager(),
            s.getWorkspace().getNamespaceRegistry(),
            s.getValueFactory(), true);
    log.info(" Done importing nodetypedefs.");
}

I also have tried to register the namespace with

s.getWorkspace().getNamespaceRegistry().registerNamespace("default", "");

the 'empty' namespace is reserved, see [0]. either specify a non-empty namespace uri or remove the 'default' namespace declaration in your cnd. cheers stefan [0] http://www.day.com/specs/jcr/2.0/3_Repository_Model.html#3.5.1.1 Empty Prefix and Empty Namespace but with the same error. There is only one workspace. Any hint is appreciated. Cheers Johann -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Thursday, August 4, 2011 15:13 To: users@jackrabbit.apache.org Subject: Re: Recursive Node Typs On Thu, Aug 4, 2011 at 2:41 PM, Eder, Johann j...@manz.at wrote: Hi Stefan, thanks for the snippet. I guess it works fine with Jackrabbit 2.x. We still use Jackrabbit 1.4.5. Is there a way to import the cnd file successfully? you'd have saved me some time if you had made it clear that you're referring to a 3-year-old release... :( and no, i don't know the answer to your question. cheers stefan Cheers Johann -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Wednesday, August 3, 2011 18:38 To: users@jackrabbit.apache.org Subject: Re: Recursive Node Typs On Tue, Aug 2, 2011 at 10:06 AM, Eder, Johann j...@manz.at wrote: Hi Stefan, Thanks for the hints. I guess I have not made myself clear. I have a method which imports the NodeTypDef via cnd file perfectly. 
But when I try to import and register Nodes like

<mix = 'http://www.jcp.org/jcr/mix/1.0'>
<nt = 'http://www.jcp.org/jcr/nt/1.0'>
<jcr = 'http://www.jcp.org/jcr/1.0'>
<sys = 'http://onlaw.at/sys'>
<default = ''>

[sys:base] > nt:base, mix:referenceable

[default:li] > sys:base orderable mixin
+ default:ul (default:ul) = default:ul multiple
+ * (nt:unstructured) = nt:unstructured multiple

[default:ul] > sys:base orderable
- *
+ default:li (default:li) = default:li multiple

[default:ol] > sys:base orderable
- *
+ default:li (default:li) = default:li multiple

[sys:Text] > sys:base
- *
+ default:ol (default:ol) = default:ol multiple
+ default:ul (default:ul) = default:ul multiple

I get an error:

... INFO NodeTypeImporter:63 - DEBUG [registerCustomNodeTypes:L3] to register...li
ERROR NodeTypeImporter:68 - org.apache.jackrabbit.core.nodetype.InvalidNodeTypeDefException: [{}li#{}ul] invalid default primary type '{}ul' ...

This is clear to me, because ul is unknown. So my question is how to change the cnd file in a way that ul can be a child of li and li can be a child of ul? i was able to successfully register your cnd node type definitions using the following snippet: Reader cnd = new FileReader
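Applying the advice about the reserved empty namespace, the CND above registers once the default prefix is given a non-empty URI — a sketch in which http://onlaw.at/default is an invented placeholder (any non-empty URI works), shown here for just the first two types:

```
<mix = 'http://www.jcp.org/jcr/mix/1.0'>
<nt = 'http://www.jcp.org/jcr/nt/1.0'>
<sys = 'http://onlaw.at/sys'>
<default = 'http://onlaw.at/default'>

[sys:base] > nt:base, mix:referenceable

[default:li] > sys:base orderable mixin
+ default:ul (default:ul) = default:ul multiple
+ * (nt:unstructured) = nt:unstructured multiple
```

Alternatively, the default: prefix can be dropped entirely and the types named in an already-registered namespace.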
Re: Property::setValue specification
On Thu, Aug 11, 2011 at 5:22 PM, David Buchmann david.buchm...@liip.ch wrote: hi, implementing the php port of jcr as closely to the java spec as the language differences permit, i have a question about the javadoc of Property.setValue(Value value). the javadoc [1] says that If the property type is constrained, then a best-effort conversion is attempted. however, the jcr 2.0 specification defines in 3.6.4 an exact list of what can be converted into which types and when to throw the ValueFormatException. the best-effort conversion suggests that the implementation might convert more of the cases or just do something like convert the string "hello" to the integer 1. no, that's not the intention. best-effort in this case means that a conversion is attempted but it's not guaranteed to succeed. however, such behaviour would result in non-portable client code, because what works with one implementation might not work with another. should the setValue method javadoc read ...then a conversion *according to jcr spec paragraph 3.6.4* is attempted. ? i agree that the javadoc could be more specific WRT the supported value conversions. could you please file a jira issue [0]? thanks stefan [0] http://java.net/jira/browse/JSR_333 cheers, david [1] http://www.day.com/maven/javax.jcr/javadocs/jcr-2.0/javax/jcr/Property.html#setValue%28javax.jcr.Value%29 [2] http://www.day.com/specs/jcr/2.0/3_Repository_Model.html#3.6.4%20Property%20Type%20Conversion -- Liip AG // Agile Web Development // T +41 26 422 25 11 CH-1700 Fribourg // PGP 0xA581808B // www.liip.ch
Re: Recursive Node Typs
On Thu, Aug 4, 2011 at 2:41 PM, Eder, Johann j...@manz.at wrote: Hi Stefan, thanks for the snippet. I guess it works fine with Jackrabbit 2.x. We still use Jackrabbit 1.4.5. Is there a way to import the cnd file successfully? you'd have saved me some time if you had made it clear that you're referring to a 3-year-old release... :( and no, i don't know the answer to your question. cheers stefan Cheers Johann -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Wednesday, August 3, 2011 18:38 To: users@jackrabbit.apache.org Subject: Re: Recursive Node Typs On Tue, Aug 2, 2011 at 10:06 AM, Eder, Johann j...@manz.at wrote: Hi Stefan, Thanks for the hints. I guess I have not made myself clear. I have a method which imports the NodeTypDef via cnd file perfectly. But when I try to import and register Nodes like

<mix = 'http://www.jcp.org/jcr/mix/1.0'>
<nt = 'http://www.jcp.org/jcr/nt/1.0'>
<jcr = 'http://www.jcp.org/jcr/1.0'>
<sys = 'http://onlaw.at/sys'>
<default = ''>

[sys:base] > nt:base, mix:referenceable

[default:li] > sys:base orderable mixin
+ default:ul (default:ul) = default:ul multiple
+ * (nt:unstructured) = nt:unstructured multiple

[default:ul] > sys:base orderable
- *
+ default:li (default:li) = default:li multiple

[default:ol] > sys:base orderable
- *
+ default:li (default:li) = default:li multiple

[sys:Text] > sys:base
- *
+ default:ol (default:ol) = default:ol multiple
+ default:ul (default:ul) = default:ul multiple

I get an error:

... INFO NodeTypeImporter:63 - DEBUG [registerCustomNodeTypes:L3] to register...li
ERROR NodeTypeImporter:68 - org.apache.jackrabbit.core.nodetype.InvalidNodeTypeDefException: [{}li#{}ul] invalid default primary type '{}ul' ...

This is clear to me, because ul is unknown. So my question is how to change the cnd file in a way that ul can be a child of li and li can be a child of ul? 
i was able to successfully register your cnd node type definitions using the following snippet:

Reader cnd = new FileReader("/path/to/file.cnd");
CndImporter.registerNodeTypes(cnd, "cnd input stream",
        wsp.getNodeTypeManager(), wsp.getNamespaceRegistry(),
        session.getValueFactory(), true);

cheers stefan Thanks. Cheers Johann -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Monday, August 1, 2011 17:22 To: users@jackrabbit.apache.org Subject: Re: Recursive Node Typs On Mon, Aug 1, 2011 at 4:24 PM, Eder, Johann j...@manz.at wrote: Hi, Is it expected to be possible to create circular node type dependencies? yes (except if the circular dependencies are declared as 'autocreate'...). Example: Node type A has B nodes, and node type B has A nodes. In the NodeTypDef I have ul and li elements. ul can be a child of li and li can be a child of ul. ...

<jcr = 'http://www.jcp.org/jcr/1.0'>
<sys = 'http://onlaw.at/sys'>
<default = ''>

[sys:base] > nt:base, mix:referenceable

[default:li] > sys:base orderable mixin
+ * (nt:unstructured) = nt:unstructured multiple

[default:ul] > sys:base orderable
- *
+ default:li (default:li) = default:li multiple

[default:ol] > sys:base orderable
- *
+ default:li (default:li) = default:li multiple

[sys:Text] > sys:base
- *
+ default:ol (default:ol) = default:ol multiple
+ default:ul (default:ul) = default:ul multiple
+ default:table (default:table) = default:table multiple
+ * (default:p) = default:p multiple

... If yes, how to do it? declare all related/required node types in the same cnd file and register them (see [1]). cheers stefan [1] http://wiki.apache.org/jackrabbit/ExamplesPage#Register_a_Node_Type_.5BCND.5D Thanks for any hints. Johann
Re: non-daemon Timer thread preventing JVM from stopping
On Wed, Aug 3, 2011 at 5:30 AM, Kevin Jansz kevin.ja...@exari.com wrote: We've identified an issue where the JVM is prevented from stopping because of a non-daemon thread started by jackrabbit. I tracked it down (using jstack) to the timer in org.apache.jackrabbit.core.RepositoryContext: /** * Repository-wide timer instance. */ private final Timer timer = new Timer(false); Where the boolean indicates whether the wrapped java.util.Timer instance gets kicked off with the daemon flag. Making the change in org.apache.jackrabbit.core.RepositoryContext: private final Timer timer = new Timer(true); resolves the JVM shutdown issues. I'd recommend this change be made in line with a similar change for https://issues.apache.org/jira/browse/JCR-2752. From what I can tell this timer is only used by the SearchIndex so no need for it to be non-daemon. If more information is needed or I should raise this on the dev list or JIRA let me know. yes, please create a jira issue. this problem seems to be a regression of JCR-2836 and related to JCR-600. i don't know why the repository global timer instance is created as 'non-daemon' but there might be a reason for it. jukka can probably answer this question. On a related note, I'd also like to suggest the code in org.apache.jackrabbit.util.Timer.schedule(Task, long, long) kick off the Timer with a name plus the daemon flag to avoid getting default thread names in the form of Timer-n which make it harder to track. Happy to submit code for this. patches are always welcome :) cheers stefan Regards, Kevin -- Kevin Jansz kevin.ja...@exari.com Level 7, 10-16 Queen Street, Melbourne 3000 Australia Tel +61 3 9621 2773 | Fax +61 3 9621 2776 Exari Systems Boston | London | Melbourne | Munich www.exari.com Test drive our software online - www.exari.com/demo-trial.html Read our blog on document assembly - blog.exari.com
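The effect of the daemon flag can be demonstrated with plain java.util.Timer, independent of Jackrabbit (the class and thread names below are made up for the demo):

```java
import java.util.Timer;
import java.util.TimerTask;

public class DaemonTimerDemo {
    public static void main(String[] args) {
        // A Timer constructed with daemon=true runs its tasks on a daemon
        // thread; daemon threads do not prevent the JVM from exiting.
        Timer daemonTimer = new Timer("demo-timer", true);
        daemonTimer.schedule(new TimerTask() {
            @Override
            public void run() {
                // periodic work would go here
            }
        }, 0, 1000);

        // The Timer constructor starts its worker thread immediately,
        // so we can look it up by name and inspect the daemon flag.
        Thread timerThread = null;
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if ("demo-timer".equals(t.getName())) {
                timerThread = t;
                break;
            }
        }
        System.out.println("timer thread is daemon: "
                + (timerThread != null && timerThread.isDaemon()));
        daemonTimer.cancel();
    }
}
```

With new Timer("demo-timer", false) the same program prints false, and without the cancel() call the scheduled task would keep the JVM alive after main returns, which is exactly the shutdown problem described above.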
Re: Recursive Node Typs
On Tue, Aug 2, 2011 at 10:06 AM, Eder, Johann j...@manz.at wrote: Hi Stefan, Thanks for the hints. I guess I have not made myself clear. I have a method which imports the NodeTypeDef via cnd file perfectly. But when I try to import and register Nodes like -- mix = 'http://www.jcp.org/jcr/mix/1.0' nt = 'http://www.jcp.org/jcr/nt/1.0' jcr = 'http://www.jcp.org/jcr/1.0' sys = 'http://onlaw.at/sys' default ='' [sys:base] nt:base, mix:referenceable [default:li] sys:base orderable mixin + default:ul (default:ul)=default:ul multiple + * (nt:unstructured)=nt:unstructured multiple [default:ul] sys:base orderable - * + default:li (default:li)=default:li multiple [default:ol] sys:base orderable - * + default:li (default:li)=default:li multiple [sys:Text] sys:base - * + default:ol (default:ol)=default:ol multiple + default:ul (default:ul)=default:ul multiple - I get an error: ... INFO NodeTypeImporter:63 - DEBUG [registerCustomNodeTypes:L3] to register...li ERROR NodeTypeImporter:68 - org.apache.jackrabbit.core.nodetype.InvalidNodeTypeDefException: [{}li#{}ul] invalid default primary type '{}ul' ... This is clear to me, because ul is unknown. So my question is how to change the cnd file in a way that ul can be a child of li and li can be a child of ul ? i was able to successfully register your cnd node type definitions using the following snippet: Reader cnd = new FileReader("/path/to/file.cnd"); CndImporter.registerNodeTypes(cnd, "cnd input stream", wsp.getNodeTypeManager(), wsp.getNamespaceRegistry(), session.getValueFactory(), true); cheers stefan Thanks. Cheers Johann -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Monday, August 1, 2011 17:22 To: users@jackrabbit.apache.org Subject: Re: Recursive Node Typs On Mon, Aug 1, 2011 at 4:24 PM, Eder, Johann j...@manz.at wrote: Hi, Is it expected to be possible to create circular node type dependencies? yes (except if the circular dependencies are declared as 'autocreate'...). 
Example: Node type A has B nodes, and node type B has A nodes. In the NodeTypeDef I have ul and li elements. ul can be a child of li and li can be a child of ul. ... jcr = 'http://www.jcp.org/jcr/1.0' sys = 'http://onlaw.at/sys' default ='' [sys:base] nt:base, mix:referenceable [default:li] sys:base orderable mixin + * (nt:unstructured)=nt:unstructured multiple [default:ul] sys:base orderable - * + default:li (default:li)=default:li multiple [default:ol] sys:base orderable - * + default:li (default:li)=default:li multiple [sys:Text] sys:base - * + default:ol (default:ol)=default:ol multiple + default:ul (default:ul)=default:ul multiple + default:table (default:table)=default:table multiple + * (default:p)=default:p multiple ... If yes, how to do it? declare all related/required node types in the same cnd file and register them (see [1]). cheers stefan [1] http://wiki.apache.org/jackrabbit/ExamplesPage#Register_a_Node_Type_.5BCND.5D Thanks for any hints. Johann
Re: Recursive Node Typs
On Mon, Aug 1, 2011 at 4:24 PM, Eder, Johann j...@manz.at wrote: Hi, Is it expected to be possible to create circular node type dependencies? yes (except if the circular dependencies are declared as 'autocreate'...). Example: Node type A has B nodes, and node type B has A nodes. In the NodeTypeDef I have ul and li elements. ul can be a child of li and li can be a child of ul. ... jcr = 'http://www.jcp.org/jcr/1.0' sys = 'http://onlaw.at/sys' default ='' [sys:base] nt:base, mix:referenceable [default:li] sys:base orderable mixin + * (nt:unstructured)=nt:unstructured multiple [default:ul] sys:base orderable - * + default:li (default:li)=default:li multiple [default:ol] sys:base orderable - * + default:li (default:li)=default:li multiple [sys:Text] sys:base - * + default:ol (default:ol)=default:ol multiple + default:ul (default:ul)=default:ul multiple + default:table (default:table)=default:table multiple + * (default:p)=default:p multiple ... If yes, how to do it? declare all related/required node types in the same cnd file and register them (see [1]). cheers stefan [1] http://wiki.apache.org/jackrabbit/ExamplesPage#Register_a_Node_Type_.5BCND.5D Thanks for any hints. Johann
Re: Creating nt:file under nt:folder exception
On Fri, Jul 22, 2011 at 2:39 PM, Francisco Carriedo Scher fcarrie...@gmail.com wrote: Hi there, i am trying to add a child node to a nt:folder node (rep:AuthorizableFolder node actually, but the same problem arises with other node types). rep:AuthorizableFolder is *not* derived from nt:folder... In the lines below the folder node appears in the path as **USUARIO-1311259687502**. I saw your examples and some similar ones, but the following line: **Node fileNode = folderNode.addNode(file.getName(), nt:file);** throws the following exception: **Exception in thread main javax.jcr.nodetype.ConstraintViolationException: No child node definition for lebAudio.mp3 found in node /rep:security/rep:authorizables/rep:users/USUARIO-1311259687502** Despite having read some documentation about node types (and understanding that nt:file is allowed as nt:folder child, and both are built-in types in Jackrabbit, so nothing special should be done) i do not understand what is wrong. Any idea? you're trying to add a nt:file child node to a rep:AuthorizableFolder node. rep:AuthorizableFolder is declared as follows: [rep:AuthorizableFolder] nt:base, mix:referenceable + * (rep:Authorizable) = rep:User protected version + * (rep:AuthorizableFolder) = rep:AuthorizableFolder protected version as you can see it doesn't allow nt:file child nodes. cheers stefan Thanks in advance, have a nice day!
Re: batch read depth
On Tue, Jul 5, 2011 at 1:03 AM, ChadDavis chadmichaelda...@gmail.com wrote: I'm tuning the performance of my spi/davex remoting, and I'm a bit unclear about the batch read depth. What are the units of depth? Is it an item? In other words, if I get a node X, do I need a batch depth of 1 to also retrieve the properties of node X? Or would a depth of 1 read all of X's child nodes? 0: returns specified node incl. properties and child node names (empty objects) 1: returns specified node incl. properties and child nodes (props and grand child names) n: and so forth... -1: returns entire subtree Also, does the depth always start from the node being returned by node.getNode()? Or is it relative to the root node of the repo? yes, it starts from the node being retrieved. you'll find more information here: http://jackrabbit.apache.org/api/2.1/org/apache/jackrabbit/server/remoting/davex/JcrRemotingServlet.html cheers stefan
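The depth rules above can be pictured with a toy tree model. This is plain Java written only to illustrate the semantics; it is not Jackrabbit code, and all names are invented:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of davex batch-read depth: which nodes come back *with*
// their properties for a given depth (children one level past the
// cutoff appear only as name stubs, modeled by stopping recursion).
public class BatchReadDepthDemo {
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
    }

    // Names of nodes returned fully when reading 'node' at the given
    // depth: 0 = just this node, n = n levels of children, -1 = subtree.
    static List<String> readFully(Node node, int depth) {
        List<String> result = new ArrayList<>();
        result.add(node.name);
        if (depth != 0) {
            for (Node child : node.children) {
                result.addAll(readFully(child, depth < 0 ? -1 : depth - 1));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Node root = new Node("x");
        Node a = new Node("a");
        Node b = new Node("b");
        Node a1 = new Node("a1");
        root.children.add(a);
        root.children.add(b);
        a.children.add(a1);

        System.out.println(readFully(root, 0));   // node x only
        System.out.println(readFully(root, 1));   // x plus its children
        System.out.println(readFully(root, -1));  // entire subtree
    }
}
```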
Re: RepositoryException
hi sumit, On Sat, Jun 18, 2011 at 11:02 PM, Shah, Sumit (CGI Federal) sumit.s...@cgifederal.com wrote: All, I am seeing this error on my Jackrabbit 2.2.5 environment: 13:12:26.171 [RMI TCP Connection(5988)-206.137.126.43] WARN o.a.j.core.ItemSaveOperation - /momroot/BillingStatement/1786F0004645/jcr:lockToken: failed to restore transient state javax.jcr.RepositoryException: org.apache.jackrabbit.core.state.ItemStateException: there's already a property state instance with id 8afcfef0-593b-40c1-aec8-eba53edf3e2c/{http://www.jcp.org/jcr/1.0}lockToken at org.apache.jackrabbit.core.PropertyImpl.restoreTransient(PropertyImpl.java:195) ~[jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.ItemSaveOperation.restoreTransientItems(ItemSaveOperation.java:878) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.ItemSaveOperation.perform(ItemSaveOperation.java:276) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:188) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:329) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.session.SessionSaveOperation.perform(SessionSaveOperation.java:42) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:188) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.SessionImpl.perform(SessionImpl.java:355) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:758) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.rmi.server.ServerSession.save(ServerSession.java:263) [jackrabbit-jcr-rmi-2.2.5.jar:na] Can someone please explain what this means, the cause and how it can be resolved? first of all it's logged as a warning, so nothing to be seriously worried about. 
when a save operation fails the transient session state will be restored as it was before the #save() call. in this particular case there's been a problem restoring the transient session state. however, it would be interesting to see why the #save() call failed in the first place. cheers stefan Thanks Sumit
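A common client-side pattern when save() fails is to discard the (restored) transient state and then investigate the cause. A sketch against the JCR API, with error handling simplified; the `session` and `log` variables are assumed to exist in the caller's context:

```java
try {
    session.save();
} catch (RepositoryException e) {
    // the transient changes were restored into the session; discard
    // them so the session reverts to the persisted state
    session.refresh(false);
    log.error("save failed", e);
}
```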
Re: RepositoryException
On Mon, Jun 20, 2011 at 4:58 PM, Shah, Sumit (CGI Federal) sumit.s...@cgifederal.com wrote: Hi Stefan, As Jukka suggested, we are going to try upgrading to the latest IBM JDK. It seems like we are at SR6. i can't follow you here. how is the problem you're describing in this thread related to a JVM version? Thanks Sumit -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Monday, June 20, 2011 7:56 AM To: users@jackrabbit.apache.org Subject: Re: RepositoryException hi sumit, On Sat, Jun 18, 2011 at 11:02 PM, Shah, Sumit (CGI Federal) sumit.s...@cgifederal.com wrote: All, I am seeing this error on my Jackrabbit 2.2.5 environment: 13:12:26.171 [RMI TCP Connection(5988)-206.137.126.43] WARN o.a.j.core.ItemSaveOperation - /momroot/BillingStatement/1786F0004645/jcr:lockToken: failed to restore transient state javax.jcr.RepositoryException: org.apache.jackrabbit.core.state.ItemStateException: there's already a property state instance with id 8afcfef0-593b-40c1-aec8-eba53edf3e2c/{http://www.jcp.org/jcr/1.0}lockToken at org.apache.jackrabbit.core.PropertyImpl.restoreTransient(PropertyImpl.java:195) ~[jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.ItemSaveOperation.restoreTransientItems(ItemSaveOperation.java:878) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.ItemSaveOperation.perform(ItemSaveOperation.java:276) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:188) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:329) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.session.SessionSaveOperation.perform(SessionSaveOperation.java:42) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:188) [jackrabbit-core-2.2.5.jar:2.2.5] at 
org.apache.jackrabbit.core.SessionImpl.perform(SessionImpl.java:355) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:758) [jackrabbit-core-2.2.5.jar:2.2.5] at org.apache.jackrabbit.rmi.server.ServerSession.save(ServerSession.java:263) [jackrabbit-jcr-rmi-2.2.5.jar:na] Can someone please explain what this means, the cause and how it can be resolved? first of all it's logged as a warning, so nothing to be seriously worried about. when a save operation fails the transient session state will be restored as it was before the #save() call. in this particular case there's been a problem restoring the transient session state. however, it would be interesting to see why the #save() call failed in the first place. cheers stefan Thanks Sumit
Re: Session.importXML and jcr:system
On Fri, May 20, 2011 at 9:38 AM, David Buchmann david.buchm...@liip.ch wrote: hi, i try to import a dump of a repository for functional testing. the tests about versioning need specific older revisions of the node and it seems the version tree is in /jcr:system i dump the data with Session.exportSystemView and then try to reimport with session.importXml(/, stream, ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING); but on session.save i get the exception javax.jcr.nodetype.ConstraintViolationException: /jcr:root/jcr:system: mandatory child node {http://www.jcp.org/jcr/1.0}versionStorage does not exist when i look in my xml at the sv:name, i see the nodes jcr:root jcr:system jcr:versionStorage what am i doing wrong? do i need to specify a different path to have the jcr:system overwritten? you can't import jcr:system content using the JCR api. the /jcr:system subtree can be compared with the system tables in a rdbms. it contains in-content representations of repository meta-data etc. cheers stefan i attached the zipped repository dump i try to export. cheers, david -- Liip AG // Agile Web Development // T +41 26 422 25 11 CH-1700 Fribourg // PGP 0xA581808B // www.liip.ch
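For test fixtures this usually means exporting application subtrees individually rather than dumping from the root. A sketch against the JCR API; the paths and the `out`/`in` streams are examples only:

```java
// export only the application content, not the root (and thus
// not /jcr:system)
session.exportSystemView("/content", out, false, false);

// later, import into the target repository below its root
targetSession.importXML("/", in,
        ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING);
targetSession.save();
```

Note that version histories live under /jcr:system and are still not carried over this way; tests that need specific old versions have to recreate them through the versioning API.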
Re: Lock stealing
On Fri, May 13, 2011 at 8:43 AM, Kamil Nezval kamil.nez...@xitee.com wrote: Hi, I'm trying to implement stealing of a node's lock - one user will be able to unlock the nodes locked by other users. i prefer the term transferring lock ownership... According to the JCR 2.0 specification it should be possible to assign a lock to a current session using the LockManager.addLockToken() method: "If the implementation does not support simultaneous lock ownership this method will transfer ownership of the lock corresponding to the specified lockToken to the current session, otherwise the current session will become an additional owner of that lock." So I've tried something like this: String nodePath = node.getPath(); LockManager lockManager = jcrSession.getWorkspace().getLockManager(); Lock nodeLock = lockManager.getLock(nodePath); String lockToken = nodeLock.getLockToken(); lockManager.addLockToken(lockToken); lockManager.unlock(nodePath); lockManager.lock(nodePath, false, false, 1000, jcrSession.getUserID()); But it doesn't work ("Cannot add lock token: lock already held by other session." exception). I've looked into the source code and it looks like the implementation doesn't follow the specification at all. the implementation is spec-compliant. the javadoc [1] clearly states that a LockException is thrown if the specified lock token is already held by another Session and the implementation does not support simultaneous ownership of open-scoped locks. before adding the token to the new session you have to remove the token from the other session. 
cheers stefan [1] http://www.day.com/maven/jsr170/javadocs/jcr-2.0/javax/jcr/lock/LockManager.html#addLockToken(java.lang.String) see the code below (LockManagerImpl.java): public void addLockToken(SessionImpl session, String lt) throws LockException, RepositoryException { try { NodeId id = LockInfo.parseLockToken(lt); NodeImpl node = (NodeImpl) sysSession.getItemManager().getItem(id); Path path = node.getPrimaryPath(); PathMap.Element<LockInfo> element = lockMap.map(path, true); if (element != null) { LockInfo info = element.get(); if (info != null) { if (info.isLockHolder(session)) { // nothing to do } else if (info.getLockHolder() == null) { info.setLockHolder(session); if (info instanceof InternalLockInfo) { session.addListener((InternalLockInfo) info); } } else { String msg = "Cannot add lock token: lock already held by other session."; log.warn(msg); info.throwLockException(msg, session); } } } // inform SessionLockManager getSessionLockManager(session).lockTokenAdded(lt); } catch (IllegalArgumentException e) { String msg = "Bad lock token: " + e.getMessage(); log.warn(msg); throw new LockException(msg); } } And it is also not possible to get a lock token if the current user is not the lock holder (LockImpl.java): public String getLockToken() { if (!info.isSessionScoped() && info.isLockHolder(node.getSession())) { return info.getLockToken(); } else { return null; } } So my question is whether it is somehow possible to implement a lock-stealing as described above. Thanks in advance. Regards Kamil
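In code, the transfer Stefan describes looks roughly like this with the JCR 2.0 LockManager. This is a sketch; it assumes you still have access to the session that currently holds the lock (since getLockToken() returns null for any other session):

```java
// in the session that currently owns the lock
String token = owningSession.getWorkspace().getLockManager()
        .getLock(nodePath).getLockToken();
owningSession.getWorkspace().getLockManager().removeLockToken(token);

// in the session that should take over the lock
otherSession.getWorkspace().getLockManager().addLockToken(token);
// the new owner may now unlock the node or keep the lock
```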
Re: Lock stealing
2011/5/13 Fabián Mandelbaum fmandelb...@gmail.com: Hello Stefan, how will your proposal work with long-lived (a.k.a. not session-scoped) tokens? A common scenario in web applications is that the client issuing the lock request is not always able to save (cache, keep, whatever one wants to call it) the lock tokens, and thus would not be able to pass the token around like you suggest (it's not uncommon that a user closes the web browser tab or window that had the client application running on, and away go the lock tokens that client could have saved...). It's common for the web application backend to maintain a pool of JCR sessions, thus you cannot guarantee that you'll have access to the session that locked the item (thus the lock token is 'lost' somehow...) Hope to have been clear :-) absolutely ;) if you need to be able to 'break' somebody else's lock you could e.g. override the following method: o.a.jackrabbit.core.lock.checkUnlock(LockInfo info, Session session) that way you can implement some sort of lock administrator session which is able to unlock any open-scoped lock. note that you'd have to subclass RepositoryImpl as well. cheers stefan On Fri, May 13, 2011 at 9:24 AM, Stefan Guggisberg stefan.guggisb...@gmail.com wrote: On Fri, May 13, 2011 at 8:43 AM, Kamil Nezval kamil.nez...@xitee.com wrote: Hi, I'm trying to implement stealing of a node's lock - one user will be able to unlock the nodes locked by other users. i prefer the term transferring lock ownership... According to the JCR 2.0 specification it should be possible to assign a lock to a current session using the LockManager.addLockToken() method: "If the implementation does not support simultaneous lock ownership this method will transfer ownership of the lock corresponding to the specified lockToken to the current session, otherwise the current session will become an additional owner of that lock." 
So I've tried something like this: String nodePath = node.getPath(); LockManager lockManager = jcrSession.getWorkspace().getLockManager(); Lock nodeLock = lockManager.getLock(nodePath); String lockToken = nodeLock.getLockToken(); lockManager.addLockToken(lockToken); lockManager.unlock(nodePath); lockManager.lock(nodePath, false, false, 1000, jcrSession.getUserID()); But it doesn't work ("Cannot add lock token: lock already held by other session." exception). I've looked into the source code and it looks like the implementation doesn't follow the specification at all. the implementation is spec-compliant. the javadoc [1] clearly states that a LockException is thrown if the specified lock token is already held by another Session and the implementation does not support simultaneous ownership of open-scoped locks. before adding the token to the new session you have to remove the token from the other session. cheers stefan [1] http://www.day.com/maven/jsr170/javadocs/jcr-2.0/javax/jcr/lock/LockManager.html#addLockToken(java.lang.String) see the code below (LockManagerImpl.java): public void addLockToken(SessionImpl session, String lt) throws LockException, RepositoryException { try { NodeId id = LockInfo.parseLockToken(lt); NodeImpl node = (NodeImpl) sysSession.getItemManager().getItem(id); Path path = node.getPrimaryPath(); PathMap.Element<LockInfo> element = lockMap.map(path, true); if (element != null) { LockInfo info = element.get(); if (info != null) { if (info.isLockHolder(session)) { // nothing to do } else if (info.getLockHolder() == null) { info.setLockHolder(session); if (info instanceof InternalLockInfo) { session.addListener((InternalLockInfo) info); } } else { String msg = "Cannot add lock token: lock already held by other session."; log.warn(msg); info.throwLockException(msg, session); } } } // inform SessionLockManager getSessionLockManager(session).lockTokenAdded(lt); } catch (IllegalArgumentException e) { String msg = "Bad lock token: " + e.getMessage(); 
log.warn(msg); throw new LockException(msg); } } And it is also not possible to get a lock token if the current user is not the lock holder (LockImpl.java): public String getLockToken() { if (!info.isSessionScoped() && info.isLockHolder(node.getSession())) { return info.getLockToken(); } else { return null; } } So my question is whether it is somehow possible to implement a lock-stealing as described above. Thanks in advance. Regards Kamil
Re: External datasource injection
On Mon, May 9, 2011 at 10:20 AM, Bruno Dusausoy bruno.dusau...@yp5.be wrote: Hi, Is it possible to use an external datasource (like an XA one) with Jackrabbit ? I'm still trying to get the JTA transaction working with Spring. I've set up correctly - at least I think I've done it correctly - a Bitronix transaction manager but I'm confused about how I can tell the repository to use it for PM and DS configuration. If everything is declared in the repository/workspace XML file, how can I get Jackrabbit to use the external datasource I've set up ? jackrabbit doesn't support externally managed datasources. jackrabbit expects to be in full control of the underlying database connection. see also: http://markmail.org/message/4y5flozqpceq7eoh cheers stefan Thanks. Regards. -- Bruno Dusausoy YP5 Software -- Think of the environment: limit printing of this e-mail. Please don't print this e-mail unless you really need to.
Re: Jackrabbit Reference Lookup
On Thu, May 5, 2011 at 7:02 PM, kazim_ss...@yahoo.com kazim_ss...@yahoo.com wrote: Stefan Guggisberg wrote: yes. references are stored on the target. for every mix:referenceable node currently being referenced by 1-n reference properties there's one record in the REFS table. that record stores the collection of property id's referring to the corresponding mix:referenceable node. So if I use weak references, would there be a separate record for the reference? in REFS or any other table? no. I understand that referential integrity will not be enforced in that case. correct. Stefan Guggisberg wrote: both forward and reverse lookups are highly efficient. Can you explain how forward lookup is done when there is no pointer to the referenced node in the referencing node data? Only way I can think of is to traverse all rows in the REFS table along with their blob until you find the referencing node id in the blob. i can't follow you here. forward lookup = referenceProperty.getNode() the internal value of a reference property *is* the unique identifier of the target node, i.e. the target node is *directly* accessed using its unique identifier. Stefan Guggisberg wrote: the drawback of this approach is limited scalability. 10k and more references to any particular node slow down write performance, i.e. adding an additional reference to a target node already being referenced by let's say 100k properties is relatively slow since the entire collection of referrer id's needs to be stored. I suppose this will not be an issue with weak references, assuming there is no record of reference in REFS or any other table. Am I right? yes. cheers stefan Thanks, KS -- View this message in context: http://jackrabbit.510166.n4.nabble.com/Jackrabbit-Reference-Lookup-tp3494206p3498886.html Sent from the Jackrabbit - Users mailing list archive at Nabble.com.
Re: Jackrabbit Reference Lookup
On Wed, May 4, 2011 at 3:55 AM, kazim_ss...@yahoo.com kazim_ss...@yahoo.com wrote: Hi, I am using a reference type property in one of the nodes, and noticed that the referenced node id along with all its references (in a blob) is stored in the ${schemaObjectPrefix}REFS table (I am using the bundle persistence manager for oracle). I don't see a pointer to the referenced node in the blob of the referencing node. Does that mean that getting the referenced node from the referencing node (using the node.getProperty(...) method) uses a reverse lookup? If that is the case wouldn't it perform extremely poorly when there are millions of rows in the ${schemaObjectPrefix}REFS table, each referenced by 100s of nodes? Since in a reverse lookup jackrabbit will have to traverse all rows in the ${schemaObjectPrefix}REFS table to find the referencing node id in each row's blob. Am I missing something? yes. references are stored on the target. for every mix:referenceable node currently being referenced by 1-n reference properties there's one record in the REFS table. that record stores the collection of property id's referring to the corresponding mix:referenceable node. this approach was chosen in order to efficiently support referential integrity checking, both forward and reverse lookups are highly efficient. the drawback of this approach is limited scalability. 10k and more references to any particular node slow down write performance, i.e. adding an additional reference to a target node already being referenced by let's say 100k properties is relatively slow since the entire collection of referrer id's needs to be stored. cheers stefan Please elaborate. Thanks, KS. -- View this message in context: http://jackrabbit.510166.n4.nabble.com/Jackrabbit-Reference-Lookup-tp3494206p3494206.html Sent from the Jackrabbit - Users mailing list archive at Nabble.com.
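In JCR API terms, the two lookup directions Stefan describes are (a sketch; `referenceProperty` is assumed to be a REFERENCE property obtained elsewhere):

```java
// forward lookup: the reference property's value *is* the target node's
// identifier, so this is a direct id-based access, no table scan
Node target = referenceProperty.getNode();

// reverse lookup: all REFERENCE properties pointing at the target,
// answered from the per-target record in the REFS table
PropertyIterator referrers = target.getReferences();
```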
Re: schemaCheckEnabled in DbFileSystem
On Tue, May 3, 2011 at 1:43 PM, Bruno Dusausoy bruno.dusau...@yp5.be wrote: Hi, I have a NullPointerException when I try to start Jackrabbit with a DbFileSystem using MySql as RDBMS : 03-mai-2011 13:36:04 org.apache.jackrabbit.core.fs.db.DatabaseFileSystem init SEVERE: failed to initialize file system java.lang.NullPointerException at java.io.Reader.<init>(Reader.java:61) at java.io.InputStreamReader.<init>(InputStreamReader.java:55) at org.apache.jackrabbit.core.util.db.CheckSchemaOperation.run(CheckSchemaOperation.java:80) at org.apache.jackrabbit.core.fs.db.DatabaseFileSystem.init(DatabaseFileSystem.java:197) at org.apache.jackrabbit.core.config.RepositoryConfigurationParser$6.getFileSystem(RepositoryConfigurationParser.java:1057) at org.apache.jackrabbit.core.config.RepositoryConfig.getFileSystem(RepositoryConfig.java:911) at org.apache.jackrabbit.core.RepositoryImpl.init(RepositoryImpl.java:285) at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:605) When searching a bit, indeed, the ddl field of the CheckSchemaOperation class is null. that's most likely caused by misconfiguration. make sure you're specifying mysql as schema, i.e. <FileSystem class="org.apache.jackrabbit.core.fs.db.DbFileSystem"> [...] <param name="schema" value="mysql"/> [...] </FileSystem> cheers stefan When searching further I've noticed that // check if schema objects exist and create them if necessary if (isSchemaCheckEnabled()) { createCheckSchemaOperation().run(); } in the init() method of the DatabaseFileSystem class. schemaCheckEnabled is true by default. I've tried to turn it off but failed. So, two questions in one here : should I leave it turned on ? If yes, how can I get rid of this npe ? If no, how can I turn it off properly ? Thanks. Regards. -- Bruno Dusausoy YP5 Software -- Think of the environment: limit printing of this e-mail. Please don't print this e-mail unless you really need to.
Re: Jackrabbit doesn`t startup anymore
On Tue, May 3, 2011 at 9:16 AM, sascha.the...@bosch-si.com wrote: Hi all, I have now a detailed log output of the raised NumberFormatException: 2011-05-03 08:55:25,329 [main] ERROR BundleDbPersistenceManager - failed to read bundle: cafebabe-cafe-babe-cafe-babecafebabe: java.lang.NumberFormatException: For input string: java.lang.NumberFormatException: For input string: at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48) at java.lang.Integer.parseInt(Integer.java:468) at java.lang.Integer.parseInt(Integer.java:497) at org.apache.jackrabbit.core.nodetype.NodeDefId.valueOf(NodeDefId.java:106) at org.apache.jackrabbit.core.persistence.bundle.util.BundleBinding.readBundle(BundleBinding.java:105) here's the problem: // definitionId bundle.setNodeDefId(NodeDefId.valueOf(in.readUTF())); jackrabbit 1.5.6 assumes that the node definition id is part of the serialized node bundle data. as of jackrabbit 1.6/2.0, definition id's are not persisted anymore, see [0] for details. it seems like you're using jackrabbit 1.5.6 to access data stored with jackrabbit 1.6.* or 2.*. 
cheers stefan [0] https://issues.apache.org/jira/browse/JCR-2170 at org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager.loadBundle(BundleDbPersistenceManager.java:1161) at org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager.loadBundle(BundleDbPersistenceManager.java:1094) at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.getBundle(AbstractBundlePersistenceManager.java:701) at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.exists(AbstractBundlePersistenceManager.java:506) at org.apache.jackrabbit.core.state.SharedItemStateManager.hasNonVirtualItemState(SharedItemStateManager.java:1343) at org.apache.jackrabbit.core.state.SharedItemStateManager.init(SharedItemStateManager.java:203) at org.apache.jackrabbit.core.RepositoryImpl.createItemStateManager(RepositoryImpl.java:1317) at org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.doInitialize(RepositoryImpl.java:1863) at org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.initialize(RepositoryImpl.java:1834) at org.apache.jackrabbit.core.RepositoryImpl.initStartupWorkspaces(RepositoryImpl.java:483) at org.apache.jackrabbit.core.RepositoryImpl.init(RepositoryImpl.java:324) at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:621) at [SKIPPED] Please note that the line numbers in BundleDbPersistenceManager have changed with regard to the 1.5.6 version of Jackrabbit because I have patched the class. But I think that doesn't matter. Any ideas what causes the problem now? Or how to fix the problem? Thanks in advance, Sascha -Original Message- From: sascha.the...@bosch-si.com [mailto:sascha.the...@bosch-si.com] Sent: Monday, May 2, 2011 13:24 To: users@jackrabbit.apache.org Subject: Re: Jackrabbit doesn`t startup anymore Hi, did you copy the complete repository home directory, i.e. including the ns_*.properties files? Yes I copied the complete repository home directory. 
But I also tried to remove the complete repository home directory to force a rebuild of the index files without success. Still same error. Very strange. Debugging is currently not possible because the machine is hosted by our customer. But I will try to patch Jackrabbit`s BundleDbPersistenceManager to get a complete stack trace. If I have any news I will post it here. But anyway, thank you very much for your help so far. Cheers, Sascha -Ursprüngliche Nachricht- Von: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Gesendet: Montag, 2. Mai 2011 11:19 An: users@jackrabbit.apache.org Betreff: Re: Jackrabbit doesn`t startup anymore On Mon, May 2, 2011 at 9:18 AM, sascha.the...@bosch-si.com wrote: Hi, i would need a full stacktrace of the following error: 2011-04-29 07:41:14,278 [main] ERROR BundleDbPersistenceManager - failed to read bundle: cafebabe-cafe-babe-cafe-babecafebabe: java.lang.NumberFormatException: For input string: Attached you will find a debug output log of Jackrabbit. Unfortunately, I do not see a full stack trace of the NumberFormatException. I think the exception must be caught somewhere in Jackrabbit? Any ideas how to get the full stack? either debug jackrabbit and set a breakpoint on the line in BundleDbPersistenceManager.java which logs the error, or change that line to print the full stacktrace and rebuild jackrabbit from the sources. all i can say is that for some reason the deserialization of the root node data fails. what exact steps did you perform to setup the test on the other machine? I copied the whole application, including the workspace
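The version mismatch Stefan describes can be demonstrated with a minimal, self-contained sketch (the field layout here is purely illustrative, not the real bundle format): a 1.6-style writer no longer stores the definition id, so whatever UTF string comes next in the stream ends up where a 1.5.6-style reader expects an integer, producing exactly the `NumberFormatException: For input string:` with an empty value seen in the log.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class BundleFormatMismatch {
    public static void main(String[] args) throws IOException {
        // 1.6-style writer: the definition id is no longer persisted (JCR-2170);
        // the next serialized field (here an empty UTF string, an assumption
        // for illustration) now sits where 1.5.6 expects the definition id.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeUTF("");

        // 1.5.6-style reader: mirrors bundle.setNodeDefId(NodeDefId.valueOf(in.readUTF())),
        // which ultimately calls Integer.parseInt on whatever string it finds.
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        try {
            Integer.parseInt(in.readUTF());
        } catch (NumberFormatException e) {
            // prints: caught java.lang.NumberFormatException: For input string: ""
            System.out.println("caught " + e);
        }
    }
}
```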
Re: Jackrabbit doesn`t startup anymore
On Wed, May 4, 2011 at 3:41 PM, sascha.the...@bosch-si.com wrote: Hi Stefan, yes you are absolutely right. We switched to a new version of our product and used the old database. In our new product version we include Jackrabbit 1.6.2 and in the old product version we include 1.5.6. So Jackrabbit has already migrated the data. So this is not really a problem of Jackrabbit itself. if you'd provided that information early on you would have saved me a lot of time... :( But I have 2 questions regarding the automated migration of Jackrabbit: 1) Is it possible to disable the automated migration in Jackrabbit? So the migration of the data is always an explicit and manual step? that's currently not possible. 2) Is it possible to manually fix the data in the database so we are able to bring back Jackrabbit 1.5.6 again? you would have to write your own conversion tool. that tool would need to re-determine the appropriate definition ids which is possible but non-trivial. cheers stefan Thanks for your help! Cheers, Sascha -Ursprüngliche Nachricht- Von: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Gesendet: Mittwoch, 4. 
Mai 2011 15:13 An: users@jackrabbit.apache.org Betreff: Re: Jackrabbit doesn`t startup anymore
Re: Jackrabbit doesn`t startup anymore
On Mon, May 2, 2011 at 9:18 AM, sascha.the...@bosch-si.com wrote: Hi, i would need a full stacktrace of the following error: 2011-04-29 07:41:14,278 [main] ERROR BundleDbPersistenceManager - failed to read bundle: cafebabe-cafe-babe-cafe-babecafebabe: java.lang.NumberFormatException: For input string: Attached you will find a debug output log of Jackrabbit. Unfortunately, I do not see a full stack trace of the NumberFormatException. I think the exception must be caught somewhere in Jackrabbit? Any ideas how to get the full stack? either debug jackrabbit and set a breakpoint on the line in BundleDbPersistenceManager.java which logs the error, or change that line to print the full stacktrace and rebuild jackrabbit from the sources. all i can say is that for some reason the deserialization of the root node data fails. what exact steps did you perform to setup the test on the other machine? I copied the whole application, including the workspace and the lucene index data and the database configuration, to the other machine and started up the application. did you copy the complete repository home directory, i.e. including the ns_*.properties files? has anything changed on the original machine? environment settings, locales etc? I do not see any differences. What should I exactly look for? e.g. environment settings (default encoding, charset, locale), jvm runtime version, etc. what os? deployment details? OS is a Unix operating system. Jackrabbit is embedded in our application which runs in an OSGi container. Jackrabbit is connected to an Oracle database which is hosted on another machine. Java 5 is installed on the machine. More details are printed in the attached log file. Any ideas how to get rid of the problem are appreciated because I really need to get it working again on the orig. machine. you'll probably have to debug jackrabbit in order to see why the deserialization of the root node bundle fails. 
cheers stefan Thanks in advance, Sascha -Ursprüngliche Nachricht- Von: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Gesendet: Freitag, 29. April 2011 15:22 An: users@jackrabbit.apache.org Betreff: Re: Jackrabbit doesn`t startup anymore On Fri, Apr 29, 2011 at 11:54 AM, sascha.the...@bosch-si.com wrote: Hi, thanks for your fast reply. We didn`t try a db backup yet but what we have tried is to install Jackrabbit on another machine with exactly the same configuration (also same db in use). That Jackrabbit instance works without problems. We can start and stop it and we can browse the nodes. that's good news :) So it seems that it doesn`t depend on the database... agreed Any other ideas? what exact steps did you perform to setup the test on the other machine? has anything changed on the original machine? environment settings, locales etc? what os? deployment details? obviously there's a problem reading the root node (cafebabe...) on the original machine. i would need a full stacktrace of the following error: 2011-04-29 07:41:14,278 [main] ERROR BundleDbPersistenceManager - failed to read bundle: cafebabe-cafe-babe-cafe-babecafebabe: java.lang.NumberFormatException: For input string: it might be that the internal namespace index files (ns_*.properties) got corrupted. cheers stefan Thanks, Sascha -Ursprüngliche Nachricht- Von: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Gesendet: Freitag, 29. April 2011 10:53 An: users@jackrabbit.apache.org Betreff: Re: Jackrabbit doesn`t startup anymore On Fri, Apr 29, 2011 at 9:58 AM, sascha.the...@bosch-si.com wrote: Hi all, we have running a Jackrabbit 1.5.6 instance for months now without any problems. But since yesterday we were not able anymore to list nodes anymore and so on. We just got back empty results so that it seems that no data was ever persisted. After that we shutdown the Jackrabbit instance and now we are not able to start it again. 
The following exception occurs when starting up: 2011-04-29 07:41:14,278 [main] ERROR BundleDbPersistenceManager - failed to read bundle: cafebabe-cafe-babe-cafe-babecafebabe: java.lang.NumberFormatException: For input string: 2011-04-29 07:41:14,278 [main] ERROR BundleDbPersistenceManager - failed to read bundle: cafebabe-cafe-babe-cafe-babecafebabe: java.lang.NumberFormatException: For input string: 2011-04-29 07:41:14,409 [main] ERROR ConnectionRecoveryManager - could not execute statement, reason: ORA-1: unique constraint (UJXMTSRADMIN.DEFAULT_BUNDLE_IDX) violated seems like your oracle db got corrupted somehow. did you perform sanity checks on your oracle instance? did you try with a db backup? cheers stefan , state/code: 23000/1 2011-04-29 07:41:14,409 [main] ERROR ConnectionRecoveryManager - could not execute statement, reason: ORA-1: unique constraint (UJXMTSRADMIN.DEFAULT_BUNDLE_IDX) violated , state/code: 23000/1 2011
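Stefan's second suggestion in the exchange above — change the log statement so it emits the full stack trace — amounts to passing the exception object to the logger instead of concatenating it into the message string. Jackrabbit itself logs via SLF4J (`log.error(msg, e)`); the self-contained sketch below uses `java.util.logging` for the same effect:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class FullStackTraceLogging {
    private static final Logger log = Logger.getLogger("BundleDbPersistenceManager");

    public static void main(String[] args) {
        try {
            Integer.parseInt("");   // provoke the same kind of exception
        } catch (NumberFormatException e) {
            // before (message only, stack trace lost):
            //   log.severe("failed to read bundle: " + e);
            // after (full stack trace attached to the log record):
            log.log(Level.SEVERE, "failed to read bundle", e);
        }
    }
}
```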
Re: Jackrabbit doesn`t startup anymore
On Fri, Apr 29, 2011 at 9:58 AM, sascha.the...@bosch-si.com wrote: Hi all, we have been running a Jackrabbit 1.5.6 instance for months now without any problems. But since yesterday we have no longer been able to list nodes and so on. We just got back empty results, so it seems that no data was ever persisted. After that we shut down the Jackrabbit instance and now we are not able to start it again. The following exception occurs when starting up: 2011-04-29 07:41:14,278 [main] ERROR BundleDbPersistenceManager - failed to read bundle: cafebabe-cafe-babe-cafe-babecafebabe: java.lang.NumberFormatException: For input string: 2011-04-29 07:41:14,278 [main] ERROR BundleDbPersistenceManager - failed to read bundle: cafebabe-cafe-babe-cafe-babecafebabe: java.lang.NumberFormatException: For input string: 2011-04-29 07:41:14,409 [main] ERROR ConnectionRecoveryManager - could not execute statement, reason: ORA-1: unique constraint (UJXMTSRADMIN.DEFAULT_BUNDLE_IDX) violated seems like your oracle db got corrupted somehow. did you perform sanity checks on your oracle instance? did you try with a db backup? 
cheers stefan , state/code: 23000/1 2011-04-29 07:41:14,409 [main] ERROR ConnectionRecoveryManager - could not execute statement, reason: ORA-1: unique constraint (UJXMTSRADMIN.DEFAULT_BUNDLE_IDX) violated , state/code: 23000/1 2011-04-29 07:41:14,417 [main] ERROR BundleDbPersistenceManager - failed to write bundle: deadbeef-cafe-babe-cafe-babecafebabe java.sql.SQLException: ORA-1: unique constraint (UJXMTSRADMIN.DEFAULT_BUNDLE_IDX) violated at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:12 5) at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:305) at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:272) at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:626) at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.jav a:182) at oracle.jdbc.driver.T4CPreparedStatement.execute_for_rows(T4CPreparedStat ement.java:630) at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement. java:1081) at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePrepare dStatement.java:2905) at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStateme nt.java:2996) at org.apache.jackrabbit.core.persistence.bundle.util.ConnectionRecoveryMan ager.executeStmtInternal(ConnectionRecoveryManager.java:371) at org.apache.jackrabbit.core.persistence.bundle.util.ConnectionRecoveryMan ager.executeStmtInternal(ConnectionRecoveryManager.java:298) at org.apache.jackrabbit.core.persistence.bundle.util.ConnectionRecoveryMan ager.executeStmt(ConnectionRecoveryManager.java:261) at org.apache.jackrabbit.core.persistence.bundle.util.ConnectionRecoveryMan ager.executeStmt(ConnectionRecoveryManager.java:239) at org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager .storeBundle(BundleDbPersistenceManager.java:1198) at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceM anager.putBundle(AbstractBundlePersistenceManager.java:732) at 
org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceM anager.storeInternal(AbstractBundlePersistenceManager.java:672) at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceM anager.store(AbstractBundlePersistenceManager.java:536) at org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager .store(BundleDbPersistenceManager.java:524) at org.apache.jackrabbit.core.state.SharedItemStateManager.createRootNodeSt ate(SharedItemStateManager.java:1303) at org.apache.jackrabbit.core.state.SharedItemStateManager.init(SharedIte mStateManager.java:204) at org.apache.jackrabbit.core.RepositoryImpl.createItemStateManager(Reposit oryImpl.java:1317) at org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.doInitialize(Rep ositoryImpl.java:1863) at org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.initialize(Repos itoryImpl.java:1834) at org.apache.jackrabbit.core.RepositoryImpl.initStartupWorkspaces(Reposito ryImpl.java:483) at org.apache.jackrabbit.core.RepositoryImpl.init(RepositoryImpl.java:324 ) at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:621 ) I think the SQL exception is only raised because Jackrabbit is not able to read the bundle cafebabe-cafe-babe-cafe-babecafebabe. Do you have any ideas how to fix the problem or what could have caused the problem? Any
Re: Jackrabbit doesn`t startup anymore
On Fri, Apr 29, 2011 at 11:54 AM, sascha.the...@bosch-si.com wrote: Hi, thanks for your fast reply. We didn`t try a db backup yet but what we have tried is to install Jackrabbit on another machine with exactly the same configuration (also same db in use). That Jackrabbit instance works without problems. We can start and stop it and we can browse the nodes. that's good news :) So it seems that it doesn`t depend on the database... agreed Any other ideas? what exact steps did you perform to setup the test on the other machine? has anything changed on the original machine? environment settings, locales etc? what os? deployment details? obviously there's a problem reading the root node (cafebabe...) on the original machine. i would need a full stacktrace of the following error: 2011-04-29 07:41:14,278 [main] ERROR BundleDbPersistenceManager - failed to read bundle: cafebabe-cafe-babe-cafe-babecafebabe: java.lang.NumberFormatException: For input string: it might be that the internal namespace index files (ns_*.properties) got corrupted. cheers stefan Thanks, Sascha -Ursprüngliche Nachricht- Von: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Gesendet: Freitag, 29. April 2011 10:53 An: users@jackrabbit.apache.org Betreff: Re: Jackrabbit doesn`t startup anymore On Fri, Apr 29, 2011 at 9:58 AM, sascha.the...@bosch-si.com wrote: Hi all, we have running a Jackrabbit 1.5.6 instance for months now without any problems. But since yesterday we were not able anymore to list nodes anymore and so on. We just got back empty results so that it seems that no data was ever persisted. After that we shutdown the Jackrabbit instance and now we are not able to start it again. 
The following exception occurs when starting up: 2011-04-29 07:41:14,278 [main] ERROR BundleDbPersistenceManager - failed to read bundle: cafebabe-cafe-babe-cafe-babecafebabe: java.lang.NumberFormatException: For input string: 2011-04-29 07:41:14,278 [main] ERROR BundleDbPersistenceManager - failed to read bundle: cafebabe-cafe-babe-cafe-babecafebabe: java.lang.NumberFormatException: For input string: 2011-04-29 07:41:14,409 [main] ERROR ConnectionRecoveryManager - could not execute statement, reason: ORA-1: unique constraint (UJXMTSRADMIN.DEFAULT_BUNDLE_IDX) violated seems like your oracle db got corrupted somehow. did you perform sanity checks on your oracle instance? did you try with a db backup? cheers stefan , state/code: 23000/1 2011-04-29 07:41:14,409 [main] ERROR ConnectionRecoveryManager - could not execute statement, reason: ORA-1: unique constraint (UJXMTSRADMIN.DEFAULT_BUNDLE_IDX) violated , state/code: 23000/1 2011-04-29 07:41:14,417 [main] ERROR BundleDbPersistenceManager - failed to write bundle: deadbeef-cafe-babe-cafe-babecafebabe java.sql.SQLException: ORA-1: unique constraint (UJXMTSRADMIN.DEFAULT_BUNDLE_IDX) violated at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:12 5) at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:305) at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:272) at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:626) at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.jav a:182) at oracle.jdbc.driver.T4CPreparedStatement.execute_for_rows(T4CPreparedStat ement.java:630) at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement. 
java:1081) at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePrepare dStatement.java:2905) at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStateme nt.java:2996) at org.apache.jackrabbit.core.persistence.bundle.util.ConnectionRecoveryMan ager.executeStmtInternal(ConnectionRecoveryManager.java:371) at org.apache.jackrabbit.core.persistence.bundle.util.ConnectionRecoveryMan ager.executeStmtInternal(ConnectionRecoveryManager.java:298) at org.apache.jackrabbit.core.persistence.bundle.util.ConnectionRecoveryMan ager.executeStmt(ConnectionRecoveryManager.java:261) at org.apache.jackrabbit.core.persistence.bundle.util.ConnectionRecoveryMan ager.executeStmt(ConnectionRecoveryManager.java:239) at org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager .storeBundle(BundleDbPersistenceManager.java:1198) at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceM anager.putBundle(AbstractBundlePersistenceManager.java:732) at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceM anager.storeInternal(AbstractBundlePersistenceManager.java:672) at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceM
Re: problem with removeMixin
On Wed, Apr 13, 2011 at 11:01 AM, Gazi Mushfiqur Rahman gazimushfiqurrah...@gmail.com wrote: I am not sure how that is possible. This is what I have done: 1. Added 'mix:shareable' mixin to a node. 2. Created a shared node for that node 3. Removed the shared node (by calling: 'node.removeShare();' on the shared node) 4. Remove the 'mix:shareable' mixin from the original node (which is failing). the exception thrown answers your question: Caused by: javax.jcr.UnsupportedRepositoryOperationException: Removing mix:shareable is not supported. cheers stefan The stack trace of the error is given below: 13.04.2011 14:51:59.019 *ERROR* [10.0.0.87 [1302684719009] POST /sling/content/hello.move.html HTTP/1.1] org.apache.sling.engine.impl.SlingRequestProcessorImpl service: Uncaught SlingException org.mozilla.javascript.WrappedException: Wrapped javax.jcr.UnsupportedRepositoryOperationException: Removing mix:shareable is not supported. (/apps/versionable/document/move/POST.esp#13) at org.mozilla.javascript.Context.throwAsScriptRuntimeEx(Context.java:1757) at org.mozilla.javascript.MemberBox.invoke(MemberBox.java:170) at org.mozilla.javascript.NativeJavaMethod.call(NativeJavaMethod.java:243) at org.mozilla.javascript.optimizer.OptRuntime.callProp0(OptRuntime.java:119) at org.mozilla.javascript.gen.c25._c0(/apps/versionable/document/move/POST.esp:13) at org.mozilla.javascript.gen.c25.call(/apps/versionable/document/move/POST.esp) at org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:393) at org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:2834) at org.mozilla.javascript.gen.c25.call(/apps/versionable/document/move/POST.esp) at org.mozilla.javascript.gen.c25.exec(/apps/versionable/document/move/POST.esp) at org.mozilla.javascript.Context.evaluateReader(Context.java:1227) at org.apache.sling.scripting.javascript.internal.RhinoJavaScriptEngine.eval(RhinoJavaScriptEngine.java:114) at 
org.apache.sling.scripting.core.impl.DefaultSlingScript.call(DefaultSlingScript.java:351) at org.apache.sling.scripting.core.impl.DefaultSlingScript.eval(DefaultSlingScript.java:163) at org.apache.sling.scripting.core.impl.DefaultSlingScript.service(DefaultSlingScript.java:449) at org.apache.sling.engine.impl.request.RequestData.service(RequestData.java:529) at org.apache.sling.engine.impl.SlingRequestProcessorImpl.processComponent(SlingRequestProcessorImpl.java:274) at org.apache.sling.engine.impl.filter.RequestSlingFilterChain.render(RequestSlingFilterChain.java:49) at org.apache.sling.engine.impl.filter.AbstractSlingFilterChain.doFilter(AbstractSlingFilterChain.java:64) at org.apache.sling.engine.impl.debug.RequestProgressTrackerLogFilter.doFilter(RequestProgressTrackerLogFilter.java:59) at org.apache.sling.engine.impl.filter.AbstractSlingFilterChain.doFilter(AbstractSlingFilterChain.java:60) at org.apache.sling.engine.impl.SlingRequestProcessorImpl.processRequest(SlingRequestProcessorImpl.java:161) at org.apache.sling.engine.impl.SlingMainServlet.service(SlingMainServlet.java:183) at org.apache.felix.http.base.internal.handler.ServletHandler.doHandle(ServletHandler.java:96) at org.apache.felix.http.base.internal.handler.ServletHandler.handle(ServletHandler.java:79) at org.apache.felix.http.base.internal.dispatch.ServletPipeline.handle(ServletPipeline.java:42) at org.apache.felix.http.base.internal.dispatch.InvocationFilterChain.doFilter(InvocationFilterChain.java:49) at org.apache.felix.http.base.internal.dispatch.HttpFilterChain.doFilter(HttpFilterChain.java:33) at org.apache.felix.http.base.internal.dispatch.FilterPipeline.dispatch(FilterPipeline.java:48) at org.apache.felix.http.base.internal.dispatch.Dispatcher.dispatch(Dispatcher.java:39) at org.apache.felix.http.base.internal.DispatcherServlet.service(DispatcherServlet.java:67) at javax.servlet.http.HttpServlet.service(HttpServlet.java:722) at 
org.apache.felix.http.proxy.ProxyServlet.service(ProxyServlet.java:60) at javax.servlet.http.HttpServlet.service(HttpServlet.java:722) at org.apache.sling.launchpad.base.webapp.SlingServletDelegate.service(SlingServletDelegate.java:277) at org.apache.sling.launchpad.webapp.SlingServlet.service(SlingServlet.java:148) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:243) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:201) at
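For reference, the failing sequence from the question above, written out as plain JCR calls (a sketch only — the paths are hypothetical, a live repository `Session` is assumed, and `Workspace.clone` with the same workspace name is one way to create a share):

```java
// sketch: reproduce the removeMixin("mix:shareable") failure (hypothetical paths)
Node original = session.getNode("/docs/a");
original.addMixin("mix:shareable");
session.save();

// create a share of the node within the same workspace
Workspace ws = session.getWorkspace();
ws.clone(ws.getName(), "/docs/a", "/docs/b", false);

// remove that share again
session.getNode("/docs/b").removeShare();
session.save();

// Jackrabbit still refuses to remove the mixin:
// javax.jcr.UnsupportedRepositoryOperationException:
//   Removing mix:shareable is not supported.
original.removeMixin("mix:shareable");
```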
Re: How to mix structured and unstructured content on a node?
On Mon, Apr 4, 2011 at 1:13 PM, Markus Joschko markus.josc...@gmail.com wrote: Hi, I have a node that should mix free and fixed properties. For that purpose I created the following nodetype (leaving out the namespace): [Contact] nt:unstructured, mix:created, mix:lastModified - primaryContactDetails (weakreference) [Individual] Contact When I create a node of type Individual and set the primaryContactDetails property to another referencable node, it gets the type Reference. Asked for its required type the property returns undefined and as the DeclaringNodeType it returns nt:unstructured. When I remove nt:unstructured from the inheritance list, I get the desired weakreference and Contact as DeclaringNodeType. Obviously nt:unstructured takes precedence over the defined properties. no, named definitions should take precedence over residual definitions. weak references were introduced in jsr-283 (jcr 2.0). you probably encountered a problem that is specific to weakreferences. do you mind filing a jira issue? a simple test case would be great :) cheers stefan Is there another way to combine defined and undefined properties in a node? I currently use jackrabbit 2.1.1 Regards, Markus
Re: How to mix structured and unstructured content on a node?
On Mon, Apr 4, 2011 at 2:18 PM, Markus Joschko markus.josc...@gmail.com wrote: On Mon, Apr 4, 2011 at 1:56 PM, Stefan Guggisberg stefan.guggisb...@gmail.com wrote: On Mon, Apr 4, 2011 at 1:13 PM, Markus Joschko markus.josc...@gmail.com wrote: Hi, I have a node that should mix free and and fixed properties. For that purpose I created the following nodetype (leaving out the namespace): [Contact] nt:unstructured, mix:created, mix:lastModified - primaryContactDetails (weakreference) [Individual] Contact When I create a node of type Individual and set the primaryContactDetails property to another referencable node, it gets the typ reference. Asked for its required type the property returns undefined and as the DeclaringNodeType it returns nt:unstructured. When I remove nt:unstructured from the inheritance list, I get the desired weakreference and Contact as DeclaringNodeType. Obviously nt:unstructure takes precedence over the defined properties. no, named definitions should take precedence over residual definitions. weak reference were introduced in jsr-283 (jcr 2.0). you probably encountered a problem that is specific to weakreferences. I did some more tests: 1) when using a defined name with the defined type - the correct nodetype is used 2) when using a residual name - nt:unstructured is taken as declaring nodetype what do you mean by 'residual name'? could you please provide an example for better understanding? 3) when using a defined name but a type that is incompatible with the defintion - nt:unstructured is used as declaring nodetype Is 3) really a valid behaviour? yes I would expect this to fail. there's a matching residual definition, why should it fail? That's not only for weakreferences but for all types I tested. It is especially easy to notice with weakreferences as I can only pass a node to the setProperty method and must rely on jackrabbit to set the correct type. i can't follow you here. you can use either [1] or [2]. 
[1] http://www.day.com/maven/jsr170/javadocs/jcr-2.0/javax/jcr/Node.html#setProperty(java.lang.String, javax.jcr.Value) [2] http://www.day.com/maven/jsr170/javadocs/jcr-2.0/javax/jcr/Node.html#setProperty(java.lang.String, java.lang.String, int) cheers stefan do you mind filing a jira issue? a simple test case would be great :) cheers stefan Is there an other way to combine defined and undefined properties in a node? I currently use jackrabbit 2.1.1 Regards, Markus
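The precedence rule Stefan states — named definitions should win over residual ones — can be made visible in the CND itself. A sketch of the node types from this thread, with a residual definition declared directly on the type instead of inherited from nt:unstructured (the `my` namespace and completed declarations are assumptions):

```
// CND sketch (assumed namespace "my"): named vs. residual property definitions
[my:Contact] > mix:created, mix:lastModified
  - my:primaryContactDetails (weakreference)  // named definition: applies for this name
  - * (undefined)                             // residual definition: catches all other names

[my:Individual] > my:Contact
```

With this layout, setting `my:primaryContactDetails` should match the named WEAKREFERENCE definition, while any other property name falls through to the residual `* (undefined)` definition.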
Re: Modify node type definition
On Fri, Mar 25, 2011 at 2:34 PM, ttemprano ttempr...@toyota.com.ve wrote: Hi guys. What's the status on node type definition modification? Let's say that I have the following def: [wrn:facets] nt:base, mix:title orderable + * (wrn:facets) = wrn:facets And want to add a new property: [wrn:facets] nt:base, mix:title orderable + * (wrn:facets) = wrn:facets - wrp:order (long) = '0' autocreated After recreating the node definition, will the new nodes have the new property? yes. Can I programmatically add the property to old nodes? yes. cheers stefan Thank you. Tomas -- View this message in context: http://jackrabbit.510166.n4.nabble.com/Modify-node-type-definition-tp3405385p3405385.html Sent from the Jackrabbit - Users mailing list archive at Nabble.com.
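Programmatically adding the property to old nodes could look like the following sketch (a live `Session` is assumed; the JCR-SQL2 query and the backfill loop are illustrative, not a method prescribed in the thread):

```java
// sketch: backfill wrp:order with its default on pre-existing wrn:facets nodes
QueryManager qm = session.getWorkspace().getQueryManager();
Query q = qm.createQuery("SELECT * FROM [wrn:facets]", Query.JCR_SQL2);
NodeIterator nodes = q.execute().getNodes();
while (nodes.hasNext()) {
    Node n = nodes.nextNode();
    if (!n.hasProperty("wrp:order")) {
        n.setProperty("wrp:order", 0L);  // matches the autocreated default '0'
    }
}
session.save();
```

Note that the `autocreated` attribute only applies to nodes created after the type change; existing nodes need a one-off pass like the above.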
Re: Jackrabbit error when updating from 1.6.1 => 2.1.2
On Thu, Mar 10, 2011 at 9:29 AM, Xizor jaakko.yli-koivi...@kolumbus.fi wrote: Hi! I'm using an existing database (the existing database is created with Jackrabbit 1.6.1). And with MySQL I meant that I tried the update using MySQL as the datasource for Jackrabbit and then everything went fine. I also tried a clean Liferay (6.0) install with Jackrabbit Oracle 11g setup and this is the error I get: Caused by: org.apache.jackrabbit.core.state.ItemStateException: failed to write bundle: deadbeef-face-babe-cafe-babecafebabe at org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager.storeBundle(BundleDbPersistenceManager.java:1192) at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.putBundle(AbstractBundlePersistenceManager.java:668) at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.storeInternal(AbstractBundlePersistenceManager.java:610) at org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.store(AbstractBundlePersistenceManager.java:487) at org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager.store(BundleDbPersistenceManager.java:561) ... 246 more Caused by: java.lang.IllegalStateException: Unable to insert index for string: versionStorage at org.apache.jackrabbit.core.persistence.bundle.NGKDbNameIndex.insertString(NGKDbNameIndex.java:63) at org.apache.jackrabbit.core.persistence.bundle.DbNameIndex.stringToIndex(DbNameIndex.java:98) at org.apache.jackrabbit.core.persistence.util.BundleBinding.writeBundle(BundleBinding.java:266) at org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager.storeBundle(BundleDbPersistenceManager.java:1183) ... 
250 more Caused by: java.sql.SQLException: ORA-01400: kohteeseen (TEST.J_V_PM_NAMES.ID) cannot insert NULL into (string) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331) at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288) at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743) at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216) at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:955) at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1168) at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3316) at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3422) at org.apache.jackrabbit.core.persistence.bundle.ConnectionRecoveryManager.executeStmtInternal(ConnectionRecoveryManager.java:371) at org.apache.jackrabbit.core.persistence.bundle.ConnectionRecoveryManager.executeStmtInternal(ConnectionRecoveryManager.java:298) at org.apache.jackrabbit.core.persistence.bundle.ConnectionRecoveryManager.executeStmt(ConnectionRecoveryManager.java:261) at org.apache.jackrabbit.core.persistence.bundle.ConnectionRecoveryManager.executeStmt(ConnectionRecoveryManager.java:239) at org.apache.jackrabbit.core.persistence.bundle.NGKDbNameIndex.insertString(NGKDbNameIndex.java:61) I've debugged Jackrabbit code a bit and the ORA-00955: name is already used by an existing object- error seems to be caused by org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager- class and it's checkTablesExist()- method. Could this happen because the Oracle credentials I'm using don't have enough privileges so that the checkTablesExist()- method could function correctly? 
yes, the oracle user needs the privileges to access the metadata. cheers stefan BTW here is a link to my repository.xml file: http://dl.dropbox.com/u/13289522/repository.xml -- View this message in context: http://jackrabbit.510166.n4.nabble.com/Jackrabbit-error-when-updating-from-1-6-1-2-1-2-tp3343157p3345376.html Sent from the Jackrabbit - Users mailing list archive at Nabble.com.
Re: same name sibling issues
On Sun, Feb 27, 2011 at 8:32 PM, GOODWIN, MATTHEW (ATTCORP) mg0...@att.com wrote: I believe we experienced the same issue (in the moveFrom for a node) and filed a bug. Please see https://issues.apache.org/jira/browse/JCR-2891 sorry, but i fail to see how JCR-2891 relates to the issue at hand... cheers stefan This bug has been scheduled for 2.2.5 but I haven't heard when 2.2.5 is going to be released. -Original Message- From: ChadDavis [mailto:chadmichaelda...@gmail.com] Sent: Sunday, February 27, 2011 1:02 PM To: users@jackrabbit.apache.org Subject: Re: same name sibling issues the following logic applies: - find a matching 'named' child node definition (both name and required type constraints must be satisfied) - if none exists, the first residual child node definition that satisfies the required type constraint is chosen. the order of evaluation is undefined. Just to clarify. Are you saying that if two residual child node definitions are inherited from supertypes, then it's undefined which one gets applied? Undefined in the specification, correct? see o.a.j.core.nodetype.EffectiveNodeType#getApplicableChildNodeDef for the implementation. And, here, you are referring me to see the actual jackrabbit implementation so I can peruse the logic myself, correct? Thanks. This is precisely what I need. At this point, I'm fairly certain that I witnessed erratic behavior in Jackrabbit's evaluation of which rule to apply . . . I don't want to file a super vague ticket for this, as I know that vague tickets are annoying -- I see plenty of them myself, but I may not get time to investigate further . . . what do you recommend, should I file a ticket just so it's on record, or no? WRT your use case i'd suggest to add residual property and child node definitions to me:folder and remove the nt:unstructured supertype. Yes, I did something like this already. I defined my own unstructured type and let my types inherit from that, thus taking all SNS out of the inheritance hierarchy.
Re: same name sibling issues
On Mon, Feb 28, 2011 at 2:34 PM, GOODWIN, MATTHEW (ATTCORP) mg0...@att.com wrote: It actually may not at all but I couldn't tell for sure because there was no stack trace of the ItemExistsException. We ran into the situation (as outlined in the JIRA I referenced) where we would get that exception for SNS but in our case we were getting the exception when we felt we shouldn't have, and for us it happened somewhat sporadically (the original email indicated this as well). In our case this was because when things would fail, it would be because sometimes the objects (not totally sure how objects were instantiated) would have the same object IDs and our node was specified as not allowing SNS and would fail. Might not be the same situation but I thought since the ticket had a patch on it it might be a quick try to see if it solved their problems. thanks for the background information! Matt p.s. I can definitely see how someone wouldn't see a connection - as there might not be one :) OTOH i can't definitely rule out the possibility that there is a connection ... ;) cheers stefan -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Monday, February 28, 2011 4:16 AM To: users@jackrabbit.apache.org Subject: Re: same name sibling issues On Sun, Feb 27, 2011 at 8:32 PM, GOODWIN, MATTHEW (ATTCORP) mg0...@att.com wrote: I believe we experienced the same issue (in the moveFrom for a node) and filed a bug. Please see https://issues.apache.org/jira/browse/JCR-2891 sorry, but i fail to see how JCR-2891 relates to the issue at hand... cheers stefan This bug has been scheduled for 2.2.5 but I haven't heard when 2.2.5 is going to be released.
Re: same name sibling issues
On Sun, Feb 27, 2011 at 7:02 PM, ChadDavis chadmichaelda...@gmail.com wrote: the following logic applies: - find a matching 'named' child node definition (both name and required type constraints must be satisfied) - if none exists, the first residual child node definition that satisfies the required type constraint is chosen. the order of evaluation is undefined. Just to clarify. Are you saying that if two residual child node definitions are inherited from supertypes, then it's undefined which one gets applied? Undefined in the specification, correct? correct. see o.a.j.core.nodetype.EffectiveNodeType#getApplicableChildNodeDef for the implementation. And, here, you are referring me to see the actual jackrabbit implementation so I can peruse the logic myself, correct? correct. you can view the source code e.g. here: http://svn.apache.org/viewvc/jackrabbit/trunk/jackrabbit-core/src/main/java/org/apache/jackrabbit/core/nodetype/EffectiveNodeType.java?view=markup the relevant code starts at line 692. Thanks. This is precisely what I need. At this point, I'm fairly certain that I witnessed erratic behavior in Jackrabbit's evaluation of which rule to apply . . . I don't want to file a super vague ticket for this, as I know that vague tickets are annoying -- I see plenty of them myself, but I may not get time to investigate further . . . what do you recommend, should I file a ticket just so it's on record, or no? to be honest, if a ticket is rather vague, chances that someone will invest time to investigate are usually low. the scenario that you described in this thread is IMO as expected. OTOH, if you're pretty sure you found a bug, it might still be worth reporting it. WRT your use case i'd suggest to add residual property and child node definitions to me:folder and remove the nt:unstructured supertype. Yes, I did something like this already. I defined my own unstructured type and let my types inherit from that, thus taking all SNS out of the inheritance hierarchy.
excellent :) cheers stefan
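The matching rules described above can be sketched in plain Java. This is a simplified illustration with hypothetical types; name and type are matched by string equality here, whereas the real logic in Jackrabbit's EffectiveNodeType#getApplicableChildNodeDef also walks the node type hierarchy for the type constraint:

```java
import java.util.List;

// Simplified model of a child node definition: a name ("*" marks a
// residual definition) plus a required primary type. Hypothetical
// types for illustration only, not Jackrabbit's actual classes.
class ChildNodeDef {
    final String name;
    final String requiredType;

    ChildNodeDef(String name, String requiredType) {
        this.name = name;
        this.requiredType = requiredType;
    }
}

class DefMatcher {
    // First pass: named definitions (name and type must both match).
    // Second pass: first residual definition whose type constraint
    // matches. The iteration order of 'defs' is what makes the
    // residual case effectively undefined when several candidates
    // qualify, as discussed in the thread above.
    static ChildNodeDef findApplicable(List<ChildNodeDef> defs,
                                       String childName, String childType) {
        for (ChildNodeDef d : defs) {
            if (d.name.equals(childName) && d.requiredType.equals(childType)) {
                return d;
            }
        }
        for (ChildNodeDef d : defs) {
            if (d.name.equals("*") && d.requiredType.equals(childType)) {
                return d;
            }
        }
        return null; // no applicable definition
    }
}
```

With one named and one residual definition, a child whose name matches takes the named definition, and any other child falls through to the residual one.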
Re: Session.importXml should close the input stream?
On Wed, Feb 23, 2011 at 3:04 PM, Alex Parvulescu alex.parvule...@gmail.com wrote: Hello, I was going through the Hops examples and I noticed something that I don't understand in Hop 3. The Session.importXml api [1] says that the input stream will be closed by the session: The passed InputStream is closed before this method returns either normally or because of an exception. Which does not seem to be the case for the example. I also looked at org.apache.jackrabbit.jcr2spi.SessionImpl which does not appear to close the input stream. Is there something I'm missing here? Who is in charge of closing the input stream in the end? JSR 170 (i.e. JCR 1.0) did not specify who's responsible for closing an InputStream instance passed to an api method. jackrabbit < 2.0 did not close the passed streams; the api consumer was responsible for closing the stream after the api method returned. JSR 283 (i.e. JCR 2.0) finally specified that the following methods will close the passed InputStream before returning control to the caller: Node.setProperty(String, InputStream) Property.setValue(InputStream) ValueFactory.createValue(InputStream) ValueFactory.createBinary(InputStream) Session.importXML(String, InputStream, int) Workspace.importXML(String, InputStream, int) jackrabbit apparently does not yet comply with the new JCR 2.0 stream handling contract. do you mind filing a jira issue? thanks stefan [1] http://www.day.com/maven/javax.jcr/javadocs/jcr-2.0/javax/jcr/Session.html#importXML(java.lang.String, java.io.InputStream, int)
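Until the repository in use is known to honor the JCR 2.0 contract, a defensive pattern on the caller side is to close the stream yourself in a finally block; InputStream.close() is specified to have no effect on an already-closed stream, so this is safe under both behaviors. A minimal sketch (StreamConsumer is a hypothetical stand-in for a method like Session.importXML, not a JCR type):

```java
import java.io.IOException;
import java.io.InputStream;

// Portable pattern: works whether the consumer closes the stream
// (JCR 2.0 contract) or not (pre-2.0 behavior), because closing an
// already-closed InputStream is harmless.
class ImportHelper {

    // Hypothetical stand-in for an API that consumes a stream.
    interface StreamConsumer {
        void consume(InputStream in);
    }

    static void importAndClose(StreamConsumer consumer, InputStream in) {
        try {
            consumer.consume(in);
        } finally {
            try {
                in.close(); // no-op if the consumer already closed it
            } catch (IOException ignored) {
                // closing failures are not actionable here
            }
        }
    }
}
```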
Re: A PersistenceManager for DB2 (i-series)?
On Fri, Feb 18, 2011 at 4:13 AM, J S quartz...@gmail.com wrote: Hi I am using a database PM and connecting to an external DB2 (i-series) database. Access rights for the Jackrabbit database user are enough to allow the creation of tables at server startup. Data is also persisted successfully. However, at server restart, Jackrabbit tries to recreate the tables again and it fails because the tables are already there. The end result is Jackrabbit fails to initialize the file system, and the application is unusable (the data can’t be accessed). The workaround is to add <param name="schemaCheckEnabled" value="false"/> to stop the schema check. This is not necessary with MySQL though; somehow, Jackrabbit realizes that the tables are there and just uses them without trying to create them again. With MySQL, I am using MySqlPersistenceManager, but with DB2 I am using the BundleDbPersistenceManager. Do you think there is something particular for DB2 that requires its own PersistenceManager implementation? no, i don't think so. it's most likely a permission issue (the db user lacks the permission to read the meta data). see [0] and [1] for related information. you might also want to have a look at the following method: o.a.jackrabbit.core.util.db.ConnectionHelper#tableExists cheers stefan [0] http://jackrabbit.markmail.org/message/jtq2sqis2aceh7ro [1] https://issues.apache.org/jira/browse/JCR-2034 Cheers, JS
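As a sketch, the workaround mentioned above goes inside the PersistenceManager element of repository.xml; the datasource parameters are elided here, and the element layout follows Jackrabbit's standard configuration format:

```xml
<PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager">
  <!-- ... driver/url/user/password params for the DB2 datasource ... -->
  <!-- disables the table-existence check that fails when the db user
       cannot read the schema metadata -->
  <param name="schemaCheckEnabled" value="false"/>
</PersistenceManager>
```

Note this only hides the symptom; granting the database user read access to the catalog metadata is the actual fix suggested in the reply.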
Re: Jackrabbit and multithread access to nodes | design motivations | jcr2
hi alejandro, On Fri, Feb 18, 2011 at 6:01 PM, Alejandro Gomez alejandro.go...@gmail.com wrote: Hi Stefan, I deleted the mysql tables and some files in the JKR repo, and now I have a cleaner perspective of the facts I described in my first email. I ran a test that adds nodes to a same parent - concurrently with different sessions - and everything was right. I ran a test that adds a new string property to a same node - concurrently with different sessions - and everything was right. I ran a test that modifies one property in one unique node - concurrently with different sessions - and I obtained: javax.jcr.InvalidItemStateException: 00da5eb0-d7ea-41dc-aff4-2dd8940caab3/{}propertyToChange has been modified externally this is expected and by design. see e.g. [0] for a related discussion on the mailing list. jackrabbit's behavior is compliant with the JCR 2.0 spec ([1]). [0] http://www.mail-archive.com/users@jackrabbit.apache.org/msg16522.html [1] http://www.day.com/specs/jcr/2.0/10_Writing.html#10.11.6 Invalid States cheers stefan I hope this helps to clarify my question. Regards Alejandro Gomez On Thu, Feb 17, 2011 at 11:10 AM, Stefan Guggisberg stefan.guggisb...@gmail.com wrote: hi alejandro, On Wed, Feb 16, 2011 at 4:19 PM, Alejandro Gomez alejandro.go...@gmail.com wrote: Hi, I've been working with jackrabbit (2.x.x) more than a year, and some questions arose when I faced the multithreading aspects of Jackrabbit. I've found issues trying to add nodes (on different threads) that are children of a same parent. I've found issues trying to modify a node from concurrent sessions on different threads. adding children to a parent node does modify the state of the parent node. so the situation is the same here as in your previous example. could you please elaborate what kind of issues/behaviour you're referring to? And after all, I did read a LOT of mailing list archives, and I found that some people encourage to implement explicit locking methods. 
My question is: What are the design/architecture motivations behind this behavior? again, what behavior? could you please be more specific? cheers stefan Is that related to some JCR 2 spec item? What would be the best practices, if any? I would LOVE if some of the core developers answer to this topic. Thanks in advance to everyone! Alejandro Gomez -- What you believe about others will be shaped by what you believe about yourself, and so will the events of your life.
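The "modified externally" behavior referenced in [1] above is plain optimistic concurrency: a session's save fails if another session persisted a newer state of the same item since it was read. A minimal model of that rule with hypothetical names (a sketch of the principle, not Jackrabbit's implementation):

```java
import java.util.concurrent.atomic.AtomicLong;

// Each session remembers the revision of an item it read; a save is
// rejected if another session persisted a newer revision in the
// meantime. Hypothetical sketch of the optimistic-concurrency rule.
class VersionedItem {
    private final AtomicLong revision = new AtomicLong(0);

    // what a session records when it reads the item
    long read() {
        return revision.get();
    }

    // returns false in the situation where Jackrabbit would throw
    // InvalidItemStateException ("has been modified externally")
    boolean save(long readRevision) {
        return revision.compareAndSet(readRevision, readRevision + 1);
    }
}
```

The application-level remedy is the same as in the spec: refresh (re-read) the item and retry the change.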
Re: Getting a bug scheduled?
On Wed, Feb 16, 2011 at 9:19 PM, GOODWIN, MATTHEW (ATTCORP) mg0...@att.com wrote: I have submitted a bug report https://issues.apache.org/jira/browse/JCR-2891. fixed, thanks for reporting this issue! cheers stefan What is the process for getting that scheduled to get in the next release?
Re: Jackrabbit and multithread access to nodes | design motivations | jcr2
hi alejandro, On Wed, Feb 16, 2011 at 4:19 PM, Alejandro Gomez alejandro.go...@gmail.com wrote: Hi, I've been working with jackrabbit (2.x.x) more than a year, and some questions arose when I faced the multithreading aspects of Jackrabbit. I've found issues trying to add nodes (on different threads) that are children of a same parent. I've found issues trying to modify a node from concurrent sessions on different threads. adding children to a parent node does modify the state of the parent node. so the situation is the same here as in your previous example. could you please elaborate what kind of issues/behaviour you're referring to? And after all, I did read a LOT of mailing list archives, and I found that some people encourage to implement explicit locking methods. My question is: What are the design/architecture motivations behind this behavior? again, what behavior? could you please be more specific? cheers stefan Is that related to some JCR 2 spec item? What would be the best practices, if any? I would LOVE if some of the core developers answer to this topic. Thanks in advance to everyone! Alejandro Gomez -- What you believe about others will be shaped by what you believe about yourself, and so will the events of your life.
Re: Jackrabbit dependencies
On Wed, Feb 2, 2011 at 7:20 PM, Jukka Zitting jzitt...@adobe.com wrote: Hi, On 02/02/2011 02:33 PM, Stefan Guggisberg wrote: +1 for removing it. Done, see JCR-2875. thanks! other candidates for exclusion: - bouncycastle - rome IIUC bouncycastle is used for parsing encrypted pdf's, and rome is used for parsing RSS feeds. if those are optional dependencies of tika, i'd like to remove them from jackrabbit core. if they aren't optional, we should probably consider alternatives... cheers stefan To answer Angela's question, the netcdf dependency came in as a transitive dependency from Tika 0.8. I didn't think the extra size was too big a deal and wanted to avoid extra excludes. But as said I don't feel too strongly about this. And to Thomas' point about no parser dependencies: I wouldn't go there. Perhaps we could/should do a separate jackrabbit-lite package if there are people who really need that, but personally I think that full text indexing is such an important part of JCR functionality that we should support it right out of the box. What we could do is move the tika-parsers dependency from jackrabbit-core to the jackrabbit-webapp and jackrabbit-jca modules. Then someone who has a direct low-level dependency to jackrabbit-core wouldn't automatically get any of the parser libraries, but we'd still have them included in the deployable release packages. -- Jukka Zitting
Re: Jackrabbit dependencies
On Thu, Feb 3, 2011 at 11:20 AM, Jukka Zitting jzitt...@adobe.com wrote: Hi, On 02/03/2011 11:00 AM, Stefan Guggisberg wrote: other candidates for exclusion: - bouncycastle - rome IIUC bouncycastle is used for parsing encrypted pdf's, and rome is used for parsing RSS feeds. We're quickly getting to diminishing returns here. The rome jar adds only about 1% to the size of the jackrabbit-webapp. do you have any convincing use case for keeping rome in jackrabbit core? rome has a dependency on jdom. jdom used to cause class loading issues in the past. i'd prefer not to clutter up jackrabbit core with useless dependencies. The bouncycastle stuff is heavier, but it's also much more useful. It's needed not only for read-protected PDFs but for ones with any kind of embedded protection (modify, print, etc.). Dropping bouncycastle would measurably reduce functionality for quite a few users, ok cheers stefan so I'd only consider doing that if there's a convincing case for why the required extra space is troublesome. -- Jukka Zitting
Re: Jackrabbit dependencies
On Thu, Feb 3, 2011 at 11:54 AM, Jukka Zitting jzitt...@adobe.com wrote: Hi, On 02/03/2011 11:34 AM, Stefan Guggisberg wrote: do you have any convincing use case for keeping rome in jackrabbit core? Not really. It might be useful for something like a feed aggregator application, but then again such an application would likely store feeds as fine-grained JCR content instead of Atom/RSS files. rome has a dependency on jdom. jdom used to cause class loading issues in the past. Agreed, jdom is troublesome. i'd prefer not to clutter up jackrabbit core with useless dependencies. What do you think of the idea of moving the tika-parsers dependency from jackrabbit-core to the deployment packages like jackrabbit-webapp and jackrabbit-jca? Tika is intentionally split into tika-core and tika-parsers to allow such a setup. yes, that's probably a good idea. however, i still think it would be worthwhile getting rid of unnecessary (transitive) dependencies and making sensible decisions when adding new dependencies. cheers stefan -- Jukka Zitting
Re: Jackrabbit dependencies
On Wed, Feb 2, 2011 at 2:00 PM, Jukka Zitting jzitt...@adobe.com wrote: Hi, On 02/02/2011 01:43 PM, Stefan Guggisberg wrote: the recent addition of the netcdf library is IMO an excellent example. apparently it did cause classloader issues, it increased the size of stand-alone jackrabbit by 15% and the majority of jackrabbit users will probably never use it... [1] Yes, I agree that we had a bug (that got resolved) and that the netcdf dependency does bring in quite a bit of extra weight compared to the functionality it adds. I wouldn't object if people want to exclude it. +1 for removing it. i don't see the point in including exotic stuff that most people won't ever need. cheers stefan My point before was mostly that such decisions (what to include/exclude) are best made at the project level rather than separately in each individual deployment. We are at a much better position to understand where and how each dependency is being used, and have also tools for tracking and documenting such decisions across releases. If there are conflicting requirements (for example functionality vs. size), we can always add separate packagings for different deployment targets. -- Jukka Zitting
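Until such an exclusion happens at the project level, a deployment can drop the transitive dependency locally. A hedged sketch of a Maven exclusion; the netcdf coordinates and the version placeholder are assumptions, so check your own dependency tree first:

```xml
<dependency>
  <groupId>org.apache.jackrabbit</groupId>
  <artifactId>jackrabbit-core</artifactId>
  <version>2.2.x</version> <!-- placeholder: use your actual version -->
  <exclusions>
    <!-- transitive via tika-parsers; coordinates assumed -->
    <exclusion>
      <groupId>edu.ucar</groupId>
      <artifactId>netcdf</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

With the exclusion in place, NetCDF files simply fall back to not being full-text indexed; the rest of the text extraction is unaffected.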
Re: Locking on nodes are not preserved in following sessions or requests
On Sun, Jan 30, 2011 at 3:50 PM, Gazi Mushfiqur Rahman gazimushfiqurrah...@gmail.com wrote: After debugging for a long, long time, I found out the real problem. Actually there's a bug in the org.apache.jackrabbit.core.lock.LockInfo class of the '2.1.1' version of the 'jackrabbit-core' module, which is used in the latest sling builds. The bug is very simple. The logic of the 'isExpired()' method is wrong (just reversed). As a result, after acquiring any non-session-scoped lock on some node, the node is automatically unlocked by the 'LockManager'. So, I tried to modify the source of LockInfo in jackrabbit-core, build it again and then rebuild Sling. But it seems not to be working. the Jackrabbit team already resolved this issue in their latest version, i.e. 2.2.2. So can anyone please update the jackrabbit version for Sling or just please post your request on the sling list. cheers stefan fix this issue or let me know how to build and integrate jackrabbit-core in Sling? Any help will be much appreciated. Thanks Regards. On Tue, Jan 25, 2011 at 9:18 PM, Gazi Mushfiqur Rahman gazimushfiqurrah...@gmail.com wrote: Hi all, I am facing a problem locking a node or resource. 
I am using Jackrabbit from Apache Sling and I have the following code (esp file) to lock a node:

<!DOCTYPE html>
<%
var session = request.getResourceResolver().adaptTo(Packages.javax.jcr.Session);
var wasLockableNode = currentNode.isNodeType("mix:lockable");
if (!wasLockableNode) {
    currentNode.addMixin("mix:lockable");
    session.save();
}
var lockOwner = null;
var workspace = session.workspace;
var lockManager = workspace.lockManager;
var wasLocked = lockManager.isLocked(currentNode.path);
var locked = false;
if (!wasLocked) {
    var lock = lockManager.lock(currentNode.path, true, false, 120, lockOwner);
    lockManager.addLockToken(lock.lockToken);
    locked = true;
} else {
    var lock = lockManager.getLock(currentNode.path);
}
session.save();
%>
<html>
  <head>
    <title><%= currentNode.title %> is locked: <%= locked %></title>
  </head>
  <body>
    <p>
      Is Locking Supported by Repository: <%= session.repository.getDescriptorValue(session.repository.OPTION_LOCKING_SUPPORTED).string %><br />
      Is Locked: <%= locked %><br />
      Lock Owner: <span id="owner"><%= lock.lockOwner %></span><br />
      Lock Token: <span id="token"><%= lock.lockToken %></span><br />
      Is Deep: <span id="deep"><%= lock.deep %></span><br />
      Is Session Scoped: <span><%= lock.sessionScoped %></span><br />
      Is Current Session Owning Lock: <span id="isLockOwningSession"><%= lock.lockOwningSession %></span><br />
      Was lockable node: <%= wasLockableNode %><br />
      Was Locked: <%= wasLocked %><br />
      Remaining Seconds for the Lock: <%= lock.secondsRemaining %><br />
      Current Lock Tokens: <%= Packages.java.util.Arrays.toString(lockManager.lockTokens) %><br />
    </p>
  </body>
</html>

After executing the above script, I find that the node was locked. 
But if I execute the following script just after executing the previous one, I find that the node is not locked!:

<!DOCTYPE html>
<html>
  <head>
    <title><%= currentNode.title %></title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
  </head>
  <body>
    <h1><%= currentNode.title %></h1>
    <p>
      Title: <span id="title"><%= currentNode.title %></span><br />
      Is Locked: <span id="locked"><%= currentNode.locked %></span><br />
    </p>
  </body>
</html>

Can anyone help me to find out the problem in my code or give me a suggestion on how to implement locking on nodes using Sling? Thanks Regards.
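The inverted isExpired() logic mentioned in the reply above is easy to model. This is a hypothetical sketch of the corrected check, not the actual Jackrabbit source: with the comparison reversed, a freshly acquired open-scoped lock would report itself as expired and be released immediately, which matches the symptom described:

```java
// Corrected expiry check for an absolute-deadline lock timeout.
// Hypothetical sketch; the 2.1.1 bug reported in this thread was the
// reversed form of a comparison like this one.
class LockExpiry {
    static final long NO_TIMEOUT = Long.MAX_VALUE;

    // expiresAt: absolute time in millis when the lock times out
    static boolean isExpired(long expiresAt, long now) {
        return expiresAt != NO_TIMEOUT && now > expiresAt;
    }
}
```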
Re: NPE in ConcurrentCache
On Fri, Jan 28, 2011 at 7:06 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: Thanks - but neither suggestion made a difference. Always the same NPE at ConcurrentCache.java:47. Other suggestions welcomed ;) try the attached patch. it's not a fix, it just guards against the NPE. if it works, at least we know where the problem is. cheers stefan -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Friday, January 28, 2011 11:17 AM To: users@jackrabbit.apache.org Subject: Re: NPE in ConcurrentCache On Fri, Jan 28, 2011 at 5:07 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: Stefan - I see your initial assessment in Jira that this could be specific to the JVM. At this point, we're just trying to load some initial nodes at repo create time. Is there any change to our configuration that would let us temporarily work around this problem? Thanks - not sure if it helps, but you could try to: - increase the jvm heap size - use Workspace.importXML instead of Session.importXML cheers stefan -Original Message- From: PALMER, THOMAS C (ATTCORP) Sent: Friday, January 28, 2011 5:56 AM To: users@jackrabbit.apache.org Subject: RE: NPE in ConcurrentCache Created https://issues.apache.org/jira/browse/JCR-2871. Thanks - -Original Message- From: Jukka Zitting [mailto:jzitt...@adobe.com] Sent: Friday, January 28, 2011 4:50 AM To: users@jackrabbit.apache.org Subject: Re: NPE in ConcurrentCache Hi, On 28.01.2011 10:02, Stefan Guggisberg wrote: On Thu, Jan 27, 2011 at 8:38 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: I've created a simple project plus test node XML that can dup this. Where can I send it? Thanks - please send it to my personal email address. Or better yet, file a bug in our issue tracker [1] and attach the test case. [1] https://issues.apache.org/jira/browse/JCR -- Jukka Zitting
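The guard described here (protecting against a null entry inside LinkedHashMap#removeEldestEntry) can be illustrated with a plain LinkedHashMap. This is a hypothetical sketch of the pattern only, not the actual patch attached to JCR-2871:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// ConcurrentCache sizes itself from inside LinkedHashMap's
// removeEldestEntry callback. The reported NPE occurred when the
// eldest entry's value was unexpectedly null; a defensive guard skips
// the size computation in that case instead of dereferencing it.
class GuardedCache<K> extends LinkedHashMap<K, byte[]> {
    private final long maxBytes;
    private long totalBytes;

    GuardedCache(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    @Override
    public byte[] put(K key, byte[] value) {
        totalBytes += value.length; // sketch: ignores replaced entries
        return super.put(key, value);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, byte[]> eldest) {
        byte[] value = eldest.getValue();
        if (value == null) {
            return false; // the guard: never dereference a null entry
        }
        if (totalBytes > maxBytes) {
            totalBytes -= value.length;
            return true;  // evict the eldest entry
        }
        return false;
    }
}
```

As the reply says, such a guard only hides the symptom; it is useful for localizing where the null entry comes from, not as a fix.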
Re: NPE in ConcurrentCache
On Mon, Jan 31, 2011 at 1:42 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: Stefan - Don't see an attachment here ... or on JCR-2871. Did you forget to attach? Thanks - sorry, my bad. it has been stripped by the mailing list apparently. i've added it to JCR-2871 now. i didn't want to do that initially because it's not meant to 'fix' the problem, but anyway... cheers stefan -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Monday, January 31, 2011 3:33 AM To: users@jackrabbit.apache.org Subject: Re: NPE in ConcurrentCache On Fri, Jan 28, 2011 at 7:06 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: Thanks - but neither suggestion made a difference. Always the same NPE at ConcurrentCache.java:47. Other suggestions welcomed ;) try the attached patch. it's not a fix, it just guards against the NPE. if it works, at least we know where the problem is. cheers stefan -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Friday, January 28, 2011 11:17 AM To: users@jackrabbit.apache.org Subject: Re: NPE in ConcurrentCache On Fri, Jan 28, 2011 at 5:07 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: Stefan - I see your initial assessment in Jira that this could be specific to the JVM. At this point, we're just trying to load some initial nodes at repo create time. Is there any change to our configuration that would let us temporarily work around this problem? Thanks - not sure if it helps, but you could try to: - increase the jvm heap size - use Workspace.importXML instead of Session.importXML cheers stefan -Original Message- From: PALMER, THOMAS C (ATTCORP) Sent: Friday, January 28, 2011 5:56 AM To: users@jackrabbit.apache.org Subject: RE: NPE in ConcurrentCache Created https://issues.apache.org/jira/browse/JCR-2871. 
Thanks - -Original Message- From: Jukka Zitting [mailto:jzitt...@adobe.com] Sent: Friday, January 28, 2011 4:50 AM To: users@jackrabbit.apache.org Subject: Re: NPE in ConcurrentCache Hi, On 28.01.2011 10:02, Stefan Guggisberg wrote: On Thu, Jan 27, 2011 at 8:38 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: I've created a simple project plus test node XML that can dup this. Where can I send it? Thanks - please send it to my personal email address. Or better yet, file a bug in our issue tracker [1] and attach the test case. [1] https://issues.apache.org/jira/browse/JCR -- Jukka Zitting
Re: NPE in ConcurrentCache
On Thu, Jan 27, 2011 at 8:38 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: Stefan - I've created a simple project plus test node XML that can dup this. Where can I send it? Thanks - please send it to my personal email address. thanks stefan -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Thursday, January 27, 2011 11:45 AM To: users@jackrabbit.apache.org Subject: Re: NPE in ConcurrentCache On Thu, Jan 27, 2011 at 5:09 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: Stefan - Our code is absolutely single-threaded. This is just a standalone tool that creates a repository and then loads nodes from XML. I've also seen the same NPE when registering custom node types (CND files) - but sporadically. Any help is appreciated - thanks. is the problem reproducible? if you can provide a simple test case i'll have a look. cheers stefan -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Thursday, January 27, 2011 4:15 AM To: users@jackrabbit.apache.org Subject: Re: NPE in ConcurrentCache hi tom, On Wed, Jan 26, 2011 at 8:18 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: We're getting the following error when trying to load nodes into a newly created repository. This is an Oracle repository and Jackrabbit 2.2.1. We're loading nodes via session.importXML and then calling session.getRootNode().accept() with a visitor that adjusts some versioning information on the nodes. java.lang.NullPointerException at org.apache.jackrabbit.core.cache.ConcurrentCache$E.access$000(ConcurrentCache.java:47) at org.apache.jackrabbit.core.cache.ConcurrentCache$1.removeEldestEntry(ConcurrentCache.java:70) at java.util.LinkedHashMap.putImpl(LinkedHashMap.java:409) at java.util.LinkedHashMap.put(LinkedHashMap.java:370) are you sure you're not using the same session concurrently in different threads? 
cheers stefan at org.apache.jackrabbit.core.cache.ConcurrentCache.shrinkIfNeeded(ConcurrentCache.java:249) at org.apache.jackrabbit.core.cache.ConcurrentCache.put(ConcurrentCache.java:176) at org.apache.jackrabbit.core.state.MLRUItemStateCache.cache(MLRUItemStateCache.java:83) at org.apache.jackrabbit.core.state.ItemStateReferenceCache.cache(ItemStateReferenceCache.java:169) at org.apache.jackrabbit.core.state.LocalItemStateManager.getNodeState(LocalItemStateManager.java:111) at org.apache.jackrabbit.core.state.LocalItemStateManager.getItemState(LocalItemStateManager.java:172) at org.apache.jackrabbit.core.state.XAItemStateManager.getItemState(XAItemStateManager.java:260) at org.apache.jackrabbit.core.state.SessionItemStateManager.getItemState(SessionItemStateManager.java:161) at org.apache.jackrabbit.core.ItemManager.getItemData(ItemManager.java:370) at org.apache.jackrabbit.core.ItemManager.getItemData(ItemManager.java:337) at org.apache.jackrabbit.core.ItemManager.getNode(ItemManager.java:630) at org.apache.jackrabbit.core.LazyItemIterator.prefetchNext(LazyItemIterator.java:120) at org.apache.jackrabbit.core.LazyItemIterator.next(LazyItemIterator.java:257) at org.apache.jackrabbit.core.LazyItemIterator.nextNode(LazyItemIterator.java:166) at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:191) at org.apache.jackrabbit.core.NodeImpl.accept(NodeImpl.java:1705) at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:191) at org.apache.jackrabbit.core.NodeImpl.accept(NodeImpl.java:1705) at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:191) at org.apache.jackrabbit.core.NodeImpl.accept(NodeImpl.java:1705) at com.att.cms.jcr.util.jcrtool.ToolLoad.loadJcrData(ToolLoad.java:77) Any ideas? 
Thanks for your help - Tom Palmer Director, Strategic Technology Services AT&T Hosting Application Services | 2000 Perimeter Park Drive, Suite 140 | Morrisville, NC 27560 Office: +1 (919) 388-5937 | Mobile: +1 (919) 627-5431 thomas.pal...@att.com
Re: NPE in ConcurrentCache
On Fri, Jan 28, 2011 at 5:07 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: Stefan - I see your initial assessment in Jira that this could be specific to the JVM. At this point, we're just trying to load some initial nodes at repo create time. Is there any change to our configuration that would let us temporarily work around this problem? Thanks - not sure if it helps, but you could try to: - increase the jvm heap size - use Workspace.importXML instead of Session.importXML cheers stefan -Original Message- From: PALMER, THOMAS C (ATTCORP) Sent: Friday, January 28, 2011 5:56 AM To: users@jackrabbit.apache.org Subject: RE: NPE in ConcurrentCache Created https://issues.apache.org/jira/browse/JCR-2871. Thanks - -Original Message- From: Jukka Zitting [mailto:jzitt...@adobe.com] Sent: Friday, January 28, 2011 4:50 AM To: users@jackrabbit.apache.org Subject: Re: NPE in ConcurrentCache Hi, On 28.01.2011 10:02, Stefan Guggisberg wrote: On Thu, Jan 27, 2011 at 8:38 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: I've created a simple project plus test node XML that can dup this. Where can I send it? Thanks - please send it to my personal email address. Or better yet, file a bug in our issue tracker [1] and attach the test case. [1] https://issues.apache.org/jira/browse/JCR -- Jukka Zitting
Re: NPE in ConcurrentCache
hi tom, On Wed, Jan 26, 2011 at 8:18 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: We're getting the following error when trying to load nodes into a newly created repository. This is an Oracle repository and Jackrabbit 2.2.1. We're loading nodes via session.importXML and then calling session.getRootNode().accept() with a visitor that adjusts some versioning information on the nodes.

java.lang.NullPointerException
at org.apache.jackrabbit.core.cache.ConcurrentCache$E.access$000(ConcurrentCache.java:47)
at org.apache.jackrabbit.core.cache.ConcurrentCache$1.removeEldestEntry(ConcurrentCache.java:70)
at java.util.LinkedHashMap.putImpl(LinkedHashMap.java:409)
at java.util.LinkedHashMap.put(LinkedHashMap.java:370)

are you sure you're not using the same session concurrently in different threads? cheers stefan

at org.apache.jackrabbit.core.cache.ConcurrentCache.shrinkIfNeeded(ConcurrentCache.java:249)
at org.apache.jackrabbit.core.cache.ConcurrentCache.put(ConcurrentCache.java:176)
at org.apache.jackrabbit.core.state.MLRUItemStateCache.cache(MLRUItemStateCache.java:83)
at org.apache.jackrabbit.core.state.ItemStateReferenceCache.cache(ItemStateReferenceCache.java:169)
at org.apache.jackrabbit.core.state.LocalItemStateManager.getNodeState(LocalItemStateManager.java:111)
at org.apache.jackrabbit.core.state.LocalItemStateManager.getItemState(LocalItemStateManager.java:172)
at org.apache.jackrabbit.core.state.XAItemStateManager.getItemState(XAItemStateManager.java:260)
at org.apache.jackrabbit.core.state.SessionItemStateManager.getItemState(SessionItemStateManager.java:161)
at org.apache.jackrabbit.core.ItemManager.getItemData(ItemManager.java:370)
at org.apache.jackrabbit.core.ItemManager.getItemData(ItemManager.java:337)
at org.apache.jackrabbit.core.ItemManager.getNode(ItemManager.java:630)
at org.apache.jackrabbit.core.LazyItemIterator.prefetchNext(LazyItemIterator.java:120)
at org.apache.jackrabbit.core.LazyItemIterator.next(LazyItemIterator.java:257)
at org.apache.jackrabbit.core.LazyItemIterator.nextNode(LazyItemIterator.java:166)
at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:191)
at org.apache.jackrabbit.core.NodeImpl.accept(NodeImpl.java:1705)
at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:191)
at org.apache.jackrabbit.core.NodeImpl.accept(NodeImpl.java:1705)
at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:191)
at org.apache.jackrabbit.core.NodeImpl.accept(NodeImpl.java:1705)
at com.att.cms.jcr.util.jcrtool.ToolLoad.loadJcrData(ToolLoad.java:77)

Any ideas? Thanks for your help - Tom Palmer
Re: NPE in ConcurrentCache
On Thu, Jan 27, 2011 at 5:09 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: Stefan - Our code is absolutely single-threaded. This is just a standalone tool that creates a repository and then loads nodes from XML. I've also seen the same NPE when registering custom node types (CND files) - but sporadically. Any help is appreciated - thanks. is the problem reproducible? if you can provide a simple test case i'll have a look. cheers stefan -Original Message- From: Stefan Guggisberg [mailto:stefan.guggisb...@gmail.com] Sent: Thursday, January 27, 2011 4:15 AM To: users@jackrabbit.apache.org Subject: Re: NPE in ConcurrentCache hi tom, On Wed, Jan 26, 2011 at 8:18 PM, PALMER, THOMAS C (ATTCORP) tp3...@att.com wrote: We're getting the following error when trying to load nodes into a newly created repository. This is an Oracle repository and Jackrabbit 2.2.1. We're loading nodes via session.importXML and then calling session.getRootNode().accept() with a visitor that adjusts some versioning information on the nodes.

java.lang.NullPointerException
at org.apache.jackrabbit.core.cache.ConcurrentCache$E.access$000(ConcurrentCache.java:47)
at org.apache.jackrabbit.core.cache.ConcurrentCache$1.removeEldestEntry(ConcurrentCache.java:70)
at java.util.LinkedHashMap.putImpl(LinkedHashMap.java:409)
at java.util.LinkedHashMap.put(LinkedHashMap.java:370)

are you sure you're not using the same session concurrently in different threads?
cheers stefan
Re: Declarative namespace registration
hi mathieu, On Fri, Jan 14, 2011 at 10:15 PM, Mathieu Baudier mbaud...@argeo.org wrote: is there a way to register a namespace declaratively (in a config file, like a repository.xml file) instead of programmatically using javax.jcr.NamespaceRegistry.registerNamespace(String, String)? I have found a pretty clean way to do it programmatically, so I'd have no problem with a 'no'. thanks ;) jackrabbit doesn't provide a way to register namespaces declaratively. cheers stefan I just want to make sure that I did not miss a declarative way... We are using Jackrabbit 2.20. I meant 2.2.0 (now 2.2.1, thanks for the release!)
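[Editor's note: since there is no declarative mechanism, a small registration hook run at application startup is the usual pattern. The sketch below uses only the standard javax.jcr API; the class and method names are illustrative, and a live Session from a running Jackrabbit repository is assumed.]

    import javax.jcr.NamespaceRegistry;
    import javax.jcr.RepositoryException;
    import javax.jcr.Session;

    /** Registers a namespace at startup if it is not already known. */
    public class NamespaceSetup {
        public static void registerIfMissing(Session session, String prefix, String uri)
                throws RepositoryException {
            NamespaceRegistry registry = session.getWorkspace().getNamespaceRegistry();
            for (String existing : registry.getURIs()) {
                if (existing.equals(uri)) {
                    // already registered, possibly under a different prefix
                    return;
                }
            }
            registry.registerNamespace(prefix, uri);
        }
    }

Since the namespace registry is repository-wide and persistent, running this once per deployment is enough; repeated calls are harmless because of the existence check.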
Re: Item cannot be saved because it has been modified externally: node /
On Sat, Dec 11, 2010 at 11:52 AM, Norman Maurer nor...@apache.org wrote: Hi there, this is fixed in the upcoming jackrabbit 2.2.0 (which should be released within the next days). In the meantime you can grab a snapshot here: https://repository.apache.org/content/groups/snapshots/org/apache/jackrabbit/ Version name is 2.2-SNAPSHOT. sorry, but i have to contradict norman. the following behavior is as per design: 1. sessionA modifies a property 2. sessionB modifies the same property and saves its changes 3. sessionA tries to save its changes but fails with an InvalidItemStateException because its changes have become stale this is the behavior as implemented in trunk. cheers stefan Bye, Norman 2010/12/11 François Cassistat f...@maya-systems.com: I've managed to make a basic test case that throws the error every time.

import org.apache.jackrabbit.core.TransientRepository;
import javax.jcr.*;

public class JCRConcurrency {
    public static void main(String[] args) throws RepositoryException {
        Repository repo;
        if (args.length >= 2)
            repo = new TransientRepository(args[0], args[1]);
        else
            repo = new TransientRepository();
        SimpleCredentials simpleCredentials =
            new SimpleCredentials("username", "password".toCharArray());
        Session sessionInit = repo.login(simpleCredentials);
        // initialization
        Node root = sessionInit.getRootNode();
        Node test;
        if (root.hasNode("test"))
            test = root.getNode("test");
        else
            test = root.addNode("test");
        if (!test.hasProperty("property"))
            test.setProperty("property", 0);
        sessionInit.save();
        String testIdentifier = test.getIdentifier();
        // session 1
        Session session1 = repo.login(simpleCredentials);
        Node test1 = session1.getNodeByIdentifier(testIdentifier);
        System.out.println(test1.getProperty("property").getLong());
        test1.setProperty("property", 1);
        // session 2 is doing some other things at the same time
        Session session2 = repo.login(simpleCredentials);
        Node test2 = session2.getNodeByIdentifier(testIdentifier);
        test2.setProperty("property", 2);
        // session 2 saves first
        session2.save();
        session2.logout();
        // session 1 saves
        session1.save();
        session1.logout();
        sessionInit.logout();
        System.exit(0);
    }
}

Le 2010-12-10 at 6:12 PM, François Cassistat wrote: Hi list! I've got a concurrency problem while saving. My application uses distinct Session objects, and when two processes try to modify the same property of the same node at the same time I get the exception below:

javax.jcr.InvalidItemStateException: Item cannot be saved because it has been modified externally: node /
at org.apache.jackrabbit.core.ItemImpl.getTransientStates(ItemImpl.java:249)
at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:981)
at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:920)
at com.myproject.MyProject.save(MyProject.java:1525)
...

I have tried saving this way:

synchronized (this) {
    session.refresh(true);
    session.save();
}

Is there any way around this, or can only locks and transactions prevent it? Thanks! François
Re: Item cannot be saved because it has been modified externally: node /
2010/12/13 François Cassistat f...@maya-systems.com: Re-hi, Thanks for your answers. I have tested with 2.3-SNAPSHOT and it still doesn't work... Although the exception message is better: property /test/property: the property cannot be saved because it has been modified externally. I do not understand why this should be wanted behavior. because it prevents you from accidentally overwriting changes made by other sessions. just like a text editor application will warn you that the document you're about to persist had been modified by somebody else after you opened it for editing. there are different ways to handle such situations, and i guess none is the ultimate 'correct way'. it depends on the use case. if your application must concurrently modify a specific property then you should use locking. that's what locks are intended for. cheers stefan Is Jackrabbit designed to work with node locks and transactions only? Is there no way to take the changes I've made in one Session and retry them? Based on my analysis: - Transactions would slow down the repository... - Locking nodes looks complicated and deadlock-prone, since I need to modify properties in many places in the repository at once. - Getting a new session and remaking the changes when session.save() fails looks like a dirty solution to me. Maybe there is something I did not understand about Jackrabbit's concurrency model. Any pointers? François
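[Editor's note: the locking approach Stefan recommends could look roughly like the sketch below. It uses the standard JCR 2.0 LockManager API; the class and method names are illustrative, the target node is assumed to be mix:lockable, and the session is assumed to come from a running Jackrabbit repository.]

    import javax.jcr.RepositoryException;
    import javax.jcr.Session;
    import javax.jcr.lock.LockManager;

    /** Serializes concurrent writers to one property via a session-scoped lock. */
    public class LockedUpdate {
        public static void setPropertyLocked(Session session, String nodePath,
                                             String propName, long value)
                throws RepositoryException {
            LockManager lm = session.getWorkspace().getLockManager();
            // blocks out other writers; the node must be mix:lockable
            lm.lock(nodePath, false /* isDeep */, true /* isSessionScoped */,
                    Long.MAX_VALUE /* timeoutHint */, null /* ownerInfo */);
            try {
                session.getNode(nodePath).setProperty(propName, value);
                session.save();
            } finally {
                lm.unlock(nodePath);
            }
        }
    }

A writer that fails to acquire the lock gets a LockException immediately rather than a stale-save failure later, which is usually easier to retry cleanly.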
Re: Item cannot be saved because it has been modified externally: node /
On Sat, Dec 11, 2010 at 11:52 AM, Norman Maurer nor...@apache.org wrote: Hi there, this is fixed in the upcoming jackrabbit 2.2.0 (which should be which jira issue are you referring to? cheers stefan released within the next days). In the meantime you can grab a snapshot here: https://repository.apache.org/content/groups/snapshots/org/apache/jackrabbit/ Version name is 2.2-SNAPSHOT. Bye, Norman 2010/12/11 François Cassistat f...@maya-systems.com: I've managed to make a basic test case that throws the error every time.
Re: Item cannot be saved because it has been modified externally: node /
On Sat, Dec 11, 2010 at 3:53 PM, Norman Maurer nor...@apache.org wrote: Hi there, unfortunately I don't know the issue number off the top of my head. I just know I had the same problems with concurrent write operations and talked to Jukka about it. He told me it's fixed in 2.2-SNAPSHOT and I tried it (he even gave me an issue number but I can't remember it) to see if the problems are still there. It was fixed for me.. ok, thanks for the information. i'll check monday. cheers stefan Bye, Norman 2010/12/11 Stefan Guggisberg stefan.guggisb...@gmail.com: On Sat, Dec 11, 2010 at 11:52 AM, Norman Maurer nor...@apache.org wrote: Hi there, this is fixed in the upcoming jackrabbit 2.2.0 (which should be which jira issue are you referring to? cheers stefan released within the next days). In the meantime you can grab a snapshot here: https://repository.apache.org/content/groups/snapshots/org/apache/jackrabbit/ Version name is 2.2-SNAPSHOT. Bye, Norman 2010/12/11 François Cassistat f...@maya-systems.com: I've managed to make a basic test case that throws the error every time.
Re: Lifecycle Support in SPI
hi christanto, On Thu, Dec 9, 2010 at 9:34 AM, Christanto Leonardo cleon...@adobe.com wrote: Hi, I notice that lifecycle is not supported for SPI. org.apache.jackrabbit.jcr2spi.NodeImpl#getAllowedLifecycleTransistions() just throws UnsupportedRepositoryOperationException. Is there any specific reason for this? lack of resources/interest perhaps? here's the related jira issue: https://issues.apache.org/jira/browse/JCR-2228 cheers stefan Thank you. Christanto
Re: Functionality to store indexes in database with jackrabbit 2.1.2 or upcoming releases.........
On Mon, Nov 29, 2010 at 2:12 PM, Alexander Klimetschek aklim...@adobe.com wrote: On 29.11.10 13:25, Thomas Mueller muel...@adobe.com wrote: the single-big index for the entire repository that is mandated by the JCR spec. Sorry, where in the spec is this mandated? Not sure if it is actually mandated, if you're not sure then please refrain from making such public statements. they are not very helpful and only cause confusion. cheers stefan but applications expect it and you might get conflicts with multiple applications. If one application does not want a certain property to be indexed (because it does not want to find it in e.g. full text search), there might be another one who needs it, so you can't configure the right index if you only have a single one. Regards, Alex -- Alexander Klimetschek Developer // Adobe (Day) // Berlin - Basel
Re: Why does Jackrabbit 2.0.0 uses the /temp dir on Linux?
On Mon, Nov 29, 2010 at 4:58 PM, Niu, Xuetao xuetao@fiserv.com wrote: Hello, I am experiencing trouble where someone cleaned the /tmp folder on Linux and Jackrabbit complains about this (see exception below). I wonder if Jackrabbit 2.0.0 has to use the /tmp dir, so that we can document in our software manual not to clean it. jackrabbit does create temp files in the system-dependent default temporary-file directory (/tmp on unix systems). you can specify an alternate temp dir by setting the java.io.tmpdir system property on jvm invocation. see [1] for more information. cheers stefan [1] http://download.oracle.com/javase/1.4.2/docs/api/java/io/File.html#createTempFile(java.lang.String, java.lang.String, java.io.File) Are there any other places in the file system that will be used by Jackrabbit (and its dependent libs) if I only use the DBFileSystem and DBPersistenceManager (on both Linux and Windows)? Thanks, Xuetao ...

Caused by: javax.jcr.RepositoryException: file backing binary value not found
at org.apache.jackrabbit.core.value.BLOBInTempFile.getStream(BLOBInTempFile.java:140)
at com.fiserv.repository.jcr.JCRUtils.loadMeta(JCRUtils.java:1528)
... 44 more
Caused by: java.io.FileNotFoundException: /tmp/bin6905811194972706308.tmp
at org.apache.jackrabbit.core.data.LazyFileInputStream.&lt;init&gt;(LazyFileInputStream.java:63)
at org.apache.jackrabbit.core.value.BLOBInTempFile.getStream(BLOBInTempFile.java:138)
... 45 more
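[Editor's note: the java.io.tmpdir behavior Stefan describes is plain java.io.File semantics and can be verified without Jackrabbit. The class name below is illustrative.]

```java
import java.io.File;
import java.io.IOException;

// File.createTempFile(...) -- which Jackrabbit uses for temporary binaries --
// creates files under the directory named by the java.io.tmpdir system property.
public class TempDirDemo {
    public static void main(String[] args) throws IOException {
        String tmpDir = System.getProperty("java.io.tmpdir");
        File f = File.createTempFile("bin", ".tmp");
        System.out.println("tmpdir:    " + tmpDir);
        System.out.println("temp file: " + f.getAbsolutePath());
        f.deleteOnExit();
    }
}
```

Starting the JVM with, e.g., java -Djava.io.tmpdir=/var/myapp/tmp TempDirDemo makes the temp file land under the configured directory instead of /tmp, which avoids breakage when system tools clean /tmp.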
Re: ObservationManager - userData
On Mon, Nov 22, 2010 at 5:46 PM, klemens.let...@signal-iduna.de wrote: Hello, we have the following problem in our application. We define an eventListener, which we add to a session via observationManager.addEventListener(). In a different session we set userData on the corresponding observationManager. This userData is not accessible in the events arriving at the listener. which deployment model are you using? cheers stefan The method event.getUserData() always returns null. Even when we try to use only one session for observation and repository access, the event can't access the userData. The question is, how can we attach userData to the occurring events? If not, is there a different, or even better, approach we could use? Greetings Klemens Letulé SIGNAL Krankenversicherung a. G., Sitz: Dortmund, HR B 2405, AG Dortmund IDUNA Vereinigte Lebensversicherung aG für Handwerk, Handel und Gewerbe, Sitz: Hamburg, HR B 2740, AG Hamburg Deutscher Ring Krankenversicherungsverein a.G., Sitz: Hamburg, HR B 4673, AG Hamburg, SIGNAL IDUNA Allgemeine Versicherung AG, Sitz: Dortmund, HR B 19108, AG Dortmund Vorstände: Reinhold Schulte (Vorsitzender), Wolfgang Fauter (stellv. Vorsitzender), Dr. Karl-Josef Bierth, Jens O. Geldmacher, Marlies Hirschberg-Tafel, Michael Johnigk, Ulrich Leitermann, Michael Petmecky, Dr. Klaus Sticker, Prof. Dr. Markus Warg Vorsitzender der Aufsichtsräte: Günter Kutz SIGNAL IDUNA Gruppe Hauptverwaltungen, Internet: www.signal-iduna.de 44121 Dortmund, Hausanschrift: Joseph-Scherer-Str. 3, 44139 Dortmund 20351 Hamburg, Hausanschrift: Neue Rabenstraße 15-19, 20354 Hamburg
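[Editor's note: per JCR 2.0, user data is attached to the events generated by the session that makes and saves the change, so setUserData() must be called on the writing session's ObservationManager, not on the session that registered the listener. A minimal sketch, assuming a live session; the class name and the "example" node are illustrative.]

    import javax.jcr.RepositoryException;
    import javax.jcr.Session;
    import javax.jcr.observation.ObservationManager;

    public class UserDataExample {
        /** Saves a change so that the resulting events carry the given userData. */
        public static void writeWithUserData(Session writer, String userData)
                throws RepositoryException {
            ObservationManager om = writer.getWorkspace().getObservationManager();
            om.setUserData(userData);               // applies to this session's subsequent saves
            writer.getRootNode().addNode("example"); // hypothetical change
            writer.save();                           // listeners now see userData via Event.getUserData()
        }
    }

Listeners registered by any other session should then observe the value through Event.getUserData() on the delivered events.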
Re: Re: ObservationManager - userData
2010/11/24 klemens.let...@signal-iduna.de: Hei, we are using the Repository Server. what remoting protocol are you using (rmi/davex)? Greets Klemens Stefan Guggisberg stefan.guggisb...@gmail.com 24.11.2010 16:53 Reply to users@jackrabbit.apache.org To users@jackrabbit.apache.org Cc Subject Re: ObservationManager - userData On Mon, Nov 22, 2010 at 5:46 PM, klemens.let...@signal-iduna.de wrote: Hello, we have following Problem in our application. We define an eventListener, which we add to a session via: observationManager.addEventListener. In a different session we add userData to the according observationManager. This userData is not accessible in the events occurring in the listener. which deployment model are you using? cheers stefan The method event.getUserData() always returns null. Even when we try to use only one session for observation and repository access, the event couldn't access the userData. The question is, how we could share userData to occuring events? If not, is there a different, or even better approach we could use? Greetings Klemens Letulé
Re: jackrabbit node identifier implementation
On Mon, Nov 15, 2010 at 1:28 PM, danisevsky danisev...@gmail.com wrote: Hi, is node identifier unique across all workspaces in repository? no, 'corresponding' nodes do share the same identifier. for more information about corresponding nodes please see [1]. cheers stefan [1] http://www.day.com/specs/jcr/2.0/3_Repository_Model.html#CorrespondingNodes Thanks.
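[Editor's note: Stefan's point can be illustrated as follows: the same identifier may resolve in several workspaces, and each hit is a distinct "corresponding" node rather than one globally unique node. A sketch only; the credentials and workspace names are placeholders, and a running repository is assumed.]

    import javax.jcr.Node;
    import javax.jcr.Repository;
    import javax.jcr.RepositoryException;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;

    public class CorrespondingNodes {
        public static void show(Repository repo, String id) throws RepositoryException {
            SimpleCredentials creds = new SimpleCredentials("admin", "admin".toCharArray());
            Session def = repo.login(creds, "default");   // workspace names are assumptions
            Session other = repo.login(creds, "other");
            try {
                Node a = def.getNodeByIdentifier(id);
                Node b = other.getNodeByIdentifier(id);   // ItemNotFoundException if no corresponding node
                // same identifier, but two distinct Node objects in two workspaces
                System.out.println(a.getIdentifier().equals(b.getIdentifier()));
            } finally {
                def.logout();
                other.logout();
            }
        }
    }

So an identifier lookup is only unique *within* a workspace; across workspaces it addresses the family of corresponding nodes described in [1].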
Re: 30 seconds to create and open a repository?
On Thu, Nov 11, 2010 at 2:21 PM, Ista Pouss ista...@gmail.com wrote: 2010/11/11 Cech. Ulrich ulrich.c...@aeb.de No, I configure nothing; the repository goes in an empty directory, as you can see with dirs.mkdirs(). Then, try to configure a repository.xml and choose simple FileSystem as PersistenceManager and so on, and give this XML to the TransientRepository to check if this makes some difference. With BundleFsPersistenceManager, the same takes only 3 seconds! It seems that DerbyPersistenceManager, the default, is very time-expensive for me at startup. But what to do?... in the wiki I read, at http://wiki.apache.org/jackrabbit/PersistenceManagerFAQ#Bundle_File-System_PM: Bundle File-System PM Not meant to be used in production environments (except for read-only uses). is it derby which is bad? If I must use another database, which one? you're on the wrong track. your poor startup performance is mainly due to classloading. if you create the repository repeatedly in the same process you'll notice significantly improved startup time. however, there must be something wrong with your setup/machine. on my macbook pro (2.8ghz) the first startup takes 3-4 seconds, 2nd startup takes about 0.4 seconds. cheers stefan My use case is a desktop application, for text/images, like catalogs (something like 10.000 products). And try to use some absolute paths. It seems that there is something wrong with your computer's IO access. It is an absolute path, and jackrabbit is the only one which detects this "something wrong" stuff with my computer's IO access, fortunately. Thanks.
Re: JCR-2701 Discussion
hi cory, On Fri, Oct 22, 2010 at 7:03 AM, Cory Prowse c...@prowse.com wrote: Hi, I've been looking into JCR-2701 (https://issues.apache.org/jira/browse/JCR-2701) which is the error when attempting to clone nodes between workspaces when deployed on JCA. Could someone with better knowledge of the inner workings of Jackrabbit please verify and/or clarify my understanding of how the internals are meant to work when creating a workspace off of another. I've followed through the logic and it seems to me that the root cause is that calling save on a session does not persist to the underlying datastore due to it being part of a container-managed transaction. However it seems the createWorkspace process makes an incorrect assumption about state, centred I believe around XAItemStateManager. The reason I think this is a problem is threefold: (1) when calling save on an XASession that is part of a running transaction, it will not persist to the underlying datastore and will instead merge changes together for the final commit. (2) in WorkspaceImpl.createWorkspace(String,String), it uses the _CURRENT_ session to iterate through all children of the root node, so that it can then issue a clone for each child node name to the new workspace. (3) in WorkspaceImpl.clone() it creates a _NEW_ session on the source workspace to copy nodes from. Since (1) means the changes are not available to other sessions, (2) sees the saved nodes, but (3) does not and so fails with the PathNotFoundException. yes, you're right. So a workaround for the problem is to use bean-managed transactions which commit the user transaction after the save, and before the creation of a new workspace, which I have verified works. However to me it seems the assumption in WorkspaceImpl.clone could cause other problems where an overlying transaction is in place, causing it to attempt to copy unpersisted (but saved!) nodes. agreed. unfortunately i am not familiar with jackrabbit's JTA support. 
maybe somebody more knowledgeable can help here. of course, if you can provide a patch, that would be much appreciated :). thanks for your excellent analysis, cheers stefan Would love to hear someone else's thoughts on that. I can update the JIRA issue with the above info if it is verified by someone else. -- Cory
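cory's bean-managed-transaction workaround can be sketched roughly as follows. This is heavily hedged: the transaction wiring depends entirely on the container, the session must be an XA-enlisted session obtained through the JCA adapter, and all names here are illustrative only:

```java
import javax.jcr.Session;
import javax.transaction.UserTransaction;

public class CreateWorkspaceWorkaround {

    // Sketch only: commit the user transaction after save() and before
    // createWorkspace(), so the fresh session opened internally by
    // WorkspaceImpl.clone() can see the persisted nodes.
    public static void create(Session session, UserTransaction ut,
                              String newWorkspaceName) throws Exception {
        ut.begin();
        // ... make changes via the session ...
        session.save();   // merged into the transaction, not yet persisted
        ut.commit();      // now the changes actually reach the datastore

        // safe at this point: the source workspace's content is visible
        // to other sessions, so the per-child clone succeeds
        session.getWorkspace().createWorkspace(
                newWorkspaceName, session.getWorkspace().getName());
    }
}
```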
Re: behaviour of workspace.copy()
On Wed, Oct 13, 2010 at 12:52 AM, ChadDavis chadmichaelda...@gmail.com wrote: The specification says that copying into an existing node can do a sort of smart update or throw an ItemExistsException, depending upon the implementation. What does Jackrabbit do? I can observe what happens, but I'd like to know all the details. i guess you're referring to the following phrase (javadoc): "When a node N is copied to a location where a node N' already exists, the repository may either immediately throw an ItemExistsException or attempt to update the node N' by selectively replacing part of its subgraph with a copy of the relevant part of the subgraph of N. If the node types of N and N' are compatible, the implementation supports update-on-copy for these node types and no other errors occur, then the copy will succeed. Otherwise an ItemExistsException is thrown." jackrabbit does not support update-on-copy. this 'feature' has been introduced relatively late in the jsr-283 development cycle, the justification being improved WebDAV/Delta-V (RFC 3253) compatibility. cheers stefan I would think this type of information would be somewhere . . . but I can't find it. Feel free to RTFM me, as long as you provide a link.
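In practice this means a Workspace.copy() onto an existing destination fails in Jackrabbit rather than merging subgraphs. A small sketch (the paths are hypothetical; it assumes both /src and /dest already exist):

```java
import javax.jcr.ItemExistsException;
import javax.jcr.Session;
import javax.jcr.Workspace;

public class CopyBehaviourDemo {

    public static void demo(Session session) throws Exception {
        Workspace ws = session.getWorkspace();
        try {
            // /dest already exists; since Jackrabbit does not implement
            // update-on-copy, this does not selectively update /dest
            ws.copy("/src", "/dest");
        } catch (ItemExistsException e) {
            // expected in Jackrabbit: copy onto an existing node is refused
            System.out.println("copy refused: " + e.getMessage());
        }
    }
}
```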
Re: utility of defining properties as protected
On Wed, Oct 6, 2010 at 10:41 PM, Palmer, Tom tom.pal...@usi.com wrote: I'm having a hard time understanding the utility of defining a property as protected. If it's protected then the only way to get a value set is to also specify autocreated and a default but that doesn't work for setting dynamic values since if it's protected I can't reset the value - even prior to the first save. Is protected only useful for built-in types where the implementation has total control during node creation? yes. cheers stefan Thanks - Tom Palmer Director, Strategic Technology Services ATT Hosting Application Services | 2000 Perimeter Park Drive, Suite 140 | Morrisville, NC 27560 Office: +1 (919) 388-5937 | Mobile: +1 (919) 627-5431 thomas.pal...@att.com
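stefan's "yes" can be made concrete with the JCR 2.0 node type API: a protected property can only ever receive a value if it is also autocreated with a default, because application code is never allowed to write it. A hedged sketch (the "ex:" names are invented for illustration, and the "ex" namespace prefix is assumed to be registered already):

```java
import javax.jcr.PropertyType;
import javax.jcr.Session;
import javax.jcr.Value;
import javax.jcr.nodetype.NodeTypeManager;
import javax.jcr.nodetype.NodeTypeTemplate;
import javax.jcr.nodetype.PropertyDefinitionTemplate;

public class ProtectedPropertyDemo {

    @SuppressWarnings("unchecked")
    public static void register(Session session) throws Exception {
        NodeTypeManager ntm = session.getWorkspace().getNodeTypeManager();

        PropertyDefinitionTemplate pdt = ntm.createPropertyDefinitionTemplate();
        pdt.setName("ex:createdBy");
        pdt.setRequiredType(PropertyType.STRING);
        pdt.setProtected(true);    // sessions can never write this property
        pdt.setAutoCreated(true);  // so the repository must create it itself
        pdt.setDefaultValues(new Value[] {
                session.getValueFactory().createValue("system") });

        NodeTypeTemplate ntt = ntm.createNodeTypeTemplate();
        ntt.setName("ex:audited");
        ntt.setMixin(true);
        ntt.getPropertyDefinitionTemplates().add(pdt);

        // the property appears automatically when the mixin is assigned;
        // any later setProperty("ex:createdBy", ...) from application
        // code fails with a ConstraintViolationException
        ntm.registerNodeType(ntt, true);
    }
}
```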
Re: Is a node still locked after session.move()?
hi dominik, On Mon, Sep 27, 2010 at 7:42 PM, Dominik Klaholt do...@mail.upb.de wrote: Hello, I have written a test-method that deals with moving a locked node and can be executed within XATest.java ( http://svn.apache.org/repos/asf/jackrabbit/trunk/jackrabbit-core/src/test/java/org/apache/jackrabbit/core/XATest.java ) - it is attached below this email. The test-method locks a node, moves this node and then tests whether it is still locked. Checking the locked-status of the node by using its identifier yields that it is still locked. However, checking the locked-status of the node by using its new path yields that it is no longer locked. this seems to be a bug. could you please post a jira issue? thanks stefan Is this intended due to some reason I don't see right now? Is something wrong with my test-case? Thanks Dominik

public void testLockedOrUnlocked() throws Exception {
    Session session = null;
    try {
        session = getHelper().getSuperuserSession();
        // prerequisite of test
        if (session.getRootNode().hasNode(testParent)) {
            session.getRootNode().getNode(testParent).remove();
        }
        session.getRootNode().addNode(testParent).addNode(testNode)
                .addMixin(NodeType.MIX_LOCKABLE);
        session.save();
        // ids of the two nodes
        String testParentId =
                session.getRootNode().getNode(testParent).getIdentifier();
        String testNodeId = session.getNodeByIdentifier(testParentId)
                .getNode(testNode).getIdentifier();
        // lock 'testNode'
        session.getWorkspace().getLockManager().lock(
                session.getNodeByIdentifier(testNodeId).getPath(),
                false, true, 0L, null);
        // confirm lock
        assertTrue(session.getWorkspace().getLockManager().isLocked(
                session.getNodeByIdentifier(testNodeId).getPath()));
        // move 'testNode'
        session.move(session.getNodeByIdentifier(testNodeId).getPath(),
                session.getRootNode().getPath() + testNode);
        session.save();
        // is 'testNode' still locked?
        // this call says it's locked
        assertTrue(session.getWorkspace().getLockManager().isLocked(
                session.getNodeByIdentifier(testNodeId).getPath()));
        // this call says it's not
        assertTrue(session.getWorkspace().getLockManager().isLocked(
                session.getRootNode().getNode(testNode).getPath()));
    } finally {
        if (session != null) {
            session.logout();
        }
    }
}
Re: Example project showing Issue JCR-2701
hi cory On Mon, Sep 6, 2010 at 5:58 PM, Cory Prowse c...@prowse.com wrote: Hi, To aid in the solving of this issue (which is sorta a blocker for me), I have attached a very simple project which exhibits the issue. It is essentially only one Java file with the minimum boilerplate and jars to make it into a WAR file. It contains a README.txt file which has the exact steps to follow to produce the error (a simple copy/paste is all that is required). excellent, thanks! If there is anything else I can do to help this issue along I'll do what I can. unfortunately, i can't help. i am not familiar with jca (and quite frankly don't wanna be ;). i hope somebody with more experience in this field will have a chance to look into this soon. cheers stefan The details are available at: https://issues.apache.org/jira/browse/JCR-2701 -- Cory
Re: Problems migrating from 1.6.0 to 2.1.0
On Wed, Sep 1, 2010 at 11:38 AM, Robin Wyles ro...@jacaranda.co.uk wrote: An update on this... Stefan was indeed correct and it was a charset/encoding issue that was causing Jackrabbit to ignore the existing repository content. thanks for the information. can you please provide more details about the exact nature of the problem? cheers stefan However, now that I have managed to get our existing repository running under 2.1.0 I have a new problem: all the nt:file nodes whose content is stored in the datastore (FileDataStore) are missing. The small nt:file nodes that are stored in the database are visible, just not those in the FileDataStore. When starting up our newly migrated repository for the first time I get a few "Record not found" datastore exceptions and some associated Tika exceptions for those missing datastore records - would those errors prevent the entire datastore from being used? The number of errors is far smaller than the 3000 or so items in the datastore, so it would suggest that it's either ignoring most of the datastore contents, or at start-up at least they are recognised as valid. As before, once our repository has started I am able to add new nodes to the datastore, and these behave as expected. Any help, gratefully received - I'm really keen to get our repos onto 2.1.0 as some of its new query functionality is much needed! Robin On 27 Aug 2010, at 16:03, Robin Wyles wrote: Hi Stefan On 27 Aug 2010, at 13:11, Stefan Guggisberg wrote: On Fri, Aug 27, 2010 at 2:02 PM, Stefan Guggisberg stefan.guggisb...@day.com wrote: On Fri, Aug 27, 2010 at 1:18 PM, Robin Wyles ro...@jacaranda.co.uk wrote: Hi Stefan, Thanks for your quick reply... On 27 Aug 2010, at 11:36, Stefan Guggisberg wrote: hi robin, On Fri, Aug 27, 2010 at 11:25 AM, Robin Wyles ro...@jacaranda.co.uk wrote: Hi, I'm having problems migrating an existing repository from Jackrabbit 1.6.0 to 2.1.0. Here are the steps I followed to test the migration: 1. 
Update app to use Jackrabbit 2.1.0, run unit tests etc. Manually test against empty 2.1.0 repository. All works fine here. Our repository configuration has not changed at all between versions. 2. Used mysqldump to export production repository. 3. Copy production repository directory (workspace folder, datastore, index folders etc.) to test machine. 4. Import SQL file from 2 above to new DB on test machine. 5. Start application on test machine. The result of the above is that the application starts up without error but that the repository appears empty. I am able to add new nodes to the repository, which behave correctly within the application, yet none of the existing nodes are visible. I've tried xpath queries against known paths, e.g. //library/* and these return 0 nodes. A few things I've tried/noticed: 1. Repeating steps 3 and 4 above, then removing the old index directories before starting the application. Jackrabbit creates new lucene indexes, but they are very small, just like they would be when initialising an empty repository. Also, the index files are called indexes_2 rather than indexes as they were under 1.6.0. 2. When starting the app after the migration I notice that 4 extra records have been added to the BUNDLE table, 3 extra records are added to the VERSION_BUNDLE table and 2 extra records added to the VERSION_NAMES table. Again, this seems to be consistent with what is automatically added to the database when a new repository is initialised. So, basically it appears that Jackrabbit is completely ignoring the existing repository data, and instead initialising a new repos using the existing database… If anyone has any ideas as to how I can get 2.1.0 to recognise our existing repository they'd be gratefully received - I feel there must be something simple I've overlooked! hmm, seems like the key values (i.e. the id format) have changed. however, i am not aware of such a change. maybe someone else knows more? 
The release notes for Jackrabbit 2.0.0 claim that it is backward compatible with 1.x repositories. I've seen a couple of messages on the users list relating to migration issues but these seem to involve custom nodetypes, whereas our repository has no custom nodetypes. How may I see what key values/ID format is used by the different versions? This sounds like quite a major change to me, and I'm sure something that would've been documented! absolutely. however, if you're saying that 4 extra records have been inserted into the BUNDLE table and the BUNDLE table already had n=4 records, i can only explain it with a changed binary representation of the record id's. the 4 BUNDLE records are:
/ (root node)
/jcr:system
/jcr:system/jcr:nodeTypes
/jcr:system/jcr:versionStore
the values of the ids of those nodes are hard-coded in jackrabbit. on startup, those nodes will be created if they don't exist.
Re: Problems migrating from 1.6.0 to 2.1.0
hi robin, On Fri, Aug 27, 2010 at 11:25 AM, Robin Wyles ro...@jacaranda.co.uk wrote: Hi, I'm having problems migrating an existing repository from Jackrabbit 1.6.0 to 2.1.0. Here are the steps I followed to test the migration: 1. Update app to use Jackrabbit 2.1.0, run unit tests etc. Manually test against empty 2.1.0 repository. All works fine here. Our repository configuration has not changed at all between versions. 2. Used mysqldump to export production repository. 3. Copy production repository directory (workspace folder, datastore, index folders etc.) to test machine. 4. Import SQL file from 2 above to new DB on test machine. 5. Start application on test machine. The result of the above is that the application starts up without error but that the repository appears empty. I am able to add new nodes to the repository, which behave correctly within the application yet none of the existing nodes are visible. I've tried xpath queries against known paths, e.g. //library/* and these return 0 nodes. A few things I've tried/noticed: 1. Repeating steps 3 and 4 above, then removing the old index directories before starting the application. Jackrabbit creates new lucene indexes, but they are very small, just like they would be when initialising an empty repository. Also, the index files are called indexes_2 rather than indexes as they were under 1.6.0. 2. When starting the app after the migration I notice that 4 extra records have been added to the BUNDLE table, 3 extra records are added to the VERSION_BUNDLE table and 2 extra records added to the VERSION_NAMES table. Again, this seems to be consistent with what is automatically added to the database when a new repository is initialised. 
So, basically it appears that Jackrabbit is completely ignoring the existing repository data, and instead initialising a new repos using the existing database… If anyone has any ideas as to how I can get 2.1.0 to recognise our existing repository they'd be gratefully received - I feel there must be something simple I've overlooked! hmm, seems like the key values (i.e. the id format) have changed. however, i am not aware of such a change. maybe someone else knows more? cheers stefan Many thanks, Robin
Re: Problems migrating from 1.6.0 to 2.1.0
On Fri, Aug 27, 2010 at 1:18 PM, Robin Wyles ro...@jacaranda.co.uk wrote: Hi Stefan, Thanks for your quick reply... On 27 Aug 2010, at 11:36, Stefan Guggisberg wrote: hi robin, On Fri, Aug 27, 2010 at 11:25 AM, Robin Wyles ro...@jacaranda.co.uk wrote: Hi, I'm having problems migrating an existing repository from Jackrabbit 1.6.0 to 2.1.0. Here are the steps I followed to test the migration: 1. Update app to use Jackrabbit 2.1.0, run unit tests etc. Manually test against empty 2.1.0 repository. All works fine here. Our repository configuration has not changed at all between versions. 2. Used mysqldump to export production repository. 3. Copy production repository directory (workspace folder, datastore, index folders etc.) to test machine. 4. Import SQL file from 2 above to new DB on test machine. 5. Start application on test machine. The result of the above is that the application starts up without error but that the repository appears empty. I am able to add new nodes to the repository, which behave correctly within the application yet none of the existing nodes are visible. I've tried xpath queries against known paths, e.g. //library/* and these return 0 nodes. A few things I've tried/noticed: 1. Repeating steps 3 and 4 above, then removing the old index directories before starting the application. Jackrabbit creates new lucene indexes, but they are very small, just like they would be when initialising an empty repository. Also, the index files are called indexes_2 rather than indexes as they were under 1.6.0. 2. When starting the app after the migration I notice that 4 extra records have been added to the BUNDLE table, 3 extra records are added to the VERSION_BUNDLE table and 2 extra records added to the VERSION_NAMES table. Again, this seems to be consistent with what is automatically added to the database when a new repository is initialised. 
So, basically it appears that Jackrabbit is completely ignoring the existing repository data, and instead initialising a new repos using the existing database… If anyone has any ideas as to how I can get 2.1.0 to recognise our existing repository they'd be gratefully received - I feel there must be something simple I've overlooked! hmm, seems like the key values (i.e. the id format) have changed. however, i am not aware of such a change. maybe someone else knows more? The release notes for Jackrabbit 2.0.0 claim that it is backward compatible with 1.x repositories. I've seen a couple of messages on the users list relating to migration issues but these seem to involve custom nodetypes, whereas our repository has no custom nodetypes. How may I see what key values/ID format is used by the different versions? This sounds like quite a major change to me, and I'm sure something that would've been documented! absolutely. however, if you're saying that 4 extra records have been inserted into the BUNDLE table and the BUNDLE table already had n=4 records, i can only explain it with a changed binary representation of the record id's. the 4 BUNDLE records are:
/ (root node)
/jcr:system
/jcr:system/jcr:nodeTypes
/jcr:system/jcr:versionStore
the values of the ids of those nodes are hard-coded in jackrabbit. on startup, those nodes will be created if they don't exist. i am not a mysql expert. have you compared the configurations of both mysql instances? maybe it's some strange charset/encoding issue... cheers stefan Thanks, Robin
Re: Problems migrating from 1.6.0 to 2.1.0
On Fri, Aug 27, 2010 at 2:02 PM, Stefan Guggisberg stefan.guggisb...@day.com wrote: On Fri, Aug 27, 2010 at 1:18 PM, Robin Wyles ro...@jacaranda.co.uk wrote: Hi Stefan, Thanks for your quick reply... On 27 Aug 2010, at 11:36, Stefan Guggisberg wrote: hi robin, On Fri, Aug 27, 2010 at 11:25 AM, Robin Wyles ro...@jacaranda.co.uk wrote: Hi, I'm having problems migrating an existing repository from Jackrabbit 1.6.0 to 2.1.0. Here are the steps I followed to test the migration: 1. Update app to use Jackrabbit 2.1.0, run unit tests etc. Manually test against empty 2.1.0 repository. All works fine here. Our repository configuration has not changed at all between versions. 2. Used mysqldump to export production repository. 3. Copy production repository directory (workspace folder, datastore, index folders etc.) to test machine. 4. Import SQL file from 2 above to new DB on test machine. 5. Start application on test machine. The result of the above is that the application starts up without error but that the repository appears empty. I am able to add new nodes to the repository, which behave correctly within the application yet none of the existing nodes are visible. I've tried xpath queries against known paths, e.g. //library/* and these return 0 nodes. A few things I've tried/noticed: 1. Repeating steps 3 and 4 above, then removing the old index directories before starting the application. Jackrabbit creates new lucene indexes, but they are very small, just like they would be when initialising an empty repository. Also, the index files are called indexes_2 rather than indexes as they were under 1.6.0. 2. When starting the app after the migration I notice that 4 extra records have been added to the BUNDLE table, 3 extra records are added to the VERSION_BUNDLE table and 2 extra records added to the VERSION_NAMES table. Again, this seems to be consistent with what is automatically added to the database when a new repository is initialised. 
So, basically it appears that Jackrabbit is completely ignoring the existing repository data, and instead initialising a new repos using the existing database… If anyone has any ideas as to how I can get 2.1.0 to recognise our existing repository they'd be gratefully received - I feel there must be something simple I've overlooked! hmm, seems like the key values (i.e. the id format) have changed. however, i am not aware of such a change. maybe someone else knows more? The release notes for Jackrabbit 2.0.0 claim that it is backward compatible with 1.x repositories. I've seen a couple of messages on the users list relating to migration issues but these seem to involve custom nodetypes, whereas our repository has no custom nodetypes. How may I see what key values/ID format is used by the different versions? This sounds like quite a major change to me, and I'm sure something that would've been documented! absolutely. however, if you're saying that 4 extra records have been inserted into the BUNDLE table and the BUNDLE table already had n=4 records, i can only explain it with a changed binary representation of the record id's. the 4 BUNDLE records are:
/ (root node)
/jcr:system
/jcr:system/jcr:nodeTypes
/jcr:system/jcr:versionStore
the values of the ids of those nodes are hard-coded in jackrabbit. on startup, those nodes will be created if they don't exist. i am not a mysql expert. have you compared the configurations of both mysql instances? maybe it's some strange charset/encoding issue... or maybe it's a problem with the mysql indexes on those tables... cheers stefan cheers stefan Thanks, Robin
Re: New Properties for nt:unstructured and nt:resource get lost on reboot
On Fri, Aug 20, 2010 at 11:41 AM, Thomas Lustig tho...@futuredesign.at wrote: Dear all I am using Jackrabbit 2.1 and I would like to add two custom String properties to the node types nt:unstructured and nt:resource, for storing an ID and a classname representation of the Hibernate objects. I do this by adding a mixin type defined in the custom_nodetypes.xml file you shouldn't mess around with jackrabbit internal files. there's an api to register custom node types. In Java i used it this way: .. filenode.addMixin("myns:javaobject"); filenode.setProperty("myns:javaclass", myclass); filenode.setProperty("myns:hibernateid", myuuid); . everything works fine until i restart the jackrabbit server. Then the two properties added to the nodes are lost. did you call session.save()? cheers stefan What is done wrong there? Do i have to do something extra to store my additional node types permanently to get the data after the reboot of the jackrabbit server? Could anyone tell me how i could solve my problem; this would be really great! - my custom_nodetypes.xml in folder \repository\nodetypes -

<nodeTypes xmlns:myns="my-namespace" xmlns:nt="http://www.jcp.org/jcr/nt/1.0">
  <nodeType name="myns:javaobject" isMixin="true" hasOrderableChildNodes="false" primaryItemName="" isAbstract="false">
    <supertypes>
      <supertype>nt:base</supertype>
    </supertypes>
    <propertyDefinition name="myns:hibernateid" requiredType="String" autoCreated="false" mandatory="false" onParentVersion="COPY" protected="false" multiple="false"/>
    <propertyDefinition name="myns:javaclass" requiredType="String" autoCreated="false" mandatory="false" onParentVersion="COPY" protected="false" multiple="false"/>
  </nodeType>
</nodeTypes>

- Thanks again in advance for helping me best regards thomas
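The api stefan refers to is, in JCR 2.0, the NodeTypeManager/NodeTypeTemplate API; types registered through it survive restarts, unlike hand-edits to custom_nodetypes.xml. A hedged sketch matching the mixin above (the namespace URI is a placeholder; the namespace only needs to be registered once per repository):

```java
import javax.jcr.NamespaceRegistry;
import javax.jcr.PropertyType;
import javax.jcr.Session;
import javax.jcr.Workspace;
import javax.jcr.nodetype.NodeTypeManager;
import javax.jcr.nodetype.NodeTypeTemplate;
import javax.jcr.nodetype.PropertyDefinitionTemplate;

public class RegisterJavaObjectMixin {

    @SuppressWarnings("unchecked")
    public static void register(Session session) throws Exception {
        Workspace ws = session.getWorkspace();

        // register the namespace once (the URI here is a placeholder)
        NamespaceRegistry nsReg = ws.getNamespaceRegistry();
        nsReg.registerNamespace("myns", "http://example.com/myns/1.0");

        NodeTypeManager ntm = ws.getNodeTypeManager();

        PropertyDefinitionTemplate javaClass = ntm.createPropertyDefinitionTemplate();
        javaClass.setName("myns:javaclass");
        javaClass.setRequiredType(PropertyType.STRING);

        PropertyDefinitionTemplate hibernateId = ntm.createPropertyDefinitionTemplate();
        hibernateId.setName("myns:hibernateid");
        hibernateId.setRequiredType(PropertyType.STRING);

        NodeTypeTemplate ntt = ntm.createNodeTypeTemplate();
        ntt.setName("myns:javaobject");
        ntt.setMixin(true);
        ntt.getPropertyDefinitionTemplates().add(javaClass);
        ntt.getPropertyDefinitionTemplates().add(hibernateId);

        // the registered type is stored by the repository itself and
        // therefore persists across restarts
        ntm.registerNodeType(ntt, false);
    }
}
```

Note that the property values themselves still need a session.save() after addMixin()/setProperty(), as stefan points out.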
Re: jr 2.1 and xml content
On Thu, Jul 22, 2010 at 10:48 PM, John Langley john.lang...@mathworks.com wrote: Thanks that definitely helped! But now I get the following errors / warnings:

[#|2010-07-22T20:27:48.722+|WARNING|sun-appserver2.1|org.apache.jackrabbit.core.query.lucene.LazyTextExtractorField|_ThreadID=16;_ThreadName=jackrabbit-pool-4;_RequestID=7c74399a-6822-4f8b-b6e1-fcc54e5c37f8;|Failed to extract text from a binary property
org.apache.tika.exception.TikaException: TIKA-237: Illegal SAXException from org.apache.tika.parser.xml.dcxmlpar...@85deafc
    at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:130)
    at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:101)
    at org.apache.jackrabbit.core.query.lucene.JackrabbitParser.parse(JackrabbitParser.java:189)
    at org.apache.jackrabbit.core.query.lucene.LazyTextExtractorField$ParsingTask.run(LazyTextExtractorField.java:174)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:207)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
Caused by: org.xml.sax.SAXParseException: The version is required in the XML declaration.
    at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
    at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown Source)
    at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
    at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
    at org.apache.xerces.impl.XMLScanner.reportFatalError(Unknown Source)
    at org.apache.xerces.impl.XMLScanner.scanXMLDeclOrTextDecl(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanXMLDeclOrTextDecl(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentScannerImpl$XMLDeclDispatcher.dispatch(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
    at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
    at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
    at org.apache.xerces.jaxp.SAXParserImpl.parse(Unknown Source)
    at javax.xml.parsers.SAXParser.parse(SAXParser.java:198)
    at org.apache.tika.parser.xml.XMLParser.parse(XMLParser.java:72)
    at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:120)
    ... 11 more
|#]

Note: this only happens when I put a file in via webdav and the file has an .xml extension but is empty (which is a temporary state in our application) Is there anything I can or should do (other than tweaking the logging properties) to turn off this warning? the warning is IMO legitimate (trying to index a zero-length file). however, the length of the file could probably be checked by o.a.jackrabbit.core.query.lucene.LazyTextExtractorField before handing it over to TIKA and a less verbose warning could be logged if the file is empty. feel free to create a jira issue if it really bugs you. 
cheers stefan Thanks in advance, the first suggestion was great! -- Langley On Thu, 2010-07-22 at 10:36 -0400, Stefan Guggisberg wrote: this might help: http://markmail.org/message/hctkq6looial7xzr cheers stefan On Thu, Jul 22, 2010 at 4:08 PM, John Langley john.lang...@mathworks.com wrote: We recently upgraded from jackrabbit 2.0 to jackrabbit 2.1 and discovered much to our chagrin that storing xml content in the repository has been significantly changed. In fact, from our point of view it has been broken! Previously, we had been storing xml content via a webdav client into the repository and everything was fine. Now when we try to do this, the result is that the content length of these xml files (regardless of whether the file has a .xml extension or not) is 0 length! Certainly there MUST be a configuration that can help us turn off any special processing of xml content that we store in the repository. Could someone please point this out? Thanks in advance! -- Langley