[jira] Commented: (JCR-537) Failure to remove a versionable node
[ https://issues.apache.org/jira/browse/JCR-537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12513577 ]

Shane Preater commented on JCR-537:
-----------------------------------

I am also seeing this error when trying to remove the version mixin from a node.

> Failure to remove a versionable node
> ------------------------------------
>
>              Key: JCR-537
>              URL: https://issues.apache.org/jira/browse/JCR-537
>          Project: Jackrabbit
>       Issue Type: Bug
>       Components: versioning
> Affects Versions: 1.1
>         Reporter: Florent Guillaume
>         Assignee: Tobias Bocanegra
>      Attachments: var.tgz
>
> This happens on current trunk.
> When running the following code on the attached jackrabbit repository
> (sorry, Jython code, I trust the conversion to Java is trivial):
>
>     from javax.jcr import SimpleCredentials
>     from org.apache.jackrabbit.core import TransientRepository
>     uuid = "83f6e473-3fe2-4584-9570-4e18a0cd6688"
>     repoconf = "var/jackrabbit.xml"
>     repopath = "var/jackrabbit"
>     credentials = SimpleCredentials("username", "password")
>     repository = TransientRepository(repoconf, repopath)
>     session = repository.login(credentials, "default")
>     root = session.getRootNode()
>     node = session.getNodeByUUID(uuid)
>     node.remove()
>     root.save()
>
> I get the following error:
>
>     org.apache.jackrabbit.core.state.NoSuchItemStateException: c147b847-8ba5-4fe9-a890-481586476510
>         at org.apache.jackrabbit.core.state.SharedItemStateManager.getNodeReferences(SharedItemStateManager.java:307)
>         at org.apache.jackrabbit.core.state.SharedItemStateManager.updateReferences(SharedItemStateManager.java:1046)
>         at org.apache.jackrabbit.core.state.SharedItemStateManager$Update.begin(SharedItemStateManager.java:484)
>         at org.apache.jackrabbit.core.state.SharedItemStateManager.beginUpdate(SharedItemStateManager.java:687)
>         at org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:717)
>         at org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:316)
>         at org.apache.jackrabbit.core.state.XAItemStateManager.update(XAItemStateManager.java:323)
>         at org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:292)
>         at org.apache.jackrabbit.core.state.SessionItemStateManager.update(SessionItemStateManager.java:258)
>         at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:1209)
>         ...
>     javax.jcr.RepositoryException: javax.jcr.RepositoryException: /: unable to update item.: c147b847-8ba5-4fe9-a890-481586476510: c147b847-8ba5-4fe9-a890-481586476510
>
> The uuid I'm trying to delete is that of a document at path /workspaces/ecm:children/subfolder/ecm:children/ghtgh.
> The uuid mentioned in the error is the one of its version history.

--
This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
Re: re-indexing problem
Hi Marcel,

Thanks for getting back to me. Thankfully we rolled back to our last data backup and this has fixed our problem. I am using the SimpleDbPersistenceManager to talk to Postgres, so I will set those parameters now and, fingers crossed, we won't see this issue again.

Regards,
Shane.

On 20/06/07, Marcel Reutegger <[EMAIL PROTECTED]> wrote:

Hi Shane,

this probably indicates that the workspace is somehow inconsistent, e.g. there might be a missing node for a child node entry in the database. What kind of persistence manager do you use? If you are using a bundle persistence manager you can set the parameters 'consistencyCheck' and 'consistencyFix'.

regards
marcel

Shane Preater wrote:
> Hi all,
> I am having a problem with our staging server. We had to re-index it
> yesterday after tailoring our indexes.xml to make our repository more
> efficient. Now it keeps fully re-indexing and then failing with the
> following stack trace:
>
> 19.06.2007 11:25:42 *ERROR* RepositoryImpl: Failed to initialize workspace 'default' (RepositoryImpl.java, line 382)
> javax.jcr.RepositoryException: Error indexing root node: 10022d38-c449-4751-b8f0-9d07ac45ead5: Error indexing root node: 10022d38-c449-4751-b8f0-9d07ac45ead5: Error indexing root node: 10022d38-c449-4751-b8f0-9d07ac45ead5
>     at org.apache.jackrabbit.core.SearchManager.initializeQueryHandler(SearchManager.java:476)
>     at org.apache.jackrabbit.core.SearchManager.<init>(SearchManager.java:231)
>     at org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.getSearchManager(RepositoryImpl.java:1580)
>     at org.apache.jackrabbit.core.RepositoryImpl.initWorkspace(RepositoryImpl.java:570)
>     at org.apache.jackrabbit.core.RepositoryImpl.initStartupWorkspaces(RepositoryImpl.java:379)
>     at org.apache.jackrabbit.core.RepositoryImpl.<init>(RepositoryImpl.java:286)
>     at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:521)
>     at org.apache.jackrabbit.j2ee.RepositoryStartupServlet.createRepository(RepositoryStartupServlet.java:419)
>     at org.apache.jackrabbit.j2ee.RepositoryStartupServlet.initRepository(RepositoryStartupServlet.java:387)
>     at org.apache.jackrabbit.j2ee.RepositoryStartupServlet.startup(RepositoryStartupServlet.java:237)
>     at org.apache.jackrabbit.j2ee.RepositoryStartupServlet.init(RepositoryStartupServlet.java:210)
>     at javax.servlet.GenericServlet.init(GenericServlet.java:211)
>     at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1105)
>     at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:932)
>     at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:3951)
>     at org.apache.catalina.core.StandardContext.start(StandardContext.java:4225)
>     at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:759)
>     at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:739)
>     at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:524)
>     at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:809)
>     at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:698)
>     at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:472)
>     at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1122)
>     at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:310)
>     at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
>     at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1021)
>     at org.apache.catalina.core.StandardHost.start(StandardHost.java:718)
>     at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1013)
>     at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:442)
>     at org.apache.catalina.core.StandardService.start(StandardService.java:450)
>     at org.apache.catalina.core.StandardServer.start(StandardServer.java:709)
>     at org.apache.catalina.startup.Catalina.start(Catalina.java:551)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:585)
>     at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:294)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:585)
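[Editor's note: Marcel's suggested parameters belong to the bundle persistence managers, and the correct spelling is 'consistencyCheck' (not 'consistenceCheck'). A hedged sketch of what the relevant repository.xml fragment might look like; the persistence manager class and connection parameters below are illustrative placeholders, not Shane's actual configuration.]

```xml
<!-- Sketch: bundle persistence manager for PostgreSQL with a startup
     consistency check and repair enabled. Connection details are
     placeholders. -->
<PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.PostgreSQLPersistenceManager">
  <param name="url" value="jdbc:postgresql://localhost:5432/jackrabbit"/>
  <param name="user" value="jackrabbit"/>
  <param name="password" value="secret"/>
  <param name="schemaObjectPrefix" value="jcr_"/>
  <!-- check bundle/child-node-entry consistency on startup -->
  <param name="consistencyCheck" value="true"/>
  <!-- attempt to repair inconsistencies that the check finds -->
  <param name="consistencyFix" value="true"/>
</PersistenceManager>
```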
re-indexing problem
Hi all,

I am having a problem with our staging server. We had to re-index it yesterday after tailoring our indexes.xml to make our repository more efficient; now it keeps fully re-indexing and then failing with the following stack trace:

19.06.2007 11:25:42 *ERROR* RepositoryImpl: Failed to initialize workspace 'default' (RepositoryImpl.java, line 382)
javax.jcr.RepositoryException: Error indexing root node: 10022d38-c449-4751-b8f0-9d07ac45ead5: Error indexing root node: 10022d38-c449-4751-b8f0-9d07ac45ead5: Error indexing root node: 10022d38-c449-4751-b8f0-9d07ac45ead5
    at org.apache.jackrabbit.core.SearchManager.initializeQueryHandler(SearchManager.java:476)
    at org.apache.jackrabbit.core.SearchManager.<init>(SearchManager.java:231)
    at org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.getSearchManager(RepositoryImpl.java:1580)
    at org.apache.jackrabbit.core.RepositoryImpl.initWorkspace(RepositoryImpl.java:570)
    at org.apache.jackrabbit.core.RepositoryImpl.initStartupWorkspaces(RepositoryImpl.java:379)
    at org.apache.jackrabbit.core.RepositoryImpl.<init>(RepositoryImpl.java:286)
    at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:521)
    at org.apache.jackrabbit.j2ee.RepositoryStartupServlet.createRepository(RepositoryStartupServlet.java:419)
    at org.apache.jackrabbit.j2ee.RepositoryStartupServlet.initRepository(RepositoryStartupServlet.java:387)
    at org.apache.jackrabbit.j2ee.RepositoryStartupServlet.startup(RepositoryStartupServlet.java:237)
    at org.apache.jackrabbit.j2ee.RepositoryStartupServlet.init(RepositoryStartupServlet.java:210)
    at javax.servlet.GenericServlet.init(GenericServlet.java:211)
    at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1105)
    at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:932)
    at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:3951)
    at org.apache.catalina.core.StandardContext.start(StandardContext.java:4225)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:759)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:739)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:524)
    at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:809)
    at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:698)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:472)
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1122)
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:310)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1021)
    at org.apache.catalina.core.StandardHost.start(StandardHost.java:718)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1013)
    at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:442)
    at org.apache.catalina.core.StandardService.start(StandardService.java:450)
    at org.apache.catalina.core.StandardServer.start(StandardServer.java:709)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:551)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:294)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:177)
Caused by: java.io.IOException: Error indexing root node: 10022d38-c449-4751-b8f0-9d07ac45ead5
    at org.apache.jackrabbit.core.query.lucene.MultiIndex.<init>(MultiIndex.java:323)
    at org.apache.jackrabbit.core.query.lucene.SearchIndex.doInit(SearchIndex.java:344)
    at org.apache.jackrabbit.core.query.AbstractQueryHandler.init(AbstractQueryHandler.java:44)
    at org.apache.jackrabbit.core.SearchManager.initializeQueryHandler(SearchManager.java:474)
    ... 41 more

Our system is:
Tomcat 5.5
Jackrabbit 1.3 (unclustered)
PostgreSQL 8.1

Any help on this would be greatly appreciated.

Thanks,
Shane Preater
Re: Possible deadlock of jcr-server 1.2.1 (rmi)
Thanks for that Tobias. We have now implemented the fix proposed by Marcel and this has sorted out our deadlock issue (based on the tests we created to verify that our issues were the same as those found by Olivier), so if anyone else is experiencing this issue then Marcel's fix is the way to go temporarily.

Regards,
Shane.

On 15/03/07, Tobias Bocanegra <[EMAIL PROTECTED]> wrote:

hi,
a quick search in jira shows that the following issues deal with deadlocked repositories:

http://issues.apache.org/jira/browse/JCR-546
http://issues.apache.org/jira/browse/JCR-672
http://issues.apache.org/jira/browse/JCR-447
http://issues.apache.org/jira/browse/JCR-443
http://issues.apache.org/jira/browse/JCR-335

the hacks i mentioned earlier were fixes for some of those issues. the solution that marcel proposed seems reasonable and could help solve these issues in the short run.

regards, toby

On 3/15/07, Shane Preater <[EMAIL PROTECTED]> wrote:
> Tobias,
> We are also experiencing this problem with deadlocks on our system. Could you
> outline the "hacks" you have used to fix this issue? We are using versioning
> in a production environment, so if we need to hack it temporarily to get over
> this issue then so be it for the moment.
>
> Also I will keep an eye on the JIRA issue for when the proper fix is
> implemented.
>
> Thanks very much,
> Shane.
>
> On 14/03/07, Tobias Bocanegra <[EMAIL PROTECTED]> wrote:
> > hi,
> > we analyzed the issue several times and most of the fixes were hacks
> > to prevent deadlocks and data corruption.
> > imo, we can't fix the transaction/concurrency issues that occur
> > together with versioning without a bigger redesign of some of the core
> > parts of jackrabbit.
> >
> > regards, toby
> >
> > On 3/14/07, Miro Walker <[EMAIL PROTECTED]> wrote:
> > > We've been aware of this issue for a while. Unfortunately, the locking
> > > implementation is pretty hard to disentangle, and we haven't been able
> > > to come up with a fix. However, we have been able to work around it by
> > > adding an extra level of synchronisation in our own application that
> > > ensures only one simultaneous versioning operation can occur. I guess
> > > it depends how big a hit this would be as to whether it would be a
> > > suitable solution for anyone else.
> > >
> > > Miro
> > >
> > > On 3/14/07, Jukka Zitting <[EMAIL PROTECTED]> wrote:
> > > > Hi,
> > > >
> > > > Seems like another case of the age-old JCR-18 issue with concurrent
> > > > versioning. Both of the updates contain some versioning operations,
> > > > and since concurrent versioning is at the moment still a rather
> > > > dangerous sport, I'm not surprised if bad things like a deadlock can
> > > > occur.
> > > >
> > > > Any contributions in further diagnosing and resolving the concurrent
> > > > versioning issues would be very much appreciated!
> > > >
> > > > BR,
> > > >
> > > > Jukka Zitting

--
Tobias Bocanegra, Day Management AG, Barfuesserplatz 6, CH - 4001 Basel
T +41 61 226 98 98, F +41 61 226 98 97
http://www.day.com
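[Editor's note: Miro's workaround above serializes all versioning operations at the application level. A minimal sketch of that pattern using a JVM-wide lock; the class and method names are hypothetical, and this only guards a single JVM, not a cluster.]

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Hypothetical application-level guard that ensures only one versioning
// operation (checkin/checkout/restore) runs at a time in this JVM,
// mirroring the workaround described in the thread above.
class VersioningGuard {
    private static final ReentrantLock LOCK = new ReentrantLock();

    static <T> T withVersioningLock(Supplier<T> op) {
        LOCK.lock();            // block until no other versioning op is running
        try {
            return op.get();    // e.g. () -> node.checkin()
        } finally {
            LOCK.unlock();
        }
    }
}
```

Every code path that performs a versioning call goes through `withVersioningLock(...)`; ordinary session-scoped saves stay unaffected, which keeps the performance hit limited to versioning traffic.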
Re: Possible deadlock of jcr-server 1.2.1 (rmi)
Tobias, We are also experiencing this problem with deadlocks on our system could you outline the "hacks" you have used to fix this issue. We are using versioning in a production environment so if we need to hack it temporarily to get over this issue then so be it for the moment. Also I will keep an eye on the JIRA issue for when the proper fix is implemented. Thanks very much, Shane. On 14/03/07, Tobias Bocanegra <[EMAIL PROTECTED]> wrote: hi, we analyzed the issue several times and most of the fixes were hacks to prevent deadlocks and data corruption. imo, we can't fixed the transaction/concurrency issues that occur together with versioning without a bigger redesign of some of the core parts of jackrabbit. regards, toby On 3/14/07, Miro Walker <[EMAIL PROTECTED]> wrote: > We've been aware of this issue for a while. Unfortunately, the locking > implementation is pretty hard to disentangle, and we haven't been able > to come up with a fix. However, we have been able to work around it by > adding an extra level of synchronisation in our own application that > ensures only one simultaneous versioning operation can occur. I guess > it depends how big a hit this would be as to whether it would be a > suitable solution for anyone else. > > Miro > > On 3/14/07, Jukka Zitting <[EMAIL PROTECTED]> wrote: > > Hi, > > > > Seems like another case of the age-old JCR-18 issue with concurrent > > versioning. Both of the updates contain some versioning operations, > > and since concurrent versioning is at the moment still a rather > > dangerous sport, I'm not surprised if bad things like a deadlock can > > occur. > > > > Any contributions in further diagnosing and resolving the concurrent > > versioning issues would be very much appreciated! > > > > BR, > > > > Jukka Zitting > > > -- -< [EMAIL PROTECTED] >--- Tobias Bocanegra, Day Management AG, Barfuesserplatz 6, CH - 4001 Basel T +41 61 226 98 98, F +41 61 226 98 97 ---< http://www.day.com >---
Re: Possible deadlock of jcr-server 1.2.1 (rmi)
Sorry for the post spam there; I was trying to forward this to my systems team as we are seeing something quite similar.

Shane.

On 13/03/07, Shane Preater <[EMAIL PROTECTED]> wrote:

Does this sound familiar?

-- Forwarded message --
From: Olivier Dony <[EMAIL PROTECTED]>
Date: 13-Mar-2007 17:19
Subject: Possible deadlock of jcr-server 1.2.1 (rmi)
To: dev@jackrabbit.apache.org

Hi,

We are using the Repository Server deployment model for one of our systems, with 3 different web applications using the same jackrabbit server. Each webapp is running in a separate Tomcat5 server, and jackrabbit 1.2.1 is running as a jcr server in a 3rd Tomcat server.

Everything has been doing fine for weeks, but yesterday the jackrabbit server suddenly stopped responding to all requests, seemingly deadlocked. We had the opportunity to take a thread dump of the jackrabbit server before performing an emergency restart, which solved the situation. The thread dump is attached.

I tried to make some sense out of it, but the read/write locks are hard to follow. It looks like all RMI-handling threads are waiting to acquire a reader lock on the SharedItemStateManager, except one which is waiting for a writer lock. None appear to be ready to release a lock, which is why I suppose they were deadlocked. Is this maybe related to a lock that isn't reentrant but should be? Or not? Can anybody see anything there?

Thanks a lot for having a look!

--
Olivier Dony
Denali s.a., "Bridging the gap between Business and IT"
Rue de Clairvaux 8, B-1348 Louvain-la-Neuve, Belgium
Office: +32 10 43 99 51, Fax: +32 10 43 99 52
www.denali.be
Re: Session handling problem
It would appear that the call to login is blocking, but we have only experienced this in our live environment, which I don't have direct access to, so I am relying on second-hand information. Also for this reason getting a thread dump is more difficult. Leave this with me: I will try to put some more logging into the system, and I will see if the services team can grab me a thread dump. Although I wonder if the limitation you have linked me to could be my problem. I will do some more investigation and update you once I have a bit more information.

Thanks for taking the time to give me some more ideas,
Shane.

On 01/02/07, Stefan Guggisberg <[EMAIL PROTECTED]> wrote:

hi shane

On 2/1/07, Shane Preater <[EMAIL PROTECTED]> wrote:
> Hi all,
> I am getting an intermittent problem with jackrabbit sessions.
>
> Basically everything seems fine, but every now and again when trying to
> acquire a session the system seems to lock up.

what do you mean by 'lock up'? does the Repository.login call block? a dead-lock? anyway, a thread dump would help analyzing the issue...

> Are there any known issues with either:
> 1) Sharing sessions using commons-pooling?
>
> 2) Doing workspace-scoped operations (clone etc.) while other people are
> performing session-scoped operations like saving nodes? These will probably
> not both be affecting the same node (although I can not confirm this, but
> based on the workflow our users perform it should not be the case)?

there's a known limitation/issue: calls to the persistence layer are effectively serialized in order to ensure data consistency, e.g. large workspace-scoped operations might affect performance of other concurrent workspace or session-scoped save operations. for more details see http://issues.apache.org/jira/browse/JCR-314

cheers
stefan

> Any help would be great.
>
> Thanks,
> Shane.
Session handling problem
Hi all,

I am getting an intermittent problem with jackrabbit sessions.

Basically everything seems fine, but every now and again when trying to acquire a session the system seems to lock up.

Are there any known issues with either:
1) Sharing sessions using commons-pooling?
2) Doing workspace-scoped operations (clone etc.) while other people are performing session-scoped operations like saving nodes? These will probably not both be affecting the same node (although I can not confirm this, but based on the workflow our users perform it should not be the case)?

Any help would be great.

Thanks,
Shane.
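[Editor's note: one classic cause of an apparent "lock up" with a session pool is a borrower that never returns its session, leaving later callers blocked on the pool rather than on the repository. A minimal, self-contained sketch of the pooling pattern; a real implementation would hold javax.jcr.Session instances (and typically call Session.refresh(false) on return), whereas the type here is generic and the names are hypothetical.]

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Minimal bounded object pool illustrating session pooling. If every
// instance is borrowed and none are released, borrow() blocks forever --
// which from the outside looks exactly like the repository locking up.
class SimplePool<T> {
    private final BlockingQueue<T> idle;

    SimplePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get());   // pre-create the pooled instances
        }
    }

    // Blocks until a pooled instance is free.
    T borrow() {
        try {
            return idle.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted waiting for pooled session", e);
        }
    }

    // Callers MUST return what they borrowed, usually in a finally block.
    void release(T obj) {
        idle.offer(obj);
    }
}
```

Before suspecting Jackrabbit, it is worth thread-dumping the application to check whether blocked threads are parked inside the pool's borrow path rather than inside Repository.login itself.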
Re: Pulling binary data from a property
Hmm, OK, I have put together a test case to exercise the RMI stream issue and it is passing! It looks like this problem has already been resolved for the 1.1.1 release, which is what I am currently using. Please see the attached JUnit TestCase that I have been using and let me know if it fails for you.

The only thing I can think of which is different from when I experienced the problem before is the size of the binary data, as in the real app we are storing image data as a property on the node.

Shane.

On 29/01/07, Stefan Guggisberg <[EMAIL PROTECTED]> wrote:

On 1/29/07, Shane Preater <[EMAIL PROTECTED]> wrote:
> Hi there,
> Yeah, there was an issue, but I did not get a chance to add the JIRA at the
> time and then totally forgot about it.
>
> Sorry about that :(

np, thanks for the feedback.

> The work around is, as suggested, to use a BufferedInputStream around the one
> from the property and all is good.
>
> Are you going to raise the JIRA or would you like me to do so now?

i'd prefer you create the issue. if possible please include a small test case that demonstrates the issue. and don't forget to mention that the problem occurs when using rmi.

thanks!
stefan

> On 29/01/07, Stefan Guggisberg <[EMAIL PROTECTED]> wrote:
> > On 1/26/07, JavaJ <[EMAIL PROTECTED]> wrote:
> > > Hi there,
> > >
> > > Were you ever able to resolve this problem?
> >
> > i am not convinced whether there *was* a problem. at least shane did not
> > create a jira issue as suggested.
> >
> > > I am having the same problem and I am not using RMI. Basically, I am
> > > using almost the exact same code to persist the binary property. I can verify
> > > that the property is persisted properly, because if I retrieve the node back
> > > (in another thread) the property looks fine. However, after the initial
> > > call of save() on the node, if I try to retrieve the property from the same
> > > instance of the node, the stream has 0 bytes.
> >
> > please create a jira issue if you're able to reproduce the issue. please
> > include a simple test case that demonstrates the issue.
> >
> > however i'm pretty confident that there's an issue with your test code.
> > i did a quick test with the following code, everything worked as expected.
> >
> >     byte[] data = new byte[10];
> >     ValueFactory factory = session.getValueFactory();
> >     Value value = factory.createValue(new ByteArrayInputStream(data));
> >
> >     Node node = root.addNode("foo");
> >     node.setProperty("bin", new ByteArrayInputStream(data));
> >     root.save();
> >
> >     InputStream in = node.getProperty("bin").getStream();
> >     ByteArrayOutputStream baos = new ByteArrayOutputStream();
> >     try {
> >         int read = 0;
> >         while ((read = in.read()) >= 0) {
> >             baos.write(read);
> >         }
> >     } finally {
> >         in.close();
> >     }
> >     byte[] data1 = baos.toByteArray();
> >     System.out.println(Arrays.equals(data, data1));
> >
> > > Perhaps it's a bug with using ByteArrayInputStream?
> >
> > rather unlikely ;-)
> >
> > cheers
> > stefan
> >
> > > zagarol wrote:
> > > > Hi,
> > > >
> > > > I have a property on a node called 'blobData'. This property has been
> > > > loaded using the following snippet:
> > > >
> > > >     ValueFactory factory = session.getValueFactory();
> > > >     Value value = factory.createValue(new ByteArrayInputStream(data));
> > > >     node.setProperty(propertyName, value);
> > > >
> > > > Then obviously further on a call to session.save() is used to persist
> > > > this.
> > > >
> > > > I am now trying to get this binary information back from the property
> > > > using:
> > > >
> > > >     InputStream inputStream = node.getProperty(property).getStream();
> > > >     int readInt = 0;
> > > >     while ((readInt = inputStream.read()) >= 0) {
> > > >         outputStream.write(readInt);
> > > >     }
> > > >     return outputStream.toByteArray();
> > > >
> > > > However this always returns an empty byte array, as the first call to
> > > > inputStream.read() returns -1 indicating the end of the stream.
> > > >
> > > > Could someone point me in the direction of my error.
> > > >
> > > > Thanks,
> > > > Shane.
> > >
> > > --
> > > View this message in context: http://www.nabble.com/Pulling-binary-data-from-a-property-tf2423182.html#a8657674
> > > Sent from the Jackrabbit - Dev mailing list archive at Nabble.com.
Re: Pulling binary data from a property
Hi there,

Yeah, there was an issue, but I did not get a chance to add the JIRA at the time and then totally forgot about it. Sorry about that :(

The work around is, as suggested, to use a BufferedInputStream around the one from the property and all is good.

Are you going to raise the JIRA or would you like me to do so now?

On 29/01/07, Stefan Guggisberg <[EMAIL PROTECTED]> wrote:

On 1/26/07, JavaJ <[EMAIL PROTECTED]> wrote:
> Hi there,
>
> Were you ever able to resolve this problem?

i am not convinced whether there *was* a problem. at least shane did not create a jira issue as suggested.

> I am having the same problem and I am not using RMI. Basically, I am using
> almost the exact same code to persist the binary property. I can verify
> that the property is persisted properly, because if I retrieve the node back
> (in another thread) the property looks fine. However, after the initial
> call of save() on the node, if I try to retrieve the property from the same
> instance of the node, the stream has 0 bytes.

please create a jira issue if you're able to reproduce the issue. please include a simple test case that demonstrates the issue.

however i'm pretty confident that there's an issue with your test code. i did a quick test with the following code, everything worked as expected.

    byte[] data = new byte[10];
    ValueFactory factory = session.getValueFactory();
    Value value = factory.createValue(new ByteArrayInputStream(data));

    Node node = root.addNode("foo");
    node.setProperty("bin", new ByteArrayInputStream(data));
    root.save();

    InputStream in = node.getProperty("bin").getStream();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try {
        int read = 0;
        while ((read = in.read()) >= 0) {
            baos.write(read);
        }
    } finally {
        in.close();
    }
    byte[] data1 = baos.toByteArray();
    System.out.println(Arrays.equals(data, data1));

> Perhaps it's a bug with using ByteArrayInputStream?

rather unlikely ;-)

cheers
stefan

> zagarol wrote:
> > Hi,
> >
> > I have a property on a node called 'blobData'. This property has been
> > loaded using the following snippet:
> >
> >     ValueFactory factory = session.getValueFactory();
> >     Value value = factory.createValue(new ByteArrayInputStream(data));
> >     node.setProperty(propertyName, value);
> >
> > Then obviously further on a call to session.save() is used to persist
> > this.
> >
> > I am now trying to get this binary information back from the property
> > using:
> >
> >     InputStream inputStream = node.getProperty(property).getStream();
> >     int readInt = 0;
> >     while ((readInt = inputStream.read()) >= 0) {
> >         outputStream.write(readInt);
> >     }
> >     return outputStream.toByteArray();
> >
> > However this always returns an empty byte array, as the first call to
> > inputStream.read() returns -1 indicating the end of the stream.
> >
> > Could someone point me in the direction of my error.
> >
> > Thanks,
> > Shane.
>
> --
> View this message in context: http://www.nabble.com/Pulling-binary-data-from-a-property-tf2423182.html#a8657674
> Sent from the Jackrabbit - Dev mailing list archive at Nabble.com.
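[Editor's note: the workaround Shane describes is to wrap the property's stream in a BufferedInputStream before reading. A self-contained sketch of that read loop; here a ByteArrayInputStream stands in for the stream returned by Property.getStream(), and the class name is hypothetical.]

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

class BinaryPropertyReader {
    // Reads a binary stream fully into a byte array. Wrapping the raw
    // stream in a BufferedInputStream is the workaround mentioned in the
    // thread for streams obtained over RMI.
    static byte[] readFully(InputStream raw) {
        try (InputStream in = new BufferedInputStream(raw);
             ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
            int b;
            while ((b = in.read()) >= 0) {
                baos.write(b);
            }
            return baos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

In the real application the call would be `readFully(node.getProperty("blobData").getStream())`.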
Cloning nodes with out traversing the entire sub tree
Hi,

I have a problem with cloning a node using jackrabbit. I have the following workspaces with referenceable nodes (as illustrated). Note: the last nodes on the tree are blog assets which have other sub-nodes etc.

[ASCII-art diagram garbled in the archive: two parallel trees for the Editorial and Published workspaces, each with blogs/general/<year>/<month>/<day> paths; the test-blog and test2 blogs live under Editorial, and published nodes under Published.]

I need to be able to clone the test-blog blog from the editorial workspace to the published workspace, but I do not want this to clone the test2 blog. I (mistakenly) thought the last parameter on the clone method was a flag to tell the workspace whether to clone the node's children, but this is not the case: it is what to do when there is a UUID collision!

I was hoping to clone the single nodes down to the article, then clone the sub-tree from the article down. However, this is not going to work, as the clone tries to clone the whole sub-tree. When I try to just clone the article node, this fails as the parent nodes do not yet exist.

Could someone point me in the correct direction for solving this problem? Also, I am unable to change the structure of the nodes to something easier to replicate, as it is a rigid part of the spec (I already tried that approach).

Many thanks,
Shane.
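[Editor's note: the underlying algorithm Shane is after is "create the missing ancestors in the target one node at a time, without their siblings, then deep-copy only the leaf sub-tree". A self-contained sketch of that idea on a toy tree; the TreeNode type below is a stand-in for illustration, not javax.jcr.Node, and all names are hypothetical.]

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy tree node illustrating: copy a single ancestor path (no siblings),
// then deep-copy the leaf's sub-tree -- so test-blog is replicated while
// its sibling test2 is left behind.
class TreeNode {
    final String name;
    final Map<String, TreeNode> children = new LinkedHashMap<>();

    TreeNode(String name) { this.name = name; }

    TreeNode child(String n) {
        return children.computeIfAbsent(n, TreeNode::new);
    }

    // Walk 'path' in the source, creating only those nodes in the target
    // (shallow, no siblings), then deep-copy the final node's sub-tree.
    static void copyPathAndLeaf(TreeNode source, TreeNode target, String... path) {
        TreeNode src = source, dst = target;
        for (String segment : path) {
            src = src.children.get(segment);
            dst = dst.child(segment);      // shallow: siblings are skipped
        }
        deepCopy(src, dst);
    }

    static void deepCopy(TreeNode from, TreeNode to) {
        for (TreeNode c : from.children.values()) {
            deepCopy(c, to.child(c.name));
        }
    }
}
```

With real JCR nodes the same shape applies: add (or clone, node by node) the ancestor chain in the target workspace first, then clone the article node's sub-tree once its parent path exists.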
Problems with XPath querying of Multivalue properties
Hi,

I am trying to perform the following query:

    /jcr:root/some/path/(@lastName, @articles)

on the 1.1.1 RMI jackrabbit service running on rmi:localhost:1099/jackrabbit.repository.

The problem I am getting is that the 'articles' property is multi-valued, and when I call the RowIterator method getValues("articles") on the returned rows it always fails with a NullPointerException. However, if I obtain the node via the 'jcr:path' value returned and then call node.getProperty("articles").getValues(), this returns the correct values.

Is there an issue with querying properties with multiple values in the XPath query engine? I know that predicating on multi-valued properties works, as I am also doing the following:

    /jcr:root/some/path/[EMAIL PROTECTED] 'aabb-aabbdd-a2203a']

which returns all the nodes which have the uuid as one of their articles.

Thanks for any help.
Shane Preater.
Problems cloning workspace
Hi,

I am using jackrabbit and require the ability to clone nodes and their children from a development workspace to a live workspace. I am having the following problem when trying to clone a simple node with a single child using the 1.1.1 released version of Jackrabbit:

javax.jcr.ItemNotFoundException: failed to build path of a2988d4b-429e-4c82-9f4e-5c0f4f799f9f: cafebabe-cafe-babe-cafe-babecafebabe has no child entry for a2988d4b-429e-4c82-9f4e-5c0f4f799f9f
    at org.apache.jackrabbit.core.HierarchyManagerImpl.buildPath(HierarchyManagerImpl.java:309)
    at org.apache.jackrabbit.core.CachingHierarchyManager.buildPath(CachingHierarchyManager.java:160)
    at org.apache.jackrabbit.core.HierarchyManagerImpl.getPath(HierarchyManagerImpl.java:358)
    at org.apache.jackrabbit.core.CachingHierarchyManager.getPath(CachingHierarchyManager.java:222)
    at org.apache.jackrabbit.core.CachingHierarchyManager.nodeAdded(CachingHierarchyManager.java:351)
    at org.apache.jackrabbit.core.state.StateChangeDispatcher.notifyNodeAdded(StateChangeDispatcher.java:152)
    at org.apache.jackrabbit.core.state.SessionItemStateManager.nodeAdded(SessionItemStateManager.java:829)
    at org.apache.jackrabbit.core.state.StateChangeDispatcher.notifyNodeAdded(StateChangeDispatcher.java:152)
    at org.apache.jackrabbit.core.state.LocalItemStateManager.nodeAdded(LocalItemStateManager.java:479)
    at org.apache.jackrabbit.core.state.NodeState.notifyNodeAdded(NodeState.java:788)
    at org.apache.jackrabbit.core.state.NodeState.addChildNodeEntry(NodeState.java:377)
    at org.apache.jackrabbit.core.BatchedItemOperations.copyNodeState(BatchedItemOperations.java:1674)
    at org.apache.jackrabbit.core.BatchedItemOperations.copy(BatchedItemOperations.java:311)
    at org.apache.jackrabbit.core.WorkspaceImpl.internalCopy(WorkspaceImpl.java:298)
    at org.apache.jackrabbit.core.WorkspaceImpl.clone(WorkspaceImpl.java:405)
    at org.apache.jackrabbit.rmi.server.ServerWorkspace.clone(ServerWorkspace.java:102)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
    at sun.rmi.transport.Transport$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.rmi.transport.Transport.serviceCall(Unknown Source)
    at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

I have simply created two workspaces, one called 'clonetest' and the other 'clonedestination'. I then created a node called 'test-node' and cloned this successfully from clonetest to clonedestination. I then created a node in clonetest which is a child of 'test-node', called 'child'. When trying to then clone test-node from clonetest to clonedestination again, I get the above exception.

    public void testWorkspaceClone() throws Exception {
        try {
            ClientRepositoryFactory factory = new ClientRepositoryFactory();
            Repository repository = factory.getRepository(LOCALHOST_REPOSITORY);
            Session shanoSession = repository.login(
                    new SimpleCredentials("shane", "preater".toCharArray()),
                    "clonedestination");
            String path = "/test-node";
            shanoSession.getWorkspace().clone("clonetest", path, path, true);
            shanoSession.save();
            shanoSession.logout();
        } catch (Exception e) {
            e.printStackTrace();
            throw e;
        }
    }

I have seen that an error very similar to this was raised as JCR-452 and this was fixed for 1.1. Does this mean I am doing something wrong, or should I re-open the JIRA issue?

Kind regards,
Shane.
Re: Pulling binary data from a property
Hi Stefan,

Thanks for the help. I will put further questions to the user list, sorry about that. I am using the RMI implementation to acquire a repository instance, so this may be my problem. The repository is acquired through JNDI, and then I simply acquire the correct node using session.getRootNode() followed by node.getNode("myNode"). Hopefully this will help narrow the problem down.

Shane.

On 11/10/06, Stefan Guggisberg <[EMAIL PROTECTED]> wrote:

hi shane,

On 10/11/06, Shane Preater <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have a property on a node called 'blobData' which has been loaded
> using the following snippet:
>
>     ValueFactory factory = session.getValueFactory();
>     Value value = factory.createValue(new ByteArrayInputStream(data));
>     node.setProperty(propertyName, value);
>
> Then obviously further on a call to session.save() is used to persist this.
>
> I am now trying to get this binary information back from the property using:
>
>     InputStream inputStream = node.getProperty(property).getStream();
>     int readInt = 0;
>     while ((readInt = inputStream.read()) >= 0) {
>         outputStream.write(readInt);
>     }
>     return outputStream.toByteArray();
>
> However this always returns an empty byte array, as the first call to
> inputStream.read() returns -1, indicating the end of the stream.
>
> Could someone point me in the direction of my error?

your code looks fine so far. if you are directly accessing a local
jackrabbit instance i guess the error must be in the part of the code
that you didn't provide. if you're accessing a remote jackrabbit
instance through RMI there could be an issue wrt stream handling in the
RMI implementation. in any case it would be good if you could provide a
complete code sample.

btw: the users list would be more appropriate for such questions.

cheers
stefan
Pulling binary data from a property
Hi,

I have a property on a node called 'blobData' which has been loaded using the following snippet:

    ValueFactory factory = session.getValueFactory();
    Value value = factory.createValue(new ByteArrayInputStream(data));
    node.setProperty(propertyName, value);

Then obviously further on a call to session.save() is used to persist this.

I am now trying to get this binary information back from the property using:

    InputStream inputStream = node.getProperty(property).getStream();
    int readInt = 0;
    while ((readInt = inputStream.read()) >= 0) {
        outputStream.write(readInt);
    }
    return outputStream.toByteArray();

However this always returns an empty byte array, as the first call to inputStream.read() returns -1, indicating the end of the stream.

Could someone point me in the direction of my error?

Thanks,
Shane.
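As a sanity check, the read loop above is correct in isolation; a minimal self-contained sketch (plain java.io streams standing in for the JCR property stream, names here are illustrative) round-trips the bytes as expected, which points the suspicion at the RMI layer or at a stream that was already consumed before the loop ran:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamCopyDemo {
    // Same byte-by-byte loop as in the mail, extracted as a helper.
    static byte[] copy(InputStream in) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int b;
        try {
            while ((b = in.read()) >= 0) {
                out.write(b);
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "blobData".getBytes("UTF-8");
        InputStream in = new ByteArrayInputStream(data);

        System.out.println(copy(in).length); // 8: the full payload
        System.out.println(in.read());       // -1: stream now exhausted
        // Reading the SAME stream again yields nothing -- one possible
        // cause of an "empty" result if the stream was consumed earlier.
        System.out.println(copy(in).length); // 0
    }
}
```

If the local round trip works but the RMI-backed stream does not, the complete sample Stefan asks for is what will narrow it down.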
Re: xpath aggregation question
Hi,

Thanks for the swift response. I'll have to work out a way around finding the parent node, but hey, such is life. I actually really like the brand idea you have proposed, and I think I will follow your advice and set it as a reference.

Cheers,
Shane.

On 04/10/06, Jukka Zitting <[EMAIL PROTECTED]> wrote:

Hi,

On 10/4/06, Shane Preater <[EMAIL PROTECTED]> wrote:
> I am trying to return the distinct brand property on a series of nodes
> and have no idea how to go about it. Any help would be fantastic.
>
> For example, /categories//products/* will return all my products.
> All the nodes returned have a brand property, so
> /categories//products/*/@brand will return all the brands.
>
> But how do I make the list distinct? Or will I need to use an SQL query
> instead?

Again, I'm sorry to say that the query features in JSR 170 won't help
you there. There is no support for aggregate results, joins, or other
advanced query features, so you'll essentially need to work around the
limitation by post-processing the query results.

But again there is an alternative where you set up separate
referenceable brand nodes like /categories/brands/* and turn the @brand
property of a product node into a reference. Then you can list all your
brands by listing the children of /categories/brands, and you will also
have a very efficient way to retrieve all the products of a given brand.

BR,
Jukka Zitting

--
Yukatan - http://yukatan.fi/ - [EMAIL PROTECTED]
Software craftsmanship, JCR consulting, and Java development
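The post-processing Jukka describes amounts to collecting the brand values into a set; a minimal sketch with plain Strings standing in for the values read from the query's RowIterator (the JCR query itself is omitted, and the brand names are made up):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DistinctBrands {
    // Collect distinct values while preserving first-seen order,
    // as one would while iterating the query result rows.
    static Set<String> distinct(List<String> brandValues) {
        return new LinkedHashSet<String>(brandValues);
    }

    public static void main(String[] args) {
        List<String> fromRows = Arrays.asList("Acme", "Globex", "Acme", "Initech");
        System.out.println(distinct(fromRows)); // [Acme, Globex, Initech]
    }
}
```

A LinkedHashSet keeps the document order the query returned; a plain HashSet would do if order is irrelevant, and a TreeSet would give the brands sorted.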
xpath aggregation question
Hi,

I am trying to return the distinct brand property on a series of nodes and have no idea how to go about it. Any help would be fantastic.

For example, /categories//products/* will return all my products. All the nodes returned have a brand property, so /categories//products/*/@brand will return all the brands.

But how do I make the list distinct? Or will I need to use an SQL query instead?

Many thanks,
Shane.
XPath question 1
Hi,

I am looking to return the following nodes:

    /jcr:root/people/users/hashing-bit/*

(the hashing bit is the hashing we are using to ensure node depth and therefore performance). I need to return the nodes above in a search where they have an additional child node called 'author', essentially all the users with the author role. I have tried the following:

    /jcr:root/people/users/hashing-bit/*/author/parent::*
    /jcr:root/people/users/hashing-bit/*[element(author, nt:unstructured)]
    /jcr:root/people/users/hashing-bit/*[node-name(./*) = 'author']

All to no avail. I am not sure if this is a specific Jackrabbit question or a more general XPath one; if it's the latter, I apologise.

Many thanks,
Shane Preater
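For readers wondering about the "hashing bit" mentioned above: the usual technique is to bucket user names into a fixed number of intermediate nodes so that no single node accumulates thousands of direct children. A minimal illustrative sketch (the bucket count, path layout, and class name are assumptions for illustration, not Jackrabbit API):

```java
public class UserBucket {
    // Map a user name deterministically to one of a fixed number of
    // intermediate buckets, keeping /people/users shallow and fast.
    static String bucketPath(String userName, int buckets) {
        int bucket = Math.abs(userName.hashCode() % buckets);
        return "/people/users/" + Integer.toHexString(bucket) + "/" + userName;
    }

    public static void main(String[] args) {
        // The same name always hashes to the same bucket, so lookups
        // can recompute the path instead of searching for it.
        System.out.println(bucketPath("shane", 256));
    }
}
```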
Re: Getting the path from a NodeId
Ahh, it's even easier. Because I am in the AccessManager I do not know about the workspace, but I am supplied with a HierarchyManager, which gives me a function called getPath that takes an ItemId. So, sorted. Thanks for your help though. I just hope this helps some others who are trying to set up more comprehensive authorization than the default.

Shane.

On 01/09/06, Nicolas <[EMAIL PROTECTED]> wrote:

Hi,

This has been discussed already. If you know the workspace of the NodeId then you can use session.getNodeById.

Cheers
Nico

my blog! http://www.deviant-abstraction.net !!

On 9/1/06, Shane Preater <[EMAIL PROTECTED]> wrote:
> Hi,
> I am tasked with adding JAAS support to my client's Jackrabbit
> repository so that they can define who can read, write, and remove
> nodes in a repository depending on Principals.
>
> They need to be able to set the permissions for nodes based on an XPath
> string which can then be resolved against the actual path of the node
> requested.
>
> In the AccessManager there is a function called checkPermission(ItemId,
> int) which looks like the right place to add this, but in order for it
> to work I need to resolve the node's path from the NodeId. Is this even
> possible?
>
> Many thanks,
> Shane.
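Once the ItemId has been resolved to a path via the HierarchyManager, the remaining authorization work reduces to matching that path against configured rules. A minimal self-contained sketch of longest-prefix matching (the rule format, class name, and permission masks are illustrative assumptions, not Jackrabbit API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PathPermissions {
    static final int READ = 1, WRITE = 2, REMOVE = 4;

    // Maps a path prefix to the permission mask granted under it.
    private final Map<String, Integer> rules = new LinkedHashMap<String, Integer>();

    void grant(String pathPrefix, int mask) {
        rules.put(pathPrefix, mask);
    }

    // The longest matching prefix wins, so a rule on a deeper path
    // overrides a broader rule higher up the tree.
    boolean isGranted(String itemPath, int mask) {
        int bestLen = -1, bestMask = 0;
        for (Map.Entry<String, Integer> e : rules.entrySet()) {
            String prefix = e.getKey();
            boolean matches = itemPath.equals(prefix)
                    || "/".equals(prefix)
                    || itemPath.startsWith(prefix + "/");
            if (matches && prefix.length() > bestLen) {
                bestLen = prefix.length();
                bestMask = e.getValue();
            }
        }
        return (bestMask & mask) == mask;
    }

    public static void main(String[] args) {
        PathPermissions p = new PathPermissions();
        p.grant("/", READ);
        p.grant("/people/users", READ | WRITE);
        System.out.println(p.isGranted("/people/users/shane", WRITE)); // true
        System.out.println(p.isGranted("/categories", WRITE));         // false
    }
}
```

A real AccessManager would evaluate this against the Principals in the JAAS Subject; the sketch only shows the path-matching half of the problem.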
Getting the path from a NodeId
Hi,

I am tasked with adding JAAS support to my client's Jackrabbit repository so that they can define who can read, write, and remove nodes in a repository depending on Principals.

They need to be able to set the permissions for nodes based on an XPath string which can then be resolved against the actual path of the node requested.

In the AccessManager there is a function called checkPermission(ItemId, int) which looks like the right place to add this, but in order for it to work I need to resolve the node's path from the NodeId. Is this even possible?

Many thanks,
Shane.