Thanks Ross. It seems the content is present on the filesystem: I can see the 
old documents in the repository/datastore folders. But the link between the 
Jackrabbit metadata (e.g. the path) and the content seems to be broken. Any 
idea why this would happen?

Does Jackrabbit use UUIDs internally to store the metadata and the content 
itself?
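On that second question, for context: as I understand it, Jackrabbit keeps node metadata (node identifiers, paths) in the persistence manager, while binaries go to the datastore under names derived from a SHA-1 hash of their content, which is why the datastore files have opaque names and no extension. A rough sketch of that naming scheme (`datastore_path` and the exact fan-out depth are my own illustration, not the actual implementation):

```python
import hashlib
import os

def datastore_path(content: bytes, root: str = "repository/datastore") -> str:
    """Illustrative only: Jackrabbit's FileDataStore names each binary
    after the SHA-1 hash of its content and fans the files out into
    subdirectories taken from the leading hex digits. The fan-out depth
    used here is an assumption, not copied from the implementation."""
    digest = hashlib.sha1(content).hexdigest()
    return os.path.join(root, digest[0:2], digest[2:4], digest[4:6], digest)

print(datastore_path(b"hello world"))
# repository/datastore/2a/ae/6c/2aae6c35c94fcfb415dbe95f408b9ce91ee846ed
```

This separation is why the binaries can still sit on disk while the metadata-to-content link is broken: the node metadata and the content files live in different stores and are only connected by those identifiers.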

Thanks
Sumit

From: [email protected] [mailto:[email protected]]
Sent: Monday, December 19, 2011 9:20 PM
To: [email protected]
Cc: [email protected]
Subject: Re: Jackrabbit 2.2.5 - loss of data [SEC=UNCLASSIFIED]

This looks suspiciously like a problem I have had before, where somebody writes 
a script to delete files that look like temp files: no file extension, more 
than a month old.  I had one that was deleting classes created at runtime, so 
each morning there was a good chance of getting classloader errors.
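If it helps to check for that failure mode: a sweeper with rules like "no extension, older than 30 days" matches Jackrabbit datastore files exactly, since they are named by content hash, have no extension, and are never rewritten after creation. A small sketch of such a hypothetical sweeper's selection logic (`cleanup_candidates` is an invented name):

```python
import os
import time

def cleanup_candidates(root: str, max_age_days: int = 30) -> list[str]:
    """Flag files with no extension whose mtime is older than the cutoff,
    i.e. the selection rule of the hypothetical temp-file sweeper described
    above. Jackrabbit datastore files (hash names, no extension, untouched
    since creation) fit this profile perfectly."""
    cutoff = time.time() - max_age_days * 86400
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if "." not in name and os.path.getmtime(path) < cutoff:
                hits.append(path)
    return hits
```

It may be worth auditing any cron jobs or tmpwatch-style configurations on the datastore host for anything of this shape.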

Best of luck.



From:        "Shah, Sumit (CGI Federal)" <[email protected]>
To:        "[email protected]" <[email protected]>
Date:        20/12/2011 11:58 AM
Subject:        Jackrabbit 2.2.5 - loss of data
________________________________



Hi All,

I am running into a serious issue: it seems I am unable to retrieve documents 
from Jackrabbit that are more than a month old. I get the following error:

"JCR Action 'Get stream' cannot be performed because the provided path does not 
exist"

I am running Jackrabbit in standalone mode and also in a clustered environment, 
and I am seeing the same issue in both. When does this happen? Is there a 
self-initiated process that cleans up the data within Jackrabbit? What are the 
possible resolutions?
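On the self-initiated-process question: as far as I know, Jackrabbit's datastore garbage collection never runs on its own; an administrator has to invoke it explicitly (a mark phase that records every binary still referenced by a node, then a sweep that deletes the rest). A minimal sketch of the principle, with invented names:

```python
def sweep_unreferenced(datastore_ids, referenced_ids):
    """Mark-and-sweep in miniature: keep every binary identifier that some
    node still references, report the rest as garbage. The function name is
    invented for illustration; Jackrabbit's datastore garbage collector
    applies the same mark/sweep idea, but only when explicitly run."""
    referenced = set(referenced_ids)
    return sorted(i for i in datastore_ids if i not in referenced)

print(sweep_unreferenced(["aa01", "bb02", "cc03"], ["bb02"]))
# ['aa01', 'cc03']
```

So unless someone ran the garbage collector by hand, or an external script deleted files, Jackrabbit itself should not be removing datastore content.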

I would appreciate any help on this.

Thanks
Sumit
