Miro,

I thought that storing my jackrabbit data inside the database would allow 
multiple jackrabbit instances on different machines to access the same backing 
store (with the db as the authoritative source).

I understand that write operations on one machine will not update the cache on 
the other machines, but the write operations should update the blob in the db, 
so that the other machines can pick up the content from the db (once their 
caches expire).

Can you let me know if my beliefs above are wrong? In the meantime, I will do 
some testing and investigate jcr-rmi, etc...

Thanks.
Phillip


----- Original Message -----
From: "Miro Walker" <[EMAIL PROTECTED]>
To: [email protected]
Sent: Friday, April 20, 2007 11:05:15 AM (GMT-0500) America/New_York
Subject: Re: SimpleDBPersistenceManager file cache?

Hi Phillip,

Why would putting blobs in the database have a bearing on which
machines can access them? Because of the way that Jackrabbit's caching
works, you can't ordinarily access the same backing store (database or
other) from multiple jackrabbit instances on different machines (as
write operations on one machine will not be reflected in the cache on
the other machine(s)). If you want to do this, I believe you might
need to:

* use jcr-rmi or webdav or some other custom remoting protocol to
allow multiple client machines to access a single jackrabbit server
instance; or
* possibly look into clustering multiple jackrabbit servers using the
new capabilities of JR 1.3.
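For anyone trying the latter, a sketch of what the cluster element in repository.xml might look like under JR 1.3 (the node id, sync delay, and journal connection values below are placeholders; each node needs a unique id but shares the same journal database):

```xml
<!-- Sketch: cluster nodes share a journal in the common database
     so that changes on one node are replayed on the others.
     driver/url values are placeholders. -->
<Cluster id="node1" syncDelay="2000">
  <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
    <param name="revision" value="${rep.home}/revision.log"/>
    <param name="driver" value="org.postgresql.Driver"/>
    <param name="url" value="jdbc:postgresql://dbhost/jackrabbit"/>
    <param name="schema" value="postgresql"/>
    <param name="schemaObjectPrefix" value="journal_"/>
  </Journal>
</Cluster>
```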

The latter is pretty new technology, so comes with the obvious warnings :-).

Note also that blobs and workspace data are not the only pieces of
transactional data that can be stored on the filesystem. You may also
want to look at the DBFileSystem stuff to allow you to store
workspace.xml files in there too, otherwise newly created workspaces
will only exist on the local filesystem.
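A sketch of what that might look like, using the DbFileSystem class from jackrabbit-core in place of the default LocalFileSystem (connection values are placeholders):

```xml
<!-- Sketch: keep the repository filesystem (including workspace.xml
     files) in the database as well. Values are placeholders. -->
<FileSystem class="org.apache.jackrabbit.core.fs.db.DbFileSystem">
  <param name="driver" value="org.postgresql.Driver"/>
  <param name="url" value="jdbc:postgresql://dbhost/jackrabbit"/>
  <param name="user" value="jcr"/>
  <param name="password" value="secret"/>
  <param name="schema" value="postgresql"/>
  <param name="schemaObjectPrefix" value="fs_"/>
</FileSystem>
```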

Cheers,

Miro
