On Wed, Feb 17, 2010 at 16:33, Jukka Zitting <jukka.zitt...@gmail.com> wrote:
> Hi,
>
> In addition to the search index (which deserves its own thread), we
> have several different mechanisms for persisting repository
> information:
>
>    persistence managers for node and property states
>    data store for binary properties
>    journal for cluster synchronization records
>    file system abstraction for various minor things like:
>        custom_nodetypes.xml for registered node types
>        ns_reg.properties for registered namespaces
>        rootUUID file for the root node UUID (essentially a constant)
>        workspace.xml for workspace configuration
>        locks for open-scoped locks
>
> Most information that we store in the repository is duplicated in at
> least two of these locations. Reducing the number of different storage
> formats and mechanisms we use would make a lot of things easier.
>
> Ideally we'd have a single clustered persistence layer that we could
> use for all of these things (and the search index).
>
> How can we achieve something like that?

I think a good idea (and I guess this is what you have in mind) would
be to have a lower-level persistence API that is journal/cluster-aware
and handles all the basic node and property storage, while JCR
features such as type constraints, versioning etc. are only applied at
a higher level. This would make it easier to use the lower-level
persistence for things like the above without necessarily breaking
JCR contracts. An import/export at this level could also be used for
fast, full-coverage backups and dumps.
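
To make this a bit more concrete, here is a rough sketch of what such
a lower-level, journal-aware store could look like. All names below
(ItemStore, ChangeSet, Revision, StoreException) are hypothetical and
only meant to illustrate the idea, they don't exist in the current
code base:

    import java.io.InputStream;
    import java.util.Map;

    // Minimal item storage, unaware of JCR semantics such as node
    // types, versioning or locking; those would live in a layer above.
    interface ItemStore {

        // read the raw record (serialized node or property state)
        // stored under the given item id
        InputStream read(String itemId) throws StoreException;

        // atomically persist a set of changes and return the new
        // revision; in a cluster the change set would also be appended
        // to the journal so that other instances can replay it
        Revision write(ChangeSet changes) throws StoreException;

        // replay all journal entries after the given revision, usable
        // for cluster sync as well as incremental backups/dumps
        Iterable<ChangeSet> journalSince(Revision revision)
                throws StoreException;
    }

    // a group of item-level writes that must be persisted atomically
    interface ChangeSet {
        Map<String, byte[]> modifiedItems();
        Iterable<String> removedItems();
    }

    // opaque marker for a position in the journal
    interface Revision extends Comparable<Revision> {
    }

    class StoreException extends Exception {
        StoreException(String message) {
            super(message);
        }
    }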

These parts could then be selectively made accessible via the JCR API,
but e.g. read-only (for those where JCR defines a separate API, e.g.
the namespace registry or node types).
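
As an illustration, assuming (hypothetically) that registered
namespaces were exposed as read-only content somewhere below
/jcr:system, a client could browse them with the plain JCR API; the
path and node structure below are made up for the example:

    import javax.jcr.Node;
    import javax.jcr.NodeIterator;
    import javax.jcr.RepositoryException;
    import javax.jcr.Session;

    class NamespaceDump {
        static void dump(Session session) throws RepositoryException {
            // hypothetical location of the exposed namespace data
            Node namespaces = session.getNode("/jcr:system/namespaces");
            for (NodeIterator it = namespaces.getNodes(); it.hasNext();) {
                Node ns = it.nextNode();
                // assume each child node maps a prefix to its URI
                System.out.println(ns.getName() + " -> "
                        + ns.getProperty("uri").getString());
            }
            // writes would be rejected on this subtree; modifications
            // keep going through the NamespaceRegistry API that JCR
            // already defines
        }
    }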

The main advantage, however, would be more flexibility in writing the
persistence layer and optimizing it for performance. Currently many
things are entangled across several layers (e.g. the child node entry
list of node states, see JCR-642).

Regards,
Alex

-- 
Alexander Klimetschek
alexander.klimetsc...@day.com
