Hi Michi,

On 8/26/06, Michael Wechner <[EMAIL PROTECTED]> wrote:
well, it might depend on how much one believes in JCR as a one-size-fits-all
solution.


 I don't believe that JCR is one-size-fits-all either; actually, you'll find
many posts from me on the Jackrabbit ML where I try to explain problems I
faced while trying to use the API/Jackrabbit for situations it doesn't fit
very well [see below]. Unfortunately, that's something I only learned after
running into problems while testing my applications.

Even if JCR is a great thing, it might not fit all purposes and one
might be glad to have an alternative entry point,


 I agree with you. However, I tend to think JCR is a good match for
Graffito. As far as I can see in the sources, most of the API/impl
development effort went into coding a subset of the features Jackrabbit
already provides. I guess that if JCR were chosen as the only API for
interacting with the repository, the development effort could focus on
implementing the portlets for content management, and eventually on adding
features to the underlying JCR implementation.

but I cannot really judge
because I do not know Graffito well enough

 Me neither; I'm just a newbie with Graffito, and with CMSs in general,
trying to understand the internals to see where Graffito is heading, what I
can expect from it, and what I can contribute to it in the near future.

To name a few first-hand problems I've faced:

1. Data aggregation is not taken into account either in the API or in
Jackrabbit. Summarizing properties across a big number of nodes is very
slow; an RDBMS is far more useful for this kind of use case (see the sketch
after this list).

2. Jackrabbit is not optimized for data models that contain big
collections; on the Jackrabbit ML, users who run into problems caused by
this are often advised to review their data model.

3. Scalability: "implicit clustering", as it is called on the Jackrabbit
ML, can easily be achieved with any kind of simple master/slave replication
mechanism, but clustering with session replication and failover seems hard
to achieve. By contrast, with ORM tools like Hibernate it's very easy.

4. IMHO the Jackrabbit design is quite tightly coupled, and it's hard to
tune the internals to fit specific needs.
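
To make point 1 concrete, here's a rough sketch of what summing a property
over many nodes looks like with plain JCR 1.0 (the my:invoice node type and
my:amount property are just made-up names):

import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

public class InvoiceTotals {

    // There is no aggregate query in JCR 1.0, so the only option is to
    // fetch every matching node and add the values up in application
    // code -- one node read per item.
    public static double totalAmount(Session session) throws RepositoryException {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query query = qm.createQuery("//element(*, my:invoice)", Query.XPATH);
        NodeIterator nodes = query.execute().getNodes();

        double total = 0;
        while (nodes.hasNext()) {
            Node invoice = nodes.nextNode();
            total += invoice.getProperty("my:amount").getDouble();
        }
        return total;
    }
}

An RDBMS answers the same question with a single SELECT SUM(amount)
statement executed inside the storage engine, which is why I find it far
more useful for this kind of reporting.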

BTW, I recently read some posts on the Cosmo ML about moving from JCR to a
more classic RDBMS approach due to scalability issues and the learning
curve for JCR newcomers; you'll surely find more info about these kinds of
problems on the Cosmo ML.

From a more general point of view, I think JCR is not a good match for
designs that are heavily object oriented. I think JCR features are better
used when the design treats the content separately from the business logic,
and that a CMS working on top of JCR should be mostly behavior agnostic. I
mean that the business logic would be modelled not as verbs on objects that
map to some persistent storage, but as business processes that modify the
underlying content.

E.g. in a DMS there might be no need for a Document object; instead there
is a "document" data type that defines the properties and constraints of
that kind of content, plus processes that manage the lifecycle of that
content according to specific business needs. Going deeper into
implementation details, I guess it could be implemented as a process that
listens to the content, secures it according to business rules, and fires
events whenever it is invoked from any interface, be it a portal with CMS
portlets or a network interface reached by a user touching the content via
any protocol (WebDAV, NFS, CIFS, FTP, SMTP, LDAP, etc.).
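
Just to sketch what I mean, using plain JCR observation (the /documents
path and the business-rule hook are made up, of course):

import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.observation.Event;
import javax.jcr.observation.EventIterator;
import javax.jcr.observation.EventListener;
import javax.jcr.observation.ObservationManager;

public class DocumentLifecycleProcess implements EventListener {

    private final Session session;

    public DocumentLifecycleProcess(Session session) {
        this.session = session;
    }

    // Register for changes below /documents, no matter which interface
    // (portlet, WebDAV, FTP gateway, ...) wrote to the repository.
    public void start() throws RepositoryException {
        ObservationManager om = session.getWorkspace().getObservationManager();
        om.addEventListener(this,
                Event.NODE_ADDED | Event.PROPERTY_CHANGED,
                "/documents",   // made-up content subtree
                true,           // isDeep: watch the whole subtree
                null, null,     // no UUID or node type filter
                false);         // noLocal: also receive our own changes
    }

    // Apply the business rules to whatever changed and hand off to the
    // next step in the process (approval, indexing, notification, ...).
    public void onEvent(EventIterator events) {
        while (events.hasNext()) {
            Event event = events.nextEvent();
            try {
                applyBusinessRules(event.getPath());
            } catch (RepositoryException e) {
                e.printStackTrace(); // a real process would report this properly
            }
        }
    }

    private void applyBusinessRules(String path) {
        // made up: check constraints, set ACLs, fire a workflow event, ...
    }
}

The nice thing is that the "document" content stays plain repository data;
the behavior lives entirely in processes like this one.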

I'm probably wrong, as I said I consider myself a newbie, but IMHO that
would be a data-centric application that takes full advantage of the API: a
CMS that provides a GUI to manage those objects, split into content plus
processes.

sorry for the long post and thanks for the feedback,
edgar


Just my 2 cents

Michi
