Jörn Nettingsmeier wrote:
Thorsten Scherler wrote:
[...]
See the Jboss workflow engine that is getting used in e.g. Daisy and
Alfresco and compare it with ours. Much more powerful, tools
support, ... Our own JCR implementation is just another bullet point in
a big list.
...and yes we need jcr, now! A CMS without JCR support is doomed.
hmmm. i don't buy this. i agree that a cms that tries to maintain its
own repo system is likely doomed, yes.
but it's not like there aren't any alternatives to JCR.
i'd like to discuss this further, learn about the pros and cons of JCR
vs. an RDBMS vs. SVN vs. eXist vs. younameit and then arrive at a
decision and maybe a repo abstraction that makes switching repo engines
really easy (which might mean to leave some of the cleverness in lenya
and not use mechanisms that might be available on one backend but not on
another).
From my POV, the major benefits of JCR vs. the others are:
- You get a Java API and can choose your implementation.
- You get versioning and transactions out of the box.
- You can do XPath queries, which IMO fits our XML-based approach.
- JCR was designed for content management. There are best practices.
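To make the first three points concrete, here is a minimal sketch against the JCR (JSR-170) API. The credentials, node names, and query are made up, and it assumes some implementation (e.g. Jackrabbit) provides the `Repository` instance — it is an illustration, not Lenya code:

```java
import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

public class JcrSketch {
    /** Stores a document, versions it, and finds it again via XPath. */
    public static void demo(Repository repository) throws Exception {
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            // Content is just nodes and properties.
            Node doc = session.getRootNode()
                              .addNode("articles")
                              .addNode("hello");
            doc.setProperty("title", "Hello");
            doc.addMixin("mix:versionable");   // versioning out of the box
            session.save();
            doc.checkin();                     // creates the first version

            // XPath query over the repository content.
            QueryManager qm = session.getWorkspace().getQueryManager();
            Query query = qm.createQuery(
                    "//articles/*[@title = 'Hello']", Query.XPATH);
            NodeIterator hits = query.execute().getNodes();
            while (hits.hasNext()) {
                System.out.println(hits.nextNode().getPath());
            }
        } finally {
            session.logout();
        }
    }
}
```

The point being: all of this is the standard API, so any compliant implementation can be swapped in behind it.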
...but what do you mean by: expose and handle more stuff through
sitemaps and pipelines than it currently does? can you give an example?
[...]
i don't like how we are using a gazillion input modules and passing way
too many params to our xslts. i'd prefer to get at lenya-internal data
in big xml chunks which i then aggregate with the content and match in
xslts.
We had that in 1.2 (page envelope). There are some major performance
drawbacks:
- You need to compute everything up front.
- You have to compute the data for each "parallel" pipeline (e.g., for
navigation elements and XML chunks which are called using XInclude).
This generates a lot of XML, and therefore SAX+XSLT processing overhead.
IMO a transformer is (in most cases) the most appropriate technology to
feed data into the SAX stream:
- It doesn't clutter the sitemaps like input modules.
- It provides data on demand, only when it is requested by the
processed XML.
for instance, i'd like to be able to say "give me this file's revision
history for the last n revs" and "give me a map of user ids vs. their
emails", and then i can implement the "last edited by user
<[EMAIL PROTECTED]>" stuff in my xsl, where it's easy to see and understand.
These cases could be handled by a transformer:
1) <rev:insert-history document="lenya-document:1234-234-asdf"/>
would be expanded to
<rev:history document="...">
<rev:revision ...
<rev:revision ...
...
</rev:history>
which can be evaluated by a subsequent XSLT.
2) <ac:insert-user-data user="{$currentUser}"/>
would be expanded to
<ac:user-data user="john">
<ac:email>[EMAIL PROTECTED]</ac:email>
<ac:name>John</ac:name>
...
</ac:user-data>
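For illustration, such a tag expansion can be sketched as a SAX filter using only the JDK, in the spirit of a Cocoon transformer. The namespace URI and the user lookup map are made up; a real Lenya transformer would ask the access-control layer instead:

```java
import java.io.StringReader;
import java.io.StringWriter;
import java.util.Map;

import javax.xml.parsers.SAXParserFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXSource;
import javax.xml.transform.stream.StreamResult;

import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.AttributesImpl;
import org.xml.sax.helpers.XMLFilterImpl;

public class UserDataFilter extends XMLFilterImpl {

    static final String NS = "http://example.org/ac"; // hypothetical namespace

    // Hypothetical stand-in for the user database.
    static final Map<String, String> EMAILS = Map.of("john", "john@example.org");

    @Override
    public void startElement(String uri, String local, String qName,
            Attributes atts) throws SAXException {
        if (NS.equals(uri) && "insert-user-data".equals(local)) {
            String user = atts.getValue("user");
            AttributesImpl a = new AttributesImpl();
            a.addAttribute("", "user", "user", "CDATA", user);
            // Replace the trigger element with the expanded user data.
            super.startElement(NS, "user-data", "ac:user-data", a);
            super.startElement(NS, "email", "ac:email", new AttributesImpl());
            char[] email = EMAILS.getOrDefault(user, "unknown").toCharArray();
            super.characters(email, 0, email.length);
            super.endElement(NS, "email", "ac:email");
        } else {
            super.startElement(uri, local, qName, atts);
        }
    }

    @Override
    public void endElement(String uri, String local, String qName)
            throws SAXException {
        if (NS.equals(uri) && "insert-user-data".equals(local)) {
            super.endElement(NS, "user-data", "ac:user-data");
        } else {
            super.endElement(uri, local, qName);
        }
    }

    /** Runs the filter over an XML string and serializes the result. */
    public static String expand(String xml) throws Exception {
        SAXParserFactory spf = SAXParserFactory.newInstance();
        spf.setNamespaceAware(true);
        UserDataFilter filter = new UserDataFilter();
        filter.setParent(spf.newSAXParser().getXMLReader());
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer().transform(
                new SAXSource(filter, new InputSource(new StringReader(xml))),
                new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(expand("<page xmlns:ac=\"" + NS
                + "\"><ac:insert-user-data user=\"john\"/></page>"));
    }
}
```

Because the expansion happens in the SAX stream, the data is only produced where a trigger element actually occurs — which is exactly the on-demand behaviour described above.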
[...]
if we somehow exported an xml view of our repository, i'm sure many
users could think of very clever things to do with their content and
metadata.
In principle I agree, though I'd rather provide a well-documented and
extensible set of "custom Lenya tags" (see above). AFAIK most CMSs
provide such a mechanism, and users are used to it. A big plus of Lenya
is that, thanks to Cocoon, we have XML processing pipelines which allow
very flexible and powerful tag expansion scenarios.
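As a sketch of how such a tag-expansion step could sit in a Cocoon pipeline — the transformer type name "lenya-tags" and the file paths are made up for illustration:

```xml
<map:match pattern="*.html">
  <map:generate src="content/{1}.xml"/>
  <!-- hypothetical transformer expanding rev:/ac: tags in the SAX stream -->
  <map:transform type="lenya-tags"/>
  <map:transform src="xslt/page2html.xsl"/>
  <map:serialize type="html"/>
</map:match>
```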
[...]
(Regarding the workflow engine I have no strong opinion at the moment,
only that it would be nice to have workflow-driven usecases instead of
the other way round as it is now.)
-- Andreas
--
Andreas Hartmann, CTO
BeCompany GmbH
http://www.becompany.ch
Tel.: +41 (0) 43 818 57 01