Hi,

On 5/5/07, Ahmed Mohombe <[EMAIL PROTECTED]> wrote:
> I don't expect you to agree :), but if you would feed a JCR based DMS
> with a few GB of documents I'm convinced you would see what I mean :).
We have production systems with millions of documents and terabytes of data in a JCR content repository, so as you expected I don't agree with you. :-)

Of course, scalability and performance are conditional on how well your content model and use cases match the characteristics that the repository is designed and optimized for. Email storage is different from the scenarios I've seen Jackrabbit used in so far, so at this point I can't guarantee that we won't face performance issues, but as of now I don't see any bottlenecks that we couldn't at least work around.
> > One important aspect in achieving good performance is how you model
> > your content.
> In most cases this is not really an option for the developers, as the
> required "metadata" from documents is already to a high degree
> predefined by the customer.
Well, there's typically some flexibility between the high-level requirements and how you reflect them in your implementation. Otherwise you wouldn't have any need for the DB specialists you mentioned. :-)
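To illustrate that modeling flexibility: the customer-mandated metadata fixes *what* you store, but not *how* you lay it out in the repository. A sketch of a custom node type for an archived email, in Jackrabbit's CND notation (the `app` namespace and all property names here are hypothetical, purely for illustration):

```
// Hypothetical namespace and node type -- not from any real project.
<app = 'http://example.com/app/1.0'>

// An email message with the customer-required metadata as properties
// and attachments as nt:file child nodes.
[app:email] > nt:base
  - app:subject (string) mandatory
  - app:from (string) mandatory
  - app:sentDate (date)
  + app:attachment (nt:file)
```

Whether you hang such nodes off a flat folder, a date-based hierarchy, or a per-mailbox tree is exactly the kind of implementation choice that can make or break performance, even though the metadata itself is fixed.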
> This is not a bash against JCR, but just a few cents from my
> experience. In a lot of projects, JCR got a fair chance, but failed,
> mostly for performance reasons (the developers loved the API). After
> several optimization trials, the cheapest solution was always to get a
> few very expensive DB specialists and have the problem solved the
> "relational way" :).
I'd love to discuss the problems you faced, but perhaps the specifics are better suited for the Jackrabbit mailing lists.

BR,

Jukka Zitting

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
