Robert, I don't think any of us, including myself, misunderstood that
the limitation is for a large number of child nodes under the SAME
parent. No one said 50K nodes spread across the entire repository was
causing problems, but 50K children under the same parent IS a problem
if it's slow. It's a very significant issue for application developers
actually trying to build something, because everything looks like it's
performing great and then fails miserably when you scale it up. It's
hard to call JCR 'enterprise scale' with such a silly limitation
staring us all right in the face, defying any solution.
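
For what it's worth, until the repository handles flat hierarchies
better, the usual workaround is to shard children into intermediate
bucket nodes so that no single parent accumulates more than a few
hundred direct children. Here's a minimal sketch against the plain JCR
API; the class/method names and the bucket count are my own
illustration, not anything from Jackrabbit or Oak:

    import javax.jcr.Node;
    import javax.jcr.RepositoryException;

    public class BucketedChildren {

        // 256 buckets is an arbitrary assumption; tune for your data.
        private static final int BUCKETS = 256;

        // Routes a new child through an intermediate bucket node derived
        // from a hash of its name, e.g. /content/a3/myNode, so no single
        // parent ever ends up with tens of thousands of direct children.
        public static Node addBucketed(Node parent, String name, String type)
                throws RepositoryException {
            String bucket = String.format("%02x",
                    (name.hashCode() & 0x7fffffff) % BUCKETS);
            Node bucketNode = parent.hasNode(bucket)
                    ? parent.getNode(bucket)
                    : parent.addNode(bucket, "nt:unstructured");
            return bucketNode.addNode(name, type);
        }
    }

Lookups go through the same hash, and the caller still calls
session.save() as usual. Having to do this by hand for every flat
hierarchy is exactly my point.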

Best regards,
Clay Ferguson
[email protected]


On Sat, Nov 14, 2015 at 2:02 AM, Robert Munteanu <[email protected]> wrote:

> On Nov 14, 2015 2:21 AM, "Clay Ferguson" <[email protected]> wrote:
> >
> > In my opinion this one issue is the single most crippling Achilles
> > heel of the entire JCR. Very likely to drive away many potential
> > users of this API. It's touted as an enterprise-scale API, but yet
> > chokes on just a few tens of thousands of nodes. This, IMO, urgently
> > needs to be addressed. I know it's a technical limitation, and not a
> > design decision, but to me that just means it's an 'unsolved'
> > problem. I'm not complaining or criticizing developers, I'm just
> > saying that as a community we need to solve this. I should be able
> > to have 50 million nodes and have it not be a problem, in an ideal
> > situation. RDBMSs solved these issues years ago with a "never load
> > everything all at once" rule. However, somehow the "it's ok to load
> > all children in memory" mentality caught on in JCR, and we are now
> > stuck with the results.
>
> Note that this usually applies to direct child nodes, i.e. 50k nodes
> with the same parent.
>
> Such a number spread throughout the repository is not an issue.
>
> Robert
>