Hi

Environment: Jackrabbit 1.3.1, Tomcat, MySQL (data plus blobs)
Use case: Jackrabbit serves as a large dumping ground for data that
is continuously fed into it in high volumes.

We have been using Jackrabbit and have noticed a memory leak that
appears to be in its cache management.

We filed a bug in JIRA at
https://issues.apache.org/jira/browse/JCR-1037, and were told that
unless we can reproduce it in a very basic setup, nothing can be done
about it. That is exactly our problem: we are not using basic node
types. We have extended node types, most of the data is versioned,
and a background auditing thread based on the observation manager is
always running. Producing a minimal reproduction is therefore hard.
We tried providing a video demonstrating the problem, but that did
not help.

This remains an outstanding problem for us, and we can reproduce it
consistently with our own code.

So the questions are:
1. Has anyone else experienced these problems? I see outstanding
issues in JIRA regarding cache and memory leaks that have been closed
either as irreproducible or for lack of activity. This bug has never
been acknowledged, so we wonder why we seem to be the only ones
hitting it; our use case does not look any different from what
Jackrabbit was designed to handle.

2. Is there a way to turn caching off entirely? If not, I see merit
in allowing that.
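For reference, this is the kind of workaround we have been
considering as a sketch, assuming Jackrabbit core exposes a
CacheManager with setMaxMemory-style setters and that the repository
handle is a core RepositoryImpl; please correct me if the 1.3.1 API
differs:

```java
import javax.jcr.Repository;
import org.apache.jackrabbit.core.RepositoryImpl;
import org.apache.jackrabbit.core.state.CacheManager;

public class CacheTuning {

    /**
     * Sketch: shrink Jackrabbit's item-state caches to a minimum as a
     * stand-in for turning caching off. Assumes getCacheManager() and
     * the setter names below exist in this release (an assumption on
     * our part, not verified against the 1.3.1 javadoc).
     */
    public static void minimizeCaches(Repository repository) {
        CacheManager cm = ((RepositoryImpl) repository).getCacheManager();
        cm.setMaxMemory(128 * 1024);        // total budget across caches
        cm.setMaxMemoryPerCache(16 * 1024); // ceiling for any one cache
        cm.setMinMemoryPerCache(8 * 1024);  // floor for any one cache
    }
}
```

If something like this is the supported route, a pointer to the right
configuration hook would already help us.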

3. If we were to start working on a fix for this issue, where should
we begin? We are willing to work with the dev team and contribute a
fix back to the community.

I would appreciate it if we could get a conversation started, because
this is a show-stopper bug for us.

Thanks.

Vikas.
