[ https://issues.apache.org/jira/browse/JCR-619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12594524#action_12594524 ]
saasira edited comment on JCR-619 at 5/7/08 12:19 AM:
--------------------------------------------------------------------

I would like to use the distributed caching capabilities of the above-mentioned caching products, which are proven and can handle hundreds of GB of in-memory or disk cache distributed over the network across dozens of computers.

Actually, I want to use the content repository in a distributed environment with some master servers (load balanced to cover the failover scenario) that map the content workspace locations across the network according to rules defined in the repository configuration. The mapping information is replicated across the master servers, and the content is spread across several clients (content on each client node is further replicated to siblings to address failover). In such a complex scenario, I would prefer a well-known, proven caching product to handle the job.

If the Jackrabbit cache is aimed at doing that, my question is: would it not be duplicate effort when there are very efficient, standards-compliant caching libraries already available? Or is it not possible to have pluggable cache implementations for use with Jackrabbit? Or maybe I have misunderstood something.

----

Is it possible to use the caching products on top of Jackrabbit, without bothering about Jackrabbit's internal cache implementation?

I just found out that similar work is going on in Jackrabbit; the discussion may be related to JCR-872.

> CacheManager (Memory Management in Jackrabbit)
> ----------------------------------------------
>
>                 Key: JCR-619
>                 URL: https://issues.apache.org/jira/browse/JCR-619
>             Project: Jackrabbit
>          Issue Type: New Feature
>          Components: jackrabbit-core
>            Reporter: Thomas Mueller
>            Assignee: Stefan Guggisberg
>             Fix For: 1.2.1
>
>         Attachments: cacheManager.txt, cacheManager2.txt, cacheManager5.txt,
> cacheManager6.txt, cacheManager7.txt, jackrabbit-cachemanager-config.patch,
> stack.txt
>
> Jackrabbit can run out of memory because the combined size of the various
> caches is not managed. The biggest problem (for me) is the combined size of
> the o.a.j.core.state.MLRUItemStateCache caches. Each session seems to create
> a few (?) of those caches, and each one is limited to 4 MB by default.
> I have implemented a dynamic (cache-) memory management service that
> distributes a fixed amount of memory dynamically to all those caches.
> Here is the patch

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
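The quoted issue describes a manager that divides a fixed memory budget dynamically among all caches. A minimal sketch of that idea — a floor per cache plus a share of the remainder proportional to recent access counts — is shown below. All class and field names here are hypothetical illustrations, not Jackrabbit's actual CacheManager API.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of proportional cache-memory budgeting (hypothetical names;
 * not the Jackrabbit implementation). A fixed total budget is
 * re-divided among registered caches on each rebalance() call.
 */
public class CacheMemoryManager {

    /** Minimal cache handle: the manager reads an access counter
     *  and pushes a new size limit back to the cache. */
    public static class ManagedCache {
        long accessCount;    // incremented by the cache on each hit
        long maxMemoryBytes; // limit assigned by the manager
        public ManagedCache(long accessCount) { this.accessCount = accessCount; }
        public long getMaxMemoryBytes() { return maxMemoryBytes; }
    }

    private final long totalBudgetBytes;
    private final long minPerCacheBytes;
    private final List<ManagedCache> caches = new ArrayList<>();

    public CacheMemoryManager(long totalBudgetBytes, long minPerCacheBytes) {
        this.totalBudgetBytes = totalBudgetBytes;
        this.minPerCacheBytes = minPerCacheBytes;
    }

    public void register(ManagedCache c) { caches.add(c); }

    /** Give every cache a floor, then split the spare budget in
     *  proportion to each cache's access count. */
    public void rebalance() {
        long floor = minPerCacheBytes * caches.size();
        long spare = Math.max(0, totalBudgetBytes - floor);
        long totalAccesses = 0;
        for (ManagedCache c : caches) totalAccesses += c.accessCount;
        for (ManagedCache c : caches) {
            long share = (totalAccesses == 0)
                    ? spare / caches.size()
                    : spare * c.accessCount / totalAccesses;
            c.maxMemoryBytes = minPerCacheBytes + share;
        }
    }
}
```

For example, with a 16 MB budget, a 1 MB floor, and two caches whose access counts are 3 and 1, the busier cache ends up with roughly three quarters of the spare budget on top of its floor. A real service would also re-run the rebalance periodically or on allocation pressure, which this sketch leaves out.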