> -----Original message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On behalf of Kees Jongenburger
> Sent: Thursday, October 21, 2004 19:47
> To: [EMAIL PROTECTED]
> Subject: Re: Pluggable cache-implementation.
>
>
> > What makes me think this way is the fact that for instance data
> > trees (like for a) are not cached well by the multilevel cache.
> > You could write a cache that checks if a query is for some part
> > of the cloud, and then process it in a special way. If not, it
> > passes it up (to the multilevel cache...)
> >
> > I would like to know what people think about it..
> I have no clue what you are talking about. Can you give a
> clearer example?
>
Ok, that was maybe not a very clear story. Let me elaborate.
The problem:
At VARA we now have two websites that are built with MMBase, but have
an additional application for forum-like functionality.
One of them is the debatplaats, which uses the myvietnam forum; the
other is the kassa-online site. The latter was originally built with
the forum as part of the cloud, but the forum was lifted out of the
cloud for performance reasons. It appears that the multilevel cache
does not cut it for tree-like data structures, where cache
invalidation should be conservative. Data that is structured in this
way and, more importantly, is mutated at high frequency cannot be
cached well by the multilevel cache.
Different applications benefit from different types of caching. As the
application of MMBase shifts from website engine to webapp engine, we
need a system that supports different kinds of specialized caching in
a pluggable way.
That's why I thought of a caching chain, or, better word: caching
pipeline. This pipeline would be called instead of the multilevel
cache, and would itself contain the multilevel cache.
The idea is that other caches could be inserted before the multilevel
cache. If you want to retrieve something from the cache, you call
cachePipeline.get(key), and the pipeline passes the request to the
first cache in the stack.
Any cache would have to be able to decide whether the request (cache
key) fits its function.
If so: it tries to deliver the data. If it has no match for the key,
it returns null, and the query is run.
If not: it passes the request up the pipeline.
Putting data into the cache works the same way
(cachePipeline.put(key, value)): the data again goes up through the
different caches, which one by one decide whether they should store
it.
Every cache decides whether it wants to handle a request based on its
configuration. This could be based on which builders are involved, or
whatever.
The multilevel cache should be the last cache, handling the caching if
no other cache will.
interface PipelineCache {
    void init();                               // read configuration
    void setParentCache(PipelineCache parent); // to pass the request on
    Object get(Object key); // either handles the request or passes it
                            // on to the parent
}
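Just to make the chaining concrete, here is a minimal, hypothetical
sketch of how such a pipeline could behave. Everything here is
invented for illustration: the interface gains a put method, the
ForumCache / FallbackCache classes and the "forum/" key convention are
not part of MMBase.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each cache decides whether a key fits its
// function; if not, it delegates to its parent in the pipeline.
interface PipelineCache {
    void setParentCache(PipelineCache parent);
    Object get(Object key);
    void put(Object key, Object value);
}

// A cache that only handles keys it recognizes (here: String keys
// starting with "forum/"), passing everything else up the pipeline.
class ForumCache implements PipelineCache {
    private final Map<Object, Object> store = new HashMap<>();
    private PipelineCache parent;

    public void setParentCache(PipelineCache parent) {
        this.parent = parent;
    }

    private boolean handles(Object key) {
        return key instanceof String && ((String) key).startsWith("forum/");
    }

    public Object get(Object key) {
        if (handles(key)) {
            return store.get(key); // null means: run the query
        }
        return parent == null ? null : parent.get(key);
    }

    public void put(Object key, Object value) {
        if (handles(key)) {
            store.put(key, value);
        } else if (parent != null) {
            parent.put(key, value);
        }
    }
}

// Stand-in for the multilevel cache: last in the chain, accepts anything.
class FallbackCache implements PipelineCache {
    private final Map<Object, Object> store = new HashMap<>();
    public void setParentCache(PipelineCache parent) { /* last in chain */ }
    public Object get(Object key) { return store.get(key); }
    public void put(Object key, Object value) { store.put(key, value); }
}

public class PipelineDemo {
    public static void main(String[] args) {
        PipelineCache forum = new ForumCache();
        PipelineCache fallback = new FallbackCache();
        forum.setParentCache(fallback);

        forum.put("forum/thread/1", "thread data");  // handled by ForumCache
        forum.put("news/article/7", "article data"); // passed to FallbackCache

        System.out.println(forum.get("forum/thread/1")); // thread data
        System.out.println(forum.get("news/article/7")); // article data
    }
}
```

The point of the sketch is only the delegation pattern: a request
walks up the stack until some cache claims it, with the last cache as
catch-all.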
The order and contents of the pipeline should be configurable, so the
caching mechanism becomes easily extensible.
The configuration will contain both the caches that should be loaded
(and configured) and the order they get in the pipeline stack.
That was the first idea.
The second idea is an implementation of a PipelineCache that would be
useful for forums and similar applications. It could keep a tree of
nodes in memory and add newly committed nodes to the tree instead of
invalidating the data. It would use MMBase events to be informed of
node commits and deletes.
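Roughly, such a tree-keeping cache could look like the following
sketch. The names TreeCache, nodeCommitted and nodeDeleted are
invented here; a real implementation would register these as
reactions to MMBase's node events rather than plain method calls.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the second idea: keep a tree of node numbers
// in memory and update it on commit/delete events instead of throwing
// the whole cached structure away.
class TreeCache {
    // Child lists keyed by parent node number; -1 acts as the root.
    private final Map<Integer, List<Integer>> children = new HashMap<>();

    public List<Integer> getChildren(int parent) {
        return children.getOrDefault(parent, Collections.emptyList());
    }

    // Called on a node commit event: insert the new node under its
    // parent instead of invalidating the tree.
    public void nodeCommitted(int node, int parent) {
        children.computeIfAbsent(parent, k -> new ArrayList<>()).add(node);
    }

    // Called on a node delete event: remove the node and its subtree.
    public void nodeDeleted(int node, int parent) {
        List<Integer> siblings = children.get(parent);
        if (siblings != null) {
            siblings.remove(Integer.valueOf(node));
        }
        // Copy the child list before recursing, since recursion
        // modifies the underlying lists.
        for (int child : new ArrayList<>(getChildren(node))) {
            nodeDeleted(child, node);
        }
        children.remove(node);
    }
}
```

So a new forum reply would be a cheap in-place insert, where the
current multilevel cache would instead invalidate every cached query
touching the involved builders.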
I talked about it with Michiel and Pierre, and they pointed out that
it might be a good idea to first take a good look at the multilevel
cache, because it might be optimized a bit, or maybe it could be
enhanced to use different invalidation rules based on some
configuration.
Still, I think this is an excellent potential extension point of
MMBase. I am sure more situations will (or do) arise where custom
caching is necessary.
Is this clearer?
Ernst