Hello.

After some discussion with Zoran Vasiljevic some time ago, I got down to
writing an AOLserver module similar to ns_cache, but that would work on
two levels:

1/ global caches - store a "deep copy" of Tcl objects (i.e. store
lists as lists, not as strings; likewise for int/long/double)
2/ thread-local cache - stores Tcl_Obj versions of the global
cache entries

I decided to use a single thread-local cache to store all the Tcl_Objs -
this way I can limit the cache memory used per thread.
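Conceptually, the lookup path can be sketched in plain Tcl using
AOLserver's nsv arrays for the global level. The names here (cache_get,
threadCache) are hypothetical illustrations, not the module's real API;
the actual module does this in C, so the thread-local entries keep
their Tcl_Obj internal representations instead of being re-parsed
strings:

```tcl
# Sketch only: thread-local cache of values. In the C module these
# are Tcl_Obj pointers that keep their internal (list/int/double)
# representation, so no re-parsing happens on a hit.
proc cache_get {cache key script} {
    global threadCache
    if {[info exists threadCache($cache,$key)]} {
        ;# thread-local hit: return the cached value directly
        return $threadCache($cache,$key)
    }
    if {![nsv_exists $cache $key]} {
        ;# global miss: run the script and store the result globally
        nsv_set $cache $key [uplevel 1 $script]
    }
    ;# copy the global entry into this thread's cache, then return it
    set threadCache($cache,$key) [nsv_get $cache $key]
    return $threadCache($cache,$key)
}
```

Since AOLserver runs one Tcl interpreter per thread, a plain global
array is effectively thread-local, which is what makes a per-thread
memory cap workable.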

After some 3-4 hours of writing and testing I came up with quite stable
code. It is not intended for release right now, but perhaps I'll
publish it somewhere once I make the code nicer and more compatible with
ns_cache... (I may even add an option to create an ns_cache
command as well).

For plaintext, I did not notice any significant difference - my module
was actually about 5% slower (probably a matter of optimizing the code).

The interesting results start when working on lists and lists with
sublists. I did _cache flush; time {dosomething [_cache eval ]}, where
dosomething was mostly lindex or lrange on the result. The two-level
cache was about 2-10 times faster (depending on the size of the list).
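The benchmark pattern looks roughly like this (the cache name, key, and
the expensive script are placeholder names I've made up for
illustration; Tcl's `time` returns the average microseconds per
iteration):

```tcl
# flush so the first eval actually runs the script and stores the result
_cache flush mycache

# time 100 iterations of fetching the cached list and slicing it;
# with ns_cache the value comes back as a string and must be
# re-parsed into a list on every lrange, while the two-level cache
# hands back a thread-local Tcl_Obj that keeps its list rep
time {lrange [_cache eval mycache mykey {expensive_select}] 0 99} 100
```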

I tried wrapping my database query (which returns a list of over 1000
elements, each element being a list of two sublists - the list is
built so that it can be used easily with Tcl's 'eval' command).
   time {lrange [_cache eval cache key {select}] 800 999} 100
returned about 42000 (microseconds per iteration) for ns_cache and
26600 for my cache command. (After calling it again, so that it does
not redo the 'select', it was 800 and 120 respectively.)

My questions are:
1/ Is anyone interested in this code? Should I develop it only for my
internal purposes or does anybody want a generic module as well?
2/ Is the difference really significant? Especially since it will
consume some more memory...

--
WK

"Data typing is an illusion. Everything is a sequence of bytes."
                                                              -Todd Coram
