Dieter Maurer wrote:
Florent Guillaume wrote at 2005-7-8 20:36 +0200:
| The RAMCacheManager does a costly pseudo-pickling of the objects it
| stores to compute their size, but that information is only used in
| the statistics screen.
| I replaced it by the following code:
| [...]
That's a fine
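For context, the pseudo-pickling being discussed pays the full cost of pickling each cached value just to learn its byte size for the statistics screen. The idea can be sketched like this (a modern-Python sketch with assumed names, not the actual RAMCacheManager code):

```python
import pickle

class _CountingSink:
    """File-like object that discards the pickle stream but counts bytes."""
    def __init__(self):
        self.count = 0

    def write(self, data):
        self.count += len(data)

def pickled_size(obj):
    # Pickling into a counting sink measures the serialized size without
    # keeping the bytes -- but the full CPU cost of pickling is still paid.
    sink = _CountingSink()
    pickle.Pickler(sink, protocol=pickle.HIGHEST_PROTOCOL).dump(obj)
    return sink.count
```

Skipping this step on every store (or doing it lazily, only when the statistics screen is rendered) is what makes each cache write cheaper.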
Florent Guillaume wrote:
| The RAMCacheManager does a costly pseudo-pickling of the objects it
| stores to compute their size, but that information is only used in the
| statistics screen.
The motivation was actually more subtle: I wanted to prevent
applications from caching things that weren't
On Sun, Jul 10, 2005 at 12:18:03AM -0600, Shane Hathaway wrote:
| Catalog results in particular are an obvious thing to cache, but they
| aren't safe for caching because they link back to the catalog. You'd
| have major thread problems and probably inconsistent results.
Would using thread.local help here?
Sidnei da Silva wrote:
On Sun, Jul 10, 2005 at 12:18:03AM -0600, Shane Hathaway wrote:
| Catalog results in particular are an obvious thing to cache, but they
| aren't safe for caching because they link back to the catalog. You'd
| have major thread problems and probably inconsistent results.
On Sun, Jul 10, 2005 at 09:32:29AM -0600, Shane Hathaway wrote:
| Would using thread.local help here?
|
| I don't think so. You want either a shared cache (like RAMCacheManager)
| or a per-database-connection cache (which would let you cache catalog
| results.) Database connections are not
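For readers unfamiliar with it, `threading.local` gives each thread its own independent attribute namespace, which is why it cannot provide a *shared* cache, though it trivially isolates per-thread state. A generic illustration (not a cache design proposed in this thread):

```python
import threading

_local = threading.local()

def get_thread_cache():
    # Each thread lazily creates, and sees only, its own dict; no locking
    # is needed because no other thread can reach it.
    if not hasattr(_local, "cache"):
        _local.cache = {}
    return _local.cache
```

Two threads calling `get_thread_cache()` receive distinct dicts, so nothing cached by one thread is ever visible to another -- the opposite of what a RAMCacheManager-style shared cache is for.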
Florent Guillaume wrote at 2005-7-8 20:36 +0200:
| The RAMCacheManager does a costly pseudo-pickling of the objects it
| stores to compute their size, but that information is only used in
| the statistics screen.
| I replaced it by the following code:
| try: from cPickle import Pickler,
1. how about not computing size by default?
2. or, how about using the size to enforce a cache threshold based on
   total size? That would
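Suggestion 2 could look roughly like the following minimal FIFO sketch (a hypothetical `SizeBoundedCache`, not Zope code): each entry carries a size, and the oldest entries are evicted once the running total exceeds the threshold.

```python
class SizeBoundedCache:
    """Toy cache that evicts oldest entries once total size exceeds a cap."""
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.total = 0
        self._data = {}    # key -> (value, size)
        self._order = []   # insertion order, for simple FIFO eviction

    def set(self, key, value, size):
        if key in self._data:
            # Replacing an entry: subtract its old size first.
            self.total -= self._data[key][1]
            self._order.remove(key)
        self._data[key] = (value, size)
        self._order.append(key)
        self.total += size
        # Evict oldest entries until we are back under the threshold.
        while self.total > self.max_bytes and self._order:
            oldest = self._order.pop(0)
            _, evicted_size = self._data.pop(oldest)
            self.total -= evicted_size

    def get(self, key, default=None):
        entry = self._data.get(key)
        return entry[0] if entry is not None else default
```

A real implementation would likely use LRU ordering and account for per-entry overhead, but the point stands: the size computed at store time would then be useful for more than the statistics screen.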