Raymond Hettinger <[email protected]> added the comment:
For comparison, here is a recipe that I was originally going to include in the
FAQ entry but later decided against.
It only has an advantage over @lru_cache when the instances are so large that
we can't wait for them to age out of the cache. It shouldn't be used if new,
equivalent instances are continually being created; otherwise, the hit rate
would fall. The class needs to be weak-referenceable, so '__weakref__' needs
to be listed as a field when __slots__ is defined. Also, @weak_lru is slower
than @lru_cache.
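For example, a minimal sketch (the Point class and its fields are hypothetical;
only the '__weakref__' entry matters here):

    class Point:
        # Listing '__weakref__' in __slots__ keeps instances weak-referenceable
        __slots__ = ('x', 'y', '__weakref__')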
Compared to @cached_method in the current PR, @weak_lru creates a single
unified cache rather than many separate caches. This gives lower space
overhead, allows a collective maxsize to be specified, and gives central
control over cache statistics and clearing. If the instances support hashing
and equality tests, the @weak_lru recipe increases the hit rate across
instances that are equivalent but not identical.
That said, @cached_method is much faster than @weak_lru because it doesn't need
to create a new ref() on every call and it doesn't need a pure Python wrapper.
-----------------------------------------------------
import functools
import weakref

def weak_lru(maxsize=128, typed=False):
    'LRU Cache decorator that keeps a weak reference to "self"'
    def decorator(func):
        # Cache on a weak reference so that cache entries don't keep
        # instances alive.  While the referents are alive, ref objects
        # hash and compare equal the same way the referents do.
        @functools.lru_cache(maxsize, typed)
        def _func(_self, /, *args, **kwargs):
            return func(_self(), *args, **kwargs)    # dereference the weakref
        @functools.wraps(func)
        def wrapper(self, /, *args, **kwargs):
            return _func(weakref.ref(self), *args, **kwargs)
        return wrapper
    return decorator
----------
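A sketch of how the recipe might be used (the Temperature class below is
hypothetical, not part of the recipe). Because weak references hash and
compare like their referents, equivalent instances share cache entries:
-----------------------------------------------------
class Temperature:
    __slots__ = ('celsius', '__weakref__')   # '__weakref__' permits weak refs
    def __init__(self, celsius):
        self.celsius = celsius
    def __eq__(self, other):
        return self.celsius == other.celsius
    def __hash__(self):
        return hash(self.celsius)
    @weak_lru(maxsize=64)
    def to_fahrenheit(self):
        return self.celsius * 9 / 5 + 32

t1 = Temperature(100.0)
t2 = Temperature(100.0)      # equivalent but not identical
t1.to_fahrenheit()           # cache miss: computes 212.0 and stores it
t2.to_fahrenheit()           # cache hit across equivalent instances
----------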
_______________________________________
Python tracker <[email protected]>
<https://bugs.python.org/issue45588>
_______________________________________