On Thu, Aug 3, 2017 at 9:36 AM, Ian Kelly <ian.g.ke...@gmail.com> wrote:
> On Thu, Aug 3, 2017 at 8:35 AM, Paul Moore <p.f.mo...@gmail.com> wrote:
>> On Tuesday, 1 August 2017 15:54:42 UTC+1, t...@tomforb.es wrote:
>>> > _sentinel = object()
>>> > _val = _sentinel
>>> >
>>> > def val():
>>> >     if _val is _sentinel:
>>> >         # Calculate _val
>>> >     return _val
>>> >
>>> > seems entirely sufficient for this case. Write a custom decorator if
>>> > you use the idiom often enough to make it worth the effort.
>>>
>>> I did some timings with this as part of my timings above and found it
>>> to be significantly slower than lru_cache with the C extension. I had
>>> to add `nonlocal` to get `_val` to resolve, which I think kills
>>> performance a bit.
>>>
>>> I agree with the premise, though; it might be worth exploring.
>>
>> It's worth pointing out that there's nothing *wrong* with using
>> lru_cache with maxsize=None. You're going to find it hard to get a
>> pure-Python equivalent that's faster (after all, even maintaining a
>> single variable still involves a dict lookup, which is all the cache
>> does when LRU functionality is disabled).
>
> The single variable is only a dict lookup if it's a global. Locals and
> closures are faster.
>
> import functools
>
> def simple_cache(function):
>     sentinel = object()
>     cached = sentinel
>
>     @functools.wraps(function)
>     def wrapper(*args, **kwargs):
>         nonlocal cached
>         if args or kwargs:
>             return function(*args, **kwargs)  # No caching with args
>         if cached is sentinel:
>             cached = function()
>         return cached
>     return wrapper
>
> *Zero* dict lookups at call time. If that's not (marginally) faster
> than lru_cache with maxsize=None, I'll eat my socks.
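[Editor's sketch: the sentinel snippet quoted above is not runnable as written (the `if` body is only a comment, and `_val` needs a `global` declaration to be rebound). A minimal completed version might look like the following; the `6 * 7` computation is a made-up placeholder for whatever "# Calculate _val" stands for:]

```python
_sentinel = object()  # unique marker: "not computed yet"
_val = _sentinel

def val():
    global _val  # module-level, so rebinding it is a global (dict) operation
    if _val is _sentinel:
        _val = 6 * 7  # placeholder for the expensive computation
    return _val
```

The first call computes and stores the value; every later call just returns it.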
Reading above, I now realize you were referring to the C-extension version of lru_cache. Yes, I'm sure that's faster. I still maintain, however, that this has to be faster than the pure-Python version.
-- 
https://mail.python.org/mailman/listinfo/python-list
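[Editor's sketch: the speed claim being argued here can be checked directly. Below is a rough head-to-head timing of Ian's closure cache (reproduced from the thread) against `functools.lru_cache(maxsize=None)`; the decorated function names and the `sum(range(100))` workload are invented for illustration, and results will vary depending on whether the C `_functools` accelerator is available:]

```python
import functools
import timeit

def simple_cache(function):
    """Closure-based cache from the thread: no dict lookup on the hot path."""
    sentinel = object()
    cached = sentinel

    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        nonlocal cached
        if args or kwargs:
            return function(*args, **kwargs)  # no caching with args
        if cached is sentinel:
            cached = function()
        return cached
    return wrapper

@simple_cache
def closure_version():
    return sum(range(100))

@functools.lru_cache(maxsize=None)
def lru_version():
    return sum(range(100))

# Both should agree on the cached value after the first call.
assert closure_version() == lru_version() == 4950

for name, fn in [("simple_cache", closure_version),
                 ("lru_cache(maxsize=None)", lru_version)]:
    elapsed = timeit.timeit(fn, number=100_000)
    print(f"{name}: {elapsed:.4f}s for 100k cached calls")
```

On interpreters where `lru_cache` falls back to its pure-Python implementation, the closure version avoids the hash-and-lookup step entirely, which is the basis of the "zero dict lookups" argument above.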