On Tue, Aug 1, 2017 at 10:50 PM,  <t...@tomforb.es> wrote:
> And you have a simple function:
>
> def test():
>     return object()
>
> I get the following numbers without much variance:
>
> 1. lru_cache(maxsize=None) - 870ns
>
> 2. lru_cache() - 1300ns
>
> 3. no cache - 100ns
>
> So, in the best case, without the C extension lru_cache is 8x as slow as the 
> function itself. I get that it's not a great comparison as few functions are 
> as simple as that, but a 700ns overhead is quite a bit, and if you're putting 
> it around simple functions and expecting them to be faster, that may not be 
> true.
>
> With the C extension I get 50ns for both. So obviously a lot faster. Maybe it 
> doesn't matter...
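
For anyone wanting to reproduce that comparison, something along these
lines should do it - the function names are just illustrative and the
exact figures will vary with machine and Python build:

from functools import lru_cache
from timeit import timeit

def plain():
    return object()

@lru_cache(maxsize=None)
def unbounded():
    return object()

@lru_cache()
def bounded():
    return object()

for fn in (plain, unbounded, bounded):
    # timeit returns total seconds for a million calls,
    # so multiplying by 1000 gives nanoseconds per call
    print(fn.__name__, round(timeit(fn, number=1000000) * 1000), "ns/call")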

There's a drastic semantic difference here, though. Without a cache,
you're constructing a new, unique object every time you call it; with
a cache, you return the same one every time. A better comparison would
be:

_sentinel = object()
def test():
    return _sentinel

Not that it changes your point about timings, but if you're slapping a
cache onto functions to try to make them faster, you want to be sure
you're not breaking their semantics.
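
For instance (a quick sketch - the names are mine, picked for
illustration):

from functools import lru_cache

def fresh():
    return object()

@lru_cache(maxsize=None)
def cached():
    return object()

print(fresh() is fresh())    # False: a brand new object each call
print(cached() is cached())  # True: the cache hands back the same object

_sentinel = object()
def sentinel():
    return _sentinel

print(sentinel() is sentinel())  # True with or without a cache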

ChrisA