[issue42711] lru_cache and NotImplemented

2020-12-21 Thread Kaleb Barrett


New submission from Kaleb Barrett:

Having to return `NotImplemented` (which counts as a successful function call) 
from binary dunder methods decorated with `functools.lru_cache` or 
`functools.cache` has the potential to fill the LRU cache with calls that will 
ultimately result in errors (read "garbage").
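
For concreteness, here is a rough sketch of the pattern (the `Vector` class is 
just a made-up example): every call with an unsupported operand type still 
occupies a cache slot, because `NotImplemented` is returned rather than raised 
and therefore counts as a normal result.

from functools import lru_cache

class Vector:
    def __init__(self, x):
        self.x = x

    @lru_cache
    def __add__(self, other):
        if not isinstance(other, Vector):
            return NotImplemented        # cached like any other result
        return Vector(self.x + other.x)

# Vector(1) + 'spam' raises TypeError, but the NotImplemented entry
# for that argument pair stays in the cache.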

There are a few ways to avoid this IMO:

1. Change `functools.lru_cache` to not cache calls that result in `NotImplemented`.
2. Allow a user to easily extend `functools.lru_cache` (the implementation is 
not extensible right now) to do the previously mentioned operation.
3. Add the ability to *throw* `NotImplemented` in binary dunder methods so the 
function call does not complete. This would work exactly like returning 
`NotImplemented` and be handled by the runtime.

And my current solution...

4. Copy-paste `functools.lru_cache` and add in changes for solution 1 :)

--
components: Library (Lib)
messages: 383573
nosy: ktbarrett
priority: normal
severity: normal
status: open
title: lru_cache and NotImplemented
type: performance
versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9

[issue42711] lru_cache and NotImplemented

2020-12-22 Thread Raymond Hettinger


Raymond Hettinger added the comment:

> has the potential to fill the LRU cache with calls that
> will ultimately result in errors

How would a call result in an error?  The worst that can happen is a cache miss 
and the underlying function gets called.


> Change `functools.lru_cache` to not cache calls that
> result in `NotImplemented`.

That would be a backwards incompatible change.


> Allow a user to easily extend `functools.lru_cache`

The implementation is intentionally closed off to support thread safety, a C 
implementation, ease of use, and performance.

We've provided OrderedDict to make it relatively easy to roll your own cache 
variants.  Essentially the only algorithmically interesting part of the 
lru_cache is the linked list. OrderedDict has that logic already built in (the 
move_to_end() method does the equivalent of the linked list update).
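
For illustration, a rough sketch along those lines (the helper name is made up, 
it is not thread-safe, and it only handles positional hashable arguments) of an 
OrderedDict-based cache that skips NotImplemented results:

from collections import OrderedDict
from functools import wraps

def lru_cache_skipping_notimplemented(maxsize=128):
    def decorator(func):
        cache = OrderedDict()

        @wraps(func)
        def wrapper(*args):
            if args in cache:
                cache.move_to_end(args)        # mark as most recently used
                return cache[args]
            result = func(*args)
            if result is not NotImplemented:   # don't cache "garbage" results
                cache[args] = result
                if len(cache) > maxsize:
                    cache.popitem(last=False)  # evict the least recently used entry
            return result

        return wrapper
    return decorator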


> Add the ability to *throw* `NotImplemented` in binary dunder
> methods so the function call does not complete. This would 
> work exactly like returning `NotImplemented` and be handled 
> by the runtime.

Deep changes to the language would require a PEP.

> And my current solution...
> 4. Copy-paste `functools.lru_cache` 

The usual technique is to cache only what you want cached.  

Consider factoring out the expensive logic rather than caching the 
NotImplemented logic:

def __add__(self, other):
    if not isinstance(other, type(self)):
        return NotImplemented
    return self._expensive_method(other)

@lru_cache
def _expensive_method(self, other):
    ...

--
assignee:  -> rhettinger
nosy: +rhettinger
priority: normal -> low
type: performance -> behavior
versions:  -Python 3.6, Python 3.7, Python 3.8, Python 3.9

[issue42711] lru_cache and NotImplemented

2020-12-23 Thread Raymond Hettinger


Raymond Hettinger added the comment:

Thanks for the suggestion, but I think it is better to use the tool as designed 
without any special cases.

--
resolution:  -> rejected
stage:  -> resolved
status: open -> closed
