INADA Naoki <songofaca...@gmail.com> added the comment:

> Please stop revising every single thing you look at.  The traditional design 
> of LRU caches used doubly linked lists for a reason.  In particular, when 
> there is a high hit rate, the links can be updated without churning the 
> underlying dictionary.

I'm not proposing removing the doubly linked list; OrderedDict uses a
doubly linked list too, and I found no problem in the mostly-hit scenario.
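
To make the trade-off concrete, here is a minimal toy sketch of the
dict + doubly linked list design (my own illustration, not CPython's
code; the names Node, LRU, etc. are made up): a hit only rewires list
pointers and never mutates the underlying dict.

    class Node:
        __slots__ = ("prev", "next", "key", "value")

    class LRU:
        def __init__(self, maxsize):
            self.maxsize = maxsize
            self.map = {}            # key -> Node; touched only on miss/evict
            self.root = Node()       # sentinel of a circular list
            self.root.prev = self.root.next = self.root

        def _move_to_front(self, node):
            # Hit path: a handful of pointer updates, no dict mutation.
            node.prev.next = node.next
            node.next.prev = node.prev
            node.prev = self.root
            node.next = self.root.next
            self.root.next.prev = node
            self.root.next = node

        def get(self, key):
            node = self.map.get(key)
            if node is None:
                return None          # miss: caller computes and calls put()
            self._move_to_front(node)
            return node.value

        def put(self, key, value):
            # Miss path: one dict insert, plus a dict delete when evicting.
            node = Node()
            node.key, node.value = key, value
            self.map[key] = node
            node.prev = self.root
            node.next = self.root.next
            self.root.next.prev = node
            self.root.next = node
            if len(self.map) > self.maxsize:
                oldest = self.root.prev      # least recently used
                oldest.prev.next = self.root
                self.root.prev = oldest.prev
                del self.map[oldest.key]

With a high hit rate almost all traffic goes through _move_to_front(),
so the dictionary is left alone, which is the point of the quoted design.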

On the other hand, I found a problem with OrderedDict in the mostly-miss
scenario.
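
For contrast, here is a rough OrderedDict-based cache (again only an
illustration, not the patch under discussion). A hit is a cheap relink
via move_to_end(), but every miss mutates the dictionary twice: one
insert plus one popitem() to drop the oldest key.

    from collections import OrderedDict

    class ODLRU:
        def __init__(self, maxsize):
            self.maxsize = maxsize
            self.od = OrderedDict()

        def get(self, key):
            try:
                value = self.od[key]
            except KeyError:
                return None                  # miss
            self.od.move_to_end(key)         # hit: relink to the "recent" end
            return value

        def put(self, key, value):
            # Called only on a miss in this sketch.
            self.od[key] = value             # miss: dict insert
            if len(self.od) > self.maxsize:
                self.od.popitem(last=False)  # miss: dict delete of oldest key

In a workload that is almost all misses, the per-miss dict insert and
delete dominate, which is where I measured the slowdown.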

Now I think lru_cache's implementation is better than OrderedDict.
PyODict is slower than lru_cache's dict + linked list for historical
reasons (compatibility with the pure Python implementation).

So I'll stop trying to remove lru_cache's own implementation.
Instead, I'll try to reduce the overhead of lru_cache by removing the
GC header from the link nodes.
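
For background on the GC header point (my reading, stated as an
assumption): every GC-tracked object carries a per-object PyGC_Head in
front of it, so a link node that is not tracked by the cyclic GC is
smaller. The tracking itself can be observed from Python; the Link class
below is a toy stand-in, not CPython's actual link struct.

    import gc

    class Link:
        # Toy stand-in for a cache link node; not CPython's struct.
        __slots__ = ("prev", "next", "key", "result")

    print(gc.is_tracked(42))      # False: ints cannot be part of cycles
    print(gc.is_tracked(Link()))  # True: instances of Python classes are tracked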

----------
title: Make OrderedDict can be used for LRU from C -> Reduce lru_cache memory 
overhead.

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue32422>
_______________________________________