[issue32422] Reduce lru_cache memory overhead.

2017-12-25 Thread INADA Naoki
INADA Naoki added the comment: New changeset 3070b71e5eedf62e49b8e7dedab75742a5f67ece by INADA Naoki in branch 'master': bpo-32422: Reduce lru_cache memory usage (GH-5008) https://github.com/python/cpython/commit/3070b71e5eedf62e49b8e7dedab75742a5f67ece

[issue32422] Reduce lru_cache memory overhead.

2017-12-25 Thread INADA Naoki
Change by INADA Naoki:
resolution: -> fixed
stage: patch review -> resolved
status: open -> closed

[issue32422] Reduce lru_cache memory overhead.

2017-12-25 Thread Serhiy Storchaka
Change by Serhiy Storchaka:
type: -> resource usage

[issue32422] Reduce lru_cache memory overhead.

2017-12-25 Thread Serhiy Storchaka
Serhiy Storchaka added the comment: LGTM.

[issue32422] Reduce lru_cache memory overhead.

2017-12-24 Thread INADA Naoki
INADA Naoki added the comment:

PR-5008 benchmark:

$ ./python -m perf compare_to master.json patched2.json -G
Faster (9):
- gc(100): 98.3 ms +- 0.3 ms -> 29.9 ms +- 0.4 ms: 3.29x faster (-70%)
- gc(10): 11.7 ms +- 0.0 ms -> 3.71 ms +- 0.03 ms: 3.14x faster (-68%)
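[Editor's note: the actual benchmark script is not included in this digest. The following is a minimal, hypothetical sketch of the kind of gc-pressure workload the gc(N) rows measure: populate an lru_cache with many entries, then time full collections. Names such as cached and time_collect are illustrative only.]

import gc
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def cached(n):
    # Each call keeps one small tuple alive inside the cache.
    return (n,)

def time_collect(n_entries, repeats=10):
    cached.cache_clear()
    for i in range(n_entries):
        cached(i)  # populate the cache so gc has entries to traverse
    t0 = time.perf_counter()
    for _ in range(repeats):
        gc.collect()  # a full collection walks every live cache entry
    return (time.perf_counter() - t0) / repeats

if __name__ == "__main__":
    for n in (10_000, 100_000):
        print(f"gc with {n:,} entries: {time_collect(n) * 1e3:.2f} ms")

Fewer GC-tracked objects per cache entry means less work per collection, which is consistent with the roughly 3x speedups reported above.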

[issue32422] Reduce lru_cache memory overhead.

2017-12-24 Thread INADA Naoki
Change by INADA Naoki:
keywords: +patch
pull_requests: +4897
stage: -> patch review

[issue32422] Reduce lru_cache memory overhead.

2017-12-24 Thread INADA Naoki
INADA Naoki added the comment:

> Please stop revising every single thing you look at. The traditional design
> of LRU caches used doubly linked lists for a reason. In particular, when
> there is a high hit rate, the links can be updated without churning the
>
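[Editor's note: to illustrate the point being quoted, here is a pure-Python model of an LRU cache built on a circular doubly linked list. On a hit, the entry is promoted to the most-recently-used position with a handful of pointer updates and no allocation. This is only an illustrative sketch, not CPython's actual C implementation in Modules/_functoolsmodule.c; all names are hypothetical.]

class _Link:
    __slots__ = ("prev", "next", "key", "value")

class LRU:
    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.map = {}                    # key -> _Link
        self.root = root = _Link()       # sentinel of a circular list
        root.prev = root.next = root     # MRU is root.next, LRU is root.prev

    def _move_to_front(self, link):
        # Unlink from the current position...
        link.prev.next = link.next
        link.next.prev = link.prev
        # ...and relink just after root (the MRU position).
        link.prev = self.root
        link.next = self.root.next
        self.root.next.prev = link
        self.root.next = link

    def get(self, key):
        link = self.map.get(key)
        if link is None:
            return None
        self._move_to_front(link)        # hit: only pointer updates
        return link.value

    def put(self, key, value):
        # Sketch only: assumes key is not already present.
        link = _Link()
        link.key, link.value = key, value
        self.map[key] = link
        link.prev = self.root
        link.next = self.root.next
        self.root.next.prev = link
        self.root.next = link
        if len(self.map) > self.maxsize:
            # Evict the least-recently-used entry (just before root).
            lru = self.root.prev
            lru.prev.next = self.root
            self.root.prev = lru.prev
            del self.map[lru.key]

Because a hit touches only a few link fields and never allocates, a workload with a high hit rate stays cheap regardless of cache size.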