#28977: Change Local Memory Cache to Use LRU
-------------------------------------+-------------------------------------
     Reporter:  Grant Jenks          |        Owner:  nobody
         Type:  New feature          |       Status:  new
    Component:  Core (Cache system)  |      Version:  master
     Severity:  Normal               |     Keywords:
 Triage Stage:  Unreviewed           |    Has patch:  0
  Needs documentation:  0            |  Needs tests:  0
  Patch needs improvement:  0        |  Easy pickings:  0
        UI/UX:  0                    |
-------------------------------------+-------------------------------------
 The current local memory cache (locmem) in Django uses a pseudo-random
 culling strategy. Instead of evicting entries at random, the OrderedDict
 data type can be used to implement an LRU (least-recently-used) eviction
 policy. A prototype of this approach already exists in
 functools.lru_cache, and Python 3 provides OrderedDict.move_to_end and
 OrderedDict.popitem to ease the implementation.
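 For illustration, the OrderedDict-based approach described above might be
 sketched roughly as follows (a hypothetical standalone class, not Django's
 actual locmem backend or its API; the name LRUCache and the max_entries
 parameter are assumptions for the example):

 ```python
 from collections import OrderedDict


 class LRUCache:
     """Minimal LRU cache sketch built on OrderedDict.

     Entries are kept in recency order: the front of the OrderedDict
     holds the least recently used key, the back the most recently used.
     """

     def __init__(self, max_entries=300):
         self.max_entries = max_entries
         self._data = OrderedDict()

     def get(self, key, default=None):
         try:
             value = self._data[key]
         except KeyError:
             return default
         # Mark the key as most recently used.
         self._data.move_to_end(key, last=True)
         return value

     def set(self, key, value):
         self._data[key] = value
         self._data.move_to_end(key, last=True)
         # Evict least-recently-used entries once the cache is full.
         while len(self._data) > self.max_entries:
             self._data.popitem(last=False)
 ```

 With a capacity of two, setting a third key evicts whichever of the first
 two was touched least recently, rather than a pseudo-random victim.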
 I'm willing to work on a pull request that changes locmem to use an LRU
 eviction strategy, but I wanted to first check whether it would be
 accepted. I did some research to find a good reason for locmem's random
 culling strategy but did not find one. There's also a bit of prior art at
 https://pypi.python.org/pypi/django-lrucache-backend.

--
Ticket URL: <https://code.djangoproject.com/ticket/28977>
Django <https://code.djangoproject.com/>
The Web framework for perfectionists with deadlines.