#28977: Change Local Memory Cache to Use LRU
-------------------------------------+-------------------------------------
     Reporter:  Grant Jenks          |                    Owner:  Grant
                                     |  Jenks
         Type:  New feature          |                   Status:  assigned
    Component:  Core (Cache system)  |                  Version:  master
     Severity:  Normal               |               Resolution:
     Keywords:                       |             Triage Stage:  Accepted
    Has patch:  1                    |      Needs documentation:  0
  Needs tests:  0                    |  Patch needs improvement:  0
Easy pickings:  0                    |                    UI/UX:  0
-------------------------------------+-------------------------------------
Changes (by Grant Jenks):

 * has_patch:  0 => 1
 * stage:  Someday/Maybe => Accepted


Comment:

 Pull request created at https://github.com/django/django/pull/9555

 Django developers mailing list thread at
 https://groups.google.com/forum/#!topic/django-developers/Gz2XqtoYmNk

 Summarizing the two responses: Josh Smeaton (author of django-lrucache-
 backend, the project cited in the initial post) was in favor of providing
 a better default option, and Adam Johnson was also +1. Both were also
 interested in providing a way to disable cache key validation. I'm +0 on
 that change as long as the default is to validate the key, but I'd rather
 not make those changes myself.
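 For context, the key validation being discussed is the backend-agnostic
 check that warns when a key would not be portable to memcached. A rough
 sketch of that style of check (the messages are illustrative; the real
 logic lives in BaseCache.validate_key):

```python
import warnings

# memcached's hard limit on key length
MEMCACHE_MAX_KEY_LENGTH = 250

def validate_key(key):
    """Warn (rather than fail) when a key would break under memcached.

    Sketch only: the warning text and structure approximate Django's
    check, which rejects overlong keys and control/space characters.
    """
    if len(key) > MEMCACHE_MAX_KEY_LENGTH:
        warnings.warn(
            'Cache key will cause errors if used with memcached: %r' % key
        )
    for char in key:
        # Characters below 33 (controls, space) and 127 (DEL) are
        # invalid in memcached keys.
        if ord(char) < 33 or ord(char) == 127:
            warnings.warn(
                'Cache key contains characters that will cause errors '
                'if used with memcached: %r' % key
            )
            break
```

 Disabling validation would skip this call on every get/set, which is
 presumably where the interest in making it optional comes from.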

 I did some very simple benchmarking locally. Here's the performance of
 cache.get for the current implementation:

 {{{
 $ python manage.py shell
 Python 3.6.3 (default, Oct  5 2017, 22:47:21)
 Type 'copyright', 'credits' or 'license' for more information
 IPython 6.2.1 -- An enhanced Interactive Python. Type '?' for help.

 In [1]: from django.core.cache import cache

 In [2]: cache.set(b'foo', b'bar')

 In [3]: %timeit cache.get(b'foo')
 14.2 µs ± 109 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
 }}}

 And here's the performance of cache.get for the new implementation:

 {{{
 $ python manage.py shell
 Python 3.6.3 (default, Oct  5 2017, 22:47:21)
 Type 'copyright', 'credits' or 'license' for more information
 IPython 6.2.1 -- An enhanced Interactive Python. Type '?' for help.

 In [1]: from django.core.cache import cache

 In [2]: cache.set(b'foo', b'bar')

 In [3]: %timeit cache.get(b'foo')
 6.29 µs ± 140 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
 }}}

 I also tried Josh's django-lrucache-backend implementation:

 {{{
 $ python manage.py shell
 Python 3.6.3 (default, Oct  5 2017, 22:47:21)
 Type 'copyright', 'credits' or 'license' for more information
 IPython 6.2.1 -- An enhanced Interactive Python. Type '?' for help.

 In [3]: from django.core.cache import caches

 In [4]: cache = caches['local']

 In [5]: cache.set(b'foo', b'bar')

 In [6]: %timeit cache.get(b'foo')
 10.1 µs ± 135 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
 }}}

 It's not a great benchmark, but it's encouraging: the new implementation
 appears to be faster than both the current implementation and django-
 lrucache-backend. I haven't profiled, but I'd guess that
 collections.OrderedDict and threading.RLock, both of which are introduced
 by these changes, are simply very fast.
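 For readers unfamiliar with the technique, the core idea is an
 OrderedDict whose entries are moved to the fresh end on every access and
 evicted from the stale end when the cache is full, with an RLock guarding
 each operation. A minimal standalone sketch (not the patch itself; the
 class name and max_entries default are illustrative):

```python
from collections import OrderedDict
from threading import RLock

class SimpleLRUCache:
    """Minimal LRU cache sketch built on OrderedDict and RLock."""

    def __init__(self, max_entries=300):
        self._cache = OrderedDict()
        self._lock = RLock()
        self._max_entries = max_entries

    def get(self, key, default=None):
        with self._lock:
            if key not in self._cache:
                return default
            # Mark as most recently used by moving it to the end.
            self._cache.move_to_end(key)
            return self._cache[key]

    def set(self, key, value):
        with self._lock:
            self._cache[key] = value
            self._cache.move_to_end(key)
            if len(self._cache) > self._max_entries:
                # Evict the least recently used entry (front of the dict).
                self._cache.popitem(last=False)
```

 Both move_to_end and popitem are O(1) on OrderedDict, which is why this
 approach can beat a strategy that scans or periodically culls entries.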

-- 
Ticket URL: <https://code.djangoproject.com/ticket/28977#comment:4>
Django <https://code.djangoproject.com/>
The Web framework for perfectionists with deadlines.
