Yup, good analysis, good numbers, thanks for running those. I'm happy with
those results and think we should proceed.
On Thursday, 11 January 2018 05:11:56 UTC+11, Adam Johnson wrote:
Grant, you're a star. I think the tradeoff is acceptable too.
On 10 January 2018 at 17:05, Grant Jenks wrote:
I was able to run the more extensive benchmarks under no-contention and
high-contention scenarios with measurements at the 50th, 90th, 99th, and
100th percentiles. I updated the ticket at
https://code.djangoproject.com/ticket/28977 with the results.
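For readers who want to reproduce this kind of measurement, a minimal percentile-reporting harness in the same spirit might look like the sketch below. This is not the actual benchmark script from the ticket; `cache_op` is a placeholder for whatever cache operation is being timed.

```python
import threading
import time


def bench(cache_op, n_threads=4, ops_per_thread=10_000):
    """Time cache_op across n_threads and report latency percentiles (us)."""
    latencies = []
    lock = threading.Lock()

    def worker():
        local = []
        for _ in range(ops_per_thread):
            t0 = time.perf_counter()
            cache_op()
            local.append(time.perf_counter() - t0)
        with lock:
            latencies.extend(local)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    latencies.sort()
    n = len(latencies)
    # Nearest-rank percentile, converted to microseconds.
    return {p: latencies[min(n - 1, int(n * p / 100))] * 1e6
            for p in (50, 90, 99, 100)}
```

Running it with one thread approximates the no-contention scenario; raising `n_threads` above the CPU count approximates high contention.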
Under high-contention scenarios, the RWLock did
Nice to meet you too. Your benchmarking code was extremely handy when I was
profiling lru-cache-backend, so thank you!
Are you able to run the same benchmarks using this version of the cache to
see how it performs in low/medium/high eviction scenarios? I think those
benchmarks will be nicer
Josh, it's nice to meet you here. I cited your django-lrucache-backend
project in the original post of the Trac ticket. I'm also the author of
DiskCache http://www.grantjenks.com/docs/diskcache/ which your project
references for benchmarks :) I added some benchmark data to the Trac ticket which may
I'm +1 for moving to LRU too; the eviction algorithm has always looked
weird to me. And Josh's library shows there are valid uses of local-memory
caching in applications - perhaps more so these days than when Django added
caching and memcached was the latest thing.
To lend some weight to this, I've implemented an LRU locmem cache and have
done some benchmarking. There are some graphs in the
readme: https://github.com/kogan/django-lrucache-backend - which I've
written a little about
Hi all--
Long time user, first time poster, here. Thank you all for Django!
The current local-memory cache (locmem) in Django uses a pseudo-random
culling strategy. Rather than culling at random, the OrderedDict data type
can be used to implement an LRU eviction policy. A prototype implementation is
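For illustration, the OrderedDict approach can be sketched as below. This is my own toy example, not the prototype referenced above; the class name and `max_entries` parameter are just illustrative.

```python
from collections import OrderedDict


class LRUCache:
    """Toy LRU cache built on OrderedDict; illustrative only."""

    def __init__(self, max_entries=300):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def get(self, key, default=None):
        try:
            value = self._data.pop(key)
        except KeyError:
            return default
        # Re-insert so insertion order tracks recency of use.
        self._data[key] = value
        return value

    def set(self, key, value):
        self._data.pop(key, None)
        self._data[key] = value
        if len(self._data) > self.max_entries:
            # Evict from the head: the least recently used entry.
            self._data.popitem(last=False)
```

Because `get()` re-inserts each key it touches, the dict's head is always the least recently used entry, so eviction is a single O(1) `popitem(last=False)` rather than a random sweep.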