On Mon, Sep 23, 2013 at 5:55 AM, Anssi Kääriäinen
wrote:
> On 09/20/2013 06:29 PM, Tom Evans wrote:
>>
>> On Fri, Sep 20, 2013 at 4:13 PM, Florian Apolloner
>> wrote:
It seems more sensible to hook something that has the lifetime of the
request to the request, rather than stick it in TLS, keyed to the
thread serving the request.
On 09/20/2013 06:29 PM, Tom Evans wrote:
> On Fri, Sep 20, 2013 at 4:13 PM, Florian Apolloner wrote:
>> It seems more sensible to hook something that has the lifetime of the
>> request to the request, rather than stick it in TLS, keyed to the
>> thread serving the request.
> Jupp, sadly I don't see a sensible way around thread local storage
On Saturday, September 21, 2013 2:12:31 AM UTC+2, Curtis Maloney wrote:
> Is there anything else?
>
Ain't that enough? :p
--
You received this message because you are subscribed to the Google Groups
"Django developers" group.
OK. So the goals of this effort are:
1) to avoid resource over-commitment [e.g. too many connections]
2) to relieve the cache backends of the burden of handling concurrency.
Issues to avoid are:
a) TLS is "slow" (citation, please?)
b) Any new API had better damn well be worth it!
Is there anything else?
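For readers following the thread, the thread-local approach under discussion can be sketched as a per-thread registry of cache instances. This is only an illustration of the idea, not Django's actual code; the `factory` callable stands in for whatever builds a backend from a CACHES alias:

```python
import threading

class CacheHandler:
    """Memoize cache instances per thread so each thread reuses a
    single backend instance (and its connection) per alias."""

    def __init__(self, factory):
        self._factory = factory          # callable: alias -> new cache backend
        self._local = threading.local()  # per-thread instance store

    def __getitem__(self, alias):
        instances = getattr(self._local, "instances", None)
        if instances is None:
            instances = self._local.instances = {}
        if alias not in instances:
            # Created at most once per (thread, alias) pair, which is
            # what caps the number of live connections (goal 1).
            instances[alias] = self._factory(alias)
        return instances[alias]
```

Within one thread, repeated lookups of the same alias return the same object; a different thread gets its own instance, so backends never need to be thread-safe themselves (goal 2).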
Hi Tom,
On Friday, September 20, 2013 5:04:41 PM UTC+2, Tom Evans wrote:
>
> On the other hand each call to get_cache('foo') now requires access to
> TLS. TLS is slow. Going through something slow to get to something
> that is supposed to be a speed up...
>
You are making a good point, even
On Fri, Sep 20, 2013 at 4:13 PM, Florian Apolloner
wrote:
>> It seems more sensible to hook something that has the lifetime of the
>> request to the request, rather than stick it in TLS, keyed to the
>> thread serving the request.
>
>
> Jupp, sadly I don't see a sensible way around thread local storage
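Tom's alternative of tying cache lifetime to the request could look roughly like the middleware below. Every name here is invented for illustration (and the callable-middleware style postdates this thread); the point is only that the instance store dies with the request instead of living in thread-local state:

```python
# Hypothetical sketch: per-request cache instances instead of per-thread.
# `factory` stands in for real backend construction.

class RequestCacheMiddleware:
    def __init__(self, get_response, factory=None):
        self.get_response = get_response
        self.factory = factory or (lambda alias: object())

    def __call__(self, request):
        request.caches = {}  # instance store scoped to this request

        def get_cache(alias):
            if alias not in request.caches:
                request.caches[alias] = self.factory(alias)
            return request.caches[alias]

        request.get_cache = get_cache
        return self.get_response(request)
```

The trade-off Florian alludes to is that without thread-local (or request-local) state there is no obvious place to stash the instances at all, since `get_cache()` is called from arbitrary code with no request in scope.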
On Fri, Sep 20, 2013 at 3:10 PM, Florian Apolloner
wrote:
> The main issue here isn't recreating the objects on demand, but the impact
> they have, e.g. a new memcached connection. Now imagine a complex system where
> each part issues get_cache('something') to get the cache
On the other hand each call to get_cache('foo') now requires access to
TLS. TLS is slow. Going through something slow to get to something
that is supposed to be a speed up...
Hi Tom,
On Friday, September 20, 2013 3:29:03 PM UTC+2, Tom Evans wrote:
>
> Before you
> go too far down the thread local route, could you verify that
> retrieving cache objects from a thread local cache is in any way
> faster than simply recreating them as demanded.
>
The main issue here isn't recreating the objects on demand, but the impact
they have, e.g. a new memcached connection. Now imagine a complex system where
each part issues get_cache('something') to get the cache
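Tom's suggestion to measure is easy to act on. A rough micro-benchmark comparing a `threading.local` lookup against fresh construction might look like this; `DummyCache` is a trivial stand-in, so a real backend that opens a connection in `__init__` (e.g. memcached) would tilt the comparison heavily toward reuse:

```python
import threading
import timeit

class DummyCache:
    """Trivial stand-in for a cache backend; a real one might open
    a network connection here, which is the expensive part."""
    def __init__(self):
        self.store = {}

_local = threading.local()

def from_tls():
    # Fetch the per-thread instance, creating it on first use.
    cache = getattr(_local, "cache", None)
    if cache is None:
        cache = _local.cache = DummyCache()
    return cache

def recreate():
    # Build a fresh instance every call, as get_cache() did.
    return DummyCache()

tls_time = timeit.timeit(from_tls, number=100_000)
new_time = timeit.timeit(recreate, number=100_000)
print(f"TLS lookup: {tls_time:.4f}s  recreate: {new_time:.4f}s")
```

Whichever way the numbers fall for a trivial object, they say nothing about the connection-count problem, which is Florian's actual concern.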
On Wed, Sep 18, 2013 at 12:29 PM, Curtis Maloney
wrote:
> I started working on a CacheManager for dealing with thread local cache
> instances, as was suggested on IRC by more than one person.
>
The problem that Florian identified was that recreating cache
instances each time get_cache() was called
Yeah... simpler solution is simpler :)
--
Curtis
On 20 September 2013 17:04, Florian Apolloner wrote:
>
>
> On Friday, September 20, 2013 8:58:25 AM UTC+2, Curtis Maloney wrote:
>>
>> I guess the remaining question to address is : close()
>>
> Leave it as is I think.
>
>
>> Thinking as I type...
On Friday, September 20, 2013 8:58:25 AM UTC+2, Curtis Maloney wrote:
>
> I guess the remaining question to address is : close()
>
Leave it as is I think.
> Thinking as I type... it wouldn't hurt, also, to allow a cache backend to
> provide an interface to a connection pool, so the manager c
I guess the remaining question to address is : close()
It looks like it was added to appease an issue with memcached, which may or
may not still be an issue [comments in tickets suggest it was a design
decision by the memcached authors].
Thinking as I type... it wouldn't hurt, also, to allow a cache backend to
provide an interface to a connection pool, so the manager c
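Curtis's pooling idea could be sketched like this. Every name below is invented for illustration; nothing here is Django API. The point is that a backend exposing a bounded pool caps total connections without needing one instance per thread:

```python
import queue

class PooledBackend:
    """Illustrative backend-side connection pool: a fixed number of
    connections shared by however many threads are serving requests."""

    def __init__(self, connect, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())  # pre-open a bounded set of connections

    def acquire(self):
        # Blocks when the pool is exhausted, which is exactly what
        # enforces the connection cap.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)
```

A manager could then hand threads connections from the pool instead of memoizing whole backend instances, at the cost of acquire/release bookkeeping on every cache operation.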
Hi,
On Wednesday, September 18, 2013 1:29:25 PM UTC+2, Curtis Maloney wrote:
>
> 1) Can we share "ad-hoc" caches -- that is, ones created by passing more
> than just the CACHES alias.
>
Imo no, you probably have a good reason if you create ad-hoc ones
> 2) What to do about django.core.cache.cache
I started working on a CacheManager for dealing with thread local cache
instances, as was suggested on IRC by more than one person.
Firstly, I propose we remove the schema://backend... syntax for defining
cache configs, as it's no longer even documented [that I could find quickly]
Secondly, have
Hi,
On Monday, September 2, 2013 6:39:09 AM UTC+2, Curtis Maloney wrote:
>
> Whilst it's conceivable some cache backend will have the smarts to
> multiplex requests on a single connection, I suspect that's more the
> exception than the case.
>
Agreed
> Obviously, the default would be one p
Bit of a rambling, thinking-out-loud-ish post...
Whilst it's conceivable some cache backend will have the smarts to
multiplex requests on a single connection, I suspect that's more the
exception than the case.
However, that doesn't mean the cache backend can't be left with the
opportunity to man
Hi,
On Sunday, September 1, 2013 4:34:54 AM UTC+2, Curtis Maloney wrote:
>
> I've a possible solution -
> https://github.com/funkybob/django/compare/simple_caches
>
> Basically, the existing API and behaviours are still available through
> get_cache, but you can avoid duplicate instances of caches using
> django.core.cache.caches[name]
I've a possible solution -
https://github.com/funkybob/django/compare/simple_caches
Basically, the existing API and behaviours are still available through
get_cache, but you can avoid duplicate instances of caches using
django.core.cache.caches[name]
--
Curtis
On 31 August 2013 15:44, Curtis Maloney wrote:
As a simple short-term solution, why not cache calls to get_cache that
don't pass additional arguments? That is, ones that only get
pre-configured caches.
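Curtis's short-term idea can be sketched in a few lines: memoize only the zero-extra-argument calls, and leave ad-hoc calls uncached. `_create_cache` here is a stand-in for Django's real backend construction, not its actual implementation:

```python
# Memoize get_cache(alias) calls; get_cache(alias, **kwargs) stays uncached.

_cache_instances = {}

def _create_cache(alias, **kwargs):
    # Stand-in for building a real cache backend from settings.
    return {"alias": alias, **kwargs}

def get_cache(alias, **kwargs):
    if kwargs:
        # Ad-hoc configuration: always a fresh instance, since the
        # kwargs may not be hashable and sharing could surprise callers.
        return _create_cache(alias, **kwargs)
    if alias not in _cache_instances:
        _cache_instances[alias] = _create_cache(alias)  # create once, reuse
    return _cache_instances[alias]
```

Note this shares instances process-wide rather than per-thread, so it only helps if the backends themselves are thread-safe; that is the simplification being traded away.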
--
Curtis
On 25 August 2013 23:26, Florian Apolloner wrote:
> Hi,
>
> so when reviewing https://github.com/django/django/pull/1490/ I once again
Hi,
so when reviewing https://github.com/django/django/pull/1490/ I once again
ran over an issue with our current caching implementation: Namely get_cache
creates a new instance every time, which is kind of suboptimal if you don't
store it as a module-level variable like we do with the default cache
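The status quo Florian describes can be reduced to a few lines. `DummyBackend` is a stand-in for a real backend (which might open a connection in `__init__`); the only memoized instance is the module-level default:

```python
# Sketch of the behaviour under discussion: get_cache builds a new
# backend object on every call, while the default cache is created
# once at import time.

class DummyBackend:
    def __init__(self, alias):
        self.alias = alias  # a real backend might connect here

def get_cache(alias):
    return DummyBackend(alias)  # fresh instance, and possibly a fresh
                                # connection, on every single call

cache = get_cache("default")    # the one module-level singleton
```

Every caller that uses `get_cache('foo')` directly, rather than the `cache` singleton, pays the construction cost again; that is the duplication the rest of the thread is trying to eliminate.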