On Fri, Oct 8, 2010 at 10:09 AM, Jacob Burch <jacobbu...@gmail.com> wrote:
> Bringing this back for more design decision discussion. I've started a
> (very basic) wiki page with a brief summation of the situation here:
> http://code.djangoproject.com/wiki/PylibmcSupport
>
> Of note from that wiki page are three main issues I raised with
> various django developers while at Django Con that would need to be
> addressed before adequate pylibmc support was added into Django:
>
> * The suggested use of a query string for client-specific options
> and libmemcached 'behaviors' would eventually lead to massive
> connection strings; a CACHE_SETTINGS dictionary may be a better
> solution.

CACHE_SETTINGS works for me. This also allows for options that aren't
strings, or can't be easily serialized as strings, or are painful to
URI-encode.

In the interests of fully supporting existing setups, we'll probably
need to do something like:

full_settings = ...parse options from CACHE_BACKEND string ...
if settings.CACHE_SETTINGS:
    full_settings.update(settings.CACHE_SETTINGS)

but implementation details like that aside, I'm +1.
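To make the shape of that concrete, here's a minimal sketch of the merge, assuming a simplified stand-in for the CACHE_BACKEND parsing; the CACHE_SETTINGS keys shown ('binary', 'behaviors') are illustrative, not a settled API:

```python
# Hypothetical sketch: options from CACHE_SETTINGS are merged over
# options parsed from the CACHE_BACKEND string. The parser below is a
# simplified stand-in for Django's real parsing, and the CACHE_SETTINGS
# contents are invented for illustration.

def parse_backend_uri(backend_uri):
    # e.g. "memcached://127.0.0.1:11211/?timeout=60"
    scheme, rest = backend_uri.split('://', 1)
    host, _, query = rest.partition('/?')
    params = dict(p.split('=', 1) for p in query.split('&') if p)
    return scheme, host, params

scheme, host, full_settings = parse_backend_uri(
    'memcached://127.0.0.1:11211/?timeout=60')

# Options that aren't strings, or would be painful to URI-encode:
CACHE_SETTINGS = {'binary': True, 'behaviors': {'tcp_nodelay': True}}

if CACHE_SETTINGS:
    full_settings.update(CACHE_SETTINGS)

# URI-derived values stay strings; dict values keep their real types.
print(full_settings)
```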

> * Pylibmc 1.1 doesn't play that nice with mod_wsgi due to its use of
> the Simplified GIL API (see: 
> http://www.dctrwatson.com/2010/09/beware-of-using-pylibmc-1-1-and-mod_wsgi/).
> Probably just need to make note of it in the documentation, but it's
> worth noting.
...
> * Because it isn't bound by the GIL, pylibmc by default isn't thread
> safe. There are a few different solutions, the most apparent being to
> use pylibmc's ThreadMappedPool. More info here:
> http://lericson.blogg.se/code/2009/september/draft-sept-20-2009.html
> and http://blog.sendapatch.se/2009/september/pooling-with-pylibmc-pt-2.html

Am I missing something, or aren't these variants of the same problem?

> Originally, Noah Silas and I were working on a solution with two
> separate backends, one using the ThreadMappedPool and one using the
> basic client. At this point, I think it's probably not worth shipping
> one that uses the basic client; instead, we should keep one memcached
> backend that can use multiple libraries (defaulting to python-
> memcached, with pylibmc supported initially as well).

I'm a little confused on the state of play here. If I'm understanding
this correctly:

 * In the simple/naive usage of the API ('basic client') pylibmc isn't
thread-safe, and therefore can't be used under mod_wsgi without
special configuration.

 * It is possible to use a ThreadMappedPool to access the pylibmc
client in a thread-safe way.

If this is the case, a ThreadMappedPool-based implementation sounds
like the only viable option; it doesn't strike me as a good idea for
Django to ship a cache backend that doesn't play well with mod_wsgi
without special configuration.
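For reference, ThreadMappedPool is essentially a per-thread mapping of clients, so the non-thread-safe client is never shared between threads. A pure-Python sketch of the idea (this is NOT pylibmc's actual implementation, and FakeClient is an invented stand-in for the real client):

```python
# Conceptual sketch of a thread-mapped pool: each thread lazily gets
# its own client instance via threading.local, so the underlying
# (non-thread-safe) client structures are never shared across threads.
import threading

class FakeClient:
    # Invented stand-in for a non-thread-safe memcached client.
    def __init__(self):
        self.store = {}

class ThreadMappedPoolSketch:
    def __init__(self, factory):
        self.factory = factory
        self.local = threading.local()

    def reserve(self):
        # Create the client on first use in this thread, then reuse it.
        if not hasattr(self.local, 'client'):
            self.local.client = self.factory()
        return self.local.client

pool = ThreadMappedPoolSketch(FakeClient)
clients = {}

def worker(name):
    clients[name] = pool.reserve()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each thread got a distinct client instance.
print(clients[0] is clients[1])  # False
```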

I'm also slightly unclear as to exactly what solution you're proposing. Is it:

 1) A completely new "pylibmc" backend implementing a ThreadMappedPool
interface (possibly inheriting from the memcached backend or a
factored base class)

 2) A revised import order inside the existing memcached backend that
would import pylibmc before or after attempted imports of cmemcache
and memcache

 3) A configuration option for the existing memcached backend that
controls which client library is used
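For what it's worth, option (3) could be as simple as a lookup keyed on a new setting. Everything below is hypothetical — the setting shape, the registry, and the factory functions are invented stand-ins for the real client modules:

```python
# Hypothetical sketch of option (3): select the client library from
# configuration rather than from import order. The factory functions
# here are placeholders for constructing real memcache/pylibmc clients.

def make_python_memcached(servers):
    # Placeholder for memcache.Client(servers)
    return ('python-memcached client for', servers)

def make_pylibmc(servers):
    # Placeholder for pylibmc.Client(servers)
    return ('pylibmc client for', servers)

CLIENT_FACTORIES = {
    'memcache': make_python_memcached,  # the default
    'pylibmc': make_pylibmc,
}

def get_cache_client(servers, library='memcache'):
    try:
        factory = CLIENT_FACTORIES[library]
    except KeyError:
        raise ValueError('Unknown cache library: %r' % library)
    return factory(servers)

print(get_cache_client(['127.0.0.1:11211'], library='pylibmc'))
```

The advantage over import-order dispatch (option 2) is that a setup with both libraries installed still gets a deterministic, explicitly chosen client.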

The older mailing list thread seems to suggest (3) as the right
approach, but it isn't clear to me that this is still what you're
proposing.

Also, are there any API-level discrepancies remaining that need to be
considered? The earlier django-dev thread suggests that there are some
problems with get_multi(), but it also says that they've been fixed.
Is this the case?

Yours,
Russ Magee %-)
