On Thu, Feb 18, 2010 at 1:34 PM, ramu...@gmail.com <ramu...@gmail.com> wrote:
> Does anyone know why Django can't keep data bigger than 1 MB in
> memcached?
>
> It would be no big deal to cut big data into 1 MB pieces before setting
> and to merge those pieces after getting from memcached. And this could
> work transparently for users.

I think Ramusus is asking for the low-level caching framework to split
serialized data into 1MB chunks and store them across several keys
whenever the data is larger than 1MB.

He's not asking why there's a limit; he's asking for Django to work
around the limit.

I think the general reasoning behind memcached's 1MB limit is that it
places a bound on access and storage times for any given key, and
discourages the antipattern of storing really large objects in cache.
I've wished for special-case side-stepping of the 1MB limit before
myself, but I'd hate for it to be the default.
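
Just to make the idea concrete, here's a rough sketch of what that
chunking could look like as a wrapper over the low-level cache API. The
chunk size, key naming scheme, and function names are made up for
illustration; none of this exists in Django today:

import pickle

from django.core.cache import cache

# Leave headroom under memcached's ~1MB item limit, since the backend
# will serialize each chunk again and keys add a little overhead too.
CHUNK_SIZE = 1000 * 1000


def set_chunked(key, value, timeout=300):
    # Pickle the value, split it into pieces, and store each piece under
    # its own key, plus an index entry recording how many pieces exist.
    data = pickle.dumps(value, pickle.HIGHEST_PROTOCOL)
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for i, chunk in enumerate(chunks):
        cache.set('%s:chunk:%d' % (key, i), chunk, timeout)
    cache.set('%s:count' % key, len(chunks), timeout)


def get_chunked(key):
    # Reassemble the pieces; memcached evicts keys independently, so a
    # single missing chunk makes the whole value unusable and we treat
    # it as a miss.
    count = cache.get('%s:count' % key)
    if count is None:
        return None
    parts = []
    for i in range(count):
        chunk = cache.get('%s:chunk:%d' % (key, i))
        if chunk is None:
            return None
        parts.append(chunk)
    return pickle.loads(b''.join(parts))

That independent-eviction failure mode (lose one chunk, lose the whole
value) is exactly the kind of surprise I'd rather keep opt-in than make
the default behaviour.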
