On Nov 4, 1:20 am, Marco Paolini <markopaol...@gmail.com> wrote:
> time to write some patches, now!

Here is a proof of concept for one way to achieve chunked reads when
using psycopg2. It lacks tests and documentation, but I think the
approach is sane. It allows different database backends to decide how
to do the chunked reads, and there is a queryset method to use when
you want to avoid caching entirely - even if that means unsafe
behavior under SQLite, or using named cursors under PostgreSQL.
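The core idea - pulling rows from the database in fixed-size batches
instead of fetching the whole result set at once - can be sketched as a
plain generator over any DB-API cursor. This is only an illustration of
the technique, not the patch itself; the helper name `fetch_chunked` and
the chunk size are my own, and the real patch wires this into the
backend layer:

```python
import sqlite3

def fetch_chunked(cursor, chunk_size=100):
    """Yield rows from a DB-API cursor in fixed-size batches,
    so the client never materializes the full result set."""
    while True:
        rows = cursor.fetchmany(chunk_size)
        if not rows:
            break
        for row in rows:
            yield row

# Demo against an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auth_user (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO auth_user (id) VALUES (?)",
                 [(i,) for i in range(1, 251)])
cur = conn.execute("SELECT id FROM auth_user ORDER BY id")
ids = [row[0] for row in fetch_chunked(cur, chunk_size=100)]
print(len(ids))  # 250
```

With psycopg2, the same effect comes from a named (server-side) cursor,
which is what makes the per-backend decision useful: each backend can
pick the mechanism that actually keeps rows out of client memory.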

For SQLite, memory usage for User.objects.all()[0:10000]:
.chunked(): 25852.0kB
.iterator(): 65508.0kB
(Note: with chunked reads under SQLite, modifications made during
iteration will be seen by the iterator.)

PostgreSQL:
.chunked(): 26716.0kB
.iterator(): 46652.0kB

MySQL and Oracle should show no difference between .chunked() and
.iterator().

I would write tests for this, but it seems a bit hard: how do you test
whether the backend fetched the objects in one go or not? You could
test memory usage, but that seems brittle; or you could test the
internals of the backend, but that is ugly and brittle too. Ideas?

The patch is at:
https://github.com/akaariai/django/commit/8990e20df50ce110fe6ddbbdfed7a98987bb5835

 - Anssi

-- 
You received this message because you are subscribed to the Google Groups 
"Django developers" group.
To post to this group, send email to django-developers@googlegroups.com.
To unsubscribe from this group, send email to 
django-developers+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/django-developers?hl=en.

Reply via email to