On Wed, Nov 2, 2011 at 11:33 AM, Tom Evans wrote:
>> other connections in other transactions are locked too?
>
> Yes. The exact wording from the C API:
>
> """
> On the other hand, you shouldn't use mysql_use_result() if you are
> doing a lot of processing for each row on the client side, or if the
> output is sent to a screen on which the user may type a ^S (stop scroll).
> """
so, summarizing again:
- mysql supports chunked fetch but will lock the table while fetching is in
progress (likely causing deadlocks)
- postgresql does not seem to suffer this issue and chunked fetch seems
doable (not trivial) using a named cursor
- oracle does chunked fetch already (some 100 rows at a time by default)
I think the discussion actually went a bit sideways. Is there value in a model
method to return an iterator which pulls results from a temporary table that
gets filled from a model query? This puts the onus on the django-user to use
the correct method.
Model.foo().bar().buffered() or .from_tm
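The temporary-table idea above can be sketched outside of Django. This is a minimal illustration, not Django API: the helper name `buffered_iter` is hypothetical, and sqlite3 stands in for whatever backend would actually be used. The query is first materialized into a temporary table, then streamed from it in fetchmany() chunks, so the source tables are released before iteration begins.

```python
import sqlite3

def buffered_iter(conn, query, params=(), chunk_size=100):
    """Materialize `query` into a temporary table, then stream rows from it
    in chunks. Sketch of the .buffered() idea; name is hypothetical."""
    cur = conn.cursor()
    # The temporary table snapshots the result set, so the source tables
    # are free for other transactions while the client iterates.
    cur.execute("CREATE TEMPORARY TABLE _buffer AS " + query, params)
    cur.execute("SELECT * FROM _buffer")
    while True:
        rows = cur.fetchmany(chunk_size)
        if not rows:
            break
        for row in rows:
            yield row
    cur.execute("DROP TABLE _buffer")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO entry (body) VALUES (?)",
                 [("row %d" % i,) for i in range(250)])
total = sum(1 for _ in buffered_iter(conn, "SELECT id, body FROM entry"))
print(total)  # 250
```

The trade-off is an extra copy of the result set on the server, in exchange for not holding locks on the real tables during slow client-side processing.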
On Wed, Nov 2, 2011 at 4:22 PM, Marco Paolini wrote:
> On 02/11/2011 17:12, Tom Evans wrote:
>> If you do a database query that quickly returns a lot of rows from the
>> database, and each row returned from the database requires long
>> processing in django, and you use mysql_use_result, then other
>> connections cannot update the tables the data is being fetched from
>> until every row has been retrieved.
On Wed, Nov 2, 2011 at 11:28 AM, Marco Paolini wrote:
> mysql can do chunked row fetching from server, but only one row at a time
>
> curs = connection.cursor(CursorUseResultMixIn)
> curs.fetchmany(100) # fetches 100 rows, one by one
>
> Marco
>
The downsides to mysql_use_result over mysql_store_result are that you must
fetch every row before the connection can issue another query, and that
other clients cannot update the tables being read until the fetch completes.
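The fetchmany() chunking pattern above is plain DB-API and can be shown without a MySQL server. In this sketch sqlite3 stands in for MySQLdb; with MySQLdb you would pass `cursorclass=MySQLdb.cursors.SSCursor` to `connect()` to get mysql_use_result behavior, and the wrapper below is a hypothetical helper, not part of any library:

```python
import sqlite3  # stand-in for illustration; with MySQLdb you'd use SSCursor

def iter_rows(cursor, chunk_size=100):
    """Yield rows in fetchmany() chunks so at most chunk_size rows
    sit in client memory at any one time."""
    while True:
        rows = cursor.fetchmany(chunk_size)
        if not rows:
            break
        for row in rows:
            yield row

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
cur = conn.execute("SELECT n FROM t")
total = sum(n for (n,) in iter_rows(cur))
print(total)  # 499500
```

With a buffering cursor (mysql_store_result) the full result set is already client-side before the loop starts; with a streaming cursor the same loop keeps memory flat but holds the server-side result open for the whole iteration.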
On 11/02/2011 01:36 PM, Marco Paolini wrote:
maybe we could implement something like:
for obj in qs.all().chunked(100):
    pass
.chunked() will automatically issue LIMITed SELECTs
that should work with all backends
I don't think that will be a performance improvement - this will get rid
of the memory problem, but each chunk becomes a separate query, so you pay
the round-trip and query overhead once per chunk.
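The .chunked() proposal above can be sketched as a standalone generator. This is a hypothetical helper, not Django API; sqlite3 is used for illustration, and the chunks are keyed on the primary key rather than OFFSET so each LIMITed SELECT stays cheap even deep into the table:

```python
import sqlite3

def chunked(conn, table, chunk_size=100):
    """Sketch of the proposed .chunked(): one LIMITed SELECT per chunk,
    resuming after the last primary key seen. Hypothetical helper."""
    last_pk = 0
    while True:
        # table name comes from trusted code here; only values are parameterized
        rows = conn.execute(
            "SELECT id, body FROM %s WHERE id > ? ORDER BY id LIMIT ?" % table,
            (last_pk, chunk_size)).fetchall()
        if not rows:
            break
        last_pk = rows[-1][0]
        for row in rows:
            yield row

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO entry (body) VALUES (?)", [("x",)] * 250)
count = sum(1 for _ in chunked(conn, "entry"))
print(count)  # 250
```

Note the objection in the thread is visible here: 250 rows with chunk_size=100 costs four SELECTs (three with data, one empty), and rows inserted or deleted between chunks can be seen or missed unless the whole loop runs in one transaction.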
On Wed, Nov 2, 2011 at 5:05 AM, Anssi Kääriäinen
wrote:
> For PostgreSQL this would be a nice feature. Any idea what MySQL and Oracle
> do currently?
If I'm following the thread correctly, the oracle backend already does
chunked reads. The default chunk size is 100 rows, IIRC.
On 11/02/2011 12:47 PM, Marco Paolini wrote:
if that option is true, sqlite should open one connection per cursor
and psycopg2 should use named cursors
The sqlite behavior leads to some problems with transaction management -
different connections, different transactions (or is there some sort of
connection sharing between cursors?).
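For reference, the psycopg2 named-cursor approach mentioned above looks roughly like this. This is a sketch under assumptions: `conn` is taken to be an open psycopg2 connection, and the table and column names are placeholders. Passing `name=` to `cursor()` is what makes psycopg2 declare a server-side cursor, so rows stay on the PostgreSQL server and arrive `itersize` rows at a time:

```python
def stream_entries(conn, chunk_size=100):
    """Stream rows through a psycopg2 named (server-side) cursor.

    `conn` is assumed to be an open psycopg2 connection; table and
    column names below are placeholders."""
    # name= => server-side cursor; itersize controls the network batch size
    cur = conn.cursor(name="entry_stream")
    cur.itersize = chunk_size
    cur.execute("SELECT id, body FROM entry")
    try:
        for row in cur:
            yield row
    finally:
        cur.close()
```

The transaction-management concern in the thread applies here too: a named cursor only lives as long as its transaction, so the connection cannot commit until iteration finishes (unless the cursor is declared WITH HOLD).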
On 02/11/11 08:48, Marco Paolini wrote:
> thanks for pointing that to me, do you see this as an issue to be fixed?
>
> If there is some interest, I might give it a try.
>
> Maybe it's not fixable, at least I can investigate a bit
Apparently, the protocol between the Postgres client and server ordinarily
transfers the entire result set to the client in one go - fetching
incrementally requires declaring a server-side cursor.
On 02/11/11 00:41, Marco Paolini wrote:
> so if you do this:
>
> for obj in Entry.objects.all():
>     pass
>
> django does this:
> - creates a cursor
> - then calls fetchmany(100) until ALL rows are fetched
> - creates a list containing ALL fetched rows
> - passes this list to the queryset instance
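The behavior described above is easy to reproduce with any DB-API backend. This sketch (sqlite3 for illustration, helper name hypothetical) drains the cursor in fetchmany(100) batches but still accumulates everything into one list before returning - which is why memory grows with the result set even though the fetches themselves are chunked:

```python
import sqlite3

def fetch_all_like_django(cursor, chunk_size=100):
    """Drain the cursor with fetchmany(), but accumulate every row into
    one list before anything is returned - the pattern described above."""
    results = []
    while True:
        rows = cursor.fetchmany(chunk_size)
        if not rows:
            break
        results.extend(rows)  # the whole result set ends up in memory
    return results

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO entry VALUES (?)", [(i,) for i in range(1, 501)])
rows = fetch_all_like_django(conn.execute("SELECT id FROM entry"))
print(len(rows))  # 500
```

The chunked fetching only bounds how many rows cross the wire per call; the `results.extend(rows)` line is where the memory problem discussed in this thread comes from.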