Eric Wertman wrote:
I am trying to use pymssql, and have an issue whereby the execute
(not the fetch) appears to load all records into memory.

if I execute

con = pymssql.connect(...)
cur = con.cursor()
cur.execute(sql)
rec = cur.fetchone()

If I put a query which returns a lot of records into "sql", then the
execute never returns; Python's memory usage slowly balloons until the
machine can't give any more. If I put in a SQL query returning only a
few rows, then it works fine.

So I am not sure why an execute would feel the need to load all rows,
but it's seriously crippling. Does anyone know if this is a bug or
something I can "turn off"?

I ran into this myself.  After some digging I discovered that what you
are after is a server-side cursor, which isn't implemented yet in
pymssql.  There is one in MySQLdb, but it's not the default behavior.
Regardless of your usage (fetchone vs. fetchmany), the result set is
held client-side.  AFAIK the only workaround is to keep your result
set small (enough).  If you use fetchmany and iterate over it
directly, it may keep your memory usage down; I can't remember if
that worked.  I definitely tried making a generator with it; that did
not help.
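For what it's worth, the fetchmany-and-iterate pattern mentioned above is easy to wrap in a generator. A minimal sketch, assuming only the standard DB-API `fetchmany(size)` method (`iter_rows` and the batch size are my own names, not from any driver) -- note this only bounds memory if the driver actually streams results, rather than buffering everything at execute time the way pymssql does here:

```python
def iter_rows(cursor, batch_size=1000):
    """Yield rows one at a time, fetching batch_size rows per round trip.

    At most batch_size rows are held in Python at once -- provided the
    driver streams the result set instead of buffering it all during
    execute() (which is exactly what pymssql was doing above).
    """
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:          # DB-API: empty sequence means exhausted
            break
        for row in rows:
            yield row
```

Usage would be `for row in iter_rows(cur): ...` after `cur.execute(sql)`, but again: with pymssql the damage is already done inside execute, so this helps only with drivers that defer fetching.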


... or just switch to pyodbc, for example, which behaves
perfectly well with this snippet against a table of >24 million
rows:

<code>
import pyodbc

conn = [
    "Driver={SQL Server}",
    "Server=SVR17",
    "Database=TDI",
    "Trusted_Connection=Yes",  # keyword needs the underscore
]
db = pyodbc.connect(";".join(conn))
q = db.cursor()
q.execute("SELECT * FROM revenue")  # 24 million rows
q.fetchone()
q.close()

</code>


TJG
--
http://mail.python.org/mailman/listinfo/python-list
