[sqlalchemy] Re: Dialect for Vertica db connectivity ?
I put a rework of the code posted by Bo into a package: https://pypi.python.org/pypi/vertica-sqlalchemy/0.1. Selects, joins, and table introspection work. Let me know if you can use it. Does anyone have an email address for Bo so I can attribute him and check the license? Thanks, James

On Saturday, 16 March 2013 22:44:56 UTC, Femi Anthony wrote:
> Jonathan, thanks a lot. I'll test it out using the PostgreSQL dialect. Femi

On Friday, March 15, 2013 4:06:33 PM UTC-4, Jonathan Vanasco wrote:
> @Femi - I did a quick search online but couldn't find any current (post-HP-acquisition) documentation. HOWEVER, all of the old documentation and Q&As that are still online describe Vertica as reimplementing PostgreSQL's syntax and functions. That's in line with what I remembered earlier, when psql was even their recommended command-line client. (Also, it was invented/started by the same guy who started PostgreSQL.) It's possible that things have changed, but I would try treating it as PostgreSQL. Unless they made a complete about-face, I think that should work.

-- You received this message because you are subscribed to the Google Groups sqlalchemy group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To post to this group, send email to sqlalchemy@googlegroups.com. Visit this group at http://groups.google.com/group/sqlalchemy?hl=en. For more options, visit https://groups.google.com/groups/opt_out.
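[Editor's note] The advice above boils down to pointing SQLAlchemy at Vertica either through the new package or through the stock PostgreSQL dialect. A minimal connection sketch, with the caveat that the dialect name and DSN below are assumptions (check the vertica-sqlalchemy package for the entry point it actually registers):

```python
from sqlalchemy import create_engine

# Option 1: via the vertica-sqlalchemy package.
# "vertica+pyodbc" and "my_vertica_dsn" are hypothetical names for
# illustration; the package's docs define the real URL scheme.
engine = create_engine("vertica+pyodbc://dbadmin:secret@my_vertica_dsn")

# Option 2: Jonathan's suggestion of treating Vertica as PostgreSQL,
# i.e. using the stock postgresql dialect against a Vertica server.
# Whether this works depends on how PostgreSQL-compatible the server
# and driver actually are, which the thread itself leaves open.
engine = create_engine("postgresql://dbadmin:secret@vertica-host:5433/mydb")
```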
[sqlalchemy] mysql + query.execute memory usage
Hi, I'm using SQLAlchemy to generate a query that returns a lot of data. The trouble is that when I call query.execute(), instead of returning the ResultProxy straight away and letting me fetch rows as I choose, the call blocks and memory usage grows to gigabytes before the process is killed for using too much memory. It looks to me like execute() is prefetching the entire result. Is there any way to prevent query.execute() from loading the whole result set? Thanks, James
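[Editor's note] The behavior James wants is incremental fetching: pull rows in bounded batches so memory stays flat regardless of result size. A runnable stand-in demo using the stdlib sqlite3 driver (not MySQLdb; table and column names are made up), showing the DB-API fetchmany() pattern the later replies build on:

```python
import sqlite3

# Build a throwaway in-memory table with 10,000 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (id INTEGER, value REAL)")
conn.executemany(
    "INSERT INTO measurements VALUES (?, ?)",
    [(i, i * 0.5) for i in range(10_000)],
)

# Fetch in fixed-size batches: at most 1,000 row tuples held per loop
# iteration, instead of materializing the whole result at once.
cursor = conn.execute("SELECT id, value FROM measurements")
total = 0
while True:
    batch = cursor.fetchmany(1000)
    if not batch:
        break
    total += len(batch)

print(total)  # 10000
```

Note the caveat from the reply below: batched fetching on the client side only helps if the underlying driver is not already buffering the full result set, which is exactly the MySQLdb issue this thread turns on.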
[sqlalchemy] Re: mysql + query.execute memory usage
On Nov 18, 3:01 pm, Michael Bayer mike...@zzzcomputing.com wrote:
> On Nov 18, 2009, at 9:57 AM, James Casbon wrote:
>> Hi, I'm using SQLAlchemy to generate a query that returns a lot of data. The trouble is that when I call query.execute(), instead of returning the ResultProxy straight away and letting me fetch rows as I choose, the call blocks and memory usage grows to gigabytes before the process is killed. It looks like execute() is prefetching the entire result. Is there any way to prevent query.execute() from loading the whole result set?
>
> For the ORM, look into using yield_per() or applying limit()/offset(). Without the ORM, no rows are buffered on the SQLAlchemy side. Note, however, that MySQLdb is likely prefetching the entire result set in any case (this is psycopg2's behavior, but I haven't confirmed it for MySQLdb).

Thanks, but I'm not using the ORM. It looks like you have to specify a server-side cursor; see SSCursor in http://mysql-python.sourceforge.net/MySQLdb.html. I don't recall any way of forcing SQLAlchemy to use a particular cursor? James
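[Editor's note] There is in fact a way to hand MySQLdb's SSCursor to SQLAlchemy: create_engine() accepts a connect_args dictionary that is passed through to the DBAPI's connect() call, and MySQLdb's connect() takes a cursorclass argument. A sketch, assuming MySQLdb is installed and the credentials/table names are placeholders (not tested against a live server here):

```python
import MySQLdb.cursors
from sqlalchemy import create_engine

# connect_args is forwarded verbatim to MySQLdb.connect(), so the raw
# connections use the server-side SSCursor from the MySQLdb docs linked
# above: rows are streamed from the server rather than prefetched.
engine = create_engine(
    "mysql://user:secret@localhost/bigdata",
    connect_args={"cursorclass": MySQLdb.cursors.SSCursor},
)

result = engine.execute("SELECT * FROM big_table")
for row in result:          # rows now arrive incrementally
    pass                    # ...process each row here...
```

One design caveat with SSCursor: the result must be fully consumed (or the cursor closed) before issuing another query on the same connection, so this suits dedicated bulk-export connections better than general-purpose ones.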