Re: [sqlalchemy] Low performance when reflecting tables via pyodbc+mssql

2013-02-13 Thread 耿 爽
On Tue, Feb 12, 2013 at 9:37 PM, Simon King si...@simonking.org.uk wrote:

  Caching the metadata should be fairly easy if you are happy with that
  approach. I think MetaData instances are picklable:
 
 
 http://stackoverflow.com/questions/11785457/sqlalchemy-autoloaded-orm-persistence


I just pickled all the metadata and it works nicely.
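Roughly like this, for anyone who finds this thread later (a minimal sketch; `meta.pickle` is just an illustrative path):

import pickle
from sqlalchemy import MetaData, create_engine

dbengine = create_engine('mssql+pyodbc://MYDSN')

# First run: reflect everything once, then cache the MetaData on disk.
dbmeta = MetaData()
dbmeta.reflect(bind=dbengine)
with open('meta.pickle', 'wb') as f:
    pickle.dump(dbmeta, f)

# Later runs: load the cached metadata instead of re-reflecting,
# and re-bind it to a live engine.
with open('meta.pickle', 'rb') as f:
    dbmeta = pickle.load(f)
dbmeta.bind = dbengine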
Thanks.

Br,
Shaung





Re: [sqlalchemy] Low performance when reflecting tables via pyodbc+mssql

2013-02-13 Thread shaung
On Tuesday, February 12, 2013 9:13:48 PM UTC+9, betelgeuse wrote:

 I had a similar problem.
 I had an MS SQL database that another application created, and I needed to
 select data from it. There were lots of tables, so I tried reflection, but
 it was slow, so I decided to use the SA declarative method. But declaring all
 the tables again in Python was too much work. I used sqlautocode to
 generate declarative table classes and use them in my models with some
 minor modifications. If the DB structure does not change too often, this
 will speed things up.


I've been doing it that way with Django.
I tried sqlautocode, but got an `ImportError: cannot import name
_deferred_relation` error.
(I'm using SA 0.8.)
Maybe something is broken, but I don't have much time to look into it :(





[sqlalchemy] Low performance when reflecting tables via pyodbc+mssql

2013-02-12 Thread shaung
For the following code:

from sqlalchemy import create_engine, MetaData, Table

dbengine = create_engine('mssql+pyodbc://MYDSN')
dbmeta = MetaData()
dbmeta.bind = dbengine

def get_table(name):
    return Table(name, dbmeta, autoload=True, autoload_with=dbengine)



It takes 50 seconds or so per call to `get_table`.
Did I miss something?
Where should I look?
Thanks in advance.





Re: [sqlalchemy] Low performance when reflecting tables via pyodbc+mssql

2013-02-12 Thread Simon King
On Tue, Feb 12, 2013 at 10:12 AM, shaung shaun.g...@gmail.com wrote:
 For the following code:

 from sqlalchemy import create_engine, MetaData, Table

 dbengine = create_engine('mssql+pyodbc://MYDSN')
 dbmeta = MetaData()
 dbmeta.bind = dbengine

 def get_table(name):
     return Table(name, dbmeta, autoload=True, autoload_with=dbengine)


If you add echo='debug' to your create_engine call, SA will log all
calls to the database and rows returned, which might give you an idea
of where all the time is being spent.
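For example, reusing the connection URL from your snippet:

# echo='debug' logs every statement SA emits, plus the rows it gets back
dbengine = create_engine('mssql+pyodbc://MYDSN', echo='debug')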

Hope that helps,

Simon





Re: [sqlalchemy] Low performance when reflecting tables via pyodbc+mssql

2013-02-12 Thread shaung
On Tuesday, February 12, 2013 7:18:37 PM UTC+9, Simon King wrote:

 If you add echo='debug' to your create_engine call, SA will log all 
 calls to the database and rows returned, which might give you an idea 
 of where all the time is being spent. 


Thanks, Simon. I've looked through the debug log and found the reason.

It turns out that the table has several foreign key constraints,
and SA inspects all of the related tables, and all the tables
related to those tables...
There were 23 tables involved, which explains the long execution time.

So is there anything I can do about this?
I'm considering two possibilities:

1. Ignore the constraints to speed things up
2. Cache all the metadata to a disk file so there's no need to wait when
restarting the program

Either would be fine for me.
Is it possible?





Re: [sqlalchemy] Low performance when reflecting tables via pyodbc+mssql

2013-02-12 Thread Timuçin Kızılay

On 12-02-2013 13:29, shaung wrote:

On Tuesday, February 12, 2013 7:18:37 PM UTC+9, Simon King wrote:

If you add echo='debug' to your create_engine call, SA will log all
calls to the database and rows returned, which might give you an idea
of where all the time is being spent.


Thanks, Simon. I've looked through the debug log and found the reason.

It turns out that the table has several foreign key constraints,
and SA inspects all of the related tables, and all the tables
related to those tables...
There were 23 tables involved, which explains the long execution time.

So is there anything I can do about this?
I'm considering two possibilities:

1. Ignore the constraints to speed things up
2. Cache all the metadata to a disk file so there's no need to wait when
restarting the program

Either would be fine for me.
Is it possible?



I had a similar problem.
I had an MS SQL database that another application created, and I needed to
select data from it. There were lots of tables, so I tried reflection, but
it was slow, so I decided to use the SA declarative method. But declaring all
the tables again in Python was too much work. I used sqlautocode to
generate declarative table classes and use them in my models with some
minor modifications. If the DB structure does not change too often, this
will speed things up.
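The generated classes follow the usual declarative pattern, something like
this (a hand-written sketch, not actual sqlautocode output; the `customer`
table is made up):

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

# One class per table; the columns are spelled out in Python,
# so no reflection queries run at import time.
class Customer(Base):
    __tablename__ = 'customer'

    id = Column(Integer, primary_key=True)
    name = Column(String(100))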







Re: [sqlalchemy] Low performance when reflecting tables via pyodbc+mssql

2013-02-12 Thread Simon King
On Tue, Feb 12, 2013 at 11:29 AM, shaung shaun.g...@gmail.com wrote:
 On Tuesday, February 12, 2013 7:18:37 PM UTC+9, Simon King wrote:

 If you add echo='debug' to your create_engine call, SA will log all
 calls to the database and rows returned, which might give you an idea
 of where all the time is being spent.


 Thanks, Simon. I've looked through the debug log and found the reason.

 It turns out that the table has several foreign key constraints,
 and SA inspects all of the related tables, and all the tables related to
 those tables...
 There were 23 tables involved, which explains the long execution time.

 So is there anything I can do about this?
 I'm considering two possibilities:

 1. Ignore the constraints to speed things up
 2. Cache all the metadata to a disk file so there's no need to wait when
 restarting the program

 Either would be fine for me.
 Is it possible?


Caching the metadata should be fairly easy if you are happy with that
approach. I think MetaData instances are picklable:

  
http://stackoverflow.com/questions/11785457/sqlalchemy-autoloaded-orm-persistence

Simon





Re: [sqlalchemy] Low performance when reflecting tables via pyodbc+mssql

2013-02-12 Thread Michael Bayer

On Feb 12, 2013, at 6:29 AM, shaung shaun.g...@gmail.com wrote:

 On Tuesday, February 12, 2013 7:18:37 PM UTC+9, Simon King wrote:
 If you add echo='debug' to your create_engine call, SA will log all 
 calls to the database and rows returned, which might give you an idea 
 of where all the time is being spent. 
 
 
 Thanks, Simon. I've looked through the debug log and found the reason.
 
 It turns out that the table has several foreign key constraints,
 and SA inspects all of the related tables, and all the tables related to
 those tables...
 There were 23 tables involved, which explains the long execution time.
 
 So is there anything I can do about this?
 I'm considering two possibilities:
 
 1. Ignore the constraints to speed things up
 2. Cache all the metadata to a disk file so there's no need to wait when
 restarting the program
 

You can pickle the metadata for this purpose. Though I wonder if it's time
to revisit that behavior of reflection; it would be easy enough to have it
stop reflecting after one level deep. I'm actually not even sure why it's so
critical that it even traverse the first level of foreign keys, since those
ForeignKey objects could just remain unresolved until one ensured that the
other tables were also pulled in explicitly. The Table object would still
work with those FK objects unresolved.
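In the meantime, the lower-level Inspector interface returns information
about a single table without traversing its foreign keys, if that's all
that's needed (a sketch; 'some_table' is a placeholder):

from sqlalchemy import create_engine
from sqlalchemy.engine.reflection import Inspector

dbengine = create_engine('mssql+pyodbc://MYDSN')
inspector = Inspector.from_engine(dbengine)

# A list of dicts describing just this table's columns;
# no related tables are reflected.
columns = inspector.get_columns('some_table')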

