Re: [sqlalchemy] Doing reflection with multiple remote databases.

2013-01-16 Thread Hetii
The point is that only some of them are identical and others are not; it's more like sets of grouped models. One model is valid for 300 databases, another one for another 500, etc.

So I need a fast way to compare each database against the rest to see which ones can share a model, and then, of course, build it.
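
One way such a grouping could be organized with plain SQLAlchemy reflection is to build a hashable "fingerprint" per database and bucket the databases that share it. This is only a sketch, under the assumption that matching table and column names/types is enough to decide that two databases can share a model; the URLs below are placeholders:

from collections import defaultdict
from sqlalchemy import create_engine, inspect

def schema_signature(url):
    # reflect table and column names/types into a hashable fingerprint
    insp = inspect(create_engine(url))
    sig = []
    for table in sorted(insp.get_table_names()):
        cols = tuple(sorted((col['name'], str(col['type']))
                            for col in insp.get_columns(table)))
        sig.append((table, cols))
    return tuple(sig)

# placeholder URLs standing in for the 1500+ real connection strings
database_urls = ['mysql://host1/app', 'mysql://host2/app']

# databases with the same fingerprint can share one reflected model
groups = defaultdict(list)
for url in database_urls:
    groups[schema_signature(url)].append(url)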





Re: [sqlalchemy] Doing reflection with multiple remote databases.

2013-01-16 Thread Michael Bayer
If you want some code you can adapt to compare schemas, take a look at the 
autogenerate code in Alembic:

https://bitbucket.org/zzzeek/alembic/src/b0118a7df6ec71597b3b7849183768e4d59c9c49/alembic/autogenerate.py?at=default
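
For reference, driving that comparison code directly might look roughly like the following, assuming an Alembic version that exposes compare_metadata(); the URLs are placeholders:

from sqlalchemy import create_engine, MetaData
from alembic.migration import MigrationContext
from alembic.autogenerate import compare_metadata

# reflect one representative database as the reference schema
reference = MetaData()
reference.reflect(bind=create_engine('mysql://host1/app'))

# compare another database against it; an empty diff means the two
# schemas match and could share the same model
ctx = MigrationContext.configure(create_engine('mysql://host2/app').connect())
diff = compare_metadata(ctx, reference)
print(diff or 'schemas match')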







Re: [sqlalchemy] Doing reflection with multiple remote databases.

2013-01-16 Thread Claudio Freire
On Tue, Jan 15, 2013 at 8:48 PM, Hetii ghet...@gmail.com wrote:
 Even when I dump all of them into a declarative base model, it's still a huge
 amount of data that needs to be parsed and loaded.

 I want to ask if it's possible to share table/column definitions across
 different database models to reduce the amount of resources used?

Even if you can't share the objects themselves (not sure you can, you
probably can't), you can share the code that generates them.

Remember, Python is dynamic, and a class statement is code that actually
creates a class object:

from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base

def init():
    # each call re-executes the class statement and creates brand-new class objects
    Base = declarative_base()
    class Blah(Base):
        __tablename__ = 'blah'
        id = Column(Integer, primary_key=True)
    return locals()

globals().update(init())

^
You can call that init function as many times as you want. I use
something like that to map two identical databases (master and
replica) into two namespaces I can pick depending on the task.
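
A hypothetical usage sketch of that pattern for the master/replica case described above (init() is the factory from the snippet, and the URLs are placeholders):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# one independent namespace of mapped classes per database
master_ns = init()
replica_ns = init()

master_session = sessionmaker(bind=create_engine('postgresql://master-host/app'))()
replica_session = sessionmaker(bind=create_engine('postgresql://replica-host/app'))()

# pick the namespace/session pair that suits the task at hand,
# e.g. read-only reporting against the replica:
rows = replica_session.query(replica_ns['Blah']).all()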




[sqlalchemy] Doing reflection with multiple remote databases.

2013-01-15 Thread Hetii
Hello.

I have a somewhat rare scenario.
In my case I need to be able to work with over 1500 remote databases, and I 
cannot change this fact.

Reflecting all of them is not possible, because it consumes too much time 
and resources, so I need to generate models for them.

Of course this process also takes a long time and depends on the dialect's 
inspector implementation. For example, getting a single table's column 
definitions emits one query in the mysql dialect, but four in postgresql.

Even when I dump all of them into a declarative base model, it's still a huge 
amount of data that needs to be parsed and loaded.

I want to ask if it's possible to share table/column definitions across 
different database models to reduce the amount of resources used?

If someone has an idea how to organize/optimize the structure for such an 
application model, then please share :)


Best regards.








Re: [sqlalchemy] Doing reflection with multiple remote databases.

2013-01-15 Thread Michael Bayer


Are the schemas in all 1500 databases identical?  In that case you only need 
to reflect the schema once into a MetaData object, and you can then share that 
object with as many engines/connections as you want.  The MetaData and your model 
are tied to a particular schema design, not to a database connection.
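
A minimal sketch of that reflect-once idea (the URLs and the 'users' table name are placeholders):

from sqlalchemy import create_engine, MetaData

# reflect the shared schema a single time, from one representative database
meta = MetaData()
meta.reflect(bind=create_engine('mysql://host1/app'))

# reuse the same MetaData for every database with that schema;
# only the engine/connection changes
for url in ['mysql://host2/app', 'mysql://host3/app']:
    engine = create_engine(url)
    with engine.connect() as conn:
        result = conn.execute(meta.tables['users'].select().limit(5))
        print(url, result.fetchall())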

If the schemas are *not* identical, then we might do some reductionist 
thinking.   Your app would have SQL queries that work against all of the 
databases, suggesting that it only cares about a common denominator of 
table/column definitions.  In that case the schema you reflect from database 
#1 is still usable; you just wouldn't refer to any of the tables/columns 
that aren't present in all schemas.
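
If that common denominator is known up front, reflection can even be limited to it; a hedged sketch with placeholder table names and URL:

from sqlalchemy import create_engine, MetaData

# hypothetical list of tables known to exist in every database
shared_tables = ['users', 'orders']

meta = MetaData()
meta.reflect(bind=create_engine('mysql://host1/app'), only=shared_tables)
# meta now holds only the common tables and can be shared across all
# of the databases, as described above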

If the schemas are completely different, then there's no sharing to be had 
anyway.
