yeah...I think most people just want to use bind_to with their session in order to switch around databases for now...it's the most expedient approach.
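
Just to illustrate the idea, here's a rough sketch against the 0.3-era API (the engine URLs are made up, and the exact import location of create_session varies a bit by version):

from sqlalchemy import create_engine
from sqlalchemy.orm import create_session

pg_engine = create_engine('postgres://user:pass@somehost/appdb')
lite_engine = create_engine('sqlite:///local.db')

# each session talks to whichever database it was bound to
pg_session = create_session(bind_to=pg_engine)
lite_session = create_session(bind_to=lite_engine)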

On this subject, I do have a notion for a more ambitious "clustering" layer, since at this point I've had a couple of users (at least one of them an ex-user now) who really want the ability to build comprehensive multi-database access into SA. But they all want to build it into the ORM. My notion is that it should be built into Engine/Connection/constructed SQL, which ultimately would work a lot better...it seems more difficult, but I think it's actually less difficult, because the ORM has enough to worry about without having to build hooks into the fifteen different places where it executes SQL. If you intercept at the "execute" layer, there's just one hook; it doesn't get in anyone's way, it doesn't pollute the ORM, and you can work with straight SQL in a clustered fashion too.

The basic idea is that the Connection object would be replaced by a proxy which intercepts SQL and applies rules to it, resulting in any number of activities...the two that come to mind are "execute this SQL on database X" and "execute this SQL on databases X, Y, and Z", where the latter is used to build instant clustered writes.

the "writing to multiple dbs part", if i had to build an app that  
does it, is something id rather accomplish by using the clustering  
options available for the database itself.  but people still want  
this inside of SA, for things such as writing to both postgres and  
sqlite simultaneously, for example.

The other part of this that people are skeptical about is the "intercepts SQL" part. I was talking with Glyph (of Twisted) at PyCon about this. He was concerned about SQL statements that have comment strings in them, etc., and how it's ultimately impossible to be sure what kind of SQL is being passed (specifically, with regard to transaction auto-commit decisions...which is something SA implements by looking at the SQL at the string level).

But for this clustering idea, I don't want to look at SQL strings. Pretty much all SQL in SA is generated as SQL expressions first (with the exception of literal text statements...which are actually quite rare as a full statement, since you can't really map to one anyway unless it's been broken out into the form of a select()). The main point of a SQL expression is that it's a data structure which is directly introspectable through APIs. There is no ambiguity as to whether a statement is a SELECT, INSERT, etc., and additionally you can dig into the expression and pull out all sorts of information, such as what rows are being targeted. The ORM performs all kinds of analyses of SQL fragments, table expressions, etc., and that is all made possible through ClauseElement, as well as the main way to look at ClauseElements, which is ClauseVisitor.
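
A rough illustration of that introspection (exact import paths vary a bit between versions, so take this as a sketch rather than gospel; the real machinery would go through a ClauseVisitor subclass):

from sqlalchemy import MetaData, Table, Column, Integer, String, select
from sqlalchemy.sql import Select

meta = MetaData()
users = Table('users', meta,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)))

stmt = select([users], users.c.name == 'ed')

# no string parsing required: the statement knows what it is...
print isinstance(stmt, Select)          # True -- unambiguously a SELECT

# ...and what it touches
print [t.name for t in stmt.froms]      # ['users']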

So, as far as I've gotten on this: it would be a new package "sqlalchemy.scf" (for SQLAlchemy Clustering Framework). It would use a ProxyConnection object that can intercept constructed SQL issued via execute_clauseelement() / execute_compiled() (we would have to build ProxyConnection; the current ProxyEngine is pretty much non-functional), and you would be able to create rulesets that are tested using a ClauseVisitor...the statement is received, a ClauseVisitor is applied to it, and as the visitor encounters various kinds of sub-elements of the statement, rules are retrieved based on matches, which then contain instructions on how the statement should be handled.
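
Sketching how such a ruleset might look (hypothetical names throughout; only get_children() is a real ClauseElement method, and a real implementation would subclass ClauseVisitor rather than hand-rolling the traversal):

class Rule(object):
    def __init__(self, matches, send_to):
        self.matches = matches    # e.g. lambda elem: getattr(elem, 'name', None) == 'users'
        self.send_to = send_to    # e.g. ['master'] or ['master', 'replica1', 'replica2']

class RuleVisitor(object):
    """Walk a statement's sub-elements, collecting targets of every rule that matches."""
    def __init__(self, rules):
        self.rules = rules
        self.targets = set()

    def traverse(self, element):
        for rule in self.rules:
            if rule.matches(element):
                self.targets.update(rule.send_to)
        for child in element.get_children():
            self.traverse(child)

# the ProxyConnection would do roughly:
#   visitor = RuleVisitor(rules); visitor.traverse(statement)
#   ...then execute the statement on each database named in visitor.targets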

At this point, it's at best a mid-0.4 idea and maybe even a 0.5 idea.


On May 30, 2007, at 9:36 AM, King Simon-NFHD78 wrote:

>
> I think the answer to this partly depends on which parts of SA you are
> using, and how your databases are set up. If all the databases have
> different schemas, and you are using the low-level SA API (or your ORM
> objects are each defined against a single database), I would probably
> use a metadata/engine instance per database. You can then define your
> Table objects against the appropriate metadata instances.
>
> If you need to persist the same ORM class to multiple databases, then
> you could look into the 'bind_to' parameter to create_session
> (mentioned here:
> http://www.sqlalchemy.org/docs/unitofwork.html#unitofwork_api_bind).
>
> Hope that helps,
>
> Simon
>
> -----Original Message-----
> From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
> On Behalf Of Alchemist
> Sent: 30 May 2007 14:19
> To: sqlalchemy
> Subject: [sqlalchemy] how to retrieve/update data from/on multiple
> databases
>
>
> Working with:
> Python 2.4
> SQLAlchemy 0.3.7
> Postgresql 8.2 database servers
>
> I am working in a multidatabase environment.  I need to perform a
> query over multiple databases, then manipulate the obtained results
> and commit my changes in different tables in different databases.
>
> How can I query from multiple databases?
> How can INSERTs/UPDATEs be performed on different tables in different
> databases?
>
> Thank you.
>
>
>
>

