On 09/02/2012 20:06, Michael Bayer wrote:

> On Feb 9, 2012, at 3:02 PM, Chris Withers wrote:
>
>> Almost, but...
>>
>> We can't have per-connection models; that would be semantically weird.
>> What I'm after, semantically, is a different metadata at session creation
>> time, keyed off the dsn of the database being connected to, with
>> irrelevant classes blowing up if used but ignored otherwise. Clues as to
>> how to implement this gratefully received...
>
> do you have multiple, simultaneous engines when the app runs,

Yes. Some will have totally different schemas, whose mapped classes I wouldn't expect to interact with each other at all. For those, I'd imagine a separate MetaData instance, as in the "Model Setup" section, would work.
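
Something like this is what I have in mind for that case (a minimal
sketch; the names are mine, not from the docs):

from sqlalchemy import MetaData
from sqlalchemy.ext.declarative import declarative_base

# one MetaData and declarative base per schema family, so table
# definitions for one family never leak into the other
billing_metadata = MetaData()
BillingBase = declarative_base(metadata=billing_metadata)

reporting_metadata = MetaData()
ReportingBase = declarative_base(metadata=reporting_metadata)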

However, some will have very similar schemas^1, with just a few tables deliberately missing. Here, it's "okay" (i.e. programmer error) if queries involving classes mapped to those missing tables blow up because the wrong engine was used. But the reflection itself needs to not blow up.

With one metadata object shared between these engines (which is what declarative with reflection appears to support), there's a trap. If the first engine used in an app (and these can be web apps and, just as annoyingly, unit test runs, where the order is arbitrary) is one where a table is missing, the class will be declared, but its reflection will fail, so it never gets mapped and will blow up if later used with an engine that *does* have the table.
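
To make that concrete (a rough sketch, assuming reflection is done with
autoload against whichever engine happens to come first):

from sqlalchemy import MetaData, Table

shared_metadata = MetaData()

def reflect(table_name, engine):
    # if the first engine to arrive here is missing the table, this
    # raises NoSuchTableError, and the declarative class that would
    # have used the Table never gets mapped at all
    return Table(table_name, shared_metadata,
                 autoload=True, autoload_with=engine)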

So, in my head I'm thinking of something like (excuse the hand waving):

data_per_dsn = {}

def getSession(dsn):
    if dsn not in data_per_dsn:
        engine = make_engine(dsn)  # ^2
        metadata = MetaData()
        reflect_tables_and_map_declarative_classes(metadata, engine)
        data_per_dsn[dsn] = engine, metadata
    engine, metadata = data_per_dsn[dsn]
    # does the session need the metadata too, or just the engine?
    return make_session(engine, metadata)

...which would solve both of the cases above.

The main problem I see is that the above could result in two metadata objects for one declarative class.
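
That is (hypothetical dsns, just to illustrate the worry):

session_a = getSession('dsn_for_production')  # reflects into metadata A
session_b = getSession('dsn_for_staging')     # reflects into metadata B
# ...but a declarative class has exactly one __table__, tied to exactly
# one MetaData, so which of the two reflections does it end up with?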

Wow, I feel like I'm making a hash of describing this; I hope I'm vaguely succeeding in getting the info across :-S

Chris

^1 Let's pretend that it's just missing tables; the fact that supposedly identical tables in different environments have evolved over time to have different primary keys^3 and different numbers of columns is *not* something SQLAlchemy should have to deal with ;-)

^2 Oh for a function in SQLAlchemy to turn a dsn into a SQLAlchemy URL, and maybe back again...
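
In the meantime I roll my own; a rough sketch, assuming the dsn is a
libpq-style "key=value" string (the parsing here is mine, not
SQLAlchemy's):

from sqlalchemy import create_engine
from sqlalchemy.engine.url import URL

def dsn_to_url(dsn):
    # pull apart a "host=h port=p dbname=d user=u" style string
    parts = dict(item.split('=', 1) for item in dsn.split())
    return URL('postgresql',
               username=parts.get('user'),
               password=parts.get('password'),
               host=parts.get('host'),
               port=int(parts['port']) if 'port' in parts else None,
               database=parts.get('dbname'))

...at which point make_engine(dsn) above is just
create_engine(dsn_to_url(dsn)).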

^3 Let's also pretend that (seriously?!) some tables had not ended up with unique indexes instead of real primary keys. Postgres is happy enough with those, but SQLAlchemy doesn't reflect them as primary keys, because they're not *actually* primary keys - *sigh* - again, SA should not have to deal with this ;-)
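
(The workaround I know of, for what it's worth: override the reflected
definition by spelling out the unique-index column as the primary key.
The table, column and URL here are made up:)

from sqlalchemy import Column, MetaData, String, Table, create_engine

engine = create_engine('postgresql://...')  # whichever engine has the table
metadata = MetaData()
# the explicit Column overrides the reflected one, so the mapper gets
# a "primary key" even though the database only has a unique index
orders = Table('orders', metadata,
               Column('order_ref', String, primary_key=True),
               autoload=True, autoload_with=engine)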

--
Simplistix - Content Management, Batch Processing & Python Consulting
           - http://www.simplistix.co.uk


