I'm puzzling out how to connect to multiple databases that have the  
same tables (and the same table definitions). The use case is that  
data may be retrieved from two or more similar dbs and perhaps copied  
between them. Following earlier advice, I've used a single metadata  
and 'bind' in the session creation to indicate which engine. However,
there's a problem. Example code below:

        from sqlalchemy import *
        from sqlalchemy.orm import *

        uri1 = '...'  # blah blah
        uri2 = '...'  # blah blah

        class Dummy (object):
                "simple mapped class"
                pass
        
        class DbConnection (object):
                "encapsulate engine & session"
                def __init__ (self, uri, dbschema):
                        engine = create_engine (uri)
                        Session = sessionmaker (bind=engine,
                                autoflush=False, transactional=False)
                        self._session = Session()
                        dbschema.metadata.create_all (bind=engine,
                                checkfirst=True)
                
                def __del__ (self):
                        self._session.close()


        class DbSchema (object):
                "metadata & table definitions"
                def __init__ (self):
                        self.metadata = MetaData()
                        self.dummy_table = Table('dummy', self.metadata,
                                Column('id', Integer, primary_key=True),
                                Column('name', Unicode(40)),
                        )
                        mapper (Dummy, self.dummy_table)
                
        # define tables and connect to both dbs
        schema = DbSchema()
        conn1 = DbConnection (uri1, schema)
        conn2 = DbConnection (uri2, schema)

        # create object
        x = Dummy()
        x.name = u'testname'

        # save to one and detach object
        conn1._session.save_or_update (x)
        conn1._session.flush ()
        conn1._session.expunge (x)

        # save to two silently fails!
        conn2._session.save_or_update (x)
        conn2._session.flush ()
        conn2._session.expunge (x)


The object saves obediently to the first db, but fails to save to the  
second (although tables are created correctly). The successful write  
is always the first one, regardless of the order of opening  
connections. It doesn't depend on how the primary key is defined:  
autogenerated or supplied. I therefore surmise this is because  
'save_or_update' detects the object as already being persistent,  
despite the use of 'expunge'.
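To illustrate the surmise: here is a minimal sketch translated to a
recent SQLAlchemy (1.4+, where Session.add(), registry.map_imperatively()
and make_transient() replace the save_or_update/mapper spellings used
above -- those names are my assumption about the modern equivalents, not
the API in the code here). After flush + expunge the instance is merely
"detached" -- it still carries its identity key -- so a second session
treats it as already persistent. Stripping the key lets the second
insert succeed:

```python
from sqlalchemy import create_engine, Table, Column, Integer, Unicode, MetaData
from sqlalchemy.orm import Session, registry, make_transient

class Dummy:
    "simple mapped class"
    pass

# single metadata / table definition shared by both engines
metadata = MetaData()
dummy_table = Table('dummy', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', Unicode(40)),
)
registry().map_imperatively(Dummy, dummy_table)

# two separate in-memory dbs standing in for uri1/uri2
engine1 = create_engine('sqlite://')
engine2 = create_engine('sqlite://')
metadata.create_all(engine1)
metadata.create_all(engine2)

x = Dummy()
x.name = 'testname'

# save to one and detach the object
s1 = Session(engine1)
s1.add(x)
s1.flush()
s1.expunge(x)      # detached, but the identity key survives

make_transient(x)  # forget the identity key entirely

# now the second save is a real INSERT, not a no-op UPDATE
s2 = Session(engine2)
s2.add(x)
s2.flush()
```

Without the make_transient() call the second session sees a detached,
apparently-persistent instance and tries to UPDATE a row that does not
exist in the second db, which is consistent with the silent failure
described above.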

Is there another step required to detach the object, or is another  
approach necessary to handle multiple dbs?

--
Dr Paul-Michael Agapow: VieDigitale / Inst. for Animal Health
[EMAIL PROTECTED] / [EMAIL PROTECTED]



