Thank you very much again ... that worked like a charm.

For future reference, and to give some idea of the result, this is the
function I now use to generate the root base class/table of the generated
joined-table inheritance tree:

from sqlalchemy import Column, Integer, String, DateTime, ForeignKey, func
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base
# datatypes (fixed engine + Type class) and MethodMixin are defined elsewhere

def createObject():
    # class dict for the root of the generated joined-table inheritance tree
    dct = dict(association_tables={},   # will hold many-to-many association tables
               __tablename__='Object',
               id=Column(Integer, primary_key=True),
               # discriminator column: a foreign key into the fixed Type table
               type_name=Column(String(50),
                                ForeignKey(datatypes.Type.__table__.c.name),
                                nullable=False, default='Object'),
               created_at=Column(DateTime, default=func.now()),
               __mapper_args__={'polymorphic_on': 'type_name'},
               type=relationship(datatypes.Type, uselist=False))
    # a fresh declarative Base is created on every call, so the generated
    # classes/tables can simply be thrown away between test runs
    return type('Object',
                (declarative_base(bind=datatypes.engine), MethodMixin),
                dct)

Here datatypes.Type is the main class/table holding the parameters of the
generated classes/tables, MethodMixin provides some extra methods (like
__repr__), and association_tables will hold the association tables for
many-to-many relationships (still to be tested). Classes and their
corresponding tables share the same name.

I haven't tested much, but for now everything seems to work. Note that
each time this function runs, a new 'Base' class is created for the
generated class 'Object'.
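
To give an idea of how it is used, a rough, untested sketch (datatypes.Session
is just an illustrative sessionmaker name, not actual code from my module):

Object = createObject()            # fresh Base + generated root class
Object.metadata.create_all()       # the Base is bound to datatypes.engine

session = datatypes.Session()
session.add(Object())              # type_name defaults to 'Object'
session.commit()
assert session.query(Object).count() == 1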

Questions are very welcome.

Cheers, Lars



On Feb 27, 4:28 pm, Michael Bayer <mike...@zzzcomputing.com> wrote:
> >> The only database link between these two sets is the 'polymorphic on'
> >> column in the root base table in set 2, which is a foreign key to a
> >> Type table in set 1.
>
> also, I'd set up this foreign key using a literal Column so you don't have to 
> worry about string lookups:
>
> ForeignKey(MyRoot.__table__.c.type)
>
> and definitely don't make any backrefs into the fixed system!   Otherwise 
> then yes we might have to look at clearing out mappers entirely between tests.
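
(For reference, that is how it ended up in the function above: the foreign key
takes the literal Column, and the relationship is one-way, with no backref
into the fixed Type class:)

type_name = Column(String(50),
                   ForeignKey(datatypes.Type.__table__.c.name),  # literal Column, no string lookup
                   nullable=False, default='Object')
type = relationship(datatypes.Type, uselist=False)               # no backref=... into the fixed system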
>
> >> For a typical test i would like to:
>
> >> 1) create records in set 1 of tables (representing classes/tables with
> >> their attributes/foreign keys and fields),
> >> 2) from these records generate the tables/classes, where the tables
> >> will be in set 2.
> >> 3) add records to the generated tables/classes and test whether
> >> adding, updating, deleting and querying works as intended.
>
> >> To be able to perform several of these tests in one run, I need to
> >> empty the tables of set 1. However, I need to completely remove any
> >> data (mappings, class definitions, records, tables) from set 2
> >> between individual tests.
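
For illustration, a test following these three steps could look roughly like
this (an untested sketch; it assumes setUp builds self.session against
datatypes.engine):

def test_generated_class(self):
    # 1) records in the fixed tables (set 1) describing the class to generate
    self.session.add(datatypes.Type(name='Object'))
    self.session.commit()

    # 2) generate the class/table (set 2) from those records
    Object = createObject()
    Object.metadata.create_all()

    # 3) add and query records through the generated class
    self.session.add(Object())
    self.session.commit()
    assert self.session.query(Object).count() == 1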
>
> > so what I would do is:
>
> > 1. the "other tables" generated would be in their own MetaData object.  
> > After you've done drop_all() for those, throw that whole MetaData away.
> > 2. the "other classes" generated, I'm assuming these are generated as the 
> > result of some function being called and aren't declared within a module.   
> > When you're done with them, throw them away too.
> > 3. The next run of tests would redo the entire process, using a new 
> > MetaData object, and generating new classes.
>
> > Key here is that there's no point at which the dynamically generated 
> > classes/tables/mappers need to be present as de-composed elements, once 
> > they've been composed.   They're just thrown away.
>
> > Using two Base classes would be an easy way to achieve this.
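
For anyone reading along, my understanding of the two-Base setup is roughly
this (a sketch only; the fixed part stands in for my datatypes module):

from sqlalchemy import Column, String, create_engine
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite://')

# fixed part: one Base/MetaData, declared once at module level
FixedBase = declarative_base(bind=engine)

class Type(FixedBase):
    __tablename__ = 'Type'
    name = Column(String(50), primary_key=True)

FixedBase.metadata.create_all()

# dynamic part: a fresh Base (and with it a fresh MetaData) per test
def run_one_test():
    DynamicBase = declarative_base(bind=engine)
    # ... generate classes/tables against DynamicBase here ...
    DynamicBase.metadata.create_all()
    try:
        pass  # run the actual test
    finally:
        DynamicBase.metadata.drop_all()
        # DynamicBase and the generated classes are simply discarded here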
>
> >> I (naively) thought of some ways this might be possible:
>
> >> 1) use two separate metadata objects for the same database, bind them
> >> to separate 'Base' classes, one for each set  and replace the one
> >> representing set 2 before each individual test,
> >> 2) find some way to remove all data concerning set 2 of tables from
> >> mappings, metadata, database, etc. between tests,
> >> 3) use two databases, one for each set of tables and forego the
> >> foreign key relationship between them (or maybe copy set 1 to the
> >> second database)
>
> >> Please advise on which of these approaches are possible, more
> >> straightforward, ... or whether another approach might be more
> >> appropriate.
>
> >> Cheers, Lars
>
> >> On Feb 26, 10:47 pm, Michael Bayer <mike...@zzzcomputing.com> wrote:
> >>> On Feb 26, 2012, at 12:47 PM, lars van gemerden wrote:
>
> >>>> I was wrong, the method emptied the database, but I was checking the
> >>>> tables in the metadata.
>
> >>>> This time I am also removing the tables from the metadata, but if I
> >>>> generate the same tables in two separate test methods (with a call to
> >>>> tearDown and setUp in between), I still get an error about a backref
> >>>> name on a relationship already existing.
>
> >>> OK I think you're mixing concepts up here, a backref is an ORM concept.  
> >>> The Table and Metadata objects are part of Core and know absolutely 
> >>> nothing about the ORM or mappings.    Removing a Table from a particular 
> >>> MetaData has almost no effect as all the ORM mappings still point to it.  
> >>> In reality the MetaData.remove() method is mostly useless, except that a 
> >>> create_all() will no longer hit that Table, foreign key references will 
> >>> no longer find it, and you can replace it with a new Table object of the 
> >>> same name, but again nothing to do with the ORM and nothing to do with 
> >>> the state of that removed Table, which still points to that MetaData and 
> >>> will otherwise function normally.
>
> >>> If you want to remove mappings, you can call clear_mappers().  The use 
> >>> case for removing individual mappers is not supported as there is no 
> >>> support for doing all the reverse bookkeeping of removing 
> >>> relationships(), backrefs, and inheritance structures, and there's really 
> >>> no need for such a feature.
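
For completeness, a minimal teardown using clear_mappers() might look like
this (engine and dynamic_metadata are illustrative names; note the caveat
above: it wipes all mappers, the fixed ones included, so those would have to
be re-mapped afterwards):

from sqlalchemy.orm import clear_mappers

def tearDown(self):
    self.session.close()
    dynamic_metadata.drop_all(engine)   # drop the generated (set 2) tables
    clear_mappers()                     # removes ALL mappers, including the fixed ones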
>
> >>> Like MetaData.remove(), there's almost no real world use case for 
> >>> clear_mappers() except that of the SQLAlchemy unit tests themselves, or 
> >>> tests of other ORM-integration layers like Elixir, which are testing the 
> >>> ORM itself with various kinds of mappings against the same set of classes.
>
> >>> Unit tests in an outside world application would normally be against a 
> >>> schema that's an integral part of the application, and doesn't change 
> >>> with regards to classes.   There's virtually no reason in normal 
> >>> applications against a fixed schema to tear down mappings and table 
> >>> metadata between tests.    SQLAlchemy docs stress the Declarative pattern 
> >>> very much these days as we're really trying to get it across that the 
> >>> composition of class, table metadata, and mapping is best regarded as an 
> >>> atomic structure - it exists only as that composite, or not at all.   
> >>> Breaking it apart has little use unless you're testing the mechanics of 
> >>> the mapping itself.
>
> >>> Throughout all of this, we are *not* talking about the tables and schema 
> >>> that are in the actual database.   It is typical that unit tests do drop 
> >>> all those tables in between test suites, and recreate them for another 
> >>> test suite.    Though I tend to favor not actually dropping / recreating 
> >>> and instead running the tests within a transaction that's rolled back at 
> >>> the end as it's much more efficient, especially on backends like Oracle, 
> >>> PostgreSQL, MSSQL where creates/drops are more expensive.   Dropping and 
> >>> recreating the tables in the database, though, is independent of the 
> >>> structure represented by MetaData/Table; that structure lives on 
> >>> and can be reused.    MetaData/Table describes only the *structure* of a 
> >>> particular schema.   They are not linked to the actual *presence* of 
> >>> those tables within a target schema.
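
If I go the rolled-back-transaction route instead, I imagine the fixture would
look roughly like this (untested sketch; engine is an illustrative name):

import unittest
from sqlalchemy.orm import sessionmaker

class TransactionalTest(unittest.TestCase):
    def setUp(self):
        self.connection = engine.connect()
        self.trans = self.connection.begin()                # outer transaction
        self.session = sessionmaker(bind=self.connection)()

    def tearDown(self):
        self.session.close()
        self.trans.rollback()                               # undo everything the test did
        self.connection.close()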

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.

Reply via email to