I've been waiting for an opportunity to use SQLAlchemy for a while,
and finally had an application come up that let me.  Everything is
going great, but along the way I ran into a few behaviors that I was
able to work around, though I'm left wondering whether I took the
right approach.  The first two items arose when storing new objects,
and the third when working on deletion.  This is with SQLAlchemy
0.3.4 (and Python 2.5), with the bulk of the work done through the
ORM.

1. I have a simple "add" operation which does the basic object
   allocation, session.save() and session.flush().  The issue arose
   when I tried to add an object with a SQL violation (a duplicate
   value in a unique column).  No problems with the SQLError
   exception, and the session transaction rolled back.  But what
   caught me a bit by surprise was that the "bad" object was still in
   the session.  So a subsequent attempt to "add" a new (valid) object
   through the same session failed on the earlier object - still
   apparently in the UOW scope - before even getting to the valid
   object.  This seemed like a common enough scenario that I expected
   to find it documented, but I've had trouble finding information on
   it.
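
   A self-contained boil-down of what I'm seeing (the engine, table,
   and class here are stand-ins for my actual application):

      from sqlalchemy import *
      from sqlalchemy import exceptions

      engine = create_engine('sqlite://')
      metadata = BoundMetaData(engine)

      users_table = Table('users', metadata,
          Column('id', Integer, primary_key=True),
          Column('name', String(40), unique=True))
      metadata.create_all()

      class User(object):
          def __init__(self, name):
              self.name = name

      mapper(User, users_table)
      session = create_session()

      session.save(User('dave'))
      session.flush()

      bad = User('dave')            # violates the unique constraint
      session.save(bad)
      try:
          session.flush()           # raises SQLError; txn rolls back
      except exceptions.SQLError:
          pass

      session.save(User('ellen'))   # a perfectly valid object...
      session.flush()               # ...but this flush fails on 'bad',
                                    # which is still in the session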

   My "fix" was to trap the exception and explicitly session.expunge()
   the faulty object.  It works, but seems a little fragile - I'm not
   sure how this would extrapolate to a larger UOW (e.g., how do I
   discover the object at fault), or if I need to be doing this in a
   blanket except: clause - e.g., are there other sorts of failures
   that can leave a persistently failed object in the identity map.  I
   saw a prior note on this list about flush exceptions sometimes
   occurring instead of SQLError and am not sure if conditions
   resulting in the Flush version of the exception could cause the
   same problem.
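
   Concretely, my workaround (continuing the sketch above):

      bad = User('dave')            # duplicates an existing row
      session.save(bad)
      try:
          session.flush()
      except exceptions.SQLError:
          session.expunge(bad)      # evict the failed instance so
                                    # later flushes aren't poisoned

      session.save(User('ellen'))
      session.flush()               # now succeeds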

   Is there a cleaner way to roll back the UOW within the session?  Or
   some way to identify objects in the identity map that failed to
   persist?  session.clear() seems like overkill, and it interferes
   with other instances I might not want to involve.  Or is the
   standard practice to keep new-object additions tight code-wise, so
   that the object at fault is easy to identify when something goes
   wrong?

2. I'm unclear what (if anything) to expect with respect to my object
   instances in memory post-save for primary key or default value
   columns.  My initial expectation - particularly given the warning
   in the docs about flush() and related objects - was that the
   instance I had save()d to the session would be untouched and I'd
   have to retrieve it from the database again.  But in a quick test I
   saw that the primary key and default columns seemed to be populated
   after the flush() without any further steps.

   Of course, this bit me later when I found that the DateTime columns
   of my instances (with a default of func.current_timestamp()) held
   string values, not the datetime objects I expected.
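
   Boiled down (metadata and session as in the first sketch; the
   'things' table and Thing class are stand-ins):

      things_table = Table('things', metadata,
          Column('id', Integer, primary_key=True),
          Column('created', DateTime, default=func.current_timestamp()))
      things_table.create()

      class Thing(object):
          pass

      mapper(Thing, things_table)

      t = Thing()
      session.save(t)
      session.flush()

      print t.id        # populated after the flush, as described
      print t.created   # a string, not a datetime, in my tests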

   I suppose I was seeing some aspect of the Python-level column
   defaults, perhaps already formatted in preparation for use by the
   database.  My "fix" was to explicitly refresh() the saved object
   after the flush, which reconstituted the fields with the proper
   data types.  But it seems tedious to have to do that for every new
   object being saved, so I'm not sure it's the right approach.  And
   the fact that my existing instance gets its fields updated anyway
   leaves me mildly concerned that I might miss a refresh somewhere.
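
   That is, my current fix (continuing the sketch above):

      session.refresh(t)   # re-selects the row; t.created now comes
                           # back as a proper datetime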

3. I was implementing an archive of deleted objects, sharing the
   object definition between two mappers - one primary, and a second
   to a "deleted" table with a different entity name (basically an
   audit table for deleted instances).
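
   The setup is essentially this (Widget and the table names are
   stand-ins; metadata as in the earlier sketches):

      widgets = Table('widgets', metadata,
          Column('id', Integer, primary_key=True),
          Column('name', String(40)))
      deleted_widgets = Table('deleted_widgets', metadata,
          Column('id', Integer, primary_key=True),
          Column('name', String(40)))
      metadata.create_all()

      class Widget(object):
          pass

      mapper(Widget, widgets)                                 # primary
      mapper(Widget, deleted_widgets, entity_name='deleted')  # audit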
   
   I was excited to see the entity_name option available on alternate
   mappers, and figured it would be an elegant way to handle the
   archive.  I hoped there would be an easy way to delete an object
   through the primary mapper and then save it through the alternate
   mapper, using the same object instance (to avoid having to manually
   duplicate the object).  But SQLAlchemy was very good at knowing I
   was playing with an instance it already had associated with the
   first mapper.  In the end, rather than poke too much at
   artificially clearing SA's state fields, I just copied my object
   (all non-"_" prefixed attributes) to a new instance and saved that
   under the alternate entity_name.
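
   The workaround, roughly (w is the Widget being removed; the copy
   loop is my own hack, not anything SA-sanctioned):

      session.delete(w)
      copy = Widget()
      for name, value in w.__dict__.items():
          if not name.startswith('_'):   # skip SA's bookkeeping fields
              setattr(copy, name, value)
      session.save(copy, entity_name='deleted')
      session.flush()                    # delete + archive in one flush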
   
   Anyway, I was wondering if there is a clean way to use a single
   object instance in this way, or even a supported way of clearing an
   instance's state so SA doesn't think it used to be associated
   somewhere.  I wanted the deletion and insertion to be within the
   same transaction, and fiddling with explicit outer transactions
   (with a separate flush for the delete versus the insert), for
   example, didn't change the fact that my current instance stayed
   stuck to the original mapper.
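
   For example, one variant I tried (using session.create_transaction()
   per the 0.3 docs; w as in the sketch above):

      trans = session.create_transaction()
      try:
          session.delete(w)
          session.flush()
          session.save(w, entity_name='deleted')  # SA still treats w
          session.flush()                         # as belonging to the
          trans.commit()                          # primary mapper
      except:
          trans.rollback()
          raise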

Thanks for any information.

-- David

