>
> > session.begin_nested()
> > session.delete(some_obj)
> > session.commit()
>
> > session.query(...) for some_obj still finds it.
>
> > 1. How can I get the changes in the nested transaction to show up in
> > the outer transaction without having to commit the outer transaction?
>
> you should be getting the results you expect.  commit() always issues
> a flush, which in this case should issue a DELETE and then a
> RELEASE SAVEPOINT.  Take a look at your SQL log output to ensure the
> expected conversation is occurring.  Also note that if you issue
> session.query(cls).get(x), the get() call specifically will pull from
> the cache, but in this case the cached object should be expired (since
> commit() expires everything) - it will then check whether the object
> was deleted, and in this case would return None since it was.
>
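For reference, the SQL logging mentioned above can be turned on via the
engine's echo flag - a minimal sketch, where the sqlite URL is just a
placeholder:

from sqlalchemy import create_engine

# echo=True logs every statement the engine emits; after the commit()
# above, the log should show the DELETE followed by RELEASE SAVEPOINT
engine = create_engine('sqlite:///:memory:', echo=True)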

The sequence of events I mentioned was incorrect. Here is what
actually happens:

session.begin_nested()
session.delete(some_obj)
session.commit()

# the following membership test still returns True:
if some_obj in some_other_obj.foreign_key_based_collection:
    ...

where some_other_obj and some_obj are in a one-to-many foreign-key
relationship, and some_other_obj.foreign_key_based_collection is an
instrumented list generated by SQLAlchemy.
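For concreteness, the mapping involved is roughly of this shape (a
sketch - the class and column names here are hypothetical):

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class SomeOtherObj(Base):
    __tablename__ = 'some_other_obj'
    id = Column(Integer, primary_key=True)
    # the instrumented list in question; loads SomeObj rows via the FK
    foreign_key_based_collection = relationship('SomeObj')

class SomeObj(Base):
    __tablename__ = 'some_obj'
    id = Column(Integer, primary_key=True)
    some_other_obj_id = Column(Integer, ForeignKey('some_other_obj.id'))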

Turning on the logs made it clear that accessing the collection was a
pure in-memory Python operation. The delete(some_obj) in the nested
transaction did not actually update the SA collections that some_obj
was a member of (whereas a top-level commit does update such
collections). For now, I am explicitly expiring some_other_obj after
the nested transaction is committed; the next access of some_other_obj
rebuilds the collection from the DB.
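In code, the workaround looks roughly like this (a sketch;
Session.expire() also accepts a list of attribute names if you only
want to expire the one collection):

session.begin_nested()
session.delete(some_obj)
session.commit()   # RELEASE SAVEPOINT; outer transaction stays open

# expire the parent so its collection is reloaded from the DB on
# the next access
session.expire(some_other_obj)
# or, narrower:
# session.expire(some_other_obj, ['foreign_key_based_collection'])

assert some_obj not in some_other_obj.foreign_key_based_collection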

> > 2. I am new to SQLAlchemy and DB-based apps in general. The actual
> > problem I am trying to solve is to write APIs that create/update/
> > delete objects while in a long-running transaction, without having
> > to commit this long-running transaction.

_snip_

> a long-running transaction is going to introduce locking issues and
> be subject to stale data if your database is subject to concurrent
> access.  Assuming that's all acceptable, there's no need to use
> savepoints unless you need to be able to roll back within that
> long-running transaction and continue.  It depends on what you're
> trying to do.
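For what it's worth, the rollback-and-continue pattern described there
looks roughly like this (a sketch with placeholder objects, relying on
the savepoint behavior described earlier in the thread):

session.begin_nested()        # emits SAVEPOINT
try:
    session.delete(some_obj)
    session.commit()          # flush + RELEASE SAVEPOINT
except Exception:
    session.rollback()        # ROLLBACK TO SAVEPOINT
# either way, the enclosing long-running transaction is still open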

This is a stateful, session-aware webapp responding to HTTP requests;
concurrent access is expected, including sessions modifying data that
other sessions may be reading or modifying. In such a scenario, would
you recommend many small transactions rather than fewer, larger
transactions per HTTP session? Our HTTP sessions are expected to be
long-lived.

-srp
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~----------~----~----~----~------~----~------~--~---

Reply via email to