Thanks, Michael,

Just to make clear exactly what begin_nested() is contributing:

Normal case:
session.rollback() rolls back to the last session.commit().

session.begin_nested() case:
session.rollback() rolls back to the last session.begin_nested() or
session.commit(), whichever occurred more recently.

Correct?
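
In code, my working understanding (Thing is a placeholder mapped class;
just a sketch, not anything from the docs):

session.add(Thing(name='kept'))         # part of the outer transaction
session.begin_nested()                  # issues SAVEPOINT
session.add(Thing(name='discarded'))
session.rollback()                      # ROLLBACK TO SAVEPOINT; 'discarded' is gone
session.commit()                        # outer transaction commits; 'kept' persists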


On May 28, 11:54 am, Michael Bayer <mike...@zzzcomputing.com> wrote:
> An option to add alongside the unique constraint, if you expect collisions 
> often, is to use a SAVEPOINT so that a process can roll back partially if 
> this particular INSERT fails, then use the existing row.  The Session 
> offers SAVEPOINT via begin_nested():
>
> session.begin_nested()
> try:
>     session.add(thing_that_may_exist_already)
>     session.commit()  # flushes, and commits only the "savepoint"
> except exc.IntegrityError:
>     session.rollback()
>     thing_that_may_exist_already = session.query(Thing).filter_by(<criterion>).one()
>
> The difference between using locks to prevent concurrent dupes and using 
> constraints while expecting dupes to fail is known as pessimistic versus 
> optimistic locking.
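>
> For contrast, a minimal sketch of the pessimistic side, assuming a
> Thing mapped class (with_lockmode() is the 0.7-era Query method for
> SELECT FOR UPDATE; the names here are placeholders):
>
> # blocks until any other transaction holding FOR UPDATE on this row ends
> row = session.query(Thing).filter_by(key='x').with_lockmode('update').first()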
>
> On May 28, 2012, at 10:38 AM, Jeff wrote:
>
> > The unique constraint sounds like a workable solution! I'll implement
> > that with a try/except and report back on whether it was effective. Thanks!
>
> > On May 28, 5:43 am, Simon King <si...@simonking.org.uk> wrote:
> >> On Sun, May 27, 2012 at 6:18 PM, Jeff <jeffalst...@gmail.com> wrote:
> >>> Thanks,
>
> >>> I have indeed spent a lot of time looking at SELECT FOR UPDATE, but as
> >>> far as I can tell it locks rows that have already been selected. That is
> >>> not helpful in this use case, where the issue is rows not existing and
> >>> then later existing. Am I misunderstanding?
>
> >>> On May 27, 11:48 am, "A.M." <age...@themactionfaction.com> wrote:
> >>>> On May 27, 2012, at 1:07 AM, Jeff wrote:
>
> >>>>> I have multiple processes accessing a table. All of these processes
> >>>>> want to read a set of rows from the table, and if the rows are not
> >>>>> present they will make a calculation and insert the rows themselves.
> >>>>> The issue arises when process A queries to see whether the target set
> >>>>> of rows is present in the table, finds that it is not, and then starts
> >>>>> calculating. While it's calculating, process B inserts the rows. Then
> >>>>> process A inserts the rows, and now we have two copies of these sets
> >>>>> of rows. Bad.
>
> >>>> You should look at "SELECT FOR UPDATE".
>
> >>>>http://docs.sqlalchemy.org/en/rel_0_7/orm/query.html?highlight=lockmo...
>
> >>>> Cheers,
> >>>> M
>
> >> Could you put unique constraints on the table so that the second
> >> process will get an error when it tries to insert the duplicate rows?
> >> It won't prevent you from performing the calculations twice, but at
> >> least you won't get the duplicates.
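> >>
> >> As a sketch, assuming a declarative Thing class (all names here are
> >> placeholders, not from your code):
> >>
> >> from sqlalchemy import Column, Integer, String, UniqueConstraint
> >>
> >> class Thing(Base):
> >>     __tablename__ = 'things'
> >>     id = Column(Integer, primary_key=True)
> >>     key = Column(String(50))
> >>     # a second INSERT with the same key raises IntegrityError
> >>     __table_args__ = (UniqueConstraint('key'),)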
>
> >> Another option would be to write some sort of "pending" marker into
> >> the table, so that subsequent processes know that the result is
> >> already being calculated.
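> >>
> >> That might look something like this sketch, reusing the unique key
> >> above so that only one process's claim can succeed (the status column
> >> is an assumption):
> >>
> >> from sqlalchemy import exc
> >>
> >> session.add(Thing(key='x', status='pending'))  # claim the work
> >> try:
> >>     session.commit()
> >> except exc.IntegrityError:
> >>     session.rollback()  # another process already claimed this key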
>
> >> Simon
