On Mar 17, 2014, at 10:38 PM, Morgan McClure <mcclure.mor...@gmail.com> wrote:

> I'm trying to make a many-to-many relationship using sqlalchemy 0.9 and 
> postgres
> 
> If I put a unique constraint on the join table, and I add a duplicate, I get 
> an integrity error.
> If I change the collection class to set, it won't commit the duplicate, but a 
> set prevents me from using things like order_by.
> 
> In my scenario, I'm massively multi-threaded, so the check-before-commit 
> methodology won't work (I've proven it breaks with 24 processes).
> Is it possible to get a nice elegant solution to this without defining a 
> custom collection_class?
> 
> I believe this is a regression (enhancement?) from version 0.8, but on 0.8 I 
> was using mysql and now I'm using postgres.

I'm not seeing how this is any kind of change from version 0.8 to 0.9, or even 
from 0.7, 0.6, or 0.5; a list will certainly allow duplicates that will give you 
integrity errors, and a set certainly won't.  Using many processes, of course, 
you can't coordinate them in memory with a set; only the database knows the 
right answer.

The approach here, unfortunately, is to use traditional means of adding new rows 
while checking for an existing one.  That means either emitting a SELECT first 
and ensuring adequate coordination among these 24 processes using transaction 
isolation or locks, or using a simple optimistic approach: start a savepoint 
(begin_nested()), attempt the operation, catch IntegrityError, and then continue 
with the row now known to be already present.
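A minimal sketch of that optimistic savepoint pattern, assuming SQLAlchemy 1.4+ 
and using a hypothetical Tag model with a unique "name" column to stand in for 
the join-table row (the model and column names here are illustrative, not from 
the original scenario):

```python
# Optimistic "insert, recover on conflict" using a SAVEPOINT.
# Assumes SQLAlchemy 1.4+; Tag / name are hypothetical stand-ins.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()


class Tag(Base):
    __tablename__ = "tag"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)  # the constraint that can be violated


def get_or_create(session, name):
    # begin_nested() starts a SAVEPOINT; if the INSERT violates the
    # unique constraint, only the savepoint is rolled back and the
    # enclosing transaction remains usable.
    try:
        with session.begin_nested():
            tag = Tag(name=name)
            session.add(tag)
        return tag
    except IntegrityError:
        # another thread/process won the race; the row is now known
        # to exist, so fetch it
        return session.query(Tag).filter_by(name=name).one()


engine = create_engine("sqlite://")  # in-memory demo; the thread uses postgres
Base.metadata.create_all(engine)

with Session(engine) as session:
    a = get_or_create(session, "python")
    b = get_or_create(session, "python")  # duplicate: recovered via SELECT
    session.commit()
    print(a.id == b.id)
```

With Postgres this works the same way, except that the race is resolved by the 
actual unique index rather than by anything coordinated in memory, which is what 
makes it safe across 24 processes.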

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
Visit this group at http://groups.google.com/group/sqlalchemy.