Re: [sqlalchemy] Unique Many-to-Many

2014-03-18 Thread Morgan McClure
Yeah, I confirmed that set collections don't actually fix it; I guess setting a 
rollback point is required.
Is there any chance this is a difference between the way MySQL does table 
locking and the way Postgres does?
The collections in question have lazy='dynamic' set, so I'm wondering whether 
previously it wasn't a problem because the DB was locking the whole table.



On Monday, March 17, 2014 8:49:57 PM UTC-7, Michael Bayer wrote:

 I'm not seeing how this is any kind of change from version 0.8 to 0.9, or 
 even from 0.7, 0.6, or 0.5; a list will certainly allow duplicates that give 
 you integrity errors, and a set certainly won't. Using many processes, of 
 course, you can't coordinate those in memory with a set; only the database 
 knows the right answer. 

 The approach here, unfortunately, is to use traditional means of adding new 
 rows while checking for an existing one. That means either emitting a SELECT 
 first and ensuring adequate coordination among these 24 processes using 
 transaction isolation or locks, or using a simple optimistic approach: start 
 a savepoint (begin_nested()), attempt the operation, catch IntegrityError, 
 and then continue with the row now known to be already present.



Re: [sqlalchemy] Unique Many-to-Many

2014-03-18 Thread Michael Bayer

On Mar 18, 2014, at 11:30 AM, Morgan McClure mcclure.mor...@gmail.com wrote:

 Yeah, I confirmed that set collections don't actually fix it; I guess setting 
 a rollback point is required.
 Is there any chance this is a difference between the way MySQL does table 
 locking and the way Postgres does?
 The collections in question have lazy='dynamic' set, so I'm wondering whether 
 previously it wasn't a problem because the DB was locking the whole table.

The isolation behavior of MySQL and PostgreSQL differs, not to mention you 
might have isolation levels configured differently, so sure, that can totally 
change things.

I usually examine isolation behavior by opening two separate database sessions, 
starting a transaction on both, and then working with direct SQL statements to 
see at what point a transaction blocks, or whether it throws an exception on 
commit, things like that.
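
For illustration, a minimal sketch of that kind of two-session experiment in 
Python; the connection URL and the "association" table are hypothetical 
placeholders, not from this thread:

    import threading

    from sqlalchemy import create_engine, text
    from sqlalchemy.exc import IntegrityError

    # Placeholder URL; point this at your own database.
    engine = create_engine("postgresql://user:pass@localhost/test")

    # First session: insert a pair but leave the transaction open.
    conn_a = engine.connect()
    trans_a = conn_a.begin()
    conn_a.execute(text(
        "INSERT INTO association (parent_id, child_id) VALUES (1, 2)"))

    def second_session():
        # Under PostgreSQL's default READ COMMITTED isolation, this
        # INSERT blocks until trans_a finishes; once trans_a commits,
        # the duplicate pair is rejected with an IntegrityError.
        conn_b = engine.connect()
        try:
            with conn_b.begin():
                conn_b.execute(text(
                    "INSERT INTO association (parent_id, child_id) "
                    "VALUES (1, 2)"))
        except IntegrityError as err:
            print("second session failed:", err)

    t = threading.Thread(target=second_session)
    t.start()
    trans_a.commit()  # releases the lock; the second INSERT now fails
    t.join()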

You'd need to carefully study links like the following to learn more:

http://dev.mysql.com/doc/refman/5.1/en/innodb-transaction-model.html

http://www.postgresql.org/docs/9.1/static/transaction-iso.html


[sqlalchemy] Unique Many-to-Many

2014-03-17 Thread Morgan McClure
I'm trying to make a many-to-many relationship using SQLAlchemy 0.9 and 
Postgres.

If I put a unique constraint on the join table and I add a duplicate, I get an 
integrity error.
If I change the collection class to set, it won't double-commit; however, a 
set prevents me from using things like order_by.

In my scenario, I'm massively multi-threaded, so the check-before-commit 
methodology won't work (I've proven it breaks with 24 processes).
Is it possible to get a nice, elegant solution to this without defining a 
custom collection_class?

I believe this is a regression (enhancement?) from version 0.8, but on 0.8 I 
was using MySQL and now I'm using Postgres.
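
For reference, a minimal sketch of the kind of mapping being described; the 
names (Parent, Child, association) are hypothetical stand-ins, not taken from 
the post:

    from sqlalchemy import (Column, ForeignKey, Integer, Table,
                            UniqueConstraint)
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    # Association table for the many-to-many; the UniqueConstraint is
    # what makes a duplicate (parent_id, child_id) pair raise
    # IntegrityError when the session flushes.
    association = Table(
        "association", Base.metadata,
        Column("parent_id", Integer, ForeignKey("parent.id")),
        Column("child_id", Integer, ForeignKey("child.id")),
        UniqueConstraint("parent_id", "child_id"),
    )

    class Parent(Base):
        __tablename__ = "parent"
        id = Column(Integer, primary_key=True)

        # lazy="dynamic" yields a filterable query (so order_by works),
        # but duplicate appends are only caught by the database
        # constraint, not in memory.
        children = relationship("Child", secondary=association,
                                lazy="dynamic")

    class Child(Base):
        __tablename__ = "child"
        id = Column(Integer, primary_key=True)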




Re: [sqlalchemy] Unique Many-to-Many

2014-03-17 Thread Michael Bayer

On Mar 17, 2014, at 10:38 PM, Morgan McClure mcclure.mor...@gmail.com wrote:

 I'm trying to make a many-to-many relationship using SQLAlchemy 0.9 and 
 Postgres.
 
 If I put a unique constraint on the join table and I add a duplicate, I get 
 an integrity error.
 If I change the collection class to set, it won't double-commit; however, a 
 set prevents me from using things like order_by.
 
 In my scenario, I'm massively multi-threaded, so the check-before-commit 
 methodology won't work (I've proven it breaks with 24 processes).
 Is it possible to get a nice, elegant solution to this without defining a 
 custom collection_class?
 
 I believe this is a regression (enhancement?) from version 0.8, but on 0.8 I 
 was using MySQL and now I'm using Postgres.

I'm not seeing how this is any kind of change from version 0.8 to 0.9, or even 
from 0.7, 0.6, or 0.5; a list will certainly allow duplicates that give you 
integrity errors, and a set certainly won't. Using many processes, of course, 
you can't coordinate those in memory with a set; only the database knows the 
right answer.

The approach here, unfortunately, is to use traditional means of adding new 
rows while checking for an existing one. That means either emitting a SELECT 
first and ensuring adequate coordination among these 24 processes using 
transaction isolation or locks, or using a simple optimistic approach: start a 
savepoint (begin_nested()), attempt the operation, catch IntegrityError, and 
then continue with the row now known to be already present.
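
A rough sketch of that optimistic savepoint pattern as it looks in SQLAlchemy 
0.9, reusing the hypothetical Parent/Child mapping sketched under the original 
post above:

    from sqlalchemy.exc import IntegrityError

    def link(session, parent, child):
        # begin_nested() opens a SAVEPOINT, so a failed INSERT rolls
        # back only this block, not the enclosing transaction.
        session.begin_nested()
        try:
            parent.children.append(child)
            session.commit()    # flushes and releases the savepoint
        except IntegrityError:
            session.rollback()  # returns to the savepoint
            # The pair already exists, inserted earlier here or by a
            # concurrent process; it is safe to carry on as though the
            # append had succeeded.

The pessimistic alternative is the SELECT-first route: each process looks for 
the pair before inserting, with transaction isolation or a lock keeping the 
check and the insert coordinated.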
