Re: [sqlalchemy] Re: Efficient Inserting to Same Table Across 100s of Processes

2012-05-24 Thread Michael Bayer
I'm not sure about the autoincrement, though MySQL is known for pretty high performance, so I'm sure whatever technique it uses there is very efficient. If it isn't maintaining an internal sequence-like object, it's at the very least just consulting the index, and shouldn't be detectable as a lock

[sqlalchemy] Re: Efficient Inserting to Same Table Across 100s of Processes

2012-05-23 Thread Jeff
More data: A typical not-quite-worst-but-in-the-class-of-worst case scenario is half a million rows per insert. Absolute worst case scenarios could be 10 times that, so that insert will take a while. Would there be any logic to breaking up all the inserts into one row per insert? Would that
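
[Editor's note: a minimal sketch of the batching idea being discussed, splitting one large insert into smaller executemany() chunks with SQLAlchemy Core. The table, columns, connection URL, and chunk size below are hypothetical placeholders, not details from the thread.]

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine("mysql://user:pass@localhost/mydb")
metadata = MetaData()
measurements = Table(
    "measurements", metadata,
    Column("id", Integer, primary_key=True, autoincrement=True),
    Column("value", String(50)),
)

def insert_in_chunks(conn, rows, chunk_size=10000):
    """Issue one logical insert as a series of smaller executemany() calls,
    so no single statement (and the locks it holds) runs for too long."""
    for start in range(0, len(rows), chunk_size):
        conn.execute(measurements.insert(), rows[start:start + chunk_size])

conn = engine.connect()
trans = conn.begin()
try:
    insert_in_chunks(conn, [{"value": "row %d" % i} for i in range(500000)])
    trans.commit()
except:
    trans.rollback()
    raise
```

Each chunk could also be committed in its own transaction instead of one big one, which would release locks sooner at the cost of losing all-or-nothing semantics.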

Re: [sqlalchemy] Re: Efficient Inserting to Same Table Across 100s of Processes

2012-05-23 Thread Michael Bayer
My initial thought is that INSERTs shouldn't be locking the whole table, at least not throughout a whole transaction. There are some MySQL hints that can help with this; if you're on MyISAM, take a look at http://dev.mysql.com/doc/refman/5.0/en/concurrent-inserts.html, possibly using the
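
[Editor's note: the specific hint being suggested is cut off in this preview. As a hedged illustration only, SQLAlchemy's prefix_with() is one way to attach a MySQL INSERT modifier such as LOW_PRIORITY (or DELAYED on MyISAM) to a Core insert(); whether any of these actually helps depends on the storage engine and workload. Names and the URL below are made-up placeholders.]

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine("mysql://user:pass@localhost/mydb")
metadata = MetaData()
measurements = Table(
    "measurements", metadata,
    Column("id", Integer, primary_key=True, autoincrement=True),
    Column("value", String(50)),
)

# prefix_with() renders the given keyword right after INSERT,
# e.g. "INSERT LOW_PRIORITY INTO measurements ..."
stmt = measurements.insert().prefix_with("LOW_PRIORITY")

conn = engine.connect()
trans = conn.begin()
conn.execute(stmt, [{"value": "a"}, {"value": "b"}])
trans.commit()
conn.close()
```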

[sqlalchemy] Re: Efficient Inserting to Same Table Across 100s of Processes

2012-05-23 Thread Jeff
Thanks for the help and links! One additional data point: The table has an id field that autoincrements. A friend thought that might be a barrier to non-locking inserts, but wasn't sure. I'm having difficulty finding any resource explicitly saying that, though, and simply trying it would be