Thanks again, Rick. The issue is that I have a LOT of duplicates (around 20-30%) - that's just how that tree is structured. So I'm going to catch DB exceptions regardless, but also use an indexed collection to screen out duplicates before they ever reach the session.
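Roughly what I have in mind, as a minimal sketch - here I'm using stdlib sqlite3 so it runs standalone (sqlite3.IntegrityError stands in for sqlalchemy.exc.IntegrityError, and the table/columns are made up for illustration):

```python
# Sketch: an in-memory set screens out most duplicates up front, and a
# per-row commit with an IntegrityError catch is the backstop for any
# dupe the set misses (e.g. a row already in the DB from an earlier run).
import sqlite3

def insert_nodes(conn, nodes, seen=None):
    """Insert (id, name) rows; return how many actually went in."""
    if seen is None:
        seen = set()
    inserted = 0
    for node_id, name in nodes:
        if node_id in seen:              # cheap in-memory dedupe (the 20-30%)
            continue
        seen.add(node_id)
        try:
            with conn:                   # commit per row so one dupe can't poison a batch
                conn.execute("INSERT INTO nodes VALUES (?, ?)", (node_id, name))
            inserted += 1
        except sqlite3.IntegrityError:   # unique constraint caught a stray dupe
            pass
    return inserted

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, name TEXT)")
rows = [(1, "a"), (2, "b"), (1, "a-dup"), (3, "c"), (2, "b-dup")]
print(insert_nodes(conn, rows))  # 3 (two dupes skipped by the set)
```

Committing per row costs throughput, of course; with the set handling most dupes, batching and only falling back to per-row commits on failure would be the next refinement.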
Cheers!

On Apr 20, 12:43 pm, Richard Harding <rhard...@mitechie.com> wrote:
> I'm not sure, but I'd check the exception and see if you can get the info
> about which of your 50 were the dupe. I don't recall if it's in the
> traceback or exception error. If you can identify it then you could store it
> aside and remove it from the session and retry the other 49 again.
>
> Otherwise, it's the case of finding the mole. Maybe run some sort of binary
> split of the 50 so that you split the list in half, try to commit each half,
> one works, one fails. Split the fail side again, etc. In this way you should
> really only get down to what, 7 commits per 50? This is all assuming one
> dupe/bad record in the group of 50.

--
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
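P.S. For anyone finding this later: Richard's binary-split idea could be sketched generically like this (commit_batch and CommitError are placeholders for your own commit function and exception type, not a real API; it assumes a failed batch can simply be retried in halves):

```python
# Bisect a batch to isolate the bad row(s): try to commit the whole batch;
# on failure, split it in half and recurse. With one bad row in 50 this
# costs roughly log2(50) ~ 6 extra commits, close to the "7 per 50" above.
def commit_with_split(rows, commit_batch, CommitError=Exception):
    """Return (committed, failed) lists after isolating bad rows by bisection."""
    if not rows:
        return [], []
    try:
        commit_batch(rows)
        return list(rows), []
    except CommitError:
        if len(rows) == 1:               # narrowed down to the offending row
            return [], list(rows)
        mid = len(rows) // 2
        ok_l, bad_l = commit_with_split(rows[:mid], commit_batch, CommitError)
        ok_r, bad_r = commit_with_split(rows[mid:], commit_batch, CommitError)
        return ok_l + ok_r, bad_l + bad_r

# Toy usage: a fake commit that rejects any batch containing row 17.
def fake_commit(rows):
    if 17 in rows:
        raise ValueError("dupe")

ok, failed = commit_with_split(list(range(50)), fake_commit, ValueError)
print(failed)  # [17]
```

With 20-30% duplicates this degenerates toward per-row commits, which is why the set-plus-exception approach above seems like the better fit here.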