Am Mittwoch, 6. November 2013 21:58:53 UTC+1 schrieb Michael Bayer:
>
> I wrote a full post regarding this topic on stackoverflow at  
> http://stackoverflow.com/questions/11769366/why-is-sqlalchemy-insert-with-sqlite-25-times-slower-than-using-sqlite3-directly/11769768#11769768.
>   If you start with this, I can answer more specific questions. 
>

The article was very helpful, thanks. I still want to figure out the best 
balance between convenience and speed for my use case. Does the following 
make sense, and is it possible?

I work only with PostgreSQL, and I'm sure that all involved objects have a 
unique id column called 'id'. So before doing a session.commit(), I could 
check how many objects are in my session. As I'm just bulk inserting, I 
know that all of them are new and don't have their id set yet. Now I ask 
the database for that number of new ids, iterate over the objects in my 
session, and set the ids. Internally, all ids would come from a single 
sequence, so I don't have to care about object types and so on. Afterwards, 
SQLAlchemy should be aware that the ids have already been set, so no 
generated ids have to be returned, and the session.commit() should be much 
simpler and faster.
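For illustration, here is a rough sketch of what I mean. The sequence name 
global_id_seq is made up (it stands for whatever shared sequence the ids 
come from), and I'm assuming that all pending objects are visible via 
session.new with id still unset:

```python
from sqlalchemy import text


def assign_ids(session):
    """Pre-assign ids to pending objects from a single shared sequence.

    Assumes every pending object has an 'id' attribute that is None
    until set, and that all ids come from one PostgreSQL sequence
    (here hypothetically named 'global_id_seq').
    """
    new_objects = [obj for obj in session.new if obj.id is None]
    if not new_objects:
        return

    # Fetch exactly len(new_objects) fresh values from the sequence
    # in one round trip, using generate_series to repeat nextval().
    result = session.execute(
        text("SELECT nextval('global_id_seq') FROM generate_series(1, :n)"),
        {"n": len(new_objects)},
    )
    ids = [row[0] for row in result]

    # Hand one id to each pending object; since all ids share one
    # sequence, the object types don't matter.
    for obj, new_id in zip(new_objects, ids):
        obj.id = new_id
```

Then I would call assign_ids(session) right before session.commit(), so 
that the primary keys are already populated and the INSERTs don't need to 
return generated ids.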

This sounds like a solution that is still quite simple, but hopefully much 
faster. Do you agree?

kind regards,
Achim

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.