I drastically sped up my inserts by precomputing any defaults on a
column and passing them explicitly instead of calculating them on each
insert. For example, each row had a timestamp, and the timestamp was
being calculated anew for every row on every insert. Since I was
inserting them all at the same time, I computed the timestamp once and
passed the same value for each row.
See it here on lines 323-352:
http://bazaar.launchpad.net/~bauble/bauble/trunk/annotate/head%3A/bauble/plugins/imex/csv_.py
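A rough sketch of that idea, assuming a recent SQLAlchemy (the table
and column names below are my own illustrative inventions, not the
ones from the linked file):

import datetime
from sqlalchemy import (create_engine, Table, Column, Integer,
                        DateTime, MetaData)

engine = create_engine("sqlite://")
metadata = MetaData()
plants = Table(
    "plants", metadata,
    Column("id", Integer, primary_key=True),
    # Python-side default: normally invoked once per row on insert.
    Column("created", DateTime, default=datetime.datetime.utcnow),
)
metadata.create_all(engine)

# Compute the default once and pass it explicitly, so the per-row
# default function never fires.
now = datetime.datetime.utcnow()
rows = [{"created": now} for _ in range(10000)]

with engine.begin() as conn:
    conn.execute(plants.insert(), rows)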
On 18.01.2009 at 16:32, Victor Lin wrote:
Hi,
So I think it would be better to buffer the data and flush it.
Simply put, this is SA's implementation pattern: the unit of work.
You might check the documentation.
-aj
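For reference, a minimal sketch of the unit-of-work pattern being
referred to, assuming a recent SQLAlchemy (model and column names are
illustrative): the Session buffers pending objects and emits the
INSERTs together at flush/commit time, instead of one round trip per
loop iteration.

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Record(Base):
    __tablename__ = "records"
    id = Column(Integer, primary_key=True)
    value = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    for i in range(1000):
        session.add(Record(value=str(i)))  # buffered, no SQL emitted yet
    session.commit()  # a single flush batches the INSERT statements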
inserting bulk data is most efficient if you use an insert many:
connection.execute(
    table.insert(),
    [{...row...}, {...row...}, {...row...}, ...]
)
For additional performance, you want to ensure that the above insert
is not relying on any Python-level default executions (like when a
Column has a Python-side default function that would otherwise be
invoked for every row).
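A self-contained sketch of that "insert many" form (the table and row
values are illustrative assumptions):

from sqlalchemy import (create_engine, Table, Column, Integer,
                        String, MetaData)

engine = create_engine("sqlite://")
metadata = MetaData()
items = Table(
    "items", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)
metadata.create_all(engine)

# Passing a list of dicts lets the DBAPI use executemany(): one
# prepared statement executed for the whole batch.
with engine.begin() as connection:
    connection.execute(
        items.insert(),
        [{"name": "a"}, {"name": "b"}, {"name": "c"}],
    )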
On Sun, Jan 18, 2009 at 1:32 PM, Victor Lin borns...@gmail.com wrote:
Hi,
I am building some applications that insert data into the database
heavily. I commit inside a for loop, once per iteration. It seems that
committing on every iteration is not a smart approach; each SQL query
implies an expensive IO action.
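For concreteness, my own guess at the shape of the loop being
described (names are illustrative; this is the slow pattern the
replies above are addressing):

from sqlalchemy import (create_engine, Table, Column, Integer,
                        String, MetaData)

engine = create_engine("sqlite://")
metadata = MetaData()
items = Table("items", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)))
metadata.create_all(engine)

# One transaction, one commit, one round trip per row: the costly part.
for i in range(1000):
    with engine.begin() as conn:
        conn.execute(items.insert(), {"name": str(i)})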
On Sun, Jan 18, 2009 at 1:46 PM, Ramiro Morales cra...@gmail.com wrote:
Take a look at ticket #6095 to learn the motivation and rationale behind
the addition of the through option, and some of its advantages
compared with using your own model with FKs to the two related models.
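For reference, a minimal sketch of the through option being referred
to (model names are my own illustrative assumptions; see the ticket
and the Django docs for the real rationale):

from django.db import models

class Person(models.Model):
    name = models.CharField(max_length=50)

class Group(models.Model):
    name = models.CharField(max_length=50)
    # through= points the M2M at an explicit intermediate model, so
    # extra fields can live on the relationship itself.
    members = models.ManyToManyField(Person, through="Membership")

class Membership(models.Model):
    person = models.ForeignKey(Person, on_delete=models.CASCADE)
    group = models.ForeignKey(Group, on_delete=models.CASCADE)
    date_joined = models.DateField(auto_now_add=True)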