On 2010-07-05 12:11, Pierre C wrote:

> The problem can generally be described as "tuples seeing multiple
> updates in the same transaction"?
>
> I think that every time PostgreSQL is used with an ORM, a certain
> amount of multiple updates takes place. I have actually been
> reworking the client side to get around multiple updates, since
> they popped up in one of my profiling runs. Although the time I
> optimized away ended up being both "roundtrip time" + "update time",
> having the database do half of it transparently might have been
> enough that my biggest problem would have been elsewhere.
>
> To sum up: yes, I do think it is a real-world case.
>
> Jesper

 On the Python side, Elixir and SQLAlchemy have an excellent way of
 handling this: when you start a transaction, all changes are
 accumulated in a "session" object and only flushed to the database
 on session commit (which is also generally the transaction commit).
 This has multiple advantages: the ORM can issue multi-row
 statements, each row is updated only once, you save a lot of
 roundtrips, etc. Of course, most of the time this is not compatible
 with database triggers, so if there are triggers the ORM needs to
 be told about them.
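
For concreteness, here is a minimal sketch of that flush-on-commit
behaviour (my own example, not from the thread; it assumes SQLAlchemy
1.4+ and uses an in-memory SQLite database plus a made-up Item table
purely for illustration). Two modifications to the same object end up
as a single UPDATE, issued only at commit:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    qty = Column(Integer)

engine = create_engine("sqlite://", echo=True)  # echo=True shows the SQL that is sent
Base.metadata.create_all(engine)

# expire_on_commit=False just keeps attributes usable across the first
# commit so the example stays short.
with Session(engine, expire_on_commit=False) as session:
    item = Item(name="widget", qty=1)
    session.add(item)
    session.commit()      # one INSERT

    item.qty = 2          # recorded in the session only, no SQL sent
    item.qty = 3          # still no SQL sent
    session.commit()      # a single UPDATE setting qty = 3 goes out here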

What about unique constraints, foreign key violations, and check
constraints? Would you also postpone those errors to commit time? And
what about transactions with lots of data?
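
To make the first question concrete, a sketch of what already happens
with such a session (again my own example, with a made-up Account
table and SQLite standing in for PostgreSQL): the unique-key violation
is only reported when the pending INSERT is flushed at commit, not
when the offending object is added.

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.exc import IntegrityError

Base = declarative_base()

class Account(Base):
    __tablename__ = "account"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(Account(email="a@example.com"))
session.commit()

# The duplicate is only an in-memory object at this point; nothing has
# been sent to the database yet, so no error is raised here.
session.add(Account(email="a@example.com"))

try:
    session.commit()   # the INSERT is flushed now, so the unique
                       # violation only surfaces at commit time
except IntegrityError:
    session.rollback()
    print("unique violation reported at commit, not at add()")

The error still arrives inside the transaction, just later than with
statement-at-a-time execution; whether that is acceptable seems to
depend entirely on the application.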

It doesn't really seem like a net benefit to me, but I can see
applications where it would fit easily.

Jesper
