> -----Original Message-----
> From: Richard Huxton [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, March 15, 2005 11:38 AM
> To: Tambet Matiisen
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] One tuple per transaction
> 
...
> 
> > Consider the often suggested solution for speeding up "select
> > count(*) from table" query: make another table rowcounts and for
> > each of the original tables add insert and delete triggers to update
> > row count in rowcounts table. Actually this is standard
> > denormalization technique, which I use often. For example to ensure
> > that order.total = sum(order_line.total).
> 
> This does of course completely destroy concurrency. Since you need to 
> lock the summary table, other clients have to wait until you are done.
> 

Yes, it does for the rowcounts table. But consider the orders example -
it only locks the order to which I am adding lines. As there is usually
just one client dealing with a given order, but possibly thousands
dealing with different orders, this should not pose any real concurrency
restriction.
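
A minimal sketch of the order-total denormalization I have in mind
(the table, column, and function names here are only illustrative, and
the plpgsql details are mine, not part of the original discussion):

  CREATE TABLE orders (
      id    integer PRIMARY KEY,
      total numeric NOT NULL DEFAULT 0
  );

  CREATE TABLE order_line (
      order_id integer NOT NULL REFERENCES orders(id),
      total    numeric NOT NULL
  );

  -- Keep orders.total = sum(order_line.total) for each order.
  -- Each firing updates, and therefore row-locks, only the parent order.
  CREATE FUNCTION maintain_order_total() RETURNS trigger AS '
  BEGIN
      IF TG_OP = ''INSERT'' THEN
          UPDATE orders SET total = total + NEW.total WHERE id = NEW.order_id;
          RETURN NEW;
      ELSE  -- DELETE
          UPDATE orders SET total = total - OLD.total WHERE id = OLD.order_id;
          RETURN OLD;
      END IF;
  END;
  ' LANGUAGE plpgsql;

  CREATE TRIGGER order_line_total AFTER INSERT OR DELETE ON order_line
      FOR EACH ROW EXECUTE PROCEDURE maintain_order_total();

Only clients working on the same order contend on that order's row; a
single-row summary like rowcounts is the case where everybody contends.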

> > Now, if typical inserts into your most active table occur in batches
> > of 3 rows, in one transaction, then row count for this table is
> > updated 3 times during transaction. 3 updates generate 3 tuples,
> > while 2 of them are dead from the very start. You effectively commit
> > 2 useless tuples. After millions of inserts you end up with rowcounts
> > table having 2/3 of dead tuples and queries start to slow down.
> > 
> > Current solution is to vacuum often. My proposal was to create new
> > tuple only with first update. The next updates in the same
> > transaction would update the existing tuple, not create a new one.
> 
> How do you roll back to a savepoint with this model?
> 

Every savepoint initiates a new (sub)transaction, so an update made after
a savepoint would still create a new tuple version; only repeated updates
within one and the same (sub)transaction would reuse the existing tuple.
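
To illustrate, this is roughly how I would expect the proposal to behave
around a savepoint (the comments describe the proposed tuple handling,
not what PostgreSQL does today; the values are made up):

  BEGIN;
  INSERT INTO order_line VALUES (1, 10); -- trigger updates orders.total: first update, new tuple version
  INSERT INTO order_line VALUES (1, 20); -- same (sub)transaction: reuses that tuple, total = 30
  SAVEPOINT s1;
  INSERT INTO order_line VALUES (1, 30); -- new subtransaction: new tuple version, total = 60
  ROLLBACK TO s1;                        -- discards the post-savepoint version, total = 30 again
  COMMIT;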

  Tambet
