Hi,

----- Original Message -----
From: "Richard Huxton" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "Scott Marlowe" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, June 10, 2004 8:03 AM
Subject: Re: [GENERAL] Postgresql vs. aggregates


> [EMAIL PROTECTED] wrote:
>
> > But that raises an interesting idea. Suppose that instead of one
> > summary row, I had, let's say, 1000. When my application creates
> > an object, I choose one summary row at random (or round-robin) and
> > update it. So now, instead of one row with many versions, I have
> > 1000 with 1000x fewer versions each. When I want object counts and
> > sizes, I'd sum up across the 1000 summary rows. Would that allow me
> > to maintain performance for summary updates with less frequent
> > vacuuming?
>
> Perhaps the simplest approach might be to define the summary table as
> containing a SERIAL and your count.
> Every time you add another object insert (nextval(...), 1)
> Every 10s summarise the table (i.e. replace 10 rows all "scored" 1 with
> 1 row scored 10)
> Use sum() over the much smaller table to find your total.
> Vacuum regularly.
>

Something along these lines, except using sum() instead of count().

http://archives.postgresql.org/pgsql-performance/2004-01/msg00059.php
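
To make that concrete, here is a rough, untested sketch of what Richard's
scheme might look like when you track sizes rather than counts (the table
and column names are just made up for illustration):

  CREATE TABLE object_summary (
      id      serial PRIMARY KEY,
      n_bytes bigint NOT NULL   -- bytes from one object, or a rolled-up batch
  );

  -- Whenever the application stores an object, insert its size as a new
  -- row (nothing is ever updated, so no row accumulates dead versions):
  INSERT INTO object_summary (n_bytes) VALUES (1234);

  -- The running total is a sum() over a small table:
  SELECT sum(n_bytes) AS total_bytes FROM object_summary;

  -- Every few seconds, collapse the detail rows into a single row:
  BEGIN;
  CREATE TEMP TABLE rolled AS SELECT id, n_bytes FROM object_summary;
  DELETE FROM object_summary WHERE id IN (SELECT id FROM rolled);
  INSERT INTO object_summary (n_bytes)
      SELECT sum(n_bytes) FROM rolled HAVING count(*) > 0;
  DROP TABLE rolled;
  COMMIT;

  -- ...and vacuum regularly so the deleted rows are reclaimed:
  VACUUM object_summary;

The rollup copies the rows it is about to replace into a temp table first,
so inserts arriving in the meantime are left alone.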


Nick




