On Thursday 20 December 2007, Decibel! wrote:
> A work-around others have used is to have the trigger just insert
> into a 'staging' table and then periodically take the records from
> that table and summarize them somewhere else.
And you can even use the PgQ Skytools implementation to easily have the
staging table consumed and summarized for you.
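
A minimal sketch of that staging pattern, with hypothetical names
(events, events_staging, and table_stats are invented for illustration,
not anyone's actual schema):

CREATE TABLE events_staging (tbl text NOT NULL, delta bigint NOT NULL);

CREATE OR REPLACE FUNCTION log_insert() RETURNS trigger AS $$
BEGIN
    -- append-only insert: cheap, and it creates no dead tuples
    -- in the summary table itself
    INSERT INTO events_staging (tbl, delta) VALUES (TG_TABLE_NAME, 1);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_count AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE log_insert();

-- run periodically (cron, or a PgQ consumer) to fold the staged
-- deltas into the summary table in one UPDATE per monitored table:
BEGIN;
LOCK TABLE events_staging IN EXCLUSIVE MODE;  -- so no delta slips in
                                              -- between UPDATE and DELETE
UPDATE table_stats s
   SET row_count = s.row_count + agg.n
  FROM (SELECT tbl, sum(delta) AS n
          FROM events_staging
         GROUP BY tbl) agg
 WHERE s.tbl = agg.tbl;
DELETE FROM events_staging;
COMMIT;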
Dan Harris <[EMAIL PROTECTED]> writes:
> The thing that concerns me is dead tuples on the table_stats table. I
> believe that every insert of new data in one of the monitored tables
> will result in an UPDATE of the table_stats table. When thousands
> ( or millions ) of rows are inserted, the table_stats table
> accumulates a correspondingly large number of dead tuples.
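
The pattern being described looks roughly like this (hypothetical
names, a sketch only; under MVCC every UPDATE leaves a dead version of
the row behind, so a million inserts leave a million dead tuples in
table_stats):

CREATE OR REPLACE FUNCTION bump_stats() RETURNS trigger AS $$
BEGIN
    -- fires once per inserted row; each UPDATE of the single
    -- stats row leaves its previous version dead
    UPDATE table_stats
       SET row_count = row_count + 1
     WHERE tbl = TG_TABLE_NAME;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER monitored_bump AFTER INSERT ON monitored_table
    FOR EACH ROW EXECUTE PROCEDURE bump_stats();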
I've been fighting with the common workarounds for inadequate response
times on select count(*) and min(),max() on tables with tens of
millions of rows for quite a while now and understand the reasons for
the table scans.
I have applications that regularly poll a table ( ideally, the more
often the better ) for this kind of summary information.
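
For min()/max() specifically, the classic workaround (a sketch,
assuming a hypothetical big_table with an indexed id column) is the
ORDER BY ... LIMIT 1 form, which can walk the index from one end where
older releases' min()/max() aggregates could not; count(*) has no such
shortcut, hence the trigger-maintained stats table discussed above:

-- instead of: SELECT max(id) FROM big_table;  (full scan on old releases)
SELECT id FROM big_table ORDER BY id DESC LIMIT 1;

-- and instead of min(id):
SELECT id FROM big_table ORDER BY id ASC LIMIT 1;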