Sorry for being stupid; I can select the table info from the pg_class
table, so I can ignore the information schema.
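
For reference, a query along these lines (untested) should give the same
totals using only the pg_* catalogs; since pg_class only ever shows the
current database there is no need for a catalog filter:

SELECT
   sum(pg_stat_get_tuples_inserted(c.oid)) AS inserted,
   sum(pg_stat_get_tuples_updated(c.oid))  AS updated,
   sum(pg_stat_get_tuples_deleted(c.oid))  AS deleted
FROM
   pg_class c
   JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE
   n.nspname = 'public' AND
   c.relkind = 'r';   -- 'r' = ordinary tables only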

If the stats collector is 'lossy' I will not be able to use it; can anyone
confirm that it is?  If so, maybe my best option is to write a simple trigger
that just increments a counter (a value in a separate table) after an
insert/update/delete, and then add this trigger to each table I want to
record the stats for.  Would this new approach work (i.e. be "lossless")?
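
Something like the following is what I have in mind (a rough, untested
sketch; the counter table, function and trigger names are just placeholders):

CREATE TABLE table_change_counts (
    table_name text PRIMARY KEY,
    inserted   bigint NOT NULL DEFAULT 0,
    updated    bigint NOT NULL DEFAULT 0,
    deleted    bigint NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION count_table_changes() RETURNS trigger AS $$
BEGIN
    -- TG_OP says which operation fired the trigger,
    -- TG_RELNAME which table it fired on.
    IF TG_OP = 'INSERT' THEN
        UPDATE table_change_counts SET inserted = inserted + 1
         WHERE table_name = TG_RELNAME;
    ELSIF TG_OP = 'UPDATE' THEN
        UPDATE table_change_counts SET updated = updated + 1
         WHERE table_name = TG_RELNAME;
    ELSE
        UPDATE table_change_counts SET deleted = deleted + 1
         WHERE table_name = TG_RELNAME;
    END IF;
    RETURN NULL;  -- return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

-- one counter row and one trigger per tracked table, e.g.:
INSERT INTO table_change_counts (table_name) VALUES ('mytable');

CREATE TRIGGER mytable_count_changes
    AFTER INSERT OR UPDATE OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE count_table_changes();

The only downside I can see is that every write to a tracked table then has
to update the same counter row, so concurrent transactions will serialize
on it.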

Cheers,

Andy

On 11/01/07, Richard Huxton <dev@archonet.com> wrote:

Andy Dale wrote:
> Hi,
>
> I turned on the stats_row_level setting in the postgresql.conf file and now the
> calls to the stats functions work.  I want to get the inserted, updated,
> and
> deleted numbers on a given database, so i have written a query to do so:
>
> SELECT
>    sum(pg_stat_get_tuples_inserted(c.oid)) AS inserted,
>    sum(pg_stat_get_tuples_updated(c.oid)) AS updated,
>    sum(pg_stat_get_tuples_deleted(c.oid))  AS deleted
> FROM
>    pg_class c, information_schema.tables i
> WHERE
>    i.table_catalog = 'testdb' AND
>    i.table_schema  = 'public' AND
>    i.table_name    = c.relname;
>
> I had to use the information schema as I could not figure out a way to
> fetch
> the tables of a particular database using only pg_* tables.

It's the pg_class table you'll want to start with. If you start psql
with -E and then do \dt you'll see the queries it uses.

> What I am really concerned about now is reliability: is it possible that
> the stats can be incorrect, and are they never reset?  Also, does using
> row-level stats have a serious effect on performance?

Well, I'm not sure about incorrect. AFAIK the stats gatherer is "lossy",
so there's not a 100% guarantee that every read/write is measured.
Performance shouldn't be an issue unless you're already pushing the
limits of your hardware.

--
   Richard Huxton
   Archonet Ltd
