Hi there,

I have a table which looks similar to:

CREATE TABLE accounting
(
 id text NOT NULL,
 time timestamp with time zone,
 data1 int,
 data2 int,
 data3 int,
 data4 int,
 data5 int,
 data6 int,
 data7 int,
 data8 int,
 state int,
 CONSTRAINT accounting_pkey PRIMARY KEY (id)
);

The table has about 300k rows but is growing steadily. Usage is a few selects and inserts, tons of updates, and never any deletes. The ratios are roughly:
select:insert = 1:1
insert:update = 1:60

Now it turns out that almost all reporting queries filter on the time field, and without any additional index they end up doing slow and expensive sequential scans (10-20 seconds). I'd therefore like to create an index on time to speed this up, but I'm not entirely sure what overhead that introduces. Clearly there's some overhead during insertion of a new row, which I can live with, but what's not clear is the overhead during updates, and the PostgreSQL manual doesn't make that explicit.
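For reference, what I have in mind is just a plain btree on time, along the lines of the sketch below (the index name and the exact query are made up for illustration; the real reporting queries vary but all restrict on a time range):

CREATE INDEX accounting_time_idx ON accounting (time);

-- typical shape of a reporting query
EXPLAIN ANALYZE
SELECT count(*), sum(data1)
FROM accounting
WHERE time >= '2011-01-01'
  AND time <  '2011-02-01';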

You see, all updates change most of the data fields but never touch the time field. Assuming correct and efficient behaviour on PostgreSQL's part, it should then also never touch the time index and incur zero overhead from its presence, but is this really the case? If, due to some implementation detail, it updates the index even though the value hasn't changed, I'd rather not have the index and live with slow queries for the few times a day that reporting is run.
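If there's a way to verify this from the statistics counters, I assume something like the following would show it; my (possibly wrong) understanding is that n_tup_hot_upd counts heap-only tuple updates, i.e. updates that didn't have to touch any index:

-- compare total updates against HOT updates, which skip index maintenance
SELECT relname, n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'accounting';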

Gunther
