Gene <[EMAIL PROTECTED]> writes:
> "Your best bet might be to partition the table into two subtables, one
> with "stable" data and one with the fresh data, and transfer rows from
> one to the other once they get stable. Storage density in the "fresh"
> part would be poor, but it should be small
You are correct; the main part I'm worried about is the updates being so far
from the originals. FYI, I am partitioning the tables by the timestamp column,
vacuum analyzing once per hour, and creating one child partition per day in a
cron job. Because I'm using Hibernate for database abstraction (statele...
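
A minimal sketch of the "two subtables" suggestion quoted above, with
hypothetical table and column names (events_fresh, events_stable,
last_update); the thread doesn't show the real schema:

-- "Stable" rows: no longer updated, stay densely packed.
CREATE TABLE events_stable (
    id        int8 PRIMARY KEY,
    event_ts  timestamp NOT NULL,
    payload   text
);

-- "Fresh" rows: still being updated; poor storage density is tolerable
-- because this table stays small.
CREATE TABLE events_fresh (
    id           int8 PRIMARY KEY,
    event_ts     timestamp NOT NULL,
    payload      text,
    last_update  timestamp NOT NULL DEFAULT now()
);

-- Run periodically (e.g. from cron): move rows that haven't changed for a
-- day over to the stable table.  A real job would also need to cope with
-- rows being updated while the move is in flight.
BEGIN;
INSERT INTO events_stable (id, event_ts, payload)
    SELECT id, event_ts, payload
      FROM events_fresh
     WHERE last_update < now() - interval '1 day';
DELETE FROM events_fresh
 WHERE last_update < now() - interval '1 day';
COMMIT;

A view doing a UNION ALL over the two tables can hide the split from the
application.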
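
And a rough sketch of the per-day child partitions created from cron, using
the inheritance-style partitioning of the releases current at the time; the
parent table's layout and all names are again assumptions:

-- Parent table the application queries.
CREATE TABLE events (
    id        int8 PRIMARY KEY,
    event_ts  timestamp NOT NULL,
    payload   text
);

-- Created once per day by the cron job (example date shown).  The CHECK
-- constraint lets constraint_exclusion skip children whose date range
-- cannot match the query.
CREATE TABLE events_2006_08_15 (
    CHECK (event_ts >= DATE '2006-08-15'
       AND event_ts <  DATE '2006-08-16')
) INHERITS (events);

-- Indexes and the primary key are not inherited; each child needs its own.
CREATE INDEX events_2006_08_15_ts_idx ON events_2006_08_15 (event_ts);

-- Inserts must be routed to the current day's child, either by the
-- application or by a trigger/rule on the parent.

-- The hourly maintenance mentioned above, also driven by cron:
VACUUM ANALYZE events_2006_08_15;

On current releases the same layout would normally use declarative
PARTITION BY RANGE instead of INHERITS.
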
Gene <[EMAIL PROTECTED]> writes:
> I have a table that gets lots of inserts (million+ per day), with an int8
> primary key, and I cluster by a timestamp which is approximately the time
> of the insert (taken just beforehand) and is therefore in increasing order
> and doesn't change. Most of the rows are updated about 3 times over time,
> roughly within th...
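
For context, a minimal sketch of the single-table setup the question
describes (int8 primary key, a timestamp fixed at insert time, CLUSTER on
the timestamp index); all names are hypothetical:

CREATE TABLE events (
    id        int8 PRIMARY KEY,
    event_ts  timestamp NOT NULL,     -- roughly the insert time, never updated
    status    text                    -- the sort of column updated ~3 times later
);

CREATE INDEX events_ts_idx ON events (event_ts);

-- Rewrites the table in timestamp order.  New rows and the new versions of
-- updated rows land wherever there is free space, so the physical ordering
-- decays between CLUSTER runs, which is what the thread is about.
CLUSTER events USING events_ts_idx;   -- pre-8.3 syntax: CLUSTER events_ts_idx ON events;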