Simon Riggs <[EMAIL PROTECTED]> writes:
> Given we expect an underestimate, can we put in a correction factor
> should the estimate get really low? Sounds like we could end up
> choosing nested loop joins more often when we should have chosen
> merge joins.

One possibility: vacuum already knows how many ...

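For illustration only, one way to read the "correction factor" idea is to
put a floor under a suspiciously low row estimate before it feeds into the
join cost comparison. The function name and the floor value below are
invented for the example; this is a sketch, not anything in the tree:

/*
 * Purely illustrative sketch of a "correction factor": clamp a row
 * estimate we expect to be an underestimate, so a very low value does
 * not make a nested loop join look artificially cheap.  The name and
 * the floor constant are made up for this example.
 */
static double
correct_low_row_estimate(double est_rows)
{
    const double min_rows = 10.0;       /* arbitrary floor for the example */

    return (est_rows < min_rows) ? min_rows : est_rows;
}
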
Simon Riggs <[EMAIL PROTECTED]> writes:
> On the topic of accuracy of the estimate: updates cause additional data
> to be written to the table, so tables get bigger until vacuumed. Tables
> with many inserts are also regularly trimmed with deletes. With a
> relatively static workload and a regular vacuum schedule, the table size
> should settle around a fairly steady value.

"Zeugswetter Andreas DAZ SD" <[EMAIL PROTECTED]> writes:
>> rel->pages = RelationGetNumberOfBlocks(relation);
> Is RelationGetNumberOfBlocks cheap enough that you can easily use it for the
> optimizer ?
It's basically going to cost one extra lseek() kernel call ... per
query, per table referenced
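Roughly speaking (a simplified sketch, not the actual smgr code, which goes
through the storage manager and handles segmented files), the block count
falls out of a single lseek() to the end of the relation's file:

#include <sys/types.h>
#include <unistd.h>

#define BLCKSZ 8192                     /* PostgreSQL's default block size */

/*
 * Simplified sketch: derive a relation's block count from its file size
 * with one lseek(SEEK_END).  This just shows where the single extra
 * kernel call per table comes from.
 */
static long
relation_block_count(int fd)
{
    off_t   end = lseek(fd, 0, SEEK_END);

    if (end < 0)
        return -1;                      /* caller deals with the error */
    return (long) (end / BLCKSZ);
}
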
> rel->pages = RelationGetNumberOfBlocks(relation);

Is RelationGetNumberOfBlocks cheap enough that you can easily use it for
the optimizer?

I myself have always preferred more stable estimates that only change
when told to. I never liked that vacuum (without analyze) and create index
change these estimates.

There's been some previous discussion of getting rid of the pg_class
columns relpages and reltuples, in favor of having the planner check the
current relation block count directly (RelationGetNumberOfBlocks) and
extrapolate the current tuple count based on the most recently measured
tuples-per-page density.
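
A sketch of that extrapolation (a hypothetical helper, not actual planner
code): treat the last measured relpages/reltuples pair as a density and
scale it by the live block count seen at plan time:

/*
 * Hypothetical helper illustrating the extrapolation: the last measured
 * relpages/reltuples pair gives a tuples-per-page density, which is
 * scaled by the relation's current block count.
 */
static double
extrapolate_tuple_count(double last_reltuples,  /* from the last VACUUM/ANALYZE */
                        double last_relpages,   /* from the last VACUUM/ANALYZE */
                        double current_pages)   /* RelationGetNumberOfBlocks() now */
{
    double  density;

    if (last_relpages > 0)
        density = last_reltuples / last_relpages;
    else
        density = 0.0;                  /* never measured; a real version
                                         * would pick some default */

    return density * current_pages;
}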