On May 16, 2011, at 10:46 AM, Tom Lane wrote:
> Robert Haas <robertmh...@gmail.com> writes:
>> On Mon, May 16, 2011 at 12:49 AM, Jesper Krogh <jes...@krogh.cc> wrote:
>>> OK, it may not work as well with indexes, since having 1% in cache may very
>>> well mean that 90% of all requested blocks are there... for tables it should
>>> be more straightforward.
> 
>> Tables can have hot spots, too.  Consider a table that holds calendar
>> reservations.  Reservations can be inserted, updated, deleted.  But
>> typically, the most recent data will be what is most actively
>> modified, and the older data will be relatively more (though not
>> completely) static, and less frequently accessed.  Such examples are
>> common in many real-world applications.
> 
> Yes.  I'm not convinced that measuring the fraction of a table or index
> that's in cache is really going to help us much.  Historical cache hit
> rates might be useful, but only to the extent that the incoming query
> has a similar access pattern to those in the (recent?) past.  It's not
> an easy problem.
> 
> I almost wonder if we should not try to measure this at all, but instead
> let the DBA set a per-table or per-index number to use, analogous to the
> override we added recently for column n-distinct statistics ...

I think the challenge there would be how to define the scope of the hot spot.
Is it the last X pages? The last X values of a serial column? Something like correlation?
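
For reference, the existing per-column override Tom mentions is the
attribute-options form of ALTER TABLE from 9.0, and a per-relation knob for
the cached fraction could plausibly follow the same reloption pattern. This
is a sketch only: the cache_fraction option does not exist, and the table,
column, and index names are simply made up to match Robert's example.

    -- Existing per-column n_distinct override (PostgreSQL 9.0+):
    ALTER TABLE reservations ALTER COLUMN customer_id SET (n_distinct = 5000);

    -- Hypothetical analogue: a DBA-supplied estimate of how much of the
    -- relation is typically resident in cache. Neither reloption below
    -- exists; this only illustrates the shape of the proposal.
    ALTER TABLE reservations SET (cache_fraction = 0.90);
    ALTER INDEX reservations_created_idx SET (cache_fraction = 0.99);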

Hmm... it would be interesting if we had average access times for each stats
bucket on a per-column basis; that would give the planner a better idea of how
much I/O overhead a given WHERE clause would incur.
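
To make that concrete, suppose ANALYZE stored an average access time per
histogram bucket. A predicate landing in the newest (hot, mostly cached)
bucket versus an old (cold, mostly on-disk) bucket would then produce sharply
different I/O estimates. The numbers below are invented purely for
illustration; nothing like this exists in pg_statistic today.

    -- Hypothetical arithmetic: rows in bucket * fraction of bucket selected
    -- * average per-row access time recorded for that bucket.
    SELECT 10000 * 0.5 * 0.05 AS est_io_ms_hot_bucket,   -- ~0.05 ms/row when cached
           10000 * 0.5 * 4.0  AS est_io_ms_cold_bucket;  -- ~4 ms/row from random disk reads
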
--
Jim C. Nasby, Database Architect                   j...@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
