On Wed, 29 Mar 2006, Simon Riggs wrote:

> First off, we need some good timings that show this effect. I believe
> it, but we need some publicly discussable performance test cases that
> demonstrate the effect and then show how much we've improved upon it,
> repeatably.

Yeah, a good vacuum benchmark would be nice, not so much for this specific case but in general.

> Initially, I'd suggest just trying to improve this situation by
> pre-scanning the physical index files into the OS filesystem cache
> (only) - i.e. don't lock the files at all. That way, all physical I/O
> is sequential, and after that all the random I/O is logical, served
> from cache. But it would *all* need to fit in cache.

If the index is small enough to fit in memory, it's not so much of a problem anyway...
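For illustration, here is a minimal standalone sketch of the pre-scan idea
quoted above - not PostgreSQL code, just a hypothetical warm-up tool you
would point at each index segment file before the index cleanup pass. It
reads the file sequentially and throws the data away, so that the later
random accesses are served from the OS page cache. The path argument and
the assumption that the whole file fits in cache are mine, not from the
original proposal:

/*
 * prescan.c - hypothetical sketch, not PostgreSQL code.
 *
 * Read a relation segment file sequentially and discard the data, so the
 * kernel's page cache is warmed and later random reads become logical I/O.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define PRESCAN_BLOCK_SIZE 8192     /* PostgreSQL's default block size */

int
main(int argc, char **argv)
{
    char    buf[PRESCAN_BLOCK_SIZE];
    int     fd;

    if (argc < 2)
        return 1;

    if ((fd = open(argv[1], O_RDONLY)) < 0)
    {
        perror("open");
        return 1;
    }

    /* Sequential read; the kernel's read-ahead keeps the I/O streaming. */
    while (read(fd, buf, sizeof(buf)) > 0)
        ;                           /* discard - we only want it cached */

    close(fd);
    return 0;
}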

> We might be able to improve the index FSM allocation algorithm to
> improve the locality of logically adjacent blocks. That way a
> larger-than-memory index could be read with a limited cache. We could
> then replace the full pre-read with just a limited sequential scan
> ahead.

That would be a good thing for index scan performance too.

Maybe effective_cache_size could be a real parameter after all?
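To make the "limited sequential scan ahead" idea above concrete, here is a
hypothetical sketch - again not PostgreSQL code, and assuming POSIX
posix_fadvise() is available. Instead of pre-reading the whole index, it
hints the kernel to prefetch a bounded window of blocks ahead of the scan
position and lets it drop blocks far behind, so even a larger-than-memory
index can be scanned with a limited cache footprint. The block size and
window size are illustrative values:

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLOCK_SIZE          8192    /* PostgreSQL's default block size */
#define READAHEAD_BLOCKS     256    /* ~2 MB window; tuning is assumed */

int
main(int argc, char **argv)
{
    char    buf[BLOCK_SIZE];
    long    blkno = 0;
    int     fd;

    if (argc < 2)
        return 1;
    if ((fd = open(argv[1], O_RDONLY)) < 0)
    {
        perror("open");
        return 1;
    }

    while (read(fd, buf, sizeof(buf)) == (ssize_t) sizeof(buf))
    {
        /* At each window boundary, ask the kernel to fetch the next window. */
        if (blkno % READAHEAD_BLOCKS == 0)
            posix_fadvise(fd, (off_t) (blkno + READAHEAD_BLOCKS) * BLOCK_SIZE,
                          (off_t) READAHEAD_BLOCKS * BLOCK_SIZE,
                          POSIX_FADV_WILLNEED);

        /* Let it reclaim blocks well behind us, bounding the cache footprint. */
        if (blkno >= READAHEAD_BLOCKS)
            posix_fadvise(fd, (off_t) (blkno - READAHEAD_BLOCKS) * BLOCK_SIZE,
                          BLOCK_SIZE, POSIX_FADV_DONTNEED);

        blkno++;
    }

    close(fd);
    return 0;
}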

> The existing FSM allocation scheme provides this for certain kinds of
> tables, but not others.

Can you elaborate, please? I couldn't find any evidence of that.

- Heikki
