On Tue, 12 May 2009, Thomas Finneid wrote:

on a database with 260 GB of data and an index size of 109 GB on separate RAID disks. There are
        85 city_ids,
        2000 street_ids per city,
        20 house_ids per street per city,
        5 floor_ids per house per street per city

You should test what happens if you reduce the index to just (city_id, street_id). Having all four fields in there makes the index larger, and it may end up being faster to pull all of the ~100 data rows for a particular (city_id, street_id) using the smaller index and then filter out just the ones you need. A smaller index to traverse also means you're more likely to keep all of its blocks in the buffer cache going forward.
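An untested sketch of that comparison, using a made-up table name ("readings") and guessed column values; adjust to your actual schema:

    -- hypothetical table name; the existing wide index is assumed to be
    -- on (city_id, street_id, house_id, floor_id)
    CREATE INDEX readings_city_street_idx ON readings (city_id, street_id);

    -- compare plans and timings against the wide index; you may need to
    -- drop the wide one in a test copy first, since the planner can keep
    -- preferring it while both exist
    EXPLAIN ANALYZE
    SELECT *
      FROM readings
     WHERE city_id = 42 AND street_id = 1337
       AND house_id = 7 AND floor_id = 3;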

A second-level improvement there is to then CLUSTER on the smaller index, which physically reorders the table in index order and so increases the odds you'll get all of the rows you need by fetching only a small number of data pages.
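Again assuming the hypothetical names above, that would look something like:

    -- one-time physical reorder of the table; takes an exclusive lock,
    -- and the ordering is not maintained as new rows arrive, so re-run
    -- it periodically if the data keeps growing
    CLUSTER readings USING readings_city_street_idx;
    ANALYZE readings;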

--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

