Hi

I have the following (theoretical) table:

create table apartment_location (

        city_id    int,
        street_id  int,
        house_id   int,
        floor_id   int,
        owner      text,
        ...
);

create index .. on apartment_location (city_id, street_id, house_id, floor_id) tablespace indexspace;

The database holds 260 GB of data with a 109 GB index, on separate RAID disks. The cardinalities are roughly:

        85 city_ids
        2000 street_ids per city
        20 house_ids per street, per city
        5 floor_ids per house, per street, per city
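Those numbers can be sanity-checked with a bit of arithmetic (a quick sketch, assuming the per-level counts multiply out evenly):

```sql
-- total rows: 85 cities * 2000 streets * 20 houses * 5 floors
select 85 * 2000 * 20 * 5 as total_rows;  -- 17,000,000
```

Since the query below fixes city_id, house_id and floor_id but leaves street_id free, it should match one row per street in the chosen city, i.e. about 2000 rows.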

Then I run a query to retrieve all street_ids for a specified city, house and floor (a bit contrived, but the same cardinality applies):

  select street_id, floor_id from apartment_location where
        city_id = 67 and
        house_id = 6 and
        floor_id = 4

This returns about 2000 rows, but the query takes 3-4 seconds. The plan is an index scan, and everything happens inside 6 GB of memory.

So the question: any suggestions on how to decrease the query time? From iostat etc. it seems that most of the work is reading the index; reading the data takes next to nothing.
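For reference, here is what I have been looking at so far (the alternative index name below is my own invention). Because the query does not constrain street_id, the existing four-column index can only be used on its city_id prefix, so the scan has to step over every street in the city; an index whose leading columns match the query's predicates might avoid that:

```sql
-- confirm what the planner actually does
explain analyze
select street_id, floor_id
  from apartment_location
 where city_id = 67 and house_id = 6 and floor_id = 4;

-- hypothetical alternative index matching the query's predicates
create index apartment_city_house_floor_idx
    on apartment_location (city_id, house_id, floor_id)
    tablespace indexspace;
```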

Any suggestions?

regards

thomas
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
