Hello, I have a problem in my application and don't know how to fix it.
This is the table and one of its indexes:

CREATE TABLE foo (
    id serial NOT NULL,
    foo_name character varying(100),
    realm_id integer
    ... and about 50 other columns
);

CREATE INDEX idx_foo_name_realm ON foo USING btree (realm_id, foo_name);

Table foo contains about 8 million rows.

The problem: consider this query:

SELECT * FROM foo WHERE realm_id = 228 ORDER BY foo_name LIMIT 200 OFFSET 15000;

and its execution plan:

Limit  (cost=57527.13..58294.16 rows=200 width=575) (actual time=182.302..184.971 rows=200 loops=1)
  ->  Index Scan using idx_foo_name_realm on foo  (cost=0.00..62159.98 rows=16208 width=575) (actual time=0.085..166.861 rows=15200 loops=1)
        Index Cond: (realm_id = 228)
Total runtime: 185.591 ms

And now look at this:

SELECT * FROM foo WHERE realm_id = 228 ORDER BY foo_name LIMIT 200 OFFSET 15999;

Limit  (cost=59601.92..59602.42 rows=200 width=575) (actual time=1069.759..1072.310 rows=200 loops=1)
  ->  Sort  (cost=59561.92..59602.44 rows=16208 width=575) (actual time=929.948..1052.620 rows=16199 loops=1)
        Sort Key: foo_name
        Sort Method: external merge  Disk: 8984kB
        ->  Bitmap Heap Scan on foo  (cost=306.69..54270.62 rows=16208 width=575) (actual time=9.612..235.902 rows=21788 loops=1)
              Recheck Cond: (realm_id = 228)
              ->  Bitmap Index Scan on foo_realm_id  (cost=0.00..302.64 rows=16208 width=0) (actual time=8.733..8.733 rows=21810 loops=1)
                    Index Cond: (realm_id = 228)
Total runtime: 1084.706 ms

Execution time increases roughly sixfold, because Postgres stopped using the idx_foo_name_realm index (which delivers rows already sorted) and instead does a bitmap scan followed by an explicit sort. Can anybody explain to me what's going on and what can be done? Is this a memory problem?
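Regarding the memory question: the "Sort Method: external merge  Disk: 8984kB" line in the second plan suggests the sort spilled to disk because it did not fit in work_mem. A minimal way to test that hypothesis in a psql session would be something like the following (the 32MB value is an illustrative guess, not a recommendation):

-- Check the current per-sort memory budget (often a small default like 1MB)
SHOW work_mem;

-- Raise it for this session only, then re-run the slow query.
-- If the Sort node switches to "Sort Method: quicksort  Memory: ...",
-- the disk spill was at least part of the slowdown.
SET work_mem = '32MB';
EXPLAIN ANALYZE
SELECT * FROM foo WHERE realm_id = 228 ORDER BY foo_name LIMIT 200 OFFSET 15999;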