Thanks for the response :)
> It seems to me a merge join might be more appropriate here than a nestloop;
> that's 50-ish ms versus 80-odd seconds. What's your work_mem set at?
> Off-the-cuff numbers show the dataset weighing in the sub-ten-mbyte range.
> Provided it's not already at least that big, and you don't want to up it
> permanently, try saying:
>
>     SET work_mem = 10240;    -- 10 mbytes
>
> immediately before running this query (uncached, of course) and see what
> happens.

It's currently set at 16MB, and I've also tried upping sort_mem, without any
noticeable impact on the uncached query. :(
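(For what it's worth, the way I've been testing is roughly the following; the
table and join below are just stand-ins for the real query:)

    -- bump work_mem for this session only, then look at the plan again
    SET work_mem = 10240;        -- value is in kB, i.e. 10 MB
    EXPLAIN ANALYZE
        SELECT o.*, i.*
        FROM orders o
        JOIN order_items i ON i.order_id = o.id
        WHERE o.created_at >= '2007-01-01';
    RESET work_mem;              -- back to the configured default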
> Also, your row-count estimates look pretty off-base. When were these
> tables last VACUUMed or ANALYZEd?

I'm not entirely sure what's up with the row-count estimates; the tables are
updated quite frequently (and VACUUM is also run quite frequently), and I had
just run a VACUUM ANALYZE on both databases before running the EXPLAIN.
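(In case it's relevant, this is roughly how I sanity-checked the planner's
row estimates against the real table sizes; 'orders' is a stand-in for the
actual table name:)

    -- the planner's stored estimate, maintained by VACUUM / ANALYZE
    SELECT relname, reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname = 'orders';

    -- the actual row count, for comparison
    SELECT count(*) AS actual_rows FROM orders;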
I'm also still baffled by the differences in the plans between the two
servers; on the one that uses the index to sort, I get, for comparison, a
nestloop of:
Nested Loop (cost=0.00..1175943.99 rows=1814 width=311) (actual time=25.337..26.867 rows=10 loops=1)
The plan that the "live" server is using seems fairly inefficient.
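(One thing I may try next on the live server is steering the planner away
from the nested loop for a single session, just to see what its second-choice
plan costs; again, the query below is only a stand-in:)

    -- discourage nested-loop joins for this session only
    SET enable_nestloop = off;
    EXPLAIN ANALYZE
        SELECT o.*, i.*
        FROM orders o
        JOIN order_items i ON i.order_id = o.id
        WHERE o.created_at >= '2007-01-01';
    RESET enable_nestloop;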
Derek