Hi Krit,

It looks like your actual problem is here:

>  Index Scan using t1_orderid_creationtime_idx on t1
>  (cost=0.43..1181104.36 rows=9879754 width=158)
>  (actual time=0.021..60830.724 rows=2416614 loops=1

This index scan is estimated at 9.8M rows, but actually returned 2.4M before 
the query finished. The issue is that your LIMIT clause makes the planner 
overly optimistic. The worst-case cost estimate for this node is about 1.2M, 
which is much higher than the seq scan variant you posted, but with a LIMIT 
the planner assumes it can stop after producing the first few matching rows, 
so it only charges a pro-rated fraction of that cost. That is why the 1.2M 
estimate never shows up in the total.

Unfortunately, behavior like this is pretty common with LIMIT clauses. The 
planner assumes the matching rows are spread evenly through the index, so 
when they are actually clustered toward the end of the scan it has to read a 
much larger portion of the data than it budgeted for before the LIMIT is 
satisfied.
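If you want to confirm that the seq-scan plan is actually faster, one common 
trick is to stop the planner from using that index for the ORDER BY by adding 
a no-op expression to the sort key. A sketch (I'm guessing at your column 
names from the index name, so adjust to match your actual query):

```sql
-- Assumed schema: t1(orderid, creationtime, ...) with
-- index t1_orderid_creationtime_idx on (orderid, creationtime).
-- "+ 0" makes the sort expression no longer match the index,
-- forcing the planner to pick a different plan for comparison:
EXPLAIN ANALYZE
SELECT *
FROM t1
WHERE creationtime > now() - interval '7 days'  -- your real predicate here
ORDER BY orderid + 0
LIMIT 100;
```

You can also compare plans by toggling `SET enable_indexscan = off;` in a 
session before running EXPLAIN ANALYZE, which is often the quicker experiment.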

Just out of curiosity, can you tell me what your default_statistics_target is?
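If it turns out to be low, raising it (per column, rather than globally) and 
re-analyzing gives the planner a finer-grained histogram to estimate from. A 
sketch, again assuming the column is named orderid:

```sql
-- Check the current setting (the default is 100):
SHOW default_statistics_target;

-- Raise it for just the column driving the bad estimate,
-- then rebuild the statistics so the planner sees them:
ALTER TABLE t1 ALTER COLUMN orderid SET STATISTICS 1000;
ANALYZE t1;
```

Better statistics won't fix the LIMIT cost model itself, but they can shrink 
the gap between the 9.8M row estimate and the 2.4M actual.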

______________________________________________

See http://www.peak6.com/email_disclaimer/ for terms and conditions related to 
this email


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general