On 6/19/15 9:57 AM, Ian Pushee wrote:


On 6/19/2015 10:47 AM, Andreas Kretschmer wrote:
Explain Analyze outputs (links as requested):
Default plan: http://explain.depesz.com/s/ib3k
Forced index (random_page_cost=1.0): http://explain.depesz.com/s/lYaP

Software/Hardware: PGSql 9.2.1, Windows 8.1, 8GB RAM
All pgsql settings are at their defaults.
Increase work_mem, either per session via SET work_mem = 'xxxMB'; or in
postgresql.conf followed by a reload.
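
For example (the 256MB value below is purely illustrative; size it to your
data and the number of concurrent sorts you expect):

    -- per session, for the current connection only:
    SET work_mem = '256MB';

    -- or set work_mem = 256MB in postgresql.conf, then reload:
    SELECT pg_reload_conf();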



Hi Andreas,

The number of rows in the events table isn't constrained, so
unfortunately it isn't feasible to set work_mem high enough to guarantee
an in-memory sort. Forcing the planner to use the index does produce a
fast query, so I'm wondering if there is a more general way to get the
planner to take into account that work_mem isn't big enough for this
query's sort, which will result in a MUCH more costly external merge.

What Andreas is saying is that the sort is so expensive because it spilled to disk. If you don't have enough memory to do the sort in memory, then you probably don't have enough memory to buffer the table either, which means the index scan is going to be a LOT more expensive than a sort.
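
You can see which case you're in from the Sort node of the EXPLAIN ANALYZE
output; the query below is just a placeholder, the Sort Method line is what
to look at:

    EXPLAIN (ANALYZE, BUFFERS)
        SELECT * FROM events ORDER BY event_time;  -- placeholder query
    -- Sort Method: external merge  Disk: 123456kB   <- spilled to disk
    -- Sort Method: quicksort  Memory: 25600kB       <- fit in work_mem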

That said, the better your IO system is, the lower you can set random_page_cost. With a good RAID setup 2.0 is a good starting point, and I've run as low as 1.1. I've never run a system on all SSDs, but I've heard others recommend setting it as low as 1.0 on an all-SSD setup.
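
If you want to experiment with that, something like the following works
(the 2.0 value and the "mydb" name are only examples):

    -- try it per session first:
    SET random_page_cost = 2.0;

    -- make it stick for one database:
    ALTER DATABASE mydb SET random_page_cost = 2.0;

    -- or set random_page_cost = 2.0 in postgresql.conf and reload.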

It's also worth noting that there's some consensus that the optimizer is generally too eager to switch from an index scan to a seqscan.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com


