On Wed, 2 Jun 2010, Jori Jovanovich wrote:
> (2) Making the query faster by making the string match LESS specific
> (odd, seems like it should be MORE)

No, that's the way round it should be. The LIMIT changes it all. Consider
if you have a huge table, and half of the entries match your WHERE clause.
Walking the creation-date index backwards, Postgres finds enough matching
rows to satisfy the LIMIT almost immediately, so the index scan is cheap.
The more specific the match, the further back it has to walk before it
collects enough rows, and at some point a sequential scan plus sort looks
cheaper to the planner.
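A sketch of the effect (the events table and columns here are hypothetical,
and actual plans depend on your data and statistics):

    -- Loose filter: many rows match, so the backward scan of the
    -- creation-date index finds 20 of them almost immediately.
    EXPLAIN ANALYZE
    SELECT * FROM events
    WHERE description LIKE '%foo%'
    ORDER BY created_at DESC
    LIMIT 20;

    -- Tight filter: matches are rare, so the planner may expect to walk
    -- most of the index before collecting 20 rows, and switch to a
    -- sequential scan plus sort instead.
    EXPLAIN ANALYZE
    SELECT * FROM events
    WHERE description LIKE '%some-very-rare-string%'
    ORDER BY created_at DESC
    LIMIT 20;
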
You can set enable_seqscan = off for the session, but that is
a Big Hammer for what is probably a smaller problem.

Bob Lunney

--- On Wed, 6/2/10, Jori Jovanovich wrote:

From: Jori Jovanovich
Subject: [PERFORM] SELECT ignoring index even though ORDER BY and LIMIT present
To: pgsql-performance@postgresql.org
Date: Wed, 6/2/10
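What that session-level override looks like (a sketch; the settings are
standard PostgreSQL, the query is hypothetical):

    -- Make sequential scans look very expensive for this session only;
    -- the planner can still use one if no other plan exists.
    SET enable_seqscan = off;

    SELECT * FROM events
    ORDER BY created_at DESC
    LIMIT 20;

    -- Restore the default afterwards.
    RESET enable_seqscan;
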
"Kevin Grittner" writes:
> Jori Jovanovich wrote:
>> what is the recommended way to solve this?
> The recommended way is to adjust your costing configuration to
> better reflect your environment.
Actually, it's probably not the costs so much as the row estimates.
For instance, that first query is most likely getting a bad selectivity
estimate for its pattern match, and a bad row estimate will steer the
planner to the wrong plan no matter how carefully the costs are tuned.
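The usual first step when estimates are off (a sketch; the table and
column names are hypothetical):

    -- Collect a larger statistics sample for the filtered column, then
    -- re-analyze so the planner sees the new numbers.
    ALTER TABLE events ALTER COLUMN description SET STATISTICS 1000;
    ANALYZE events;

    -- Compare the planner's estimated row count against the actual one.
    EXPLAIN ANALYZE
    SELECT * FROM events WHERE description LIKE '%foo%';
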
Jori Jovanovich wrote:
> what is the recommended way to solve this?
The recommended way is to adjust your costing configuration to
better reflect your environment. What version of PostgreSQL is
this? What do you have set in your postgresql.conf file? What does
the hardware look like? How big is the data set?
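The costing knobs in question (a sketch; the values are illustrative,
not recommendations for any particular machine):

    -- In postgresql.conf, or per session via SET: on a box where most
    -- of the working set stays cached, random index reads are much
    -- cheaper than the default costing assumes.
    SET random_page_cost = 1.5;        -- default is 4.0
    SET effective_cache_size = '6GB';  -- roughly the RAM available for caching
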
hi,
I have a problem space where the main goal is to search backward in time for
events. Time can go back very far into the past, and so the
table can get quite large. However, the vast majority of queries are
satisfied by relatively recent data. I have an index on the row creation
date and ...
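A sketch of the pattern being described (all names hypothetical):

    -- Events accumulate over time; recent rows answer most queries.
    CREATE TABLE events (
        id          bigserial PRIMARY KEY,
        description text NOT NULL,
        created_at  timestamptz NOT NULL DEFAULT now()
    );

    -- A plain ascending index works: Postgres can scan it backwards
    -- to serve ORDER BY created_at DESC.
    CREATE INDEX events_created_at_idx ON events (created_at);

    -- The hoped-for plan: walk the index backwards, stop after 20 rows.
    SELECT id, description, created_at
    FROM events
    WHERE description LIKE '%foo%'
    ORDER BY created_at DESC
    LIMIT 20;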