Given an index like this:
CREATE UNIQUE INDEX i1 ON t1 (c1) WHERE c1 IS NOT NULL;
and a query like this:
SELECT * FROM t1 WHERE c1 = 123;
I'd like the planner to be smart enough to use an index scan using i1. Yes,
I can change the query to this:
SELECT * FROM t1 WHERE c1 = 123 AND c1 IS NOT NULL;
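For readers following along, a sketch using the table and index from the example above (the EXPLAIN output shown in comments is the hoped-for shape, not captured from a real run):

```sql
-- c1 = 123 already implies c1 IS NOT NULL, but spelling the clause out
-- lets the planner match the partial index predicate directly:
EXPLAIN SELECT * FROM t1 WHERE c1 = 123 AND c1 IS NOT NULL;
-- Desired plan:
--   Index Scan using i1 on t1 ...
```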
On 2/22/04 5:06 PM, Tom Lane wrote:
John Siracusa writes:
I want to do something that will convince Postgres that using the date
index is, by far, the best plan when running my queries, even when the
date column correlation stat drops well below 1.0.
Have you tried
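(Tom's suggestion is cut off above; one common diagnostic, not necessarily what he had in mind, is to temporarily discourage sequential scans and compare the plans. Table and column names here are assumed from the EXPLAIN fragment quoted later in the thread.)

```sql
SET enable_seqscan = off;   -- make the planner avoid seq scans where possible
EXPLAIN ANALYZE
  SELECT * FROM mytable
  WHERE date >= '2004-01-01' AND date < '2004-02-01';
SET enable_seqscan = on;    -- restore the default afterward
```

If the forced index scan is genuinely faster, that points at a costing or statistics problem rather than a missing index.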
On 2/22/04 6:40 PM, Tom Lane wrote:
John Siracusa writes:
I think the key is to get the planner to correctly ballpark the number of
rows in the date range.
I thought it was. What you showed was
- Index Scan using mytable_date_idx on mytable (cost=0.00..3071.70 rows
This is a follow-up to an old thread of mine, but I can't find it now
so I'll just re-summarize.
I have a ~1 million row table that I mostly want to query by date
range. The rows are pretty uniformly spread over a 3 year date range.
I have an index on the date column, but it wasn't always
On 1/5/04 1:55 AM, Tom Lane wrote:
John Siracusa writes:
Obviously the planner is making some bad choices here.
A fair conclusion ...
I know that it is trying to avoid random seeks or other scary things
implied by a correlation statistic that is not close to 1 or -1
On 1/5/04 11:45 AM, Christopher Browne wrote:
It sounds to me as though the statistics that are being collected
aren't good enough. That tends to be a sign that the quantity of
statistics (e.g., the number of bins in the histogram) is insufficient.
This would be resolved by changing the number of bins
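A sketch of how one would raise the histogram resolution in PostgreSQL (table and column names assumed from the thread; the default statistics target at the time was 10):

```sql
-- Collect a larger histogram for the date column, then re-analyze
-- so the planner sees the new statistics.
ALTER TABLE mytable ALTER COLUMN date SET STATISTICS 100;
ANALYZE mytable;
```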
Speaking of special cases (well, I was, on the admin list), there are two
kinds that would really benefit from some attention.
1. The query select max(foo) from bar where the column foo has an index.
Aren't indexes ordered? If not, an ordered index would be useful in this
situation so that this
On 1/5/04 2:52 PM, Rod Taylor wrote:
max(foo) optimizations require an extension to the aggregates system.
It will likely happen within a few releases.
Looking forward to it.
A workaround can be accomplished today through the use of LIMIT and ORDER BY.
Wowzers, I never imagined that that'd
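The workaround Rod describes looks like this (a sketch, reusing foo and bar from the example above):

```sql
-- Equivalent to SELECT max(foo) FROM bar, but able to walk an index on foo.
-- The IS NOT NULL guard matches max()'s behavior of ignoring NULLs;
-- PostgreSQL sorts NULLs as largest, so they would otherwise come first.
SELECT foo FROM bar
WHERE foo IS NOT NULL
ORDER BY foo DESC
LIMIT 1;
```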
I have a very large table (about a million rows) from which I most frequently
want to select a subset of rows based on a date field. That date field
is indexed, and when Postgres uses that index, queries are fast. But
sometimes it decides not to use the index, resorting to a sequential scan