* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost <sfr...@snowman.net> writes:
> > I've not looked into the specific costing here to see why the BitmapAnd
> > ended up being chosen over just doing an index scan with the btree and
> > then filtering, but I do believe it to be a problem area that would be
> > good to try and improve.  The first question is probably- are we
> > properly accounting for the cost of scanning the index vs the cost of
> > scanning one index and then applying the filter?
> 
> We are costing it out in what seems a sane way to me.  In the given
> example the "bad" plan is estimated at just slightly cheaper than what
> (I assume) the "good" plan is.  I'm inclined to think this represents a
> failure to choose good cost parameters for the installation.

I'll try to find an opportunity to review the numbers for the case I
ran across previously.
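
For reference, the kind of comparison I have in mind (all table,
column, and constant names below are made up for illustration;
enable_bitmapscan = off is just to force the planner into the
btree-plus-filter plan so the two estimates can be put side by side):

    -- Plan the planner currently prefers (BitmapAnd of both indexes):
    EXPLAIN (ANALYZE, BUFFERS)
      SELECT * FROM points
       WHERE id BETWEEN 1 AND 100000
         AND geom && st_makeenvelope(0, 0, 10, 10, 4326);

    -- Force the btree-scan-plus-filter alternative and compare:
    SET enable_bitmapscan = off;
    EXPLAIN (ANALYZE, BUFFERS)
      SELECT * FROM points
       WHERE id BETWEEN 1 AND 100000
         AND geom && st_makeenvelope(0, 0, 10, 10, 4326);
    RESET enable_bitmapscan;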

> Given how remarkably quick the single-index scan is, I also wonder if
> that index is fully cached while we had to read some of the other index
> from kernel or SSD.  Relative cache states of different indexes is a
> problem the planner doesn't currently try to deal with; it's possible
> that that could bias it towards trying to AND a large-but-not-fully-cached
> index with a smaller-and-fully-cached-one, when not bothering with the
> larger index would in fact be better.  You might be able to counter that
> to some extent by reducing effective_cache_size, though possibly that
> cure is worse than the disease.

Unfortunately, that explanation doesn't hold water for the case I ran
into, as the timings were from multiple repeated invocations, with both
indexes fully cached.  It was simply much more expensive to scan the
entire GiST index (which wasn't small) than to fetch and filter the
records returned from the btree index.
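
(Confirming residency is easy enough with pg_buffercache; something
like the following, with illustrative index names.  It only shows
shared_buffers, not the kernel cache, but repeated invocations should
take care of the latter:)

    -- Requires: CREATE EXTENSION pg_buffercache;
    -- Counts how many shared_buffers pages each index occupies.
    SELECT c.relname, count(*) AS buffers
      FROM pg_buffercache b
      JOIN pg_class c
        ON b.relfilenode = pg_relation_filenode(c.oid)
     WHERE b.reldatabase = (SELECT oid FROM pg_database
                             WHERE datname = current_database())
       AND c.relname IN ('points_id_btree', 'points_geom_gist')
     GROUP BY c.relname;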

I could see reducing the cost attributed to the bounding box operator,
which would lower the overall cost of the btree-index-plus-filter plan
relative to the BitmapAnd scan, but the actual run-times weren't even
close, as I recall (something like 45-60s vs. less than a second),
which makes me wonder whether some cost is going unaccounted for.  The
number of records returned from the indexes was similar to the case
presented here: 100k records from one, 10M+ from the other.
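
(If we ever went down that road, the knob is the COST attached to the
operator's underlying function, in units of cpu_operator_cost; the
function name here is illustrative for a PostGIS-style && operator:)

    -- Lower the per-call cost the planner charges for the bounding
    -- box operator's support function; the value is illustrative,
    -- the point being to set it below whatever it currently is.
    ALTER FUNCTION geometry_overlaps(geometry, geometry) COST 1;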

I'm just wondering how we manage not to realize that scanning through
gigabytes of index pages is going to be more expensive than running an
operator comparison across 100k records.
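
Back-of-the-envelope, with default cost parameters and round numbers
assumed for illustration (I don't have the original plan handy): the
extra filter on 100k fetched rows should be charged roughly

    100,000 rows * cpu_operator_cost (0.0025) = 250 cost units

while walking a multi-gigabyte GiST index means reading on the order of

    2GB / 8kB per page = ~260,000 index pages

plus per-tuple CPU for the 10M+ entries (10,000,000 *
cpu_index_tuple_cost (0.005) = 50,000 cost units), so I'd expect the
two estimates to land nowhere near each other.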

Thanks!

Stephen
