Robert Haas <robertmh...@gmail.com> writes:
> On Tue, Aug 29, 2017 at 10:22 PM, Thomas Munro
> <thomas.mu...@enterprisedb.com> wrote:
>> (2) We could push a Bloom filter down to scans
>> (many other databases do this, and at least one person has tried this
>> with PostgreSQL and found it to pay off[1]).
> I think the hard part is going to be figuring out a query planner
> framework for this, because pushing the Bloom filter down to the
> scan changes the cost and the row count of the scan.

Uh, why does the planner need to be involved at all?  This seems like
strictly an execution-time optimization.  Even if you wanted to try to
account for it in costing, I think the reliability of the estimate
would be nil, never mind any questions about whether the planner's
structure makes it easy to apply such an adjustment.

Personally, though, I would not bother with (2); I think (1) would
capture most of the win for a very small fraction of the complication.
Just for starters, I do not think (2) works for batched hashes.

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
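For readers following the thread, the technique in (2) can be sketched roughly as follows. This is a minimal illustrative sketch, not PostgreSQL code: the class, sizes, hash scheme, and sample data are all invented for illustration. The idea is that the hash-join build phase over the inner side also populates a Bloom filter, which the outer-side scan then consults to discard rows that cannot possibly match before they ever reach the join node.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: no false negatives, tunable false-positive rate."""

    def __init__(self, nbits=1024, nhashes=3):
        self.nbits = nbits
        self.nhashes = nhashes
        self.bits = bytearray(nbits // 8)

    def _positions(self, key):
        # Derive k bit positions from salted SHA-256 digests of the key.
        for salt in range(self.nhashes):
            h = hashlib.sha256(b"%d:%s" % (salt, str(key).encode())).digest()
            yield int.from_bytes(h[:8], "big") % self.nbits

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # May return True for absent keys (false positive),
        # but never False for a key that was added.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

# Build phase: building the join hash table over the inner side
# also feeds each join key into the Bloom filter.
inner_keys = [10, 42, 97]
bloom = BloomFilter()
for k in inner_keys:
    bloom.add(k)

# Scan phase: the outer-side scan drops rows the filter rules out,
# so only potential matches are handed up to the join node.
outer_rows = [(1, "a"), (42, "b"), (99, "c")]
survivors = [row for row in outer_rows if bloom.might_contain(row[0])]
```

Note how this fits Tom's point about it being an execution-time optimization: the filter is built and consulted entirely at run time, and since its selectivity depends on the actual build-side keys, the planner would have little reliable basis for costing it in advance.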