On 4 May 2016 at 09:18, David Rowley <david.row...@2ndquadrant.com> wrote:
> On 4 May 2016 at 02:10, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:
>> There are probably a few reasonably simple things we could do - e.g. ignore
>> foreign keys with just a single column, as the primary goal of the patch is
>> improving estimates with multi-column foreign keys. I believe that covers a
>> vast majority of foreign keys in the wild.
>>
>> If that's deemed insufficient, we'll have to resort to a more complex
>> improvement, perhaps something akin to the cache proposed in the unijoin
>> patch. But if that's required, that's 9.7 material.
>
> I had thought that if we had a hashtable of rel OIDs belonging to
> relations with has_eclass_joins == true, then we could just skip
> foreign keys whose confrelid is not in the hashtable. Perhaps that
> could be optimised a bit further: we could have something in
> ForeignKeyOptInfo akin to what predOK is for IndexOptInfo, which just
> gets set to true if the relation referenced by the foreign key is in
> the simple_rel_array. It's quite likely that if many foreign keys were
> used, then the query would have a great number of joins, and planning
> would be slow anyway.

I've spent a few hours looking at this and have come up with the
attached patch, which flags each ForeignKeyOptInfo to say whether it
could possibly be referenced in any join condition, on the logic that
if the relation the foreign key references is in the simple_rte_array,
then it could be referenced.

I ran some of the tests Tomas posted, with 1000 FKs and a 4-way join
on 2 join columns.
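
For reference, the tables in that test look roughly like the following.
This is my reconstruction rather than Tomas's exact script; the real
test defines on the order of 1000 two-column foreign keys so that the
planner's FK lookup has plenty of entries to search, which the DO block
below only approximates:

CREATE TABLE f1 (a int, b int, PRIMARY KEY (a, b));
CREATE TABLE f2 (a int, b int, FOREIGN KEY (a, b) REFERENCES f1 (a, b));
CREATE TABLE f3 (a int, b int, FOREIGN KEY (a, b) REFERENCES f1 (a, b));
CREATE TABLE f4 (a int, b int, FOREIGN KEY (a, b) REFERENCES f1 (a, b));

-- roughly how the remaining foreign keys might be bulk-created
DO $$
BEGIN
  FOR i IN 1..1000 LOOP
    EXECUTE format('CREATE TABLE d%s (a int, b int, ' ||
                   'FOREIGN KEY (a, b) REFERENCES f1 (a, b))', i);
  END LOOP;
END $$;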

Query:
explain analyze
select * from f1
inner join f2 on f1.a = f2.a and f1.b = f2.b
inner join f3 on f1.a = f3.a and f1.b = f3.b
inner join f4 on f1.a = f4.a and f1.b = f4.b;

SET enable_fkey_estimates = on;
duration: 30 s
number of transactions actually processed: 8173
latency average: 3.671 ms
tps = 272.420508 (including connections establishing)
tps = 272.586329 (excluding connections establishing)

SET enable_fkey_estimates = off;
duration: 30 s
number of transactions actually processed: 9153
latency average: 3.278 ms
tps = 305.098852 (including connections establishing)
tps = 305.286072 (excluding connections establishing)

So there's still about a 10% planner slowdown for this worst-case
test, but it's far less than it was previously with the same test case.

I also want to add that the aim of this patch was to fix a very
real-world problem, one which also manifests itself in TPC-H Q9: the
join to partsupp is drastically underestimated due to the 2-column
join condition. In our test cases that caused the GROUP BY to be
performed with a Hash Aggregate rather than a Sort/Group Aggregate,
and since we don't spill HashAggs to disk, we got OOM on a large-scale
test on large-scale hardware.

Here's some sample EXPLAIN output from the query in question, which I
think is at a smaller scale than the 3TB test where we had issues, but
it still demonstrates the problem:

Hash Join  (cost=74686.00..597734.90 rows=2400 width=23) (actual time=564.038..11645.047 rows=11997996 loops=1)
  Hash Cond: ((lineitem.l_suppkey = partsupp.ps_suppkey) AND (lineitem.l_partkey = partsupp.ps_partkey))

Here the estimate is off 5000x.
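
The foreign key the patch exploits here is the standard TPC-H one from
lineitem to partsupp, which would be declared along these lines (column
names per the TPC-H schema; the exact DDL in our test kit may differ):

ALTER TABLE partsupp
  ADD PRIMARY KEY (ps_partkey, ps_suppkey);

ALTER TABLE lineitem
  ADD FOREIGN KEY (l_partkey, l_suppkey)
  REFERENCES partsupp (ps_partkey, ps_suppkey);

Since both join quals match the two columns of that foreign key, the
join size can be estimated from the constraint rather than by
multiplying the two per-column selectivities together, which is what
produces the underestimate above.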

For now the attached patch is intended to assist discussion. Some of
the naming could likely be better, and the code would need to find a
better home.

The patch also adds the missing IsA OpExpr test.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachment: mark_useful_foreignkeys.patch
Description: Binary data
