On 8/11/21 4:51 PM, Mark Dilger wrote:


> On Aug 11, 2021, at 5:08 AM, Dean Rasheed <dean.a.rash...@gmail.com> wrote:
>
>> This feels like rather an artificial example though. Is there any real
>> use for this sort of clause?
>
> The test generated random combinations of clauses and then checked if
> any had consistently worse performance.  These came up.  I don't
> know that they represent anything real.
>
> What was not random in the tests was the data in the tables.  I've
> gotten curious whether these types of clauses (with columns compared
> against themselves) would still be bad for random rather than orderly
> data sets.  I'll go check....
>
>     testing....
>
> Wow.  Randomizing the data makes the problems even more extreme.  It
> seems my original test set was actually playing to this patch's
> strengths, not its weaknesses.  I've changed the columns to double
> precision and filled the columns with random() data, where column1 gets
> random()^1, column2 gets random()^2, etc.  So on average the larger
> numbered columns will be smaller, and the mcv list will be irrelevant,
> since values should not tend to repeat.
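The skew described above can be sketched numerically (this is an editorial illustration, not part of the original test harness): for U ~ Uniform(0, 1), E[U^k] = 1/(k+1), so each higher-numbered column is smaller on average.

```python
import random

# Sample column_k = random()^k for k = 1..4 and compare the sample means
# against the analytic expectation E[U^k] = 1/(k+1) for U ~ Uniform(0, 1).
random.seed(42)
N = 100_000
for k in range(1, 5):
    mean = sum(random.random() ** k for _ in range(N)) / N
    print(f"column{k}: sample mean {mean:.3f} (expected {1 / (k + 1):.3f})")
```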


I tried using the same randomized data set, i.e. essentially

  insert into t
  select random(), pow(random(), 2), pow(random(), 3), pow(random(), 4)
    from generate_series(1,1000000) s(i);

  create statistics s (mcv) on a, b, c from t;

But I don't see any difference compared to the estimates without extended statistics, which is not surprising because there should be no MCV list built. So I'm a bit puzzled about the claim that random data make the problems more extreme. Can you explain?
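Why no MCV list gets built can be illustrated outside PostgreSQL (a sketch, not the server's actual statistics-building code): with continuous random doubles, essentially every value is unique, so there are no "most common" values to record.

```python
import random
from collections import Counter

# With a million double-precision uniform randoms, duplicate values are
# vanishingly rare, so no value is frequent enough to belong in an MCV list.
random.seed(0)
N = 1_000_000
counts = Counter(random.random() for _ in range(N))
print("rows:", N, "distinct values:", len(counts))
print("highest frequency of any single value:", max(counts.values()))
```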

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
