Hi Ashutosh!

On 2024-05-27 14:17, Ashutosh Bapat wrote:
On Fri, May 24, 2024 at 11:02 AM <arne.rol...@malkut.net> wrote:

Hi Ashutosh,

thanks for bringing this to my attention. I'll first share a few
thoughts about the change and respond regarding the test below.

I clearly understand your intention with this patch. It's an issue I run into from time to time.

I did some testing with some benchmark sets back on pg 14. I did the following: I planned each query with and without the partitionwise join GUC (EXPLAIN) and took the one with the lower cost to execute the query, roughly like the sketch below.
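
Per query, the comparison looked roughly like this (just a sketch; the query and tables are borrowed from the regression test for illustration, not from the actual benchmark):

SET enable_partitionwise_join = off;
EXPLAIN SELECT * FROM fract_t x JOIN fract_t y USING (id) ORDER BY id LIMIT 10;

SET enable_partitionwise_join = on;
EXPLAIN SELECT * FROM fract_t x JOIN fract_t y USING (id) ORDER BY id LIMIT 10;

-- then execute whichever variant showed the lower estimated total cost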

Interestingly, even discounting the overhead and the additional planning time, the option with the lower cost turned out to be slower on our benchmark set back then. The median query with the GUC disabled was quicker, but on average that was not the case. The observation is one I'd generally describe as: "The more options you consider, the more ways we have to be horribly wrong. More options for the planner are a great way to uncover its various shortcomings."

That might be specific to the benchmark I was working with at the time, but it made me drop the issue back then. That is of course no valid reason not to go in the direction of making the planner consider more options. :)

In summary, you are suggesting that partitionwise join performs better
than plain join even if the latter one has lower cost. Hence fixing
this issue has never become a priority for you. Am I right?

Plans with lower costs being slower is not new for the optimizer. Partitionwise join just adds another case.

Sorry for my confusing long text. I will try to recap my points concisely.

1. I think the ORDER BY pk fractional LIMIT plans had just too similar performance behaviour for me to bother. But AFAICS the main point of your proposal is not related to fractional plans at all.
2. We can't expect the optimizer to simply yield better results by being given more options to be wrong. (To give a simple example: this patch makes our lack of cross-table, cross-column statistics worse, because we give the planner more opportunity to pick something horrible; see the made-up illustration below.)
3. I dislike that this patch makes it much harder to debug why no partitionwise join is chosen.
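
To illustrate point 2 with a made-up example (the tables and columns here are hypothetical, not from the patch or the regression tests):

-- Without cross-table, cross-column statistics the planner estimates the
-- join condition and the two status filters independently. If status is
-- correlated with the join key in both tables, the row estimates for the
-- joined partitions can be far off, and every additional plan shape the
-- planner now gets to consider is chosen based on those bad numbers.
EXPLAIN
SELECT *
FROM   orders o
JOIN   shipments s USING (order_id)
WHERE  o.status = 'open' AND s.status = 'open';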


Maybe we can discuss that in person next week?

Sure.

On 2024-05-22 07:57, Ashutosh Bapat wrote:

The test was added by 6b94e7a6da2f1c6df1a42efe64251f32a444d174 and later modified by 3c569049b7b502bb4952483d19ce622ff0af5fd6. The modification just avoided eliminating the join, so that change can be ignored. 6b94e7a6da2f1c6df1a42efe64251f32a444d174 added the tests to test fractional paths being considered when creating ordered append paths. Reading the commit message, I was expecting a test which did not use a join as well, and also one which used inheritance. But it seems that the queries added by that commit test all the required scenarios, and hence we see two queries involving a join between partitioned tables. As the comment there says, the intention is to verify index only scans and not exactly partitionwise join. So just fixing the expected output of one query looks fine. The other query will test partitionwise join and fractional paths anyway. I am including Tomas, Arne and Zhihong, who worked on the first commit, to comment on the expected output changes.

The test was put there to make sure a fractional join is considered in the case that a partitionwise join is considered, because that wasn't the case before.

The important part for my use case back then was that we do Merge Join(s) at all. The test result after your patch still confirms that.

If we simply modify the test as such, we no longer confirm whether the code path introduced in 6b94e7a6da2f1c6df1a42efe64251f32a444d174 is still working.

Maybe it's worthwhile to add something like

create index on fract_t0 ((id*id));

EXPLAIN (COSTS OFF)
SELECT * FROM fract_t x JOIN fract_t y USING (id) ORDER BY id * id DESC
LIMIT 10;
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Limit
   ->  Merge Append
         Sort Key: ((x.id * x.id)) DESC
         ->  Nested Loop
               ->  Index Scan Backward using fract_t0_expr_idx on fract_t0 x_1
               ->  Index Only Scan using fract_t0_pkey on fract_t0 y_1
                     Index Cond: (id = x_1.id)
         ->  Sort
               Sort Key: ((x_2.id * x_2.id)) DESC
               ->  Hash Join
                     Hash Cond: (x_2.id = y_2.id)
                     ->  Seq Scan on fract_t1 x_2
                     ->  Hash
                           ->  Seq Scan on fract_t1 y_2

I am not sure whether it's worth the extra test cycles on every animal, but since we are not creating an extra table it might be ok. I don't have a very strong feeling about the above test case.

My patch removes the redundant enable_partitionwise_join = on since that's already done very early in the test. Apart from that it does not change the test. So if the expected output change is fine with you, I think we should leave the test as is. Plan outputs are sometimes fragile and thus make expected outputs flaky.

If anything, we can add to the existing test. That would indeed give us more test coverage. I will refrain from commenting further, since that discussion would get completely disconnected from the patch at hand.


I will create patches for the back-branches once the patch for master is in a committable state.

I am not sure whether it's really a bug. I personally wouldn't be brave enough to back-patch this. I don't want to deal with complaining end users whose optimizer, which always had horrible estimates, is suddenly able to do harmful things with them, only due to a minor version upgrade. I think it's a bad idea to backpatch something with complex performance implications, especially since they might even be based on potentially inaccurate data...

Since it's a thinko, I considered it a bug. But I agree that it has the potential to disturb plans after an upgrade and thus upset users. So I am fine if we don't backpatch.

--

Best Wishes,
Ashutosh Bapat



All the best
Arne

