Hi Ashutosh,

Thanks for bringing this to my attention. I'll first share a few thoughts about the change and then respond regarding the test below.

I clearly understand your intention with this patch. It's an issue I run into from time to time.

I did some testing with a few benchmark sets back on PG 14. The procedure was: plan each query with and without the partitionwise join GUC (using EXPLAIN) and then execute the query with whichever plan had the lower cost.
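In session form, the comparison looked roughly like this (a sketch only; the query shown is just the one from the regression test, not the benchmark queries themselves):

```sql
-- Plan the query both ways; execute whichever variant EXPLAIN reports as cheaper.
SET enable_partitionwise_join = off;
EXPLAIN SELECT * FROM fract_t x JOIN fract_t y USING (id) ORDER BY id DESC LIMIT 10;

SET enable_partitionwise_join = on;
EXPLAIN SELECT * FROM fract_t x JOIN fract_t y USING (id) ORDER BY id DESC LIMIT 10;

-- Compare the top-level "cost=..." values in the two outputs and run the
-- query under the GUC setting that produced the cheaper plan.
```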

Interestingly, even discounting the overhead and the additional planning time, the option with the lower cost turned out to be slower on our benchmark set back then. With the GUC disabled the median query was quicker, though on average that was not the case. I'd generally summarize the observation as: the more options we consider, the more ways we have to be horribly wrong. More options for the planner are a great way to uncover its various shortcomings.

That might be specific to the benchmark I was working with at the time, but it made me drop the issue back then. That is of course no valid reason not to go in the direction of making the planner consider more options. :)

Maybe we can discuss that in person next week?

On 2024-05-22 07:57, Ashutosh Bapat wrote:
On Mon, May 6, 2024 at 6:28 PM Ashutosh Bapat
<ashutosh.bapat....@gmail.com> wrote:
4. meson test ends up with failures like below:

4/290 postgresql:regress / regress/regress                  ERROR  32.67s
6/290 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade     ERROR  56.96s
35/290 postgresql:recovery / recovery/027_stream_regress    ERROR  40.20s

(all due to "regression tests pass" failures)
[...]

So with the patch that SQL does not use partitionwise join as it finds it more optimal to stick to a plan with cost of "1.10..2.21" instead of "1.11..2.22" (w/ partition_join), nitpicking but still a failure technically. Perhaps it could be even removed? (it's pretty close to noise?)

The test was added by 6b94e7a6da2f1c6df1a42efe64251f32a444d174 and
later modified by 3c569049b7b502bb4952483d19ce622ff0af5fd6. The
modification just avoided eliminating the join, so that change can be
ignored. 6b94e7a6da2f1c6df1a42efe64251f32a444d174 added the tests to
test fractional paths being considered when creating ordered append
paths. Reading the commit message, I was expecting a test which did
not use a join as well and also which used inheritance. But it seems
that the queries added by that commit test all the required scenarios
and hence we see two queries involving join between partitioned
tables. As the comment there says the intention is to verify index
only scans and not exactly partitionwise join. So just fixing the
expected output of one query looks fine. The other query will test
partitionwise join and fractional paths anyway. I am including Tomas,
Arne and Zhihong, who worked on the first commit, to comment on
expected output changes.

The test was put there to make sure a fractional join is considered whenever a partitionwise join is considered, because that wasn't the case before.

The important part for my use case back then was that we do Merge Joins at all. The test result after your patch still confirms that.

If we simply modify the test as such, we no longer confirm whether the code path introduced in 6b94e7a6da2f1c6df1a42efe64251f32a444d174 is still working.

Maybe it's worthwhile to add something like:

create index on fract_t0 ((id*id));

EXPLAIN (COSTS OFF)
SELECT * FROM fract_t x JOIN fract_t y USING (id) ORDER BY id * id DESC LIMIT 10;
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Limit
   ->  Merge Append
         Sort Key: ((x.id * x.id)) DESC
         ->  Nested Loop
               ->  Index Scan Backward using fract_t0_expr_idx on fract_t0 x_1
               ->  Index Only Scan using fract_t0_pkey on fract_t0 y_1
                     Index Cond: (id = x_1.id)
         ->  Sort
               Sort Key: ((x_2.id * x_2.id)) DESC
               ->  Hash Join
                     Hash Cond: (x_2.id = y_2.id)
                     ->  Seq Scan on fract_t1 x_2
                     ->  Hash
                           ->  Seq Scan on fract_t1 y_2


I am not sure whether it's worth the extra test cycles on every buildfarm animal, but since we are not creating an extra table it might be OK.
I don't have a very strong feeling about the above test case.

I will create patches for the back-branches once the patch for master
is in a committable state.

I am not sure whether it's really a bug. Personally, I wouldn't be brave enough to back-patch this; I don't want to deal with complaining end users whose optimizer, which always had horrible estimates, is suddenly able to do harmful things with them, only due to a minor version upgrade. I think it's a bad idea to back-patch something with complex performance implications, especially since the planner's decisions might be based on potentially inaccurate data...



All the best
Arne
