On Mon, May 6, 2024 at 4:30 PM Peter Geoghegan wrote:
> FWIW I always found those weird inconsistencies to be annoying at
> best, and confusing at worst. I speak as somebody that uses
> disable_cost a lot.
>
> I certainly wouldn't ask anybody to make it a priority for that reason
On Mon, May 6, 2024 at 8:27 AM Robert Haas wrote:
> Stepping back a bit, my current view of this area is: disable_cost is
> highly imperfect both as an idea and as implemented in PostgreSQL.
> Although I'm discovering that the current implementation gets more
> things right than I
Robert Haas writes:
> I'll look into this, unless you want to do it.
I have a draft patch already. Need to add a test case.
> Incidentally, another thing I just noticed is that
> IsCurrentOfClause()'s test for (node->cvarno == rel->relid) is
> possibly dead code. At least, there are no
s stuff I had not noticed.
> > In general I think you're right that something less rickety than
> > the disable_cost hack would be a good idea to ensure the desired
> > TidPath gets chosen, but this problem is not the fault of that.
> > We're not making the TidPath with th
!tidstate->tss_isCurrentOf);
It still seems OK, because anything that might come in from RLS quals
would be AND'ed not OR'ed with the CurrentOfExpr.
> In general I think you're right that something less rickety than
> the disable_cost hack would be a good idea to ensure the desired
> TidPath gets chosen, but this problem is not the fault of that.
If we don't want to do that in general,
then we need some kind of hack in TidQualFromRestrictInfo to accept
CurrentOfExpr quals anyway.
In general I think you're right that something less rickety than
the disable_cost hack would be a good idea to ensure the desired
TidPath gets chosen, but this problem is not the fault of that.
On Mon, May 6, 2024 at 9:39 AM Robert Haas wrote:
> It's not very clear that this mechanism is actually 100% reliable,
It isn't. Here's a test case. As a non-superuser, do this:
create table foo (a int, b text, primary key (a));
insert into foo values (1, 'Apple');
alter table foo enable row level security;
tal patch that I wrote last week. The idea of the current
code is that cost_qual_eval_walker charges disable_cost for
CurrentOfExpr, but cost_tidscan then subtracts disable_cost if
tidquals contains a CurrentOfExpr, so that we effectively disable
everything except TID scan paths and, I think, also any
of the disabled ones caused the RelOptInfo to
> have no paths. Also, you might end up enabling one that caused the
> planner to do something different than it would do today. For
> example, a Path that today incurs 2x disable_cost vs a Path that only
> receives 1x disable_cost might do something different if you just went
> and enabled a bunch of enable* GUCs before replanning.
e years and those seem to have made it without concerns about
performance.
> BTW, I looked through costsize.c just now to see exactly what we are
> using disable_cost for, and it seemed like a majority of the cases are
> just wrong. Where possible, we should implement a plan-ty
this? There's no shortage of
> other ways to make the planner faster if that's an issue.
The concern was to not *add* CPU cycles in order to make this area
better. But I do tend to agree that we've exhausted all the other
options.
BTW, I looked through costsize.c just now to see exactly what we are
using disable_cost for, and it seemed like a majority of the cases are
just wrong.
Also, you might end up enabling one that caused the
planner to do something different than it would do today. For
example, a Path that today incurs 2x disable_cost vs a Path that only
receives 1x disable_cost might do something different if you just went
and enabled a bunch of enable* GUCs before replanning.
get picked even when it isn't strictly necessary to
do so, just because some plan that uses it looks better on cost.
Presumably that problem can in turn be fixed by deciding that we also
need to keep disable_cost around (or the separate disable-counter idea
that we were discussing recently in another branch
might
be more intuitive than what I posted before. I'll do some experiments.
> I think using that logic, the current scenario with enable_indexscan
> and enable_indexonlyscan makes complete sense. I mean, including
> enable_indexscan=0 adding disable_cost to IOS Paths.
This, for me, i
method to get the previous behaviour for the cases where the
planner makes a dumb choice or to avoid some bug in the new feature.
I think using that logic, the current scenario with enable_indexscan
and enable_indexonlyscan makes complete sense. I mean, including
> enable_indexscan=0 adding disable_cost to IOS Paths.
>
> [A] enable_indexscan=false adds disable_cost to the cost of every
> Index Scan path *and also* every Index-Only Scan path. So disabling
> index-scans also in effect discourages the use of index-only scans,
> which would make sense if we didn't have a separate setting called
> enable_indexonlyscan.
is not above reproach.
Yes, that is wrong, surely there is a reason we have two vars. Thanks for
digging into this: if nothing else, the code will be better for this
discussion, even if we do nothing for now with disable_cost.
Cheers,
Greg
whether it got changed
subsequently. Anyway, the current behavior is:
[A] enable_indexscan=false adds disable_cost to the cost of every
Index Scan path *and also* every Index-Only Scan path. So disabling
index-scans also in effect discourages the use of index-only scans,
which would make sense if we didn't have a separate setting called
enable_indexonlyscan.
Robert Haas writes:
> On Tue, Apr 2, 2024 at 11:54 AM Tom Lane wrote:
>> I suspect that it'd behave poorly when there are both disabled and
>> promoted sub-paths in a tree, for pretty much the same reasons you
>> explained just upthread.
> Hmm, can you explain further? I think essentially you'd
On Tue, Apr 2, 2024 at 11:54 AM Tom Lane wrote:
> > I'm pretty sure negative costs are going to create a variety of
> > unpleasant planning artifacts.
>
> Indeed. It might be okay to have negative values for disabled-ness
> if we treat disabled-ness as a "separate order of infinity", but
> I
Robert Haas writes:
> On Tue, Apr 2, 2024 at 10:04 AM Greg Sabino Mullane
> wrote:
>> if (!enable_seqscan)
>> startup_cost += disable_cost;
>> else if (promote_seqscan)
>> startup_cost -= promotion_cost; // or replace "promote" with "
costsize.c?
>
> Cost disable_cost = 1.0e10;
> Cost promotion_cost = 1.0e10; // or higher or lower, depending on how
> strongly we want to "beat" disable_cost's effects.
> ...
>
> if (!enable_seqscan)
> startup_cost += disable_cost;
on. As our costing
is based on positive numbers, what if we did something like this in
costsize.c?
Cost disable_cost = 1.0e10;
Cost promotion_cost = 1.0e10; // or higher or lower, depending on
how strongly we want to "beat" disable_cost's effects.
...
if (!enable_seqscan)
On Mon, Apr 1, 2024 at 5:00 PM Tom Lane wrote:
> Very interesting, thanks for the summary. So the fact that
> disable_cost is additive across plan nodes is actually a pretty
> important property of the current setup. I think this is closely
> related to one argument you made against
Robert Haas writes:
> One of the things I realized relatively early is that the patch does
> nothing to propagate disable_cost upward through the plan tree.
> ...
> After straining my brain over various plan changes for a long time,
> and hacking on the code somewhat, I rea
; changes. You (or someone) should look into why that's happening.
>
> I've not read the patch, but given this description I would expect
> there to be *zero* regression changes --- I don't think we have any
> test cases that depend on disable_cost being finite. If there's more
> th
cheaper than the other. If we were to bump up the
> disable_cost it would make this problem worse.
Hmm, good point.
> So maybe the fix could be to set disable_cost to something like
> 1.0e110 and adjust compare_path_costs_fuzzily to not apply the
> fuzz_factor for paths >= disable_cost.
David Rowley writes:
> So maybe the fix could be to set disable_cost to something like
> 1.0e110 and adjust compare_path_costs_fuzzily to not apply the
> fuzz_factor for paths >= disable_cost. However, I wonder if that
> risks the costs going infinite after a couple of cartesian
On Wed, 13 Mar 2024 at 08:55, Robert Haas wrote:
> But in the absence of that, we need some way to privilege the
> non-disabled paths over the disabled ones -- and I'd prefer to have
> something more principled than disable_cost, if we can work out the
> details.
The primary place
that point and restart planning,
disabling all of the plan-choice constraints and now creating all
paths for each RelOptInfo, then everything would, I believe, be just
fine. We'd end up needing neither disable_cost nor the mechanism
proposed by this patch.
But in the absence of that, we need some wa
Robert Haas writes:
> On Tue, Mar 12, 2024 at 1:32 PM Tom Lane wrote:
>> BTW, having written that paragraph, I wonder if we couldn't get
>> the same end result with a nearly one-line change that consists of
>> making disable_cost be IEEE infinity.
> I don't think so, b
On Tue, Mar 12, 2024 at 1:32 PM Tom Lane wrote:
> BTW, having written that paragraph, I wonder if we couldn't get
> the same end result with a nearly one-line change that consists of
> making disable_cost be IEEE infinity. Years ago we didn't want
> to rely on IEEE float semantics
Robert Haas writes:
> On Thu, Aug 3, 2023 at 5:22 AM Jian Guo wrote:
>> I have written an initial patch to retire the disable_cost GUC, which sets
>> a flag on the Path struct instead of adding up a big cost which is hard to
>> estimate. Though it involved tons of plan changes in regression tests, I
On Thu, Aug 3, 2023 at 5:22 AM Jian Guo wrote:
> I have written an initial patch to retire the disable_cost GUC, which sets a
> flag on the Path struct instead of adding up a big cost which is hard to
> estimate. Though it involved tons of plan changes in regression tests, I
>
Hi hackers,
I have written an initial patch to retire the disable_cost GUC, which sets a
flag on the Path struct instead of adding up a big cost which is hard to
estimate. Though it involved tons of plan changes in regression tests, I
have tested it on some simple test cases such as eagerly
I think this would be ready to abstract away behind a few functions that
could always be replaced by something else later...
However, on further thought I really think just using a 32-bit float and 32
bits of other bitmaps or counters would be a better approach.
On Sun., Dec. 15, 2019, 14:54
Thomas Munro writes:
> On Wed, Dec 11, 2019 at 7:24 PM Laurenz Albe wrote:
>> Doesn't that rely on a specific implementation of double precision (IEEE)?
>> I thought that we don't want to limit ourselves to platforms with IEEE
>> floats.
> Just by the way, you might want to read the second last paragraph of the commit
On Wed, Dec 11, 2019 at 7:24 PM Laurenz Albe wrote:
> Doesn't that rely on a specific implementation of double precision (IEEE)?
> I thought that we don't want to limit ourselves to platforms with IEEE floats.
Just by the way, you might want to read the second last paragraph of
the commit
I used the high-order bit of the fractional
> > bits of the double. (see Wikipedia for double precision floating point for
> > the layout).
> >
> > The idea is to set a special bit when disable_cost is added to a cost.
> > Dedicating multiple bits instead of just 1 would be easily done, but as it
> > is we can accumulate many disable_costs without overflowing, so just
double precision floating point for
> the layout).
>
> The idea is to set a special bit when disable_cost is added to a cost.
> Dedicating multiple bits instead of just 1 would be easily done, but as it
> is we can accumulate many disable_costs without overflowing, so just
>
Tomas Vondra writes:
> On Fri, Nov 01, 2019 at 09:30:52AM -0700, Jim Finnerty wrote:
>> re: coping with adding disable_cost more than once
>>
>> Another option would be to have a 2-part Cost structure. If disable_cost is
>> ever added to the Cost, then you set a flag recording this.
re expensive than others, regardless of the cost.
Getting rid of disable_cost would be a nice thing to do, but I would
rather not do it by adding still more complexity to add_path(), not
to mention having to bloat Paths with a separate "disabled" marker.
The idea that I've been thinking about
On 2019-11-01 12:56:30 -0400, Robert Haas wrote:
> On Fri, Nov 1, 2019 at 12:43 PM Andres Freund wrote:
> > As a first step I'd be inclined to "just" adjust disable_cost up to
> > something like 1.0e12. Unfortunately much higher and we're getting
> > into the a
On Fri, Nov 01, 2019 at 09:30:52AM -0700, Jim Finnerty wrote:
re: coping with adding disable_cost more than once
Another option would be to have a 2-part Cost structure. If disable_cost is
ever added to the Cost, then you set a flag recording this. If any plans
exist that have
problems of that kind. I think if a baserel has
no paths, then we know right away that we've got a problem, but for
joinrels it might be more complicated.
> As a first step I'd be inclined to "just" adjust disable_cost up to
> something like 1.0e12. Unfortunately much higher and we're
that we'd always notice that we
have no plan early enough to know which paths to reconsider? I think
there's cases where that'd only happen a few levels up.
As a first step I'd be inclined to "just" adjust disable_cost up to
something like 1.0e12. Unfortunately much higher and we're getting
On Fri, Nov 1, 2019 at 12:00 PM Andres Freund wrote:
> That seems like a bad idea - we add the cost multiple times. And we
> still want to compare plans that potentially involve that cost, if
> there's no other way to plan the query.
Yeah. I kind of wonder if we shouldn't instead (a) skip
Hi,
On 2019-11-01 19:58:04 +1300, Thomas Munro wrote:
> On Fri, Nov 1, 2019 at 7:42 PM Zhenghua Lyu wrote:
> > It is tricky to set disable_cost a huge number. Can we come up with
> > better solution?
>
> What happens if you use DBL_MAX?
That seems like a bad idea - we add the cost multiple times.
> But if I enlarge the disable_cost to 1e30,
> then, planner will generate hash join.
>
> So I guess that disable_cost is not large enough for huge amount of data.
>
> It is tricky to set disable_cost a huge number. Can we come up with
> better solution?
>
Isn't it a ca
On Fri, Nov 1, 2019 at 7:42 PM Zhenghua Lyu wrote:
> It is tricky to set disable_cost a huge number. Can we come up with
> better solution?
What happens if you use DBL_MAX?
Hi,
Postgres has a global variable `disable_cost`. It is set to the value
1.0e10.
This value will be added to the cost of path if related GUC is set off.
For example,
if enable_nestloop is set off, when the planner tries to add a nestloop
join path, it continues to add such a path