E posted several
years back, I believe. Plus the timescaledb one in one or two places.
I worked a couple of things out through trial and error. It's
relatively hard to follow the documentation, and there have been
features added to newer LLVM versions.
--
Peter Geoghegan
clang-format
Description: Binary data
ily about my
fixed preferences (though it can be hard to tell!). It's really not
surprising that clang-format cannot quite perfectly simulate pgindent.
How flexible can we be about stuff like that? Obviously there is no
clear answer right now.
--
Peter Geoghegan
and
tools like meson in return. It would probably make it practical to
have much stronger rules about how committed code must be indented --
rules that are practical, and can actually be enforced. That trade-off
seems likely to be worth it in my view, though it's not something that
I feel too strongly about.
--
Peter Geoghegan
havior. Aggressive/antiwraparound VACUUMs are
naturally much more likely to coincide with periodic DDL, just because
they take so much longer. That is a dangerous combination.
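(Illustration, not part of the original message: a query along these
lines lists the tables whose relfrozenxid age is approaching
autovacuum_freeze_max_age -- i.e. the ones most likely to get a long
antiwraparound VACUUM that collides with periodic DDL. The 0.9 cutoff
is an arbitrary example.)

    SELECT c.oid::regclass AS relation,
           age(c.relfrozenxid) AS xid_age,
           current_setting('autovacuum_freeze_max_age')::bigint AS freeze_max_age
    FROM pg_class c
    WHERE c.relkind IN ('r', 'm', 't')
      AND age(c.relfrozenxid) >
          0.9 * current_setting('autovacuum_freeze_max_age')::bigint
    ORDER BY age(c.relfrozenxid) DESC;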
--
Peter Geoghegan
h a default as
low as 1.1x or even 1.05x. That's going to make very little difference
to those users that really rely on the no-auto-cancellation behavior,
while at the same time making things a lot safer for scenarios like
the Joyent/Manta "DROP TRIGGER" outage (not perfectly safe, by any
means, but meaningfully safer).
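(Illustration, assuming the stock autovacuum_freeze_max_age default of
200 million and that the multiplier is applied to it as discussed:)

    SHOW autovacuum_freeze_max_age;                 -- 200000000 by default
    SELECT 200000000 * 1.05 AS no_auto_cancel_at,   -- 210,000,000
           200000000 * 1.10 AS no_auto_cancel_alt;  -- 220,000,000

In other words, an extra 10 - 20 million XIDs of table age during which
autovacuum remains auto-cancellable before the no-auto-cancellation
behavior kicks in.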
--
Peter Geoghegan
ovacuum.
Even if it's valuable to maintain this kind of VACUUM/autovacuum
parity (which I tend to doubt), doesn't the same argument work almost
as well with whatever stripped down version you come up with? It's
also confusing that a manual VACUUM command will be doing an
ANALYZE-like thing. Especially in cases where it's really expensive
relative to the work of VACUUM, because VACUUM scanned so few pages.
You just have to make some kind of trade-off.
--
Peter Geoghegan
t down at all, in the end.
Presumably you'll want to add the same I/O prefetching logic to this
cut-down version, just for example. Since without that there will be
no competition between it and ANALYZE proper. Besides which, isn't it
kinda wasteful to not just do a full ANALYZE? Sure, you can avoid
detoasting overhead that way. But even still.
--
Peter Geoghegan
I now understand that you're in favor of addressing the root problem
directly. I am also in favor of that approach. I'd be more than happy
to get rid of the band aid as part of that whole effort.
--
Peter Geoghegan
only thing that can be
done is to either make VACUUM behave somewhat like ANALYZE in at least
some cases, or to have it invoke ANALYZE directly (or indirectly) in
those same cases.
--
Peter Geoghegan
vacuum is frequent enough.
>
> As a demo: The attached sql script ends up with a table containing 10k rows,
> but relpages being set 1 million.
I saw that too. But then I checked again a few seconds later, and
autoanalyze had run, so reltuples was 10k. Just like it would have if
there were no VACUUM statements in your script.
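(Illustration: a query like this shows the relevant state; the table
name is a placeholder for whatever the script creates.)

    SELECT c.relname, c.reltuples, c.relpages,
           s.last_autovacuum, s.last_autoanalyze
    FROM pg_class c
    JOIN pg_stat_user_tables s ON s.relid = c.oid
    WHERE c.relname = 'test_table';   -- placeholder name

Once last_autoanalyze is populated, reltuples reflects the
ANALYZE-derived estimate (~10k here) rather than the value left behind
by the VACUUM statements.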
--
Peter Geoghegan
directed, ndead, nunused)
> around heap_page_prune() and a
> pgstat_count_heap_vacuum(nunused)
> in lazy_vacuum_heap_page(), we'd likely end up with a better approximation
> than what vac_estimate_reltuples() does, in the "partially scanned" case.
What does vac_estimate_reltuples() have to do with dead tuples?
--
Peter Geoghegan
On Wed, Jan 18, 2023 at 5:49 PM Andres Freund wrote:
> On 2023-01-18 15:28:19 -0800, Peter Geoghegan wrote:
> > Perhaps we should make vac_estimate_reltuples focus on the pages that
> > VACUUM newly set all-visible each time (not including all-visible
> > pages that got scan
What if it was just a simple multiplier on
freeze_max_age/multixact_freeze_max_age, without changing any other
detail?
--
Peter Geoghegan
v5-0002-Add-table-age-trigger-concept-to-autovacuum.patch
Description: Binary data
v5-0001-Add-autovacuum-trigger-instrumentation.patch
Description: Binary data
ings change while VACUUM runs. The other problem is that the
thing that is counted isn't broken down into distinct subcategories of
things -- things are bunched together that shouldn't be.
Oh wait, you were thinking of what I said before -- my "awkward but
logical question". Is that it?
--
Peter Geoghegan
here errors can grow without bound. But
you have to draw the line somewhere, unless you're willing to replace
the whole approach with something that stores historic metadata.
What kind of tradeoff do you want to make here? I think that you have
to make one.
--
Peter Geoghegan
On Wed, Jan 18, 2023 at 3:28 PM Peter Geoghegan wrote:
> The problems in this area tend to be that vac_estimate_reltuples()
> behaves as if it sees a random sample, when in fact it's far from
> random -- it's the same scanned_pages as last time, and the ten other
> times
On Wed, Jan 18, 2023 at 2:37 PM Peter Geoghegan wrote:
> Maybe you're right to be concerned to the degree that you're concerned
> -- I'm not sure. I'm just adding what I see as important context.
The problems in this area tend to be that vac_estimate_reltuples()
beha
to
accumulate over time.
Maybe you're right to be concerned to the degree that you're concerned
-- I'm not sure. I'm just adding what I see as important context.
--
Peter Geoghegan
ch for this? I'd be very interested
in seeing this through. Could definitely review it.
--
Peter Geoghegan
On Wed, Jan 18, 2023 at 1:02 PM Peter Geoghegan wrote:
> Some of what I'm proposing arguably amounts to deliberately adding a
> bias. But that's not an unreasonable thing in itself. I think of it as
> related to the bias-variance tradeoff, which is a concept that comes
big
part of the overall picture, but not everything. It tells us
relatively little about the benefits, except perhaps when most pages
are all-visible.
--
Peter Geoghegan
On Wed, Jan 18, 2023 at 11:02 AM Robert Haas wrote:
> On Wed, Jan 18, 2023 at 1:31 PM Peter Geoghegan wrote:
> > pgstat_report_analyze() will totally override the
> > tabentry->dead_tuples information that drives autovacuum.c, based on
> > an estimate derived from a rand
ch. One of the advantages of running
VACUUM sooner is that it provides us with relatively reliable
information about the needs of the table.
We can also cheat, sort of. If we find another justification for
autovacuuming (e.g., it's a quiet time for the system as a whole), and
it works out to help with this other problem, it may be just as good
for users.
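(For reference -- not part of the message -- the dead-tuples estimate
being discussed is what shows up as n_dead_tup, which autovacuum
compares against autovacuum_vacuum_threshold +
autovacuum_vacuum_scale_factor * reltuples, ignoring per-table
reloptions:)

    SELECT s.relname,
           s.n_dead_tup,
           current_setting('autovacuum_vacuum_threshold')::bigint
             + current_setting('autovacuum_vacuum_scale_factor')::float8
               * c.reltuples AS autovacuum_trigger_point
    FROM pg_stat_user_tables s
    JOIN pg_class c ON c.oid = s.relid
    ORDER BY s.n_dead_tup DESC
    LIMIT 10;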
--
Peter Geoghegan
On Tue, Jan 17, 2023 at 2:11 PM Robert Haas wrote:
> On Tue, Jan 17, 2023 at 3:08 PM Peter Geoghegan wrote:
> > If you assume that there is chronic undercounting of dead tuples
> > (which I think is very common), ...
>
> Why do you think that?
For the reasons I gave abo
r code, to a casual
> reader, but I suppose it's a style thing. I will change it if you
> insist.
I certainly won't insist.
> I'd have to expose the pg_locale_t struct, which didn't seem desirable
> to me. Do you think it's enough of a performance concern to be worth
> some ugliness there?
I don't know. Quite possibly not. It would be nice to have some data
on that, though.
--
Peter Geoghegan
ointer be
> changed, too?
Yes, I think so.
--
Peter Geoghegan
can come up with.
> Perhaps it's worth separately tracking the number of times a backend would have
> liked to cancel autovac, but couldn't due to anti-wrap? If changes to the
> no-auto-cancel behaviour don't make it in, it'd at least allow us to collect
> more data about the prevalence of the problem and in what situations it
> occurs? Even just adding some logging for that case seems like it'd be an
> improvement.
Hmm, maybe.
> I think with a bit of polish "Add autovacuum trigger instrumentation." ought
> to be quickly mergeable.
Yeah, I'll try to get that part out of the way quickly.
--
Peter Geoghegan
n the same
choice.
[1]
https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Constantly_updated_tables_.28usually_smaller_tables.29
--
Peter Geoghegan
ge concept. As I've said already, there will
usually be a very asymmetric quality to the problem in cases like the
Joyent outage. Even a modest amount of additional XID-space-headroom
will very likely be all that will be needed at the critical juncture.
It may not be perfect, but it still has every potential to make things
safer for some users, without making things any less safe for other
users.
--
Peter Geoghegan
On Wed, Jan 11, 2023 at 3:11 PM Peter Geoghegan wrote:
> On Wed, Jan 11, 2023 at 3:00 PM Andres Freund wrote:
> > What are your thoughts about the place for the helper functions? You're ok
> > with rmgrdesc_utils.[ch]?
>
> Yeah, that seems okay.
BTW, while playing ar
On Mon, Jan 16, 2023 at 10:00 AM Peter Geoghegan wrote:
> Now we treat the scanning and freezing strategies as two independent
> choices. Of course they're not independent in any practical sense, but
> I think it's slightly simpler and more elegant that way --
On Mon, Jan 16, 2023 at 10:10 AM Peter Geoghegan wrote:
> Attached is v16, which incorporates some of Matthias' feedback.
0001 (the freezing strategies patch) is now committable IMV. Or at
least will be once I polish the docs a bit more. I plan on committing
0001 some time next week, bar
workers could become "headless".
You would need some kind of state machine to make sure that critical
dependencies were respected (e.g., always do the heap vacuuming step
after all indexes are vacuumed), but that possibly isn't that hard,
and still gives you a lot.
As for this patch of mine: do you think that it would be acceptable to
pursue a version based on your autovacuum_no_auto_cancel_age design
for 16? Perhaps this can include something like
pgstat_report_autovac_failure(). It's not the work of implementing
pgstat_report_autovac_failure() itself that creates the risk of missing
the 16 feature freeze deadline. I'm more concerned that
introducing a more complicated design will lead to the patch being
bikeshedded to death.
--
Peter Geoghegan
't any practical reason why
lazy_scan_strategy needs to anticipate what will happen in the near
future with freezing.
--
Peter Geoghegan
is still autocancellable) quite a large
number of attempts before giving up. If the table age isn't really
growing too fast, why not continue to be patient, possibly for quite a
long time?
Perhaps a hybrid strategy could be useful? Something like what I came
up with already, *plus* a mechanism that gives up after (say) 1000
cancellations, and escalates to no-auto-cancel, regardless of table
age. It seems sensible to conclude relatively quickly (in wall clock
time and logical XID time) that a less aggressive approach is just
hopeless once we see sufficiently many cancellations against the same
table.
--
Peter Geoghegan
-> recheck_relation_needs_vacanalyze() ->
> relation_needs_vacanalyze()?
>
> These variables really shouldn't be globals. It makes sense to cache them
> locally in do_autovacuum(), but reusing them
> recheck_relation_needs_vacanalyze() and sharing it between launcher and worker
> is bad.
I am not sure. I do hope that there isn't some subtle way in which the
design relies on that. It seems obviously weird, and so I have to
wonder if there is a reason behind it that isn't immediately apparent.
--
Peter Geoghegan
v4-0002-Add-table-age-trigger-concept-to-autovacuum.patch
Description: Binary data
v4-0001-Add-autovacuum-trigger-instrumentation.patch
Description: Binary data
uch IMV.
Comments on 0003-*:
I suggest that some of the very trivial functions you have here (such
as pg_locale_deterministic()) be made inline functions.
Comments on 0006-*:
* get_builtin_libc_library() could be indented in a way that would
make it easier to understand.
--
Peter Geoghegan
e way. The point
is to make decisions dynamically, based on the observed conditions in
the table. And to delay committing to things until there really is no
alternative, to maximize our opportunities to avoid disaster. In
short: loose, springy behavior.
Imposing absolute obligations on VACUUM has the potential to create
lots of problems. It is sometimes necessary, but can easily be
overused, making a bad situation much worse.
--
Peter Geoghegan
.
The similar outages I was called in to help with personally had either
an automated TRUNCATE or an automated CREATE INDEX. Had autovacuum
only been willing to yield once or twice, then it probably would have
been fine -- the situation probably would have worked itself out
naturally. That's the best outcome you can hope for.
--
Peter Geoghegan
On Mon, Jan 9, 2023 at 12:58 PM Peter Geoghegan wrote:
> On Mon, Jan 9, 2023 at 11:57 AM Andres Freund wrote:
> > Afaict we'll need to backpatch this all the way?
>
> I thought that we probably wouldn't need to, at first. But I now think
> that we really have to.
At
On Wed, Jan 11, 2023 at 4:44 PM Andres Freund wrote:
> > Attached patch fixes up these issues. It's almost totally mechanical.
>
> Looks better, thanks!
Pushed that just now.
Thanks
--
Peter Geoghegan
On Mon, Jan 9, 2023 at 2:18 PM Peter Geoghegan wrote:
> I'll try to get back to it this week.
Attached patch fixes up these issues. It's almost totally mechanical.
(Ended up using "git diff --color-moved=dimmed-zebra
--color-moved-ws=ignore-all-space" with this, per you
d/crashed through process of elimination instead.
>
> s/by now//?
Did it that way in the commit I pushed just now.
Thanks
--
Peter Geoghegan
al of
diversity that we need to consider. For example, the WAL records
used by each individual index access method are all very similar. In
fact the most important index AM WAL records used by each index AM
(e.g. insert, delete, vacuum) have virtually the same format as each
other already.
--
Peter Geoghegan
ebugging it seems to have been a
heapam thing, just because there's a lot more that can go wrong with
pruning, which is spread across many different places.
--
Peter Geoghegan
get_xid_status() need a reference to these rules?
>
> Don't think so? What made you ask?
Just the fact that it seems to more or less follow the protocol
described at the top of heapam_visibility.c. Not very important,
though.
--
Peter Geoghegan
v2-0001-Improve-TransactionIdDidAbort
s a fair point.
My vote goes to "REUSE_BUFFERS", then.
--
Peter Geoghegan
on quite a bit where available.
--
Peter Geoghegan
g the failsafe mode,
> similar to [auto]vacuum cost delays getting disabled
> (c.f. lazy_check_wraparound_failsafe()). If things are bad enough that we're
> soon going to shut down, we want to be aggressive.
+1
--
Peter Geoghegan
On Tue, Jan 10, 2023 at 4:39 PM Peter Geoghegan wrote:
> * Run VACUUM FREEZE. We need FREEZE in order to be able to hit the
> relevant visibilitymap_set() call site (the one that passes
> VISIBILITYMAP_ALL_FROZEN as its flags, without also passing
> VISIBILITYMAP_ALL_VISIB
On Tue, Jan 10, 2023 at 12:08 PM Peter Geoghegan wrote:
> Actually, FreezeMultiXactId() can fully remove an xmax that has some
> member XIDs >= OldestXmin, provided FRM_NOOP processing isn't
> possible, at least when no individual member is still running. Doesn't
> ha
ge being
all-visible, as if it might take a dissenting view that needs to be
taken into consideration (obviously we know what's going on with the
page because we just scanned it ourselves, and determined that it was
at least all-visible).
--
Peter Geoghegan
On Tue, Jan 10, 2023 at 11:47 AM Peter Geoghegan wrote:
> In summary, I think that there is currently no way that we can have
> the VM (or the PD_ALL_VISIBLE flag) concurrently unset, while leaving
> the page all_frozen. It can happen and leave the page all_visible, but
> not all_fr
due to these very fine details. (Assuming I haven't
missed another path to the problem with aborted Multis or something,
but it looks like I haven't.)
--
Peter Geoghegan
On Mon, Jan 9, 2023 at 12:58 PM Peter Geoghegan wrote:
> I didn't realize that affected visibilitymap_set() calls could
> generate useless set-VM WAL records until you pointed it out. That's
> far more likely to happen than the race condition that I described --
> it has not
hink that it
easily could be a bit annoying. How hard would it be to invent a
general mechanism to control the verbosity of what we'll show for each
WAL record?
--
Peter Geoghegan
ard"
autovacuum is launched on a table whose relfrozenxid age is 1.5
billion, it'll just be a regular dead tuples/inserted tuples
autovacuum, with the same old familiar locking characteristics as
today.
--
Peter Geoghegan
onId arguments
before now, which was 100% guaranteed to be a waste of cycles. I saw
no need to wait more than a few days for a +1, given that this
particular issue was so completely clear cut.
--
Peter Geoghegan
eak?
They're not too weak. I'm not sure why the memcpy() was used. I see
your point; it makes you wonder if it must be necessary, which then
seems to call into question why it's okay to access the main array as
an array. I can change this detail, too.
I'll try to get back to it this week.
--
Peter Geoghegan
cope because
it's part of the same overall problem of updating the visibility map
based on potentially stale information. It makes zero sense to check
with the visibility map before updating it when we already know that
the page is all-visible. I mean, are we trying to avoid the work of
needlessly updating the visibility map in cases where its state was
corrupt, but then became uncorrupt (relative to the heap page) by
mistake?
--
Peter Geoghegan
erm
approach. We will very likely need to keep all_visible_according_to_vm
as a cache for performance reasons for as long as we have
SKIP_PAGES_THRESHOLD.
Can we just update all_visible_according_to_vm using
PageIsAllVisible(), without making all_visible_according_to_vm
significantly less useful a
On Sun, Jan 8, 2023 at 6:43 PM Peter Geoghegan wrote:
> On further reflection even v2 won't repair the page-level
> PD_ALL_VISIBLE flag in passing in this scenario. ISTM that on HEAD we
> might actually leave the all-frozen bit set in the VM, while both the
> all-visible bit a
On Sun, Jan 8, 2023 at 4:27 PM Peter Geoghegan wrote:
> We're vulnerable to allowing "all-frozen but not all-visible"
> inconsistencies because of two factors: this business with not passing
> VISIBILITYMAP_ALL_VISIBLE along with VISIBILITYMAP_ALL_FROZEN to
> visibility
On Thu, Dec 29, 2022 at 7:01 PM Peter Geoghegan wrote:
> Attached is v2, which is just to fix bitrot.
Attached is v3. We no longer apply vacuum_failsafe_age when
determining the cutoff for antiwraparound autovacuuming -- the new
approach is a bit simpler.
This is a fairly small change over
e. It just makes it hard to review the
> patch.
I didn't think that it was that big of a deal to tweak the style of
one or two details in and around lazy_vacuum_heap_rel() in passing,
for consistency with lazy_scan_heap(), since the patch already needs
to do some of that. I do take your point, though.
--
Peter Geoghegan
On Mon, Jan 2, 2023 at 10:31 AM Peter Geoghegan wrote:
> Would be helpful if I could get a +1 on
> v1-0002-Never-just-set-the-all-frozen-bit-in-VM.patch, which is
> somewhat more substantial than the others.
There has been no response on this thread for over a full week at this
point. I
rules?
FWIW, I found an existing comment about this rule in the call to
TransactionIdAbortTree() from RecordTransactionAbort() -- which took
me quite a while to find. So you might have been remembering that
comment before.
--
Peter Geoghegan
automatically make the user aware of the issues that surface around
> XID wraparound. Retaining the explainer for XID wraparound in the docs
> seems like a decent idea - it may be moved, but please don't delete
> it.
We do need to stop telling users to enter single user mode. It's quite
simply obsolete, bad advice, and has been since Postgres 14. It's the
worst thing that you could do, in fact.
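(Sketch of the modern alternative, with a placeholder table name: in
Postgres 14+ an ordinary VACUUM run in normal multi-user mode is
enough, because the wraparound failsafe takes over once table age
crosses vacuum_failsafe_age.)

    SHOW vacuum_failsafe_age;          -- 1600000000 by default
    VACUUM (VERBOSE) bloated_table;    -- placeholder; once the failsafe
                                       -- triggers, cost delays and index
                                       -- vacuuming are skipped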
--
Peter Geoghegan
ong the wait will be -- all bets are off. Could be
a day, a week, a month -- who knows? The application itself is the
crucial factor here, and in general the application can do whatever it
wants to do -- that is the reality. So we should be willing to kick
the can down the road in almost all cases -- that is actually the
responsible thing to do under the circumstances. We need to get on
with freezing every other page in the table!
There just cannot be very many pages that can't be cleanup locked at
any given time, so waiting indefinitely is a very drastic measure in
response to a problem that is quite likely to go away on its own. A
problem that waiting doesn't really solve anyway. Maybe the only thing
that will work is waiting for a very long time, but we have nothing to
lose (and everything to gain) by waiting to wait.
--
Peter Geoghegan
On Tue, Jan 3, 2023 at 10:52 PM Peter Geoghegan wrote:
> > And it'd make sense to have
> > the explanation of why TransactionIdDidAbort() isn't the same as
> > !TransactionIdDidCommit(), even for !TransactionIdIsInProgress() xacts, near
> > the explanation f
On Thu, Jan 5, 2023 at 3:27 PM Peter Geoghegan wrote:
> This particular error message is from the hardening added to Postgres
> 15 in commit e7428a99. So it's not surprising that Michail didn't see
> the same error on 14.
Reproduced this on HEAD locally (no docker), without an
is particular error message is from the hardening added to Postgres
15 in commit e7428a99. So it's not surprising that Michail didn't see
the same error on 14.
--
Peter Geoghegan
On Wed, Jan 4, 2023 at 10:59 PM Amit Kapila wrote:
> You are an extremely valuable person for this project and I wish that
> you continue working with the same enthusiasm.
Thank you, Amit. Knowing that my efforts are appreciated by colleagues
does make it easier to persevere.
--
Peter Geoghegan
y -- a good outcome
for everybody. If you and Robert can find a way to accommodate that,
then in all likelihood we won't need to have any more heated and
protracted arguments like the one from early in 2022. I will be quite
happy to get back to working on B-Tree, likely the skip scan work.
[1]
https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples
--
Peter Geoghegan
ould be doing that don't seem to lead to these situations.
[1]
https://postgr.es/m/cah2-wznungszf8v6osgjac5aysb3cz6hw6mlm30x0d65cms...@mail.gmail.com
--
Peter Geoghegan
e
> thing to change at the same time as something unrelated, particularly without
> even mentioning it?
I changed it because the new order is idiomatic. I didn't think that
this was particularly worth mentioning, or even subtle. The logic from
heap_execute_freeze_tuple() only performs simple in-place
modifications.
--
Peter Geoghegan
ind reorganizing these
other comments, or making the comment over TransactionIdDidAbort()
mostly just point to the other comments.
--
Peter Geoghegan
explicit aborts.
That's quite a stretch. There are numerous comments that pretty much
imply that TransactionIdDidCommit/TransactionIdDidAbort are very
similar, for example any discussion of how you need to call
TransactionIdIsInProgress first before calling either of the other
two.
--
Peter Geoghegan
On Tue, Jan 3, 2023 at 8:29 PM Peter Geoghegan wrote:
> I find this astonishing. Why isn't there a prominent comment that
> advertises that TransactionIdDidAbort() just doesn't work reliably?
I pushed a fix for this now.
We should add a comment about this issue to TransactionIdD
ing about similar changes in any
future commit messages, in the unlikely event that I ever end up
moving MarkBufferDirty() around in some existing critical section in
the future.
--
Peter Geoghegan
(Pruning -committers from the list, since cross-posting to -hackers
resulted in this being held up for moderation.)
On Tue, Jan 3, 2023 at 5:15 PM Peter Geoghegan wrote:
>
> On Tue, Jan 3, 2023 at 4:54 PM Andres Freund wrote:
> > There's some changes from Transactio
On Sat, Dec 31, 2022 at 12:45 PM Peter Geoghegan wrote:
> On Sat, Dec 31, 2022 at 11:46 AM Jeff Davis wrote:
> > "We have no freeze plans to execute, so there's no cost to following
> > the freeze path. This is important in the case where the page is
> > entirely f
On Sat, Dec 31, 2022 at 4:53 PM Peter Geoghegan wrote:
> The first patch makes sure that the snapshotConflictHorizon cutoff
> (XID cutoff for recovery conflicts) is never a special XID, unless
> that XID is InvalidTransactionId, which is interpreted as a
> snapshotConflictHorizon val
etween
the first and second heap pass, which seems like a clear
maintainability win -- everybody can pass the
already-pinned/already-setup vmbuffer by value.
--
Peter Geoghegan
v1-0001-Avoid-special-XIDs-in-snapshotConflictHorizon-val.patch
Description: Binary data
v1-0002-Never-just-set-the-all-frozen-bit-in-VM.patch
Description: Binary data
On Sat, Dec 31, 2022 at 11:46 AM Jeff Davis wrote:
> On Fri, 2022-12-30 at 16:58 -0800, Peter Geoghegan wrote:
> > Following the path of freezing a page is *always* valid, by
> > definition. Including when there are zero freeze plans to execute, or
> > even zero tuples to
On Fri, Dec 30, 2022 at 1:12 PM Peter Geoghegan wrote:
> > "Nominal freezing" is happening when there are no freeze plans at all.
> > I get that it's to manage control flow so that the right thing happens
> > later. But I think it should be defined in terms of
often the case that "freezing the page" will perform
maximally aggressive freezing, in the sense that it does precisely
what a VACUUM FREEZE would do given the same page (in any Postgres
version).
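(For comparison, the explicit user-level command, with a placeholder
table name:)

    VACUUM (FREEZE, VERBOSE) my_table;   -- placeholder name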
--
Peter Geoghegan
On Thu, Dec 29, 2022 at 12:20 PM Peter Geoghegan wrote:
> On Thu, Dec 29, 2022 at 12:00 PM Andres Freund wrote:
> > > I could just move the same tests from heap_prepare_freeze_tuple() to
> > > heap_freeze_execute_prepared(), without changing any of the details.
> &g
On Fri, Nov 25, 2022 at 2:47 PM Peter Geoghegan wrote:
> Attached WIP patch invents the idea of a regular autovacuum that is
> tasked with advancing relfrozenxid -- which is really just another
> trigger criteria, reported on in the server log in its autovacuum
> reports.
Attached
On Thu, Dec 29, 2022 at 12:50 PM Peter Geoghegan wrote:
> On Thu, Dec 29, 2022 at 12:20 PM Peter Geoghegan wrote:
> > > It seems somewhat wrong that we discard all the work that
> > > heap_prepare_freeze_tuple() did. Yes, we force freezing to actually
> > > happen
On Thu, Dec 29, 2022 at 12:20 PM Peter Geoghegan wrote:
> > It seems somewhat wrong that we discard all the work that
> > heap_prepare_freeze_tuple() did. Yes, we force freezing to actually happen
> > in
> > a bunch of important cases (e.g. creating a new multixac
eze based on
the same observation about the cost. (It already does a certain amount
of this kind of thing, in fact.)
--
Peter Geoghegan
v2-0001-Check-xmin-xmax-commit-status-when-freezing-execu.patch
Description: Binary data
agine that the one-element cache works alright in some scenarios,
but then suddenly doesn't work so well, even though not very much has
changed. Behavior like that makes the problems difficult to analyze,
and easy to miss. I'm suspicious of that.
--
Peter Geoghegan
ink of it as a general sanity check.
The important call to avoid with page-level freezing is the xmin call to
TransactionIdDidCommit(), not the xmax call. The xmax call only occurs
when VACUUM prepares to freeze a tuple that was updated by an updater
(not a locker) that aborted. The xmin calls, by contrast, will now take
place for most unfrozen tuples.
--
Peter Geoghegan
ge_is_all_visible() just because hint bits couldn't
be set earlier on, back when lazy_scan_prune() processed the same page
during VACUUM's initial heap pass.
--
Peter Geoghegan
ntation) shows that this patch
eliminates 100% of all relevant calls to TransactionIdDidCommit(), for
both the freeze_xmin and the freeze_xmax callsites.
--
Peter Geoghegan
v1-0001-Avoid-unnecessary-clog-lookups-when-freezing.patch
Description: Binary data
ion is wrong, and simply needs to be removed.
Thanks for the report
--
Peter Geoghegan
ast in performance sensitive code.
--
Peter Geoghegan
need to take action here. Perhaps rename the
> variable.
Andres was the one that suggested this name, actually. I initially
just called it "freeze", but I think that Andres had it right.
> I think 0001+0002 are about ready.
Great. I plan on committing 0001 in the next few days. Committing 0002
might take a bit longer.
Thanks
--
Peter Geoghegan
On Tue, Dec 20, 2022 at 7:15 PM Peter Geoghegan wrote:
> On Tue, Dec 20, 2022 at 5:44 PM Jeff Davis wrote:
> > Next, the 'freeze_required' field suggests that it's more involved in
> > the control flow that causes freezing than it actually is. All it does
> >