On Wed, Apr 10, 2024 at 03:28:22PM +0530, Amit Kapila wrote:
> I can understand this comment because I am aware of this code, but I am not
> sure it would be equally easy for people looking at this code for the
> first time. One may try to find the equivalent assertion in
> _hash_freeovflpage(). The alternative
On Wed, Apr 10, 2024 at 1:27 PM Michael Paquier wrote:
>
> > On Tue, Apr 09, 2024 at 10:25:36AM +0000, Hayato Kuroda (Fujitsu) wrote:
> >> On Fri, Apr 05, 2024 at 06:22:58AM +0000, Hayato Kuroda (Fujitsu) wrote:
> >> I'm slightly annoyed by the fact that there is no check on
> >>
On Tue, Apr 09, 2024 at 10:25:36AM +0000, Hayato Kuroda (Fujitsu) wrote:
>> On Fri, Apr 05, 2024 at 06:22:58AM +0000, Hayato Kuroda (Fujitsu) wrote:
>> I'm slightly annoyed by the fact that there is no check on
>> is_prim_bucket_same_wrt for buffer 1 in the BLK_NEEDS_REDO case to
>> show the
Dear Michael,
> On Fri, Apr 05, 2024 at 06:22:58AM +0000, Hayato Kuroda (Fujitsu) wrote:
> > Thanks for pointing out. Yes, we fixed the behavior in commit aa5edbe37,
> > but we missed the redo case. I made a fix patch based on the suggestion [1].
>
> +    bool        mod_buf = false;
>
>
On Fri, Apr 05, 2024 at 06:22:58AM +0000, Hayato Kuroda (Fujitsu) wrote:
> Thanks for pointing out. Yes, we fixed the behavior in commit aa5edbe37,
> but we missed the redo case. I made a fix patch based on the suggestion [1].
+    bool        mod_buf = false;
Perhaps you could name that
On Fri, Apr 05, 2024 at 01:26:29PM +0900, Michael Paquier wrote:
> On Wed, Feb 07, 2024 at 06:42:21PM +0900, Michael Paquier wrote:
>> On Wed, Feb 07, 2024 at 02:08:42PM +0530, Amit Kapila wrote:
>> > Thanks for the report and looking into it. Pushed!
>>
>> Thanks Amit for the commit and
Dear Michael,
> There is still some divergence between the code path of
> _hash_freeovflpage() and the replay in hash_xlog_squeeze_page() when
> squeezing a page: we do not set the LSN of the write buffer if
> (xlrec.ntups <= 0 && xlrec.is_prim_bucket_same_wrt &&
> !xlrec.is_prev_bucket_same_wrt)
On Fri, Apr 5, 2024 at 9:56 AM Michael Paquier wrote:
>
> On Wed, Feb 07, 2024 at 06:42:21PM +0900, Michael Paquier wrote:
> > On Wed, Feb 07, 2024 at 02:08:42PM +0530, Amit Kapila wrote:
> > > Thanks for the report and looking into it. Pushed!
> >
> > Thanks Amit for the commit and Kuroda-san
On Wed, Feb 07, 2024 at 06:42:21PM +0900, Michael Paquier wrote:
> On Wed, Feb 07, 2024 at 02:08:42PM +0530, Amit Kapila wrote:
> > Thanks for the report and looking into it. Pushed!
>
> Thanks Amit for the commit and Kuroda-san for the patch!
I have been pinged about this thread and I should
On Wed, Feb 07, 2024 at 02:08:42PM +0530, Amit Kapila wrote:
> Thanks for the report and looking into it. Pushed!
Thanks Amit for the commit and Kuroda-san for the patch!
--
Michael
On Mon, Feb 5, 2024 at 1:33 PM Michael Paquier wrote:
>
> On Mon, Feb 05, 2024 at 07:57:09AM +0000, Hayato Kuroda (Fujitsu) wrote:
> > You are right. Based on the previous discussions, PageSetLSN() must be
> > called after MarkBufferDirty(). REGBUF_NO_CHANGE has been introduced for
> >
On Mon, Feb 05, 2024 at 07:57:09AM +0000, Hayato Kuroda (Fujitsu) wrote:
> You are right. Based on the previous discussions, PageSetLSN() must be called
> after MarkBufferDirty(). REGBUF_NO_CHANGE has been introduced for skipping
> these requirements. Definitively, no_change buffers must not
Dear Amit,
>
> @@ -692,6 +697,9 @@ _hash_freeovflpage(Relation rel, Buffer bucketbuf,
> Buffer ovflbuf,
> if (!xlrec.is_prev_bucket_same_wrt)
> wbuf_flags |= REGBUF_NO_CHANGE;
> XLogRegisterBuffer(1, wbuf, wbuf_flags);
> +
> + /* Track the registration status for later use */
> +
On Mon, Feb 5, 2024 at 10:00 AM Hayato Kuroda (Fujitsu) wrote:
>
> >
> > Amit, this has been applied as of 861f86beea1c, and I got pinged about
> > the fact this triggers inconsistencies because we always set the LSN
> > of the write buffer (wbuf in _hash_freeovflpage) but
> >
Dear Michael, Amit,
>
> Amit, this has been applied as of 861f86beea1c, and I got pinged about
> the fact this triggers inconsistencies because we always set the LSN
> of the write buffer (wbuf in _hash_freeovflpage) but
> XLogRegisterBuffer() would *not* be called when the two following
>
On Fri, Feb 2, 2024 at 12:40 PM Michael Paquier wrote:
>
> Amit, this has been applied as of 861f86beea1c, and I got pinged about
> the fact this triggers inconsistencies because we always set the LSN
> of the write buffer (wbuf in _hash_freeovflpage) but
> XLogRegisterBuffer() would *not* be
On Fri, Dec 01, 2023 at 03:27:33PM +0530, Amit Kapila wrote:
> Pushed!
Amit, this has been applied as of 861f86beea1c, and I got pinged about
the fact this triggers inconsistencies because we always set the LSN
of the write buffer (wbuf in _hash_freeovflpage) but
XLogRegisterBuffer() would *not*
On Thu, Nov 30, 2023 at 4:30 PM Alexander Lakhin wrote:
>
> 30.11.2023 10:28, Hayato Kuroda (Fujitsu) wrote:
> > Again, good catch. Here is my analysis and fix patch.
> > I think it is sufficient to add an initialization for writebuf.
>
> I agree with the change.
>
Pushed!
--
With Regards,
Dear Amit,
> I think it is acceptable especially if it increases code
> coverage. Can you once check that?
PSA the screenshot. I ran PROVE_TESTS="t/027*" make check in
src/test/recovery and generated a coverage report.
Line 661 was not hit on HEAD, but the screenshot shows that it is now executed.
Hello Kuroda-san,
30.11.2023 10:28, Hayato Kuroda (Fujitsu) wrote:
Again, good catch. Here is my analysis and fix patch.
I think it is sufficient to add an initialization for writebuf.
I agree with the change. It aligns hash_xlog_squeeze_page() with
hash_xlog_move_page_contents() in regard to
On Thu, Nov 30, 2023 at 12:58 PM Hayato Kuroda (Fujitsu) wrote:
>
> >
> > Good catch, thank you for reporting! I will investigate more about it and
> > post my analysis.
> >
>
> Again, good catch. Here is my analysis and fix patch.
> I think it is sufficient to add an initialization for
Dear Alexander,
>
> Good catch, thank you for reporting! I will investigate more about it and
> post my analysis.
>
Again, good catch. Here is my analysis and fix patch.
I think it is sufficient to add an initialization for writebuf.
In the reported case, neither is_prim_bucket_same_wrt nor
Dear Alexander,
>
> I've discovered that that patch introduced a code path leading to an
> uninitialized memory access.
> With the following addition to hash_index.sql:
> -- Fill overflow pages by "dead" tuples.
> BEGIN;
> INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1,
Hello,
13.11.2023 20:21, Robert Haas wrote:
On Mon, Nov 13, 2023 at 12:47 AM Hayato Kuroda (Fujitsu) wrote:
Moved.
I see that this patch was committed, but I'm not very convinced that
the approach is correct. The comment says this:
I've discovered that that patch introduced a code path
On Mon, Nov 13, 2023 at 10:51 PM Robert Haas wrote:
>
> On Mon, Nov 13, 2023 at 12:47 AM Hayato Kuroda (Fujitsu) wrote:
> > Moved.
>
> I see that this patch was committed, but I'm not very convinced that
> the approach is correct. The comment says this:
>
> + /*
> +  * A
On Mon, Nov 13, 2023 at 12:47 AM Hayato Kuroda (Fujitsu) wrote:
> Moved.
I see that this patch was committed, but I'm not very convinced that
the approach is correct. The comment says this:
+ /*
+  * A write buffer needs to be registered even if no tuples are
+  *
Dear Amit,
Thanks for reviewing! PSA new version patch.
> > > Next we should add some test code. I will continue considering, but
> > > please post anything if you have an idea.
> >
> > And I did, PSA the patch. This patch adds two parts to hash_index.sql.
> >
> > In the first part, the
On Fri, Nov 10, 2023 at 9:17 AM Hayato Kuroda (Fujitsu) wrote:
>
> > Next we should add some test code. I will continue considering, but
> > please post anything if you have an idea.
>
> And I did, PSA the patch. This patch adds two parts to hash_index.sql.
>
> In the first part, the
Dear hackers,
> Next we should add some test code. I will continue considering, but please
> post anything if you have an idea.
And I did, PSA the patch. This patch adds two parts to hash_index.sql.
In the first part, the primary bucket page is filled with live tuples and some
overflow pages
Dear hackers,
> Dear Amit, Michael,
>
> Thanks for making the patch!
>
> > Why register wbuf at all if there are no tuples to add and it is not
> > the same as bucketbuf? Also, I think this won't be correct if prevbuf
> > and wrtbuf are the same and also we have no tuples to add to wbuf. I
> >
Dear Amit, Michael,
Thanks for making the patch!
> Why register wbuf at all if there are no tuples to add and it is not
> the same as bucketbuf? Also, I think this won't be correct if prevbuf
> and wrtbuf are the same and also we have no tuples to add to wbuf. I
> have attached a naive and crude
On Wed, Nov 1, 2023 at 12:54 PM Michael Paquier wrote:
>
> On Mon, Oct 30, 2023 at 03:54:39PM +0530, Amit Kapila wrote:
> > On Sat, Oct 28, 2023 at 4:30 PM Michael Paquier wrote:
> >> On Sat, Oct 28, 2023 at 03:45:13PM +0530, Amit Kapila wrote:
> >> > Yes, we need it to exclude any concurrent
On Mon, Oct 30, 2023 at 03:54:39PM +0530, Amit Kapila wrote:
> On Sat, Oct 28, 2023 at 4:30 PM Michael Paquier wrote:
>> On Sat, Oct 28, 2023 at 03:45:13PM +0530, Amit Kapila wrote:
>> > Yes, we need it to exclude any concurrent in-progress scans that could
>> > return incorrect tuples during
On Mon, Oct 30, 2023 at 11:15 PM Amit Kapila wrote:
> My understanding is the same. It is possible that my wording is not
> clear. Let me try to clarify again. Michael asked: "do we need the
> cleanup lock on the write buffer even if there are no tuples, and even
> if primary bucket and the write
On Mon, Oct 30, 2023 at 7:13 PM Robert Haas wrote:
>
> On Sat, Oct 28, 2023 at 6:15 AM Amit Kapila wrote:
> > > Hmm. So my question is: do we need the cleanup lock on the write
> > > buffer even if there are no tuples, and even if primary bucket and the
> > > write bucket are the same?
> >
> >
On Sat, Oct 28, 2023 at 6:15 AM Amit Kapila wrote:
> > Hmm. So my question is: do we need the cleanup lock on the write
> > buffer even if there are no tuples, and even if primary bucket and the
> > write bucket are the same?
>
> Yes, we need it to exclude any concurrent in-progress scans that
On Sat, Oct 28, 2023 at 4:30 PM Michael Paquier wrote:
>
> On Sat, Oct 28, 2023 at 03:45:13PM +0530, Amit Kapila wrote:
> > Yes, we need it to exclude any concurrent in-progress scans that could
> > return incorrect tuples during bucket squeeze operation.
>
> Thanks. So I assume that we should
On Sat, Oct 28, 2023 at 03:45:13PM +0530, Amit Kapila wrote:
> Yes, we need it to exclude any concurrent in-progress scans that could
> return incorrect tuples during bucket squeeze operation.
Thanks. So I assume that we should just set REGBUF_NO_CHANGE when the
primary and write buffers are the
On Fri, Oct 27, 2023 at 5:45 AM Michael Paquier wrote:
>
> On Thu, Oct 26, 2023 at 09:40:09AM -0400, Robert Haas wrote:
> > Because of this, it is possible for bucketbuf, prevbuf, and wbuf to be
> > the same (your first scenario) but the second scenario you mention
> > (nextbuf == wbuf) should
On Thu, Oct 26, 2023 at 09:40:09AM -0400, Robert Haas wrote:
> Because of this, it is possible for bucketbuf, prevbuf, and wbuf to be
> the same (your first scenario) but the second scenario you mention
> (nextbuf == wbuf) should be impossible.
Okay..
> It seems to me that maybe we shouldn't
On Thu, Oct 26, 2023 at 3:31 AM Michael Paquier wrote:
> Hmm. Looking at hash_xlog_squeeze_page(), it looks like wbuf is
> expected to always be registered. So, you're right here: it should be
> OK to be less aggressive with setting the page LSN on wbuf if ntups is
> 0, but there's more to it?
On Wed, Oct 25, 2023 at 10:06:23PM -0700, Jeff Davis wrote:
> Thank you for the report! I was able to reproduce and observe that the
> buffer is not marked dirty.
>
> The call (hashovfl.c:671):
>
> XLogRegisterBuffer(1, wbuf, REGBUF_STANDARD)
>
> is followed unconditionally by:
>
>
On Thu, 2023-10-26 at 07:00 +0300, Alexander Lakhin wrote:
> It looks like the buffer is not dirty in the problematic call.
Thank you for the report! I was able to reproduce and observe that the
buffer is not marked dirty.
The call (hashovfl.c:671):
XLogRegisterBuffer(1, wbuf,
Hello hackers,
24.10.2023 03:45, Jeff Davis wrote:
Committed.
I've encountered a case with a hash index where an assert added by the
commit fails:
CREATE TABLE t (i INT);
CREATE INDEX hi ON t USING HASH (i);
INSERT INTO t SELECT 1 FROM generate_series(1, 1000);
BEGIN;
INSERT INTO t SELECT 1
On Mon, 2023-10-23 at 10:14 -0400, Robert Haas wrote:
> I think that would be a bug. I think it would be OK to just change the
> page LSN and nothing else, but that requires calling MarkBufferDirty()
> at some point. If you only call PageSetLSN() and never
> MarkBufferDirty(), that sounds
On Fri, Oct 20, 2023 at 12:28 PM Jeff Davis wrote:
> I still have a question though: if a buffer is exclusive-locked,
> unmodified and clean, and then the caller registers it and later does
> PageSetLSN (just as if it were dirty), is that a definite bug?
>
> There are a couple callsites where the
On Thu, 2023-10-19 at 16:12 -0400, Robert Haas wrote:
> For what it's worth, though, I think it would be better to
> just make these cases exceptions to your Assert
OK, I'll probably commit something like v4 then.
I still have a question though: if a buffer is exclusive-locked,
unmodified and
On Tue, Oct 17, 2023 at 12:38 PM Jeff Davis wrote:
> I meant: are those cleanup operations frequent enough that dirtying
> those buffers in that case would matter?
Honestly, I'm not sure. Probably not? I mean, hashbucketcleanup()
seems to only be called during vacuum or a bucket split, and I
On Tue, 2023-10-17 at 08:41 -0400, Robert Haas wrote:
> Sorry, I'm not sure I understand the question. Are you asking whether
> dirtying buffers unnecessarily might be slower than not doing that?
I meant: are those cleanup operations frequent enough that dirtying
those buffers in that case would
On Mon, Oct 16, 2023 at 7:31 PM Jeff Davis wrote:
> Another option might be to just change the hash indexing code to follow
> the correct protocol, locking and calling MarkBufferDirty() in those 3
> call sites. Marking the buffer dirty is easy, but making sure that it's
> locked might require
On Wed, 2023-10-11 at 14:53 +0300, Heikki Linnakangas wrote:
>
> The comment suggests that you don't need to hold an exclusive lock when
> you call this, but there's an assertion that you do.
Reworded.
> Is it a new requirement that you must hold an exclusive lock on the
> buffer when you
On Wed, Oct 11, 2023 at 7:53 AM Heikki Linnakangas wrote:
> > + * Buffer must be pinned and exclusive-locked. (If caller does not hold
> > + * exclusive lock, then the result may be stale before it's returned.)
> The comment suggests that you don't need to hold an exclusive lock when
> you call
On 10/10/2023 22:57, Jeff Davis wrote:
On Thu, 2023-09-28 at 12:05 -0700, Jeff Davis wrote:
Also, I ran into some problems with GIN that might require a bit more
refactoring in ginPlaceToPage(). Perhaps we could consider
REGBUF_NO_CHANGE a general bypass of the Assert(BufferIsDirty()), and
use
I committed and backported 0001 (the GenericXLogFinish() fix but not
the Assert).
Strangely I didn't see the -committers email come through yet. If
anyone notices anything wrong, please let me know before the final v11
release.
On Thu, 2023-09-28 at 12:05 -0700, Jeff Davis wrote:
> Also, I ran
On Wed, 2023-09-27 at 18:57 +0300, Heikki Linnakangas wrote:
> We could define a new REGBUF_NO_CHANGE flag to signal that the buffer is
> registered just for locking purposes, and not actually modified by the
> WAL record.
Out of curiosity I also added a trial assert (not intending to
On 27/09/2023 18:47, Robert Haas wrote:
On Wed, Sep 27, 2023 at 11:03 AM Jeff Davis wrote:
So it looks like it's intentionally registering a clean buffer so that
it can take a cleanup lock for reasons other than cleaning (or even
modifying) the page. I would think that there's a better way of
On Wed, Sep 27, 2023 at 11:03 AM Jeff Davis wrote:
> So it looks like it's intentionally registering a clean buffer so that
> it can take a cleanup lock for reasons other than cleaning (or even
> modifying) the page. I would think that there's a better way of
> accomplishing that goal, so perhaps
On Wed, 2023-09-27 at 10:36 -0400, Robert Haas wrote:
> On Tue, Sep 26, 2023 at 9:36 PM Jeff Davis wrote:
> > That site is pretty trivial to fix, but there are also a couple places
> > in hash.c and hashovfl.c that are registering a clean page and it's not
> > clear to me exactly what's
On Tue, Sep 26, 2023 at 9:36 PM Jeff Davis wrote:
> That site is pretty trivial to fix, but there are also a couple places
> in hash.c and hashovfl.c that are registering a clean page and it's not
> clear to me exactly what's going on.
Huh, I wonder if that's just a bug. Do you know where
On Wed, 2023-09-27 at 00:14 +0300, Heikki Linnakangas wrote:
> Looks correct. You now loop through all the block IDs three times,
> however. I wonder if that is measurably slower, but even if it's not,
> was there a reason you wanted to move the XLogRegisterBuffer() calls to
> a separate loop?
On 26/09/2023 22:32, Jeff Davis wrote:
On Mon, 2023-09-25 at 13:04 +0300, Heikki Linnakangas wrote:
Yes, that's a problem.
Patch attached. I rearranged the code a bit to follow the expected
pattern of: write, mark dirty, WAL, set LSN. I think computing the
deltas could also be moved earlier,
On Mon, 2023-09-25 at 13:04 +0300, Heikki Linnakangas wrote:
> Yes, that's a problem.
Patch attached. I rearranged the code a bit to follow the expected
pattern of: write, mark dirty, WAL, set LSN. I think computing the
deltas could also be moved earlier, outside of the critical section,
but I'm
On 22/09/2023 23:52, Jeff Davis wrote:
src/backend/transam/README says:
...
4. Mark the shared buffer(s) as dirty with MarkBufferDirty(). (This
must happen before the WAL record is inserted; see notes in
SyncOneBuffer().)
...
But GenericXLogFinish() does this:
...
/*
src/backend/transam/README says:
...
4. Mark the shared buffer(s) as dirty with MarkBufferDirty(). (This
must happen before the WAL record is inserted; see notes in
SyncOneBuffer().)
...
But GenericXLogFinish() does this:
...
/* Insert xlog record */
lsn =