On 9/27/21, 11:06 AM, "Bossart, Nathan" wrote:
> On 9/24/21, 9:29 AM, "Robert Haas" wrote:
>> So what I am inclined to do is commit
>> v1-0001-Improve-performance-of-pgarch_readyXlog-with-many.patch.
>> However, v6-0001-Do-fewer-directory-scans-of-arch
On 10/4/21, 8:19 PM, "Stephen Frost" wrote:
> It's also been discussed, at least around the water cooler (as it were
> in pandemic times- aka our internal slack channels..) that the existing
> archive command might be reimplemented as an extension using these. Not
> sure if that's really necessar
On 10/4/21, 7:08 PM, "Stephen Frost" wrote:
> I really think we need to stop addressing roles explicitly as
> 'superuser' vs. 'non-superuser', because a non-superuser role can be
> GRANT'd a superuser role, which makes that distinction really not
> sensible. This has continued to be a problem and
On 10/4/21, 7:21 PM, "Stephen Frost" wrote:
> This is something we've contemplated quite a bit and the last thing
> that I'd want to have is a requirement to configure a whole bunch of
> additional parameters to enable this. Why do we need to have so many
> new GUCs? I would have thought we'd
On 9/27/21, 11:16 AM, "Mark Dilger" wrote:
> On Sep 21, 2021, at 12:58 PM, Andrew Dunstan wrote:
>> I do like the basic thrust of reducing the power of CREATEROLE. There's
>> an old legal maxim I learned in my distant youth that says "nemo dat
>> quod non habet" - Nobody can give something they d
On 9/30/21, 3:44 PM, "Peter Geoghegan" wrote:
> On Wed, Sep 29, 2021 at 3:32 PM Peter Geoghegan wrote:
>> I decided to run a simple experiment, to give us some idea of what
>> benefits my proposal gives users: I ran "make installcheck" on a newly
>> initdb'd database (master branch), and then wit
On 10/1/21, 12:08 PM, "Andrey Borodin" wrote:
> On Sep 30, 2021, at 09:47, Bossart, Nathan wrote:
>> I tested the sample archive_command in the docs against the sample
>> archive_library implementation in the patch, and I saw about a 50%
>> speedup. (The arc
On 9/29/21, 9:49 PM, "Bossart, Nathan" wrote:
> I'm sure there are other ways to approach this, but I thought I'd give
> it a try to see what was possible and to get the conversation started.
BTW I am also considering the background worker approach that was
menti
On 9/22/21, 10:03 PM, "Amul Sul" wrote:
> On Tue, Sep 21, 2021 at 9:43 PM Bossart, Nathan wrote:
>> Shouldn't we update the time in update_controlfile()?
>
> If you see the callers of update_controlfile() except for
> RewriteControlFile() no one else updates the
On 9/20/21, 10:07 PM, "Amul Sul" wrote:
> On Tue, Sep 21, 2021 at 4:44 AM Bossart, Nathan wrote:
>> On 9/19/21, 11:07 PM, "Amul Sul" wrote:
>> > I have one additional concern about the way we update the control
>> > file, at every place where doi
On 9/20/21, 7:29 PM, "Michael Paquier" wrote:
> On Tue, Sep 21, 2021 at 12:08:22AM +0000, Bossart, Nathan wrote:
>> Should we also initialize the shared memory GUCs in bootstrap and
>> single-user mode? I think I missed this in bd17880.
>
> Why would we n
On 9/20/21, 6:48 PM, "Michael Paquier" wrote:
> Thanks. I have gone through the last patch this morning, did some
> tests on all the platforms I have at hand (including Linux) and
> finished by applying it after doing some small tweaks. First, I have
> finished by extending GetHugePageSize() to
Should we also initialize the shared memory GUCs in bootstrap and
single-user mode? I think I missed this in bd17880.
Nathan
diff --git a/src/backend/bootstrap/bootstrap.c
b/src/backend/bootstrap/bootstrap.c
index 48615c0ebc..4c4cf44871 100644
--- a/src/backend/bootstrap/bootstrap.c
+++ b/src/b
On 9/19/21, 11:07 PM, "Amul Sul" wrote:
> +1, since skipping ControlFileLock for the DBState update is not the
> right thing, let's have two different functions as per your suggestion
> -- did the same in the attached version, thanks.
I see that the attached patch reorders the call to UpdateContr
On 9/17/21, 1:32 PM, "Alvaro Herrera" wrote:
> On 2021-Sep-17, Bossart, Nathan wrote:
>
>> I gave the patch a read-through. I'm wondering if the
>> XLOG_OVERWRITE_CONTRECORD records are actually necessary. IIUC we
>> will set XLP_FIRST_IS_ABORTED_PARTIAL o
On 9/17/21, 10:35 AM, "Alvaro Herrera" wrote:
> On 2021-Sep-17, Bossart, Nathan wrote:
>
>> On 9/17/21, 9:37 AM, "Alvaro Herrera" wrote:
>
>> > If you have some time to try your reproducers with this new proposed
>> > fix, I would appreciate
On 9/17/21, 9:37 AM, "Alvaro Herrera" wrote:
> OK, this version is much more palatable, because here we verify that the
> OVERWRITE_CONTRECORD we replay matches the record that was lost. Also,
> I wrote a test script that creates such a broken record (by the simple
> expedient of deleting the WAL
On 9/15/21, 4:47 AM, "Amul Sul" wrote:
> On Wed, Sep 15, 2021 at 12:52 AM Bossart, Nathan wrote:
>> It looks like ebdf5bf intentionally made sure that we hold
>> ControlFileLock while updating SharedRecoveryInProgress (now
>> SharedRecoveryState after 4e87c48).
On 9/14/21, 8:06 PM, "Michael Paquier" wrote:
> Attached is a refreshed patch (commit message is the same as v9 for
> now), with some minor tweaks and the tests.
>
> Thoughts?
LGTM
+This can be used on a running server for most parameters. However,
+the server must be shut down
On 9/13/21, 11:06 PM, "Amul Sul" wrote:
> The patch is straightforward but the only concern is that in
> StartupXLOG(), SharedRecoveryState now gets updated only with spin
> lock; earlier it also had ControlFileLock in addition to that. AFAICU,
> I don't see any problem there, since until the star
On 9/14/21, 9:18 AM, "Bossart, Nathan" wrote:
> This is an interesting idea, but the "else" block here seems prone to
> race conditions. I think we'd have to hold arch_lck to prevent that.
> But as I mentioned above, if we are okay with depending on the
> fal
On 9/13/21, 5:49 PM, "Kyotaro Horiguchi" wrote:
> At Tue, 14 Sep 2021 00:30:22 +0000, "Bossart, Nathan" wrote in
>> On 9/13/21, 1:25 PM, "Tom Lane" wrote:
>> > Seems like "huge_pages_needed_for_shared_memory"
On 9/14/21, 7:23 AM, "Dipesh Pandit" wrote:
> I agree that when we are creating a .ready file we should compare
> the current .ready file with the last .ready file to check if this file is
> created out of order. We can store the state of the last .ready file
> in shared memory and compare it w
On 9/13/21, 1:25 PM, "Tom Lane" wrote:
> Seems like "huge_pages_needed_for_shared_memory" would be sufficient.
I think we are down to either shared_memory_size_in_huge_pages or
huge_pages_needed_for_shared_memory. Robert's argument against
huge_pages_needed_for_shared_memory was that it might so
On 9/13/21, 1:14 PM, "Robert Haas" wrote:
> On Thu, Sep 2, 2021 at 5:52 PM Bossart, Nathan wrote:
>> Let's say step 1 looks for WAL file 10, but 10.ready doesn't exist
>> yet. The following directory scan ends up finding 11.ready. Just
>> before we u
On 9/13/21, 8:59 AM, "Robert Haas" wrote:
> On Fri, Sep 10, 2021 at 7:43 PM Bossart, Nathan wrote:
>> I think that's an improvement. The only other idea I have at the
>> moment is num_huge_pages_required_for_shared_memory.
>
> Hmm, that to me sounds like m
On 9/11/21, 1:31 AM, "Bharath Rupireddy" wrote:
> We have two static check_permissions functions (one in slotfuncs.c
> another in logicalfuncs.c) with the same name and same code for
> checking the privileges for using replication slots. Why can't we have
> a single function CheckReplicationSlotP
On 9/10/21, 1:02 PM, "Robert Haas" wrote:
> On Thu, Sep 9, 2021 at 5:53 PM Bossart, Nathan wrote:
>> I think it might be clearer to
>> somehow indicate that the value is essentially the size of the main
>> shared memory area in terms of the huge page size, but
On 9/10/21, 10:12 AM, "Robert Haas" wrote:
> If on the other hand you imagine a system that's not very busy, say 1
> WAL file being archived every 10 seconds, then using a batch size of
> 30 would very significantly delay removal of old files. However, on
> this system, batching probably isn't rea
On 9/10/21, 8:22 AM, "Robert Haas" wrote:
> On Fri, Sep 10, 2021 at 10:19 AM Julien Rouhaud wrote:
>> Those approaches don't really seems mutually exclusive? In both case
>> you will need to internally track the status of each WAL file and
>> handle non contiguous file sequences. In case of par
On 9/9/21, 7:03 PM, "Michael Paquier" wrote:
> As far as the behavior is documented, I'd be fine with the approach to
> keep the code in its simplest shape. I agree that the message is
> confusing, still it is not wrong either as we try to query a run-time
> parameter, but we need the lock.
That
On 9/8/21, 6:49 AM, "Bharath Rupireddy" wrote:
> How about we create a new extension, called pg_walinspect (synonymous
> to pageinspect extension) with a bunch of SQL-callable functions that
> get the raw WAL records or stats out of a running postgres database
> instance in a more structured way
On 9/7/21, 11:38 PM, "Julien Rouhaud" wrote:
> On Wed, Sep 8, 2021 at 6:36 AM Bossart, Nathan wrote:
>>
>> I'd like to gauge interest in parallelizing the archiver process.
>> [...]
>> Based on previous threads I've seen, I believe many in the com
On 9/8/21, 10:49 AM, "Dipesh Pandit" wrote:
> Updated log level to DEBUG3 and rebased the patch. PFA patch.
Thanks for the new patch.
+ * by checking the availability of next WAL file. "xlogState" specifies the
+ * segment number and timeline ID corresponding to the next WAL file.
"xlogState" p
On 9/7/21, 8:50 PM, "Michael Paquier" wrote:
> Switched the variable name to shared_memory_size_mb for easier
> grepping, moved it to a more correct location with the other read-only
> GUCS, and applied 0002. Well, 0001 here.
Thanks! And thanks for cleaning up the small mistake in aa37a43.
> +
Hi hackers,
I'd like to gauge interest in parallelizing the archiver process.
From a quick scan, I was only able to find one recent thread [0] that
brought up this topic, and ISTM the conventional wisdom is to use a
backup utility like pgBackRest that does things in parallel behind-
the-scenes. M
On 9/4/21, 10:26 AM, "Alvaro Herrera" wrote:
> Attached are the same patches as last night, except I added a test for
> XLOG_DEBUG where pertinent. (The elog(PANIC) is not made conditional on
> that, since it's a cross-check rather than informative.) Also fixed the
> silly pg_rewind mistake I ma
On 9/7/21, 11:31 AM, "Robert Haas" wrote:
> I guess we still have to pick one or the other, but I don't really
> know how to do that, since both methods seem to be relatively fine,
> and the scenarios where one is better than the other all feel a little
> bit contrived. I guess if no clear consens
On 9/7/21, 10:54 AM, "Robert Haas" wrote:
> I guess what I don't understand about the multiple-files-per-dirctory
> scan implementation is what happens when something happens that would
> require the keep-trying-the-next-file approach to perform a forced
> scan. It seems to me that you still need
On 9/7/21, 1:42 AM, "Kyotaro Horiguchi" wrote:
> I was thinking that the multiple-files approach would work efficiently
> but the patch still runs directory scans every 64 files. As
> Robert mentioned it is still O(N^2). I'm not sure the reason for the
> limit, but if it were to lower memory c
On 9/6/21, 11:24 PM, "Kyotaro Horiguchi" wrote:
> At Fri, 3 Sep 2021 17:46:05 +0000, "Bossart, Nathan" wrote in
>> On 9/2/21, 10:12 PM, "Kyotaro Horiguchi" wrote:
>> > I noticed that postgres -C shared_memory_size showed 137 (= 144703488)
On 9/5/21, 7:28 PM, "Michael Paquier" wrote:
> On Fri, Sep 03, 2021 at 05:36:43PM +0000, Bossart, Nathan wrote:
>> On 9/2/21, 6:46 PM, "Michael Paquier" wrote:
>>> 0001, that refactors the calculation of the shmem size into a
>>> different routine,
On 9/5/21, 12:14 PM, "Tom Lane" wrote:
> "Bossart, Nathan" writes:
>> Is there a specific style you have in mind for this change, or is your
>> point that the CASE statement should only be reserved for when
>> is_default_acl is true?
>
> I'd be
On 9/5/21, 10:08 AM, "Tom Lane" wrote:
> "Bossart, Nathan" writes:
>> I've attached a quick hack that seems to fix this by adjusting the
>> pg_dump query to use NULL instead of acldefault() for non-global
>> entries. I'm posting this early in
On 9/2/21, 10:12 PM, "Kyotaro Horiguchi" wrote:
> By the way I noticed that postgres -C huge_page_size shows 0, which I
> think should have the number used for the calculation if we show
> huge_page_required.
I would agree with this if huge_page_size was a runtime-computed GUC,
but since it's int
On 9/2/21, 11:30 AM, "Magnus Hagander" wrote:
> I had a customer point out to me that we're inconsistent in how we
> spell read-only. Turns out we're not as inconsistent as I initially
> thought :), but that they did manage to spot the one actual log
> message we have that writes it differently th
On 9/2/21, 6:22 AM, "Dipesh Pandit" wrote:
> I agree that multiple-files-per-readdir is cleaner and has the resilience
> of the current implementation. However, I have a few suggestions on the
> keep-trying-the-next-file approach patch shared in the previous thread.
Which approach do you think we shou
On 9/2/21, 12:54 AM, "Michael Paquier" wrote:
> Thanks. Anyway, we don't really need huge_pages_required on Windows,
> do we? The following docs of Windows tell what do to when using large
> pages:
> https://docs.microsoft.com/en-us/windows/win32/memory/large-page-support
>
> The backend code do
On 9/1/21, 10:06 AM, "Andres Freund" wrote:
> On 2021-09-01 15:01:43 +0900, Fujii Masao wrote:
>> Thanks for clarifying that! Unless I misunderstand that, when recovery ends
>> at a partially-flushed segment-spanning record, it sets
>> XLP_FIRST_IS_ABORTED_PARTIAL in the next segment before starti
On 8/31/21, 10:21 AM, "Andres Freund" wrote:
> What would trigger the flushing? We don't write out partially filled pages
> unless
> a) we're explicitly flushing an LSN on the partial page (e.g. because a
>synchronous commit record resides on it)
> b) there's an async commit (i.e. commit with
On 8/31/21, 12:44 AM, "Andres Freund" wrote:
> If there's now a flush request including all of s3, we'll have the following
> sequence of notifies:
>
> NotifySegmentsReadyForArchive(s1)
> nothing happens, smaller than s1+10
>
> NotifySegmentsReadyForArchive(s2)
> earliestSegBoundary = s4
> earlies
On 8/25/21, 4:11 AM, "Dipesh Pandit" wrote:
> Please find attached patch v11.
Apologies for the delay. I still intend to review this.
Nathan
On 8/30/21, 3:40 PM, "Bossart, Nathan" wrote:
> On 8/30/21, 2:06 PM, "Andres Freund" wrote:
>> Although, the more I think about it, the more I am confused about the trailing
>> if (XLogArchivingActive())
>> NotifySegmentsRead
On 8/30/21, 2:06 PM, "Andres Freund" wrote:
> When were you thinking of doing the latch sets? Adding a latch set for every
> XLogWrite() wouldn't be cheap either. Both because syscalls under a lock
> aren't free and because waking up walsender even more often isn't free (we
> already have a few th
On 8/30/21, 12:52 PM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-28, Andres Freund wrote:
>
>> While rebasing the aio patchset ontop of HEAD I noticed that this commit
>> added
>> another atomic operation to XLogWrite() with archiving enabled. The WAL write
>> path is really quite hot, and at
On 8/27/21, 7:27 AM, "Daniel Gustafsson" wrote:
> I took another look at this today and pushed it after verifying with a
> pgindent
> run. Thanks!
Thank you!
Nathan
On 8/25/21, 5:40 PM, "Kyotaro Horiguchi" wrote:
> At Wed, 25 Aug 2021 18:18:59 +0000, "Bossart, Nathan" wrote in
>> Let's say we have the following situation (F = flush, E = earliest
>> registered boundary, and L = latest registered boundary), and le
On 8/25/21, 11:01 AM, "Fujii Masao" wrote:
> If LogwrtResult.Flush >= EndPos, which means that another process already
> has flushed the record concurrently and updated XLogCtl->LogwrtResult.Flush.
> This situation also means that another process called
> NotifySegmentsReadyForArchive(LogwrtR
On 8/25/21, 5:33 AM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-24, Bossart, Nathan wrote:
>
>> If moving RegisterSegmentBoundary() is sufficient to prevent the flush
>> pointer from advancing before we register the boundary, I bet we could
>> also remove
On 8/24/21, 4:03 PM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-24, Bossart, Nathan wrote:
>
>> I wonder if we need to move the call to RegisterSegmentBoundary() to
>> somewhere before WALInsertLockRelease() for this to work as intended.
>> Right now, boun
On 8/24/21, 12:09 PM, "Robert Haas" wrote:
> I can't quite decide whether the problems we're worrying about here
> are real issues or just kind of hypothetical. I mean, today, it seems
> to be possible that we fail to mark some file ready for archiving,
> emit a log message, and then a huge amount
On 8/23/21, 3:53 PM, "Alvaro Herrera" wrote:
> As mentioned in the course of thread [1], we're missing a fix for
> streaming replication to avoid sending records that the primary hasn't
> fully flushed yet. This patch is a first attempt at fixing that problem
> by retreating the LSN reported as F
On 8/24/21, 5:31 AM, "Dipesh Pandit" wrote:
>> > I've been looking at the v9 patch with fresh eyes, and I still think
>> > we should be able to force the directory scan as needed in
>> > XLogArchiveNotify(). Unless the file to archive is a regular WAL file
>> > that is > our stored location in ar
On 8/23/21, 10:49 AM, "Robert Haas" wrote:
> On Mon, Aug 23, 2021 at 11:50 AM Bossart, Nathan wrote:
>> To handle a "cheating" archive command, I'd probably need to add a
>> stat() for every time pgarch_readyXLog() returned something from
>> arc
On 8/23/21, 12:55 PM, "alvhe...@alvh.no-ip.org" wrote:
> Thanks. I've pushed this now to all live branches.
Thank you! I appreciate the thorough reviews. Should we make a new
thread for the streaming replication fix?
Nathan
On 8/23/21, 10:31 AM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-23, alvhe...@alvh.no-ip.org wrote:
>
>> The only way .ready files are created is that XLogNotifyWrite() is
>> called. For regular WAL files during regular operation, that only
>> happens in XLogNotifyWriteSeg(). That, in turn,
On 8/23/21, 9:33 AM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-23, Bossart, Nathan wrote:
>
>> Hm. My expectation would be that if there are no registrations, we
>> cannot create .ready files for the flushed segments. The scenario
>> where I can see th
On 8/23/21, 8:50 AM, "alvhe...@alvh.no-ip.org" wrote:
> ... while reading the resulting code after backpatching to all branches,
> I realized that if there are no registrations whatsoever, then archiving
> won't do anything, which surely is the wrong thing to do. The correct
> behavior should be
On 8/23/21, 6:42 AM, "Robert Haas" wrote:
> On Sun, Aug 22, 2021 at 10:31 PM Bossart, Nathan wrote:
>> I ran this again on a bigger machine with 200K WAL files pending
>> archive. The v9 patch took ~5.5 minutes, the patch I sent took ~8
>> minutes, and the e
On 8/21/21, 9:29 PM, "Bossart, Nathan" wrote:
> I was curious about this, so I wrote a patch (attached) to store
> multiple files per directory scan and tested it against the latest
> patch in this thread (v9) [0]. Specifically, I set archive_command to
> 'false',
On 8/20/21, 4:52 PM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-20, Bossart, Nathan wrote:
>
>> I was looking at moving the function calls out of the spinlock region.
>> I don't think the functions are doing anything too expensive, and they
>> help
On 8/20/21, 4:00 PM, "alvhe...@alvh.no-ip.org" wrote:
> Attached is v14 which uses a separate spinlock.
This looks good to me.
I was looking at moving the function calls out of the spinlock region.
I don't think the functions are doing anything too expensive, and they
help clean up NotifySegment
On 8/20/21, 10:08 AM, "Robert Haas" wrote:
> On Fri, Aug 20, 2021 at 12:36 PM Bossart, Nathan wrote:
>> If a record spans multiple segments, we only register one segment
>> boundary. For example, if I insert a record that starts at segment
>> number 1 and stops
On 8/20/21, 8:29 AM, "Robert Haas" wrote:
> On Fri, Aug 20, 2021 at 10:50 AM alvhe...@alvh.no-ip.org
> wrote:
>> 1. We use a hash table in shared memory. That's great. The part that's
>>not so great is that in both places where we read items from it, we
>>have to iterate in some way. T
On 8/19/21, 5:42 AM, "Dipesh Pandit" wrote:
>> Should we have XLogArchiveNotify(), writeTimeLineHistory(), and
>> writeTimeLineHistoryFile() enable the directory scan instead? Else,
>> we have to exhaustively cover all such code paths, which may be
>> difficult to maintain. Another reason I am b
+ if (PgArch->dirScan)
+     PgArch->dirScan = false;
Why do we need to check that the flag is set before we reset it? I
think we could just always reset it since we are about to do a
directory scan anyway.
On 8/18/21, 7:25 AM, "Robert Haas" wrote:
> On Tue, Aug 17, 2021 at 4:19 PM Bo
On 8/18/21, 10:48 AM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-18, alvhe...@alvh.no-ip.org wrote:
>
>> I realize this means there's a contradiction with my previous argument,
>> in that synchronous transaction commit calls XLogWrite at some point, so
>> we *are* putting the client-connected
On 8/18/21, 10:06 AM, "alvhe...@alvh.no-ip.org" wrote:
> So that comment suggests that we should give the responsibility to bgwriter.
> This seems good enough to me. I suppose if bgwriter has a long run of
> buffers to write it could take a little bit of time (a few hundred
> milliseconds?) but I
On 8/17/21, 2:13 PM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-17, Bossart, Nathan wrote:
>
>> The main reason for registering the boundaries in XLogInsertRecord()
>> is that it has the required information about the WAL record
>> boundaries. I do n
On 8/17/21, 12:11 PM, "Bossart, Nathan" wrote:
> On 8/17/21, 11:28 AM, "Robert Haas" wrote:
>> I can't actually see that there's any kind of hard synchronization
>> requirement here at all. What we're trying to do is guarantee that if
>>
On 8/17/21, 11:28 AM, "Robert Haas" wrote:
> I can't actually see that there's any kind of hard synchronization
> requirement here at all. What we're trying to do is guarantee that if
> the timeline changes, we'll pick up the timeline history for the new
> timeline next, and that if files are arch
On 8/17/21, 10:44 AM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-17, Bossart, Nathan wrote:
>> I've been thinking about the next steps for this one, too. ISTM we'll
>> need to basically assume that the flush pointer jumps along record
>> boundaries
On 8/17/21, 5:53 AM, "Dipesh Pandit" wrote:
>> I personally don't think it's necessary to use an atomic here. A
>> spinlock or LWLock would probably work just fine, as contention seems
>> unlikely. If we use a lock, we also don't have to worry about memory
>> barriers.
>
> History file should be
On 5/30/21, 10:22 AM, "Tom Lane" wrote:
> We can do a lot better, by exploiting what we know about the usage
> patterns of invalidation requests. New requests are always added to
> the latest sublist, and the only management actions are (1) merge
> latest sublist into next-to-latest sublist, or (
On 8/15/21, 9:52 PM, "Bossart, Nathan" wrote:
> + * Perform a full directory scan to identify the next log segment. There
> + * may be one of the following scenarios which may require us to perform a
> + * full directory scan.
> ...
> + * - The
+ * This .ready file is created out of order, notify archiver to perform
+ * a full directory scan to archive corresponding WAL file.
+ */
+ StatusFilePath(archiveStatusPath, xlog, ".ready");
+ if (stat(archiveStatusPath, &stat_buf) == 0)
+ PgArchEnabl
On 8/9/21, 4:05 PM, "Zhihong Yu" wrote:
> -extern void CreateSharedMemoryAndSemaphores(void);
> +extern Size CreateSharedMemoryAndSemaphores(bool size_only);
>
> Should the parameter be enum / bitmask so that future addition would not
> change the method signature ?
I don't have a strong opinion
On 8/5/21, 6:26 PM, "Kyotaro Horiguchi" wrote:
> It works the current way always at the first iteration of
> pgarch_ArchiveCopyLoop() because in the last iteration of
> pgarch_ArchiveCopyLoop(), pgarch_readyXlog() erases the last
> anticipated segment. The shortcut works only when
> pgarch_Archive
On 8/4/21, 9:05 PM, "Kyotaro Horiguchi" wrote:
> By the way about the v3 patch,
>
> +#define InvalidXLogSegNo ((XLogSegNo) 0x)
>
> Like InvalidXLogRecPtr, the first valid segment number is 1 so we can
> use 0 as InvalidXLogSegNo.
It's been a while since I wrote this, but if
On 8/2/21, 7:37 PM, "Kyotaro Horiguchi" wrote:
> I'm afraid that using hash to store boundary info is too much. Isn't a
> ring buffer enough for this use? In that case it is enough to
> remember only the end LSN of the segment spanning records. It is easy
> to expand the buffer if needed.
I agr
+ /*
+  * Perform a full directory scan to identify the next log segment. There
+  * may be one of the following scenarios which may require us to perform a
+  * full directory scan.
+  *
+  * 1. This is the first cycle since archiver has started and there is no
On 8/2/21, 2:42 PM, "Alvaro Herrera" wrote:
> I think it's okay to make lastNotifiedSeg protected by just info_lck,
> and RecordBoundaryMap protected by just ArchNotifyLock. It's okay to
> acquire the spinlock inside the lwlock-protected area, as long as we
> make sure never to do the opposite.
On 8/2/21, 3:42 PM, "Andres Freund" wrote:
> I've wondered, independent of this thread, about not making MaxBackends
> externally visible, and requiring a function call to access it. It should be
> easier to find cases of misuse if we errored out when accessed at the wrong
> time. And we could use
On 8/2/21, 3:12 PM, "Andres Freund" wrote:
> I think this is overblown. We already size resources *after*
> shared_preload_libraries' _PG_init() runs, because that's the whole point of
> shared_preload_libraries. What's proposed in this thread is to *disallow*
> increasing resource usage in s_p_l
On 8/2/21, 1:37 PM, "Andres Freund" wrote:
> On 2021-08-02 16:06:15 -0400, Robert Haas wrote:
>> I think this is a good idea. Anyone object?
>
> I'm not so sure it's a good idea. I've seen several shared_preload_library
> using extensions that adjust some GUCs (e.g. max_prepared_transactions)
> be
On 7/31/21, 9:12 AM, "Alvaro Herrera" wrote:
> On 2021-Jul-31, Bossart, Nathan wrote:
>
>> On 7/30/21, 4:52 PM, "Alvaro Herrera" wrote:
>
>> > I noticed that XLogCtl->lastNotifiedSeg is protected by both the
>> > info_lck and ArchNotify
On 7/30/21, 4:52 PM, "Alvaro Herrera" wrote:
> I think that creating files out of order might be problematic. On the
> archiver side, pgarch_readyXlog() expects to return the oldest
> archivable file; but if we create a newer segment's .ready file first,
> it is possible that a directory scan wou
On 7/30/21, 3:23 PM, "Alvaro Herrera" wrote:
> That's great to hear. I'll give 0001 a look again.
Much appreciated.
Nathan
On 7/30/21, 11:34 AM, "Alvaro Herrera" wrote:
> Hmm ... I'm not sure we're prepared to backpatch this kind of change.
> It seems a bit too disruptive to how replay works. I think we
> should be focusing solely on patch 0001 to surgically fix the precise
> bug you see. Does patch 0002 exist
On 7/30/21, 2:22 AM, "Daniel Gustafsson" wrote:
LGTM. I took the liberty to rephrase the "and exit" part of the initdb help
output to match the other exiting options in there. Barring objections, I think
> this is ready.
LGTM. Thanks!
Nathan