On 9/10/21, 10:12 AM, "Robert Haas" <robertmh...@gmail.com> wrote:
> If on the other hand you imagine a system that's not very busy, say 1
> WAL file being archived every 10 seconds, then using a batch size of
> 30 would very significantly delay removal of old files. However, on
> this system, batching probably isn't really needed. The rate of WAL
> file generation is low enough that if you pay the startup cost of your
> archive_command for every file, you're probably still doing just fine.
>
> Probably, any kind of parallelism or batching needs to take this kind
> of time-based thinking into account. For batching, the rate at which
> files are generated should affect the batch size. For parallelism, it
> should affect the number of processes used.

I was thinking that archive_batch_size would be the maximum batch
size.  If the archiver only finds a single file to archive, that's all
it'd send to the archive command.  If it finds more, it'd send up to
archive_batch_size files to the command.
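To make the semantics concrete, here's a minimal sketch of that maximum-batch behavior (the function name and file list are illustrative, not actual archiver code):

```python
def next_batch(pending_wal_files, archive_batch_size):
    """Return up to archive_batch_size files from the pending list.

    If only one file is ready, the batch is just that file; the
    archiver never waits around to fill a full batch.  (Hypothetical
    sketch of the proposed behavior, not PostgreSQL source code.)
    """
    return pending_wal_files[:archive_batch_size]

# A busy server might hand 30 files to the archive command at once:
busy = [f"0000000100000001000000{n:02X}" for n in range(50)]
print(len(next_batch(busy, 30)))        # 30

# A quiet server sends whatever is ready, possibly a single file:
quiet = ["000000010000000100000001"]
print(len(next_batch(quiet, 30)))       # 1
```

So on the low-traffic system Robert describes, old files would still be handed off (and removed) promptly, one at a time, while a busy system would naturally fill larger batches.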

Nathan
