On Thu, Feb 26, 2026 at 1:07 PM Ajin Cherian <[email protected]> wrote:
>
> On Tue, Feb 24, 2026 at 5:17 PM Amit Kapila <[email protected]> wrote:
> >
> > Can we find some cheap
> > way to detect if sequencesync worker is present or not? Can you think
> > some other way to not incur the cost of traversing the worker array
> > and also detect sequence worker exit without much delay?
> >
>
> Added this.
>
> > Also, shouldn't we need to invoke AcceptInvalidationMessages() as we
> > are doing in apply worker when not in a remote transaction? I think it
> > will be required to get local_sequence definition changes, if any.
>
> Changed.
>
> Thanks Hou-san for helping me with these changes.
> I also did some performance testing on HEAD to see how long REFRESH
> SEQUENCES takes for a large number of sequences.
> I ran these on a 2× Intel Xeon E5-2699 v4 (22 cores each, 44 cores
> total / 88 threads) with 512 GB RAM. I didn't see much value in
> differentiating between cases where half the sequences differed and
> cases where all of them differed: REFRESH SEQUENCES updates all
> sequences after changing the state of all of them to INIT, so it
> doesn't matter whether they drifted or not.
>
> On HEAD:
> time to sync 10000 sequences: 1.080s (1080ms)
> time to sync 100000 sequences: 12.069s (12069ms)
> time to sync 1000000 sequences: 139.414s (139414ms)
>
> testing script attached (pass in the number of sequences as a run time
> parameter).

Hi Ajin,
Thanks for sharing the performance results. I ran the same tests using
your scripts on a different machine with the configuration:
 - Chip: Apple M4 Pro, 14 CPU cores
 - RAM: 24 GB
 - Postgres built from HEAD at commit 77c7a17a6e5

For these tests, I used shared_buffers = 4GB. The time taken for 1M
sequences increased significantly compared to your results:
  time to sync 10000 sequences: 0.994s (994ms)
  time to sync 100000 sequences: 11.032s (11032ms)
  time to sync 1000000 sequences: 426.850s (426850ms)

I also tested with shared_buffers = 8GB, and the time for 1M sequences
was 441.794s (441794 ms).
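
For readers without access to the attachment, a minimal sketch of the
sequence-creation step of such a benchmark might look like the following
(object names are illustrative, and the REFRESH command in the comment
reflects the patch under discussion, not released syntax):

```shell
#!/bin/sh
# Generate CREATE SEQUENCE statements for N sequences (default 10000).
# N is passed as the first run-time parameter, as in the attached script.
N="${1:-10000}"

# Emit one CREATE SEQUENCE statement per line into a SQL file.
seq 1 "$N" | sed 's/^/CREATE SEQUENCE seq_/; s/$/;/' > create_seqs.sql

# Load on the publisher, e.g.:
#   psql -d pubdb -f create_seqs.sql
# Then time the sync on the subscriber (syntax per the patch series):
#   \timing on
#   ALTER SUBSCRIPTION sub REFRESH SEQUENCES;
```

The generated file can then be fed to psql once the publication and
subscription are set up; only the REFRESH step itself should be timed.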

--
Thanks,
Nisha
