On Fri, Feb 13, 2026 at 5:11 AM Lukas Fittl <[email protected]> wrote:
>
> On Thu, Feb 12, 2026 at 4:41 PM Andres Freund <[email protected]> wrote:
> > I wonder if pg_test_timing should have a small loop with a fixed count to
> > determine the timing without all the overhead the existing loop has...
>
> I agree that using a fixed count in pg_test_timing would be helpful to
> measure just the timing gathering itself, vs the translation into
> nanoseconds.
I haven't looked at the code here yet, but when using plain rdtsc on modern CPUs, much more of the overhead comes from the surrounding timing code simply being there than from the rdtsc instruction itself, and that overhead can vary by orders of magnitude depending on how complex the work being timed is.

I noticed this when I timed the (then-)new dead TID lookups during vacuum in PG 17 and saw significantly larger overhead per lookup when the lookups themselves were slower, i.e. a case where the lookups were done in random order (the index was created on a column filled with random()).

So while a tight loop of N million rdtsc calls will give you the lower limit, it is likely not very representative of the actual overhead.
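
To illustrate what I mean, here is a rough standalone sketch (nothing to do with pg_test_timing's actual code; the function names, buffer size, iteration count and build line are just picked for illustration). It compares back-to-back rdtsc calls against the same instrumentation wrapped around random, mostly cache-missing reads:

/*
 * Rough standalone sketch, not pg_test_timing code -- buffer size, names
 * and loop counts are made up for illustration.  Compares the best-case
 * cost of back-to-back rdtsc with the apparent per-call cost when the
 * same instrumentation wraps cache-missing random reads.
 * Build with something like: gcc -O2 rdtsc_sketch.c (x86-64, GCC/Clang).
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <x86intrin.h>          /* __rdtsc() */

#define LOOPS 1000000

static volatile uint64_t sink;  /* defeat dead-code elimination */

/* cheap xorshift64 so index generation stays out of the way */
static inline uint64_t
next_index(uint64_t *state)
{
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    return *state;
}

/* Lower bound: nothing but rdtsc in the loop body. */
static uint64_t
tight_loop(void)
{
    uint64_t    acc = 0;
    uint64_t    start = __rdtsc();

    for (int i = 0; i < LOOPS; i++)
        acc += __rdtsc();

    sink = acc;
    return (__rdtsc() - start) / LOOPS;
}

/* The workload: random reads over a buffer big enough to mostly miss cache. */
static uint64_t
workload(const uint64_t *arr, size_t len, int instrument)
{
    uint64_t    state = 88172645463325252ULL;
    uint64_t    acc = 0;
    uint64_t    start = __rdtsc();

    for (int i = 0; i < LOOPS; i++)
    {
        uint64_t    t0 = instrument ? __rdtsc() : 0;

        acc += arr[next_index(&state) % len];

        if (instrument)
            acc += __rdtsc() - t0;
    }

    sink = acc;
    return __rdtsc() - start;
}

int
main(void)
{
    size_t      len = 64 * 1024 * 1024 / sizeof(uint64_t);
    uint64_t   *arr = calloc(len, sizeof(uint64_t));

    if (arr == NULL)
        return 1;

    printf("tight loop:          ~%llu cycles per rdtsc\n",
           (unsigned long long) tight_loop());

    uint64_t    plain = workload(arr, len, 0);
    uint64_t    timed = workload(arr, len, 1);

    printf("overhead in context: ~%lld cycles per timed iteration\n",
           ((long long) timed - (long long) plain) / LOOPS);

    free(arr);
    return 0;
}

If the pattern I saw during the vacuum timing holds, the "in context" number should come out quite a bit higher than the tight-loop one, which is exactly the lower-limit vs. representative-overhead gap I mean.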
