On Mon, Aug 8, 2022 at 12:27 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
> BTW, that commit really should have updated the explanation at the top of
> instr_time.h:
>
>  * This file provides an abstraction layer to hide portability issues in
>  * interval timing.  On Unix we use clock_gettime() if available, else
>  * gettimeofday().  On Windows, gettimeofday() gives a low-precision result
>  * so we must use QueryPerformanceCounter() instead.  These macros also give
>  * some breathing room to use other high-precision-timing APIs.
>
> Updating the second sentence is easy enough, but as for the third,
> I wonder if it's still true in view of 24c3ce8f1.  Should we revisit
> whether to use gettimeofday vs. QueryPerformanceCounter?  At the very
> least I suspect it's no longer about "low precision", but about which
> API is faster.
Yeah, that's not true anymore, and QueryPerformanceCounter() is faster
than GetSystemTimePreciseAsFileTime()[1], but there doesn't really seem
to be any point in mentioning that or gettimeofday() at all here.  I
propose to cut it down to just:

  * This file provides an abstraction layer to hide portability issues in
- * interval timing.  On Unix we use clock_gettime() if available, else
- * gettimeofday().  On Windows, gettimeofday() gives a low-precision result
- * so we must use QueryPerformanceCounter() instead.  These macros also give
- * some breathing room to use other high-precision-timing APIs.
+ * interval timing.  On Unix we use clock_gettime(), and on Windows we use
+ * QueryPerformanceCounter().  These macros also give some breathing room to
+ * use other high-precision-timing APIs.

FWIW I expect this stuff to get whacked around some more for v16[2].

[1] https://devblogs.microsoft.com/oldnewthing/20170921-00/?p=97057
[2] https://commitfest.postgresql.org/39/3751/