On Fri, Jul 17, 2015 at 6:05 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Peter Geoghegan <p...@heroku.com> writes:
> > I've heard that clock_gettime() with CLOCK_REALTIME_COARSE, or with
> > CLOCK_MONOTONIC_COARSE can have significantly lower overhead than
> > gettimeofday().
>
> It can, but it also has *much* lower precision, typically 1ms or so.
>
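
For comparison, a quick sketch along the same lines for Linux could probe both the resolution (clock_getres()) and the achievable call rate of the plain and COARSE monotonic clocks, roughly like this (CLOCK_MONOTONIC_COARSE is Linux-specific, hence the #ifdef; older glibc may also need -lrt):

#include <stdio.h>
#include <time.h>

/* Print the reported resolution of a clock and how many
   clock_gettime() calls on it complete in one second. */
static void probe_clock(const char *name, clockid_t clk)
{
    struct timespec res, start, cur;
    long count = 0;
    long long elapsed_ns;

    clock_getres(clk, &res);

    clock_gettime(clk, &start);
    do
    {
        clock_gettime(clk, &cur);
        count++;
        elapsed_ns = (cur.tv_sec - start.tv_sec) * 1000000000LL
                   + (cur.tv_nsec - start.tv_nsec);
    } while (elapsed_ns < 1000000000LL);

    printf("%-24s resolution %lld ns, %ld calls/s\n",
           name,
           (long long) res.tv_sec * 1000000000LL + res.tv_nsec,
           count);
}

int main(void)
{
    probe_clock("CLOCK_MONOTONIC", CLOCK_MONOTONIC);
#ifdef CLOCK_MONOTONIC_COARSE
    probe_clock("CLOCK_MONOTONIC_COARSE", CLOCK_MONOTONIC_COARSE);
#endif
    return 0;
}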

I've written a simple benchmark of QueryPerformanceCounter() for Windows. The
source code is as follows.

#include <stdio.h>
#include <windows.h>
#include <Winbase.h>

int main(int argc, char* argv[])
{
    LARGE_INTEGER start, freq, current;
    long count = 0;

    /* freq is the number of performance-counter ticks per second */
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    current = start;

    /* count how many QueryPerformanceCounter() calls complete in one second */
    while (current.QuadPart < start.QuadPart + freq.QuadPart)
    {
        QueryPerformanceCounter(&current);
        count++;
    }

    printf("QueryPerformanceCounter() per second:  %ld\n", count);
    return 0;
}

On my virtual machine it does 1532746 QueryPerformanceCounter() calls per
second. In contrast, my MacBook can natively do 26260236 gettimeofday() calls
per second.
So the performance of PostgreSQL's instr_time.h can vary by more than an order
of magnitude. It's also possible that we can find systems where time
measurements are even much slower.
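
A minimal gettimeofday() counterpart of the loop above, for anyone who wants to reproduce the comparison, could look like this:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, current;
    long count = 0;

    /* count how many gettimeofday() calls complete in one second,
       mirroring the QueryPerformanceCounter() loop above */
    gettimeofday(&start, NULL);
    do
    {
        gettimeofday(&current, NULL);
        count++;
    } while (current.tv_sec < start.tv_sec + 1 ||
             (current.tv_sec == start.tv_sec + 1 &&
              current.tv_usec < start.tv_usec));

    printf("gettimeofday() per second: %ld\n", count);
    return 0;
}
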
In general, there could be systems where accurate measurement of time intervals
is impossible or slow. That means we should provide some different solution for
them, like sampling. But does that mean we should force the majority of systems
to use sampling, which is both slower and less accurate for them? Could we end
up with different options for the user?
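
To make "sampling" concrete (just a sketch of the idea, not a concrete proposal): read the clock only around every N-th iteration and extrapolate, for example:

#include <stdio.h>
#include <sys/time.h>

#define SAMPLE_EVERY 1024

int main(void)
{
    struct timeval t0, t1;
    long long sampled_usec = 0;
    long iterations = 1000000;
    long i;

    for (i = 0; i < iterations; i++)
    {
        int sampled = (i % SAMPLE_EVERY == 0);

        if (sampled)
            gettimeofday(&t0, NULL);

        /* ... the work being measured would go here ... */

        if (sampled)
        {
            gettimeofday(&t1, NULL);
            sampled_usec += (t1.tv_sec - t0.tv_sec) * 1000000LL
                          + (t1.tv_usec - t0.tv_usec);
        }
    }

    /* scale the sampled time up to an estimate of the total */
    printf("estimated total: %lld us\n", sampled_usec * SAMPLE_EVERY);
    return 0;
}

This trades accuracy for roughly a 1/SAMPLE_EVERY reduction in the number of clock calls.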

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
