Hi all. I'm running some tests with clock_gettime(), and I found that clock_gettime() is affected by hwclock: while hwclock runs, clock_gettime() slips forward by several milliseconds.
Normally, each line prints every 1 ms:

$ ./a.out -r CLOCK_MONOTONIC 130
Using delay=1 ms between loop.
Using clock=CLOCK_MONOTONIC.
Clock resolution sec=0 nsec=1
Initial time sec=1621884 nsec=285113956
[delay=1ms] Slip time: 0 s 32 ms    <--------- hwclock running
[delay=1ms] Slip time: 0 s 16 ms    <--------- hwclock running

From perf:

$ perf record -F 999 hwclock
# To display the perf.data header info, please use --header/--header-only options.
#
# Samples: 22  of event 'cpu-clock'
# Event count (approx.): 22022022
#
# Overhead  Command  Shared Object      Symbol
# ........  .......  .................  ...............................
#
    77.27%  hwclock  [kernel.kallsyms]  [k] native_read_tsc
    13.64%  hwclock  [kernel.kallsyms]  [k] delay_tsc
     4.55%  hwclock  [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
     4.55%  hwclock  libc-2.17.so       [.] __strftime_l

$ perf record -F 999 ./a.out
# To display the perf.data header info, please use --header/--header-only options.
#
# Samples: 7K of event 'cpu-clock'
# Event count (approx.): 79010100220
#
# Overhead  Command  Shared Object      Symbol
# ........  .......  .................  ...............................
#
    28.18%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
    20.64%  a.out  [vdso]             [.] __vdso_clock_gettime
    18.46%  a.out  [kernel.kallsyms]  [k] native_read_tsc
     9.81%  a.out  a.out              [.] busy_loop
     4.62%  a.out  a.out              [.] calc_1ms
     2.15%  a.out  libc-2.17.so       [.] clock_gettime
     2.02%  a.out  a.out              [.] overhead_clock

I thought there was lock contention. However, when I ran two instances of a.out at the same time, the output stayed correct, unlike when running hwclock. Does anyone know why? Thanks.