The timerfd_block test in tests/lib/libc/sys/t_timerfd.c is intermittently failing on the i386 test bed. I added some diagnostic prints to see exactly what the timing failure is, which yielded:
	t_timerfd.c:198: then=1368.605876566 now=1369.462708087 delta=0.856831521

https://releng.netbsd.org/b5reports/i386/2023/2023.07.08.19.10.00/test.html#lib_libc_sys_t_timerfd_timerfd_block

delta is supposed to be at least 1sec, because the test does:

	const struct itimerspec its = {
		.it_value = { .tv_sec = 1, .tv_nsec = 0 },
		.it_interval = { .tv_sec = 0, .tv_nsec = 0 },
	};

	ATF_REQUIRE(clock_gettime(CLOCK_MONOTONIC, &then) == 0);
	ATF_REQUIRE(timerfd_settime(fd, 0, &its, NULL) == 0);
	ATF_REQUIRE(timerfd_read(fd, &val) == 0);
	ATF_REQUIRE(clock_gettime(CLOCK_MONOTONIC, &now) == 0);

That is, it checks its watch, asks to be woken in >=1sec, and then checks its watch again; the test then verifies that the watch has indeed advanced by >=1sec.

This is very puzzling to me -- I would expect delta to be >2x the sleep time because of the usual hz=100-guest-on-hz=100-host problem (https://gnats.netbsd.org/43997), and certainly not less than 1sec!

One possible discrepancy is that clock_gettime(CLOCK_MONOTONIC) uses nanouptime, which returns results with the resolution of the timecounter, whereas timerfd_settime uses getnanouptime, which returns results with the (most likely much coarser) resolution of the hardclock interrupt, hz.

But I don't see how that could explain a <1sec delta on an emulated system where I would expect to see ~2sec deltas! And it can't even be explained by sampling error within a tick: the failing delta of ~0.857sec falls short of 1sec by ~0.143sec, which is roughly _fourteen_ ticks (at 100 Hz, 10ms/tick) away.
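To make that within-a-tick bound concrete, here is a back-of-the-envelope sketch (my arithmetic, not code from the tree; it assumes hz=100, i.e. 10ms per tick, and that getnanouptime lags the timecounter by at most one tick):

	#include <stdio.h>

	int
	main(void)
	{
		const double tick = 1.0 / 100;		/* hz=100 => 10ms/tick */
		const double requested = 1.0;		/* it_value = 1sec */
		const double observed = 0.856831521;	/* delta from the failing run */

		/*
		 * If timerfd_settime latches its start time via the cached
		 * getnanouptime, that value can be stale by at most one tick,
		 * so the earliest delta that rounding alone could produce is:
		 */
		printf("earliest delta from rounding: %.9f\n", requested - tick);

		/* The observed shortfall, measured in ticks: */
		printf("shortfall: %.9f sec = %.1f ticks\n",
		    requested - observed, (requested - observed) / tick);

		return 0;
	}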
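In case anyone wants to poke at this outside atf, the sequence boils down to the following standalone sketch (same syscalls as the test, with plain read(2) standing in for the test's timerfd_read helper; untested as written):

	#include <sys/timerfd.h>

	#include <err.h>
	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <time.h>
	#include <unistd.h>

	int
	main(void)
	{
		const struct itimerspec its = {
			.it_value = { .tv_sec = 1, .tv_nsec = 0 },
			.it_interval = { .tv_sec = 0, .tv_nsec = 0 },
		};
		struct timespec then, now;
		uint64_t val;
		int fd;

		if ((fd = timerfd_create(CLOCK_MONOTONIC, 0)) == -1)
			err(1, "timerfd_create");
		if (clock_gettime(CLOCK_MONOTONIC, &then) == -1)
			err(1, "clock_gettime");
		if (timerfd_settime(fd, 0, &its, NULL) == -1)
			err(1, "timerfd_settime");
		if (read(fd, &val, sizeof val) != sizeof val)	/* blocks until expiry */
			err(1, "read");
		if (clock_gettime(CLOCK_MONOTONIC, &now) == -1)
			err(1, "clock_gettime");

		printf("then=%lld.%09ld now=%lld.%09ld expirations=%" PRIu64 "\n",
		    (long long)then.tv_sec, then.tv_nsec,
		    (long long)now.tv_sec, now.tv_nsec, val);
		return 0;
	}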