https://issues.dlang.org/show_bug.cgi?id=13433

--- Comment #9 from Jonathan M Davis <jmdavisp...@gmx.com> ---
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_MRG/2/html/Realtime_Reference_Guide/sect-Realtime_Tuning_Guide-Timestamping-POSIX_Clocks.html#Realtime_Reference_Guide-Timestamping-COARSE_Clocks

says 1ms, but you could use clock_getres to get the actual resolution of the
COARSE clocks. IIRC, I also ran across a table somewhere online comparing the
various options and OSes, but I can't find it now.

The main reason the coarse clock seems like a wonky idea to me is that if
the cost of getting the time matters to you, you're presumably asking for
it far more often than once per millisecond, so you're going to keep
getting the same timestamp over and over again. If that genuinely doesn't
matter for a particular use case, then the coarse clock could be a good
choice, but it strikes me as a rather odd use case where you ask for the
time faster than the clock actually updates and don't care about the
repeats. Still, adding support for it doesn't really hurt anything as far
as I can tell, and if it makes sense for std.log, then great, though if
you're logging hundreds of thousands of messages a second, I'd strongly
argue that you're logging so much that it borders on useless. So the test
that's triggering this enhancement request seems fairly contrived rather
than demonstrating anything practical.
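
To make the "same time over and over" point concrete, a tight loop like
this (again, just a rough C sketch) samples the coarse clock a million
times and counts how many distinct timestamps it actually sees:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec prev = {0, 0}, now;
        long calls, changes = 0;

        /* Sample the coarse clock as fast as possible and count how
           often the value it returns actually changes. */
        for (calls = 0; calls < 1000000; ++calls)
        {
            clock_gettime(CLOCK_REALTIME_COARSE, &now);
            if (now.tv_sec != prev.tv_sec || now.tv_nsec != prev.tv_nsec)
            {
                ++changes;
                prev = now;
            }
        }
        printf("%ld calls, %ld distinct timestamps\n", calls, changes);
        return 0;
    }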
