The way the logic in clock_nanosleep() is written, the minimum delay
ends up being two such ticks. I don't remember exactly why and I can't
find it in the code right now, but I looked into this recently and that
is how it works.

See https://cwiki.apache.org/confluence/display/NUTTX/Short+Time+Delays
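Roughly, the conversion looks something like the sketch below (illustrative
only, not the actual NuttX code; a 10 ms tick, i.e. USEC_PER_TICK of 10000,
is assumed):

#include <stdint.h>

#define USEC_PER_TICK 10000  /* assumed 10 ms tick */

/* Sketch of a tick-based delay conversion, not the NuttX source. */

uint32_t usec_to_ticks(uint32_t usec)
{
  /* Round up so the delay is never shorter than requested... */
  uint32_t ticks = (usec + USEC_PER_TICK - 1) / USEC_PER_TICK;

  /* ...then add one tick because the current tick interval is already
   * partially elapsed when the request arrives.  Even a 1 us request
   * therefore waits for two tick boundaries.
   */
  return ticks + 1;
}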

This is a translation: adding the extra tick shifts the mean delay, it does not affect the accuracy.  The accuracy is still 10 ms.  The quantization error will lie in the range of 0 to +10 ms.  If you did not add one tick, the error would lie in the range of -10 to 0 ms, i.e. the sleep could end early, which is unacceptable.
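A small stand-alone demonstration of that error range (purely illustrative,
assuming a 10 ms tick and a request that lands exactly on a tick multiple):

#include <stdio.h>

#define TICK_MS 10.0

int main(void)
{
  const double req_ms = 20.0;   /* requested delay, a tick multiple */
  const int    ticks  = 2;      /* ceil(20 / 10) */

  /* Sweep the arrival phase within the current tick interval and show
   * the error (actual minus requested), with and without the extra tick.
   */
  for (double phase = 0.0; phase < TICK_MS; phase += 2.5)
  {
    double err_without = ticks * TICK_MS - phase - req_ms;       /* -10..0 ms  */
    double err_with    = (ticks + 1) * TICK_MS - phase - req_ms; /*  0..+10 ms */

    printf("phase %4.1f ms: error without +1 tick = %5.1f ms, "
           "with +1 tick = %5.1f ms\n", phase, err_without, err_with);
  }

  return 0;
}

Without the extra tick the error is always zero or negative (an early
wake-up); with it the error is always zero or positive.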

It does not make sense to change the tick interval to a higher
resolution (a shorter period), because then the OS will spend a
significantly larger fraction of its time in timer interrupts that do no
useful work.  For example, a timer interrupt handler that takes, say,
5 us costs about 0.05 % of the CPU with a 10 ms tick but 5 % with a
100 us tick.

Unless, that is, you use Tickless mode; then it is easy to get very high resolution (in the 1 us range) with no CPU overhead.

50 ns is still probably out of reach of the system timer.
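If you want to see what a given build actually delivers, a quick check is to
request a very short sleep and time it with standard POSIX calls (sketch
below; CLOCK_MONOTONIC support is assumed):

#include <stdio.h>
#include <time.h>

int main(void)
{
  struct timespec req = { .tv_sec = 0, .tv_nsec = 1000 };  /* 1 us */
  struct timespec t0;
  struct timespec t1;

  clock_gettime(CLOCK_MONOTONIC, &t0);
  clock_nanosleep(CLOCK_MONOTONIC, 0, &req, NULL);  /* relative sleep */
  clock_gettime(CLOCK_MONOTONIC, &t1);

  double elapsed_us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                      (t1.tv_nsec - t0.tv_nsec) / 1e3;

  /* On a default build with a 10 ms tick, expect roughly 10-20 ms here;
   * with Tickless mode the result should be far closer to the request.
   */
  printf("requested 1 us, slept %.1f us\n", elapsed_us);
  return 0;
}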
