Damion,

The ntpd in ntp-dev has been run in simulation with tick = 10 ms and has done amazingly well. The low-order nonsignificant bits are set to a random fuzz that apparently averages out just fine.

Dave

Damion de Soto wrote:
Brian Utterback wrote:

Yes and no. If your system supports either clock_gettime or getclock,
then the code does not bother with the random bitstring, since there
are only two unused bits to set. Not worth the trouble.


Thanks, but I have a system here with a very low-resolution system clock; ntpd correctly detects this via default_get_precision() as:
Feb 13 07:01:31 ntpd[59]: precision = 10000.000 usec

I have clock_gettime() available to me, but the nanosecond values will be mostly wrong, since 10 ms only gives me about 7 bits of precision. This means the 32 bits of fractional seconds in the Transmit Timestamp are nearly always the same.


Has no one else ever run into this before?

Regards,



_______________________________________________
questions mailing list
[email protected]
https://lists.ntp.isc.org/mailman/listinfo/questions
