Damion,
The ONLY function of the random fuzz is to fill the timestamp bits below
the microsecond, and then ONLY when the kernel syscall returns time
in microseconds. Do NOT interpret the agenda in any other way. There is
no benefit whatsoever to fuzzing the bits below the nanosecond.
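To illustrate the point: in an NTP timestamp the 32-bit fraction counts units of 2^-32 s, so one microsecond spans roughly the low 12 bits. A minimal sketch of fuzzing only those sub-microsecond bits (this is a hypothetical helper, not the actual get_systime() code; FRAC_PER_USEC and usec_to_ntp_frac_fuzzed are names invented here):

```c
#include <stdint.h>
#include <stdlib.h>

/* One microsecond expressed in NTP 32-bit fraction units: 2^32 / 1e6 */
#define FRAC_PER_USEC 4294.967296

/* Hypothetical helper: convert a microsecond reading to an NTP fraction
 * and fill only the bits below one microsecond with random fuzz. The
 * fuzz is strictly less than one microsecond, so the reported time never
 * moves past the next tick of the microsecond clock. */
uint32_t usec_to_ntp_frac_fuzzed(long usec)
{
    uint32_t frac = (uint32_t)(usec * FRAC_PER_USEC);
    uint32_t fuzz = (uint32_t)(rand() % (long)FRAC_PER_USEC);
    return frac + fuzz;
}
```

The fuzz bound of one microsecond is the whole point: bits at or above the syscall's resolution are real information and must not be disturbed.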
Dave
Damion de Soto wrote:
Hi Brian, Danny, David.
So, currently the low-order bits are only randomised if the system doesn't
have HAVE_CLOCK_GETTIME || HAVE_GETCLOCK.
They should always be randomised to the calculated precision of the system.
(incidentally, in the two libraries I've just looked in - glibc & uClibc -
the clock_gettime() function just calls gettimeofday() anyway - so they
definitely shouldn't be treated differently)
I tried to work out an easy way to always add random fuzz up to our current
degree of system precision, but then I ran into the second problem:
the system clock precision is calculated in default_get_precision() in
ntp_proto.c - this function uses consecutive calls to get_systime() to
calculate the minimum tick difference. Won't this always be wrong if
the random fuzz code in get_systime() is used, and report a much finer
precision than is really available?
regards,
_______________________________________________
questions mailing list
[email protected]
https://lists.ntp.isc.org/mailman/listinfo/questions