In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:

> As I said in my message to you, Windows shows dispersion 10s, which is
Which is a realistic figure for an undisciplined local clock, and an
improvement on the reference implementation. The reference implementation
assumes that a server running the local clock driver is being disciplined,
but not by ntpd, which these days is true in only a very small minority of
cases. Maybe the reference implementation should add a new clock driver for
that minority case, and make the local clock driver report a realistic root
dispersion! (It would become the number one FAQ for the next few years! One
would need tinker options to disable root dispersion checks so that isolated
systems could continue to synchronise to each other; a configuration sketch
along those lines appears at the end of this message.)

> Windows shows a precision of 6, which will cause the ntpd server to

Which is, I believe, the correct value for the W32Time implementations, as
they only read the time at the resolution of the clock interrupt frequency.
Most Unix and Unix-like kernels interpolate between clock interrupts, either
by reading the residue in the counter-timer register or by using the TSC
(time stamp counter) on modern Intel processors. (The Windows port of the
reference implementation uses the TSC, but, because it is not supported by
kernel code, I believe the result is not entirely reliable on a loaded
system. Sketches of both the precision estimate and the interpolation idea
follow below.)

So, whilst neither value is good, they reasonably accurately represent the
quality of the time that you are actually getting, and the root dispersion
mitigates the excessively low stratum number being used.
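
On the isolated-systems point: ntpd has no tinker option that I know of to
disable the root dispersion check outright, but the existing tos maxdist
directive relaxes the synchronisation distance check that root dispersion
feeds into. A minimal sketch, with illustrative (not recommended) numbers:

    # Sketch: isolated server free-running on the local clock driver.
    server 127.127.1.0               # local clock (LOCAL) driver
    fudge 127.127.1.0 stratum 10     # advertise a deliberately poor stratum
    # If the driver reported a realistic (large) root dispersion,
    # clients would need the distance check relaxed, for example:
    tos maxdist 16                   # default is 1.5s; 16s is illustrative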
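
To make the precision figures concrete: an NTP implementation typically
estimates its precision by reading the clock in a tight loop and taking
log2 of the smallest increment it sees. A sketch of that idea, assuming
POSIX clock_gettime(); this is not ntpd's actual code:

    /* Estimate clock precision as floor(log2(smallest observed step)).
     * A tick-only 1/64 s clock gives about -6; an interpolating clock
     * gives -20 or finer.  Compile with -lm. */
    #include <stdio.h>
    #include <time.h>
    #include <math.h>

    int main(void)
    {
        struct timespec a, b;
        double min_step = 1.0;
        for (int i = 0; i < 10; i++) {
            clock_gettime(CLOCK_REALTIME, &a);
            do {                    /* spin until the reading changes */
                clock_gettime(CLOCK_REALTIME, &b);
            } while (a.tv_sec == b.tv_sec && a.tv_nsec == b.tv_nsec);
            double step = (b.tv_sec - a.tv_sec)
                        + (b.tv_nsec - a.tv_nsec) * 1e-9;
            if (step > 0 && step < min_step)
                min_step = step;
        }
        printf("precision = %d\n", (int)floor(log2(min_step)));
        return 0;
    }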
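
And a sketch of the interpolation idea on Windows: anchor one performance
counter reading (TSC-based on modern hardware) to one system time reading,
then extrapolate between clock interrupts from the counter. This is only
the spirit of what the Windows port does; the single-sample calibration and
the names are mine, with none of ntpd's filtering:

    /* Interpolate system time between clock interrupts using the
     * performance counter.  Illustrative only; a real implementation
     * must recalibrate and guard against the anchor going stale. */
    #include <windows.h>
    #include <stdio.h>

    static ULONGLONG base_filetime;     /* 100 ns units since 1601 */
    static LARGE_INTEGER base_count, counts_per_sec;

    static void calibrate(void)
    {
        FILETIME ft;
        GetSystemTimeAsFileTime(&ft);   /* steps at interrupt rate */
        QueryPerformanceCounter(&base_count);
        QueryPerformanceFrequency(&counts_per_sec);
        base_filetime = ((ULONGLONG)ft.dwHighDateTime << 32)
                      | ft.dwLowDateTime;
    }

    /* Anchor time plus the elapsed counter interval, in 100 ns units. */
    static ULONGLONG interpolated_time(void)
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        ULONGLONG elapsed = (ULONGLONG)(now.QuadPart - base_count.QuadPart);
        return base_filetime
             + elapsed * 10000000ULL / (ULONGLONG)counts_per_sec.QuadPart;
    }

    int main(void)
    {
        calibrate();
        printf("%llu\n", (unsigned long long)interpolated_time());
        return 0;
    }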
