Mischanko, Edward T wrote:
On 2013-05-21, Mischanko, Edward T <edward.mischa...@arcelormittal.com> wrote:
My concern is that too much data is being thrown away when polling
above 256 seconds and that allows excessive wandering of my clock.
If too much data is being thrown away, it would be because the poll
adjust algorithm has chosen too high a poll interval (or you have too
high a minpoll) for the noise statistics.
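If the chosen interval really is too high for your noise environment, note that you can cap it yourself today rather than wait for an algorithm change. A minimal ntp.conf fragment (the server name is a placeholder) that pins polling between 64 s and 256 s:

    # keep the poll interval between 2^6 = 64 s and 2^8 = 256 s
    server time.example.com iburst minpoll 6 maxpoll 8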
[Mischanko, Edward T]
This is exactly what I am saying!! This needs to be corrected; I consider it a
bug.
How would you modify the algorithm that chooses the poll interval?
Remember that too low a poll interval will make ntpd track short-term
variations in network latency and prevent it from getting an accurate
estimate of the crystal frequency error.
Please provide a detailed specification of your proposed algorithm.
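To make the tradeoff concrete, here is a back-of-envelope sketch (plain arithmetic, not ntpd's actual discipline loop; the 1 ppm residual is an assumed figure): with an uncorrected residual frequency error, the time error accumulated between polls grows linearly with the poll interval.

    /* wander.c - crude illustration of poll interval vs. accumulated
     * error.  NOT ntpd's algorithm; just the linear drift term. */
    #include <stdio.h>

    int main(void)
    {
        double residual_ppm = 1.0;   /* assumed uncorrected frequency error */
        int polls[] = { 6, 8, 10 };  /* log2 poll: 64 s, 256 s, 1024 s */

        for (int i = 0; i < 3; i++) {
            double interval = (double)(1 << polls[i]);
            /* time error (ms) accumulated before the next measurement */
            double wander_ms = residual_ppm * 1e-6 * interval * 1e3;
            printf("poll %2d (%6.0f s): up to %.3f ms drift between polls\n",
                   polls[i], interval, wander_ms);
        }
        return 0;
    }

At 1 ppm of residual error, a 1024 s poll already allows about 1 ms of wander before the next correction; a shorter poll corrects more often, but each measurement contains proportionally more network noise per unit of real drift.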
Too much assumption is made that everyone will have the perfect computer and
the perfect network when configuring these various filters. What works on
the blackboard does not always work in reality. My computer has a
-19 precision, but it can't keep time to within 1 millisecond with default
settings; go figure.
A lot of allowances have been made for the real world. The only
allowance that has not been made is for non-random crystal behaviour:
the algorithm assumes that crystal frequency errors are essentially
random, and so it can get into difficulty if a (normally very cheap)
motherboard crystal is in a thermal environment that exhibits
non-Gaussian behaviour.
I figure that you don't understand the various latencies that exist in
the system and network, and the difficulties of interpolating clock
ticks on Windows. The precision figure, for Windows, is almost
certainly optimistic because of the hacks ntpd has to go through to
interpolate the clock in user space.
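For reference, the interpolation hack is conceptually something like the sketch below. This is a simplification, not ntpd's actual Windows code: latch the coarse system time together with the high-resolution performance counter, then synthesize finer-grained readings by adding the scaled counter delta.

    /* interp.c - conceptual sketch of user-space clock interpolation on
     * Windows.  ntpd's real implementation is far more involved
     * (tick-edge detection, drift handling, sanity checks). */
    #include <windows.h>
    #include <stdio.h>

    static ULONGLONG base_ft;               /* baseline time, 100 ns units */
    static LARGE_INTEGER base_qpc, qpc_freq;

    static void baseline(void)
    {
        FILETIME ft;
        GetSystemTimeAsFileTime(&ft);       /* coarse, tick-granular */
        QueryPerformanceCounter(&base_qpc); /* fine-grained counter */
        base_ft = ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
    }

    /* Interpolated time in 100 ns units since the FILETIME epoch. */
    static ULONGLONG now_interp(void)
    {
        LARGE_INTEGER qpc;
        QueryPerformanceCounter(&qpc);
        ULONGLONG delta = (ULONGLONG)(qpc.QuadPart - base_qpc.QuadPart);
        return base_ft + delta * 10000000ULL / (ULONGLONG)qpc_freq.QuadPart;
    }

    int main(void)
    {
        QueryPerformanceFrequency(&qpc_freq);
        baseline();
        printf("interpolated: %llu\n", (unsigned long long)now_interp());
        return 0;
    }

The catch is that the baseline itself is only as accurate as the coarse clock's last tick, so the interpolated value inherits the tick granularity, plus whatever frequency error the performance counter itself contributes.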
The reason for the difficulty in interpolating is that the Windows API
architecture does not deliver anything like the underlying timing
resolution to applications. Earlier versions of Windows NT could only
supply time to applications at a resolution of 16 ms (before ntpd
started doing user-space interpolation, NT 3.5 would return an ntpd
precision of -6). Intermediate ones had a variable resolution,
depending on the use of multimedia timers, but I don't think it was
better than 1 ms. I'm not sure of the exact parameters of the latest
generation, but I don't think ordinary applications will be seeing
anything close to 2 microsecond resolution. (Changes of multimedia
timer rates cause upsets.)
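You can observe that granularity yourself with a trivial probe (a hypothetical test program, nothing to do with ntpd): spin on the coarse clock and record how far apart consecutive distinct readings are.

    /* granularity.c - measure the update step of GetSystemTimeAsFileTime.
     * On a default NT-family system this typically prints steps around
     * 156250 x 100 ns (15.6 ms); with a 1 ms multimedia timer active
     * (timeBeginPeriod(1)) the step usually drops to about 10000. */
    #include <windows.h>
    #include <stdio.h>

    static ULONGLONG read_ft(void)
    {
        FILETIME ft;
        GetSystemTimeAsFileTime(&ft);
        return ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
    }

    int main(void)
    {
        ULONGLONG prev = read_ft();
        for (int i = 0; i < 10; i++) {
            ULONGLONG now;
            while ((now = read_ft()) == prev)
                ;                       /* spin until the clock steps */
            printf("step: %llu x 100 ns\n", (unsigned long long)(now - prev));
            prev = now;
        }
        return 0;
    }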
Basically, you may find that the main error component in the time
supplied to ordinary Windows applications is due to the limitations of
the Windows time-reading APIs.
Also, I don't remember your ever describing a test rig that is capable
of comparing the system time on your servers to true time. NTP's
reported offset is not offset from true time. Without special
instrumentation, or an accurate theoretical model, you cannot assume
that high NTP offsets represent errors in the machine's timekeeping.
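To be clear about what such a rig entails: simply asking another NTP server, as in the minimal SNTP probe sketched below (POSIX sockets; the server name is a placeholder), only yields another network-derived offset with its own unknown path asymmetry, which is exactly the point. Genuine instrumentation means an independent reference, e.g. a GPS-disciplined PPS signal wired to the machine under test.

    /* sntp_probe.c - minimal SNTP query, illustrative only.  The offset
     * it reports includes unknown network asymmetry and has one-second
     * resolution; it is NOT a measurement against true time. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netdb.h>
    #include <sys/socket.h>

    #define NTP_EPOCH_OFFSET 2208988800UL   /* seconds from 1900 to 1970 */

    int main(void)
    {
        unsigned char pkt[48] = { 0x23 };   /* LI=0, VN=4, Mode=3 (client) */
        struct addrinfo hints = { 0 }, *ai;
        hints.ai_socktype = SOCK_DGRAM;

        if (getaddrinfo("time.example.com", "123", &hints, &ai) != 0)
            return 1;
        int s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        sendto(s, pkt, sizeof pkt, 0, ai->ai_addr, ai->ai_addrlen);
        if (recv(s, pkt, sizeof pkt, 0) < 48)
            return 1;

        uint32_t secs;                      /* transmit timestamp, byte 40 */
        memcpy(&secs, pkt + 40, 4);
        time_t server = (time_t)(ntohl(secs) - NTP_EPOCH_OFFSET);
        time_t local  = time(NULL);
        printf("apparent offset: %ld s (path asymmetry unknown)\n",
               (long)(server - local));
        freeaddrinfo(ai);
        close(s);
        return 0;
    }

A full NTP exchange uses four timestamps to cancel the symmetric part of the path delay; the asymmetric part remains invisible, which is why an offset report, large or small, is not by itself evidence about true time.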