Richard B. Gilbert wrote:
David L. Mills wrote:

Richard,

I can't claim precognition, as the current NTP timestamp format was invented in 1978, when nominal accuracies were in the 16-ms range. However, the resolution limit of 232 picoseconds is likely to be exceeded when the CPU clock rate approaches 4 GHz, which may not be far off.
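
(The arithmetic is easy to check. A minimal C sketch; the 32-bit fraction field is the standard NTP format, the rest is just division:)

    #include <stdio.h>

    int main(void)
    {
        /* The NTP fraction field is 32 bits, so resolution is 2^-32 s. */
        double ntp_res = 1.0 / 4294967296.0;   /* ~232.8 picoseconds */
        /* One clock cycle at 4 GHz; it drops below 2^-32 s near 4.3 GHz. */
        double cycle   = 1.0 / 4.0e9;          /* 250 picoseconds */

        printf("NTP resolution: %.1f ps\n", ntp_res * 1e12);
        printf("4 GHz cycle:    %.1f ps\n", cycle * 1e12);
        return 0;
    }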


I suppose that even a 2 GHz machine could slice time into 500-picosecond increments. But I was thinking in terms of the ability to set a clock that accurately. There's no way I can think of to do that over a network with today's technology. I'm seeing ~4 us delays on my 100 Mb full-duplex LAN. I think that means I can't pass time from machine A to machine B over my LAN with an uncertainty better than ~2 us. The error is probably less than that, but "probably" is the best we can say.
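
(That ~2 us bound falls straight out of the standard NTP four-timestamp calculation. A sketch in C, with timestamps as plain doubles for clarity and made-up sample values:)

    #include <stdio.h>

    /* The standard NTP on-wire calculation from four timestamps:
       t1 client transmit, t2 server receive,
       t3 server transmit, t4 client receive (all in seconds). */
    static void ntp_sample(double t1, double t2, double t3, double t4)
    {
        double offset = ((t2 - t1) + (t3 - t4)) / 2.0;
        double delay  = (t4 - t1) - (t3 - t2);

        /* With a fully asymmetric path the true offset can be wrong
           by up to delay/2 -- hence ~2 us for a ~4 us LAN delay. */
        printf("offset %.6f s, delay %.6f s, bound +/- %.6f s\n",
               offset, delay, delay / 2.0);
    }

    int main(void)
    {
        /* Hypothetical sample: server 1 ms ahead, 4 us round trip. */
        ntp_sample(0.000000, 0.001002, 0.001002, 0.000004);
        return 0;
    }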

So you could get delta-time measurements with 232-picosecond resolution, but getting absolute time accurately to that precision is not going to be easy.

If you can get _repeatable_ ~4 us delays, at least some of the time, then you can use the same statistical methods NTP is already using to get absolute accuracy close to an order of magnitude better, i.e. half a us or so.
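
In essence: keep the last several (offset, delay) samples and believe the one with the smallest delay, since it was the least disturbed in transit. A simplified sketch of that idea, not NTP's actual clock filter:

    /* Return the offset of the minimum-delay sample among n samples.
       A simplified stand-in for NTP's clock filter algorithm. */
    double best_offset(const double offset[], const double delay[], int n)
    {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (delay[i] < delay[best])
                best = i;
        return offset[best];
    }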

The real limiter will be the need for (a) a really good local clock source, i.e. better than the current 10-cent (?) quartz crystals, and (b) a hardware method to measure the interrupt latency.

Poul-Henning of FreeBSD and NTP fame did both on his _very_ good NTP servers; it is possible that some new motherboards will include a timing facility to handle (b).

Terje

--
- <[EMAIL PROTECTED]>
"almost all programming can be viewed as an exercise in caching"
