I am currently involved in a research project requiring tight time synchronization between nodes on an Ethernet LAN. We require pairwise time offsets between nodes to be accurate to within 1 ms. As a simple test scenario, we set up a Linux machine with a 2.6.16 kernel as our NTP server, using the local clock as a reference:
------- /etc/ntp.conf at server -------------------
restrict 127.0.0.1 nomodify
driftfile /var/lib/ntp/ntp.drift
server 127.127.1.1 prefer
fudge 127.127.1.1 stratum 0 refid NIST

Two clients connect to this server through a 10 Mbps hub and get synced.

------- on one of the clients -------
# ntptime
ntp_gettime() returns code 0 (OK)
  time c8164ed5.d3251000  Wed, May 17 2006 21:39:33.824, (.824784),
  maximum error 41499 us, estimated error 2 us
ntp_adjtime() returns code 0 (OK)
  modes 0x0 (),
  offset -1.000 us, frequency 212.242 ppm, interval 4 s,
  maximum error 41499 us, estimated error 2 us,
  status 0x1 (PLL),
  time constant 0, precision 1.000 us, tolerance 512 ppm,
  pps frequency 0.000 ppm, stability 512.000 ppm, jitter 200.000 us,
  intervals 0, jitter exceeded 0, stability exceeded 0, errors 0

Clearly, I am able to achieve an "offset" of 1 us and an "estimated error" of 2 us, which looks extremely good. But strangely, the "maximum error" field reports an error of about 42 ms. And when I compare timestamps between the server and a client (captured at the MAC layer, to cancel out latency effects at higher layers), I see a gap of almost 60 ms. Does anybody have any idea what might cause this kind of problem?

http://groups.google.com/group/comp.protocols.time.ntp/browse_thread/thread/3343cc5fec1b6597/effe74a33bb03cf2?q=accuracy&rnum=1#effe74a33bb03cf2

The above thread seems to indicate that the error is due to an unstable CPU clock being used as a reference. But still, a gap of 60 ms seems inordinately large.

Thanks,
Ajit.

_______________________________________________
questions mailing list
[email protected]
https://lists.ntp.isc.org/mailman/listinfo/questions
