[EMAIL PROTECTED] wrote:

I am currently involved in a research project that requires tight time
synchronization between nodes on an Ethernet LAN. We need the pairwise
time offsets between nodes to be accurate to better than 1 ms. As a
simple test scenario, we set up a Linux machine with a 2.6.16 kernel as
our NTP server and use the local clock as a reference:

------- /etc/ntp.conf at server -------------------
restrict 127.0.0.1 nomodify
driftfile /var/lib/ntp/ntp.drift
server 127.127.1.1 prefer
fudge 127.127.1.1 stratum 0 refid NIST

Two clients connect to this server through a 10 Mbps hub and
synchronize to it.
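
For completeness, the clients use an ordinary configuration; a minimal
sketch is shown below (the server address 192.168.0.1 is a placeholder
for our LAN server, and the iburst option is an assumption rather than
our exact setting):

------- /etc/ntp.conf at client (illustrative sketch) -------
driftfile /var/lib/ntp/ntp.drift
# 192.168.0.1 stands in for the address of the server above
server 192.168.0.1 prefer iburst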

------- on one of the clients -------
# ntptime
ntp_gettime() returns code 0 (OK)
  time c8164ed5.d3251000  Wed, May 17 2006 21:39:33.824, (.824784),
  maximum error 41499 us, estimated error 2 us
ntp_adjtime() returns code 0 (OK)
  modes 0x0 (),
  offset -1.000 us, frequency 212.242 ppm, interval 4 s,
  maximum error 41499 us, estimated error 2 us,
  status 0x1 (PLL),
  time constant 0, precision 1.000 us, tolerance 512 ppm,
  pps frequency 0.000 ppm, stability 512.000 ppm, jitter 200.000 us,
  intervals 0, jitter exceeded 0, stability exceeded 0, errors 0


Clearly, I am able to achieve an "offset" of 1 us and an "estimated
error" of 2 us, which looks extremely good. But strangely, the "maximum
error" field reports about 42 ms. When I compare time stamps between
the server and a client (at the MAC layer, to cancel out the effects of
latency at higher layers), I see a gap of almost 60 ms. Does anybody
have an idea what could cause this kind of problem?
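
For anyone who wants to watch these fields programmatically instead of
parsing ntptime output, here is a minimal sketch using the Linux
ntp_adjtime() call from <sys/timex.h> (read-only query with minimal
error handling; the file name is of course arbitrary):

------- query_clock.c (illustrative sketch) -------
#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx = { 0 };   /* modes == 0: query only, change nothing */
    int state = ntp_adjtime(&tx);

    if (state == -1) {
        perror("ntp_adjtime");
        return 1;
    }

    /* On 2.6.16-era kernels these values are in microseconds; newer
       kernels may report the offset in nanoseconds when the STA_NANO
       status bit is set. */
    printf("offset          %ld us\n", tx.offset);
    printf("estimated error %ld us\n", tx.esterror);
    printf("maximum error   %ld us\n", tx.maxerror);
    printf("status          0x%x\n", (unsigned)tx.status);
    printf("clock state     %d (0 = TIME_OK)\n", state);
    return 0;
}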

http://groups.google.com/group/comp.protocols.time.ntp/browse_thread/thread/3343cc5fec1b6597/effe74a33bb03cf2?q=accuracy&rnum=1#effe74a33bb03cf2

The above thread seems to indicate that the error is due to an
unstable CPU clock being used as the reference. But even so, a gap of
60 ms seems inordinately large.

Thanks,
Ajit.


I've seen estimated errors of 1 microsecond and maximum errors of one
or two milliseconds on my LAN using a server with a GPS reference
clock. This morning, for reasons unknown, the performance is terrible
(by my standards). Sunblok, my server, has an estimated error of
1 microsecond and a maximum error of 7,875 microseconds. Sunspot,
normally the client with the tightest synchronization, is showing an
estimated error of 11,152 microseconds and a maximum error of 33,308
microseconds.

Your local clock is probably fairly stable but not accurate. What really kills tight synchronization is using a server that is "clock hopping" between four or five internet servers. The internet servers may be spread over four or five milliseconds, but every time your server "clock hops", the target moves by a millisecond or two.
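
One way to reduce clock hopping is to designate a single upstream
server as preferred, so that the others serve mainly as sanity checks,
for example (host names here are placeholders):

server ntp1.example.org prefer iburst
server ntp2.example.org iburst
server ntp3.example.org iburst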

I'm not sure that I understand how you are measuring this 60 millisecond gap at the MAC layer.

