Jim Cromie wrote:


I've hacked together a clocksource driver which uses the new GENERIC_TIME
API that is part of Linux 2.6.17-rc3-mm1, and I'm looking for advice on how/whether
I can use ntpd to 'test' it.

I've been running it for the past 12 hours, slaved to my laptop through a null Ethernet cable,
and to a stratum 2 clock across a wireless interface.

I've been logging its 'performance' with a dumb little script:

while true; do
   echo
   uptime
   ntpq -p
   sleep 60
done

here's a tiny chunk:


17:04:37 up 11:52,  1 user,  load average: 0.00, 0.00, 0.00
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+harpo          216.82.75.146    3 u  948 1024  377    1.315    1.033   3.081
*entry.verboten. 130.149.17.21   2 u 1010 1024  377  119.114   -1.645   0.086
 LOCAL(0)        LOCAL(0)       13 l   29   64  377    0.000    0.000   0.004

17:05:43 up 11:53,  1 user,  load average: 0.00, 0.00, 0.00
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+harpo          216.82.75.146    3 u 1014 1024  377    1.315    1.033   3.081
*entry.verboten. 130.149.17.21   2 u   53 1024  377  133.653   -3.814 555.714
 LOCAL(0)        LOCAL(0)       13 l   24   64  377    0.000    0.000   0.004


I'm seeing steps in some of the numbers that are disturbing, and I'd like to find out
whether that's normal, an artifact of the sampling or polling rate, or a sign of
something amiss.

The two samples above show a large jump in the jitter from the distant source. This probably doesn't mean anything, since it's subject to the vagaries of the network
across unknowable routes, but I'm seeing such jumps elsewhere too.

For example, here's a grep of the laptop peer's sync data:

+harpo          216.82.75.146    3 u  933 1024  377    1.086    4.115   0.610
+harpo          216.82.75.146    3 u  994 1024  377    1.086    4.115   0.610
+harpo          216.82.75.146    3 u   32 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u   93 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  154 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  214 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  275 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  336 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  396 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  462 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  523 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  584 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  644 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  705 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  766 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  827 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  888 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u  948 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u 1014 1024  377    1.315    1.033   3.081
+harpo          216.82.75.146    3 u   52 1024  377    1.435   -0.553   1.586
+harpo          216.82.75.146    3 u  112 1024  377    1.435   -0.553   1.586
+harpo          216.82.75.146    3 u  173 1024  377    1.435   -0.553   1.586


It looks a lot like the offset and jitter fields step only when the 'when' field rolls over,
so it looks like an artifact. Is that a correct inference?

Your offset and jitter fields are recalculated at each poll interval. You are polling every 1024 seconds, so it takes about 17 minutes to get new values. The fact that you are polling at 1024-second intervals suggests that ntpd is very satisfied with its selected source(s).
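
If you want your log to track ntpd's own update rate rather than the 60-second sleep, something like the sketch below might help: it still asks ntpq once a minute, but only records the peer line when its values actually change. (The peer name "harpo" and the log format are just placeholders lifted from the output above.)

#!/bin/sh
# Rough sketch: record a peer line only when its values change, so the
# log follows ntpd's poll interval instead of the sleep interval.
prev=""
while true; do
    cur=$(ntpq -p | grep harpo)
    if [ "$cur" != "$prev" ]; then
        echo "$(date '+%F %T')  $cur"
        prev="$cur"
    fi
    sleep 60
done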

In addition, I'd like to use /var/lib/ntp/ntp.drift
to compute a correction to the free-running crystal frequency I've told the driver
about.  Is that possible / sensible / wise?

Ntpd will correct the frequency as long as it's running. If it loses contact with its time source(s) it will continue to use the last value it calculated.

cat /var/lib/ntp/ntp.drift
-46.177
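
For what it's worth, here is a back-of-the-envelope reading of that number, assuming the usual convention that the drift file holds the local clock's frequency error in parts-per-million; whether you add or subtract it from your driver's rate constant depends on how that constant is defined, so treat this as a sketch, not gospel. The 32768 Hz nominal rate is purely hypothetical, for illustration.

# -46.177 ppm works out to roughly 4 seconds per day.
awk -v nominal=32768 '{
    ppm = $1
    printf "drift: %.3f ppm  (~%.2f s/day)\n", ppm, ppm * 86400 / 1e6
    printf "corrected rate estimate: %.4f Hz\n", nominal * (1 + ppm / 1e6)
}' /var/lib/ntp/ntp.drift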


While I'm asking, does anyone here know where pictures/graphs of RMS jitter energy vs. frequency are available for a variety of timing sources, including those that show up in PCs
and server boxes?

Does the jitter number on
 LOCAL(0)        LOCAL(0)       13 l   17   64  377    0.000    0.000   0.004
have a real meaning with respect to the frequency noise in the PC's clock?


The jitter, as I understand it, is a measure of the phase noise in the time received from the server. Think of sending packets over the internet at EXACT 1-second intervals. Do you think that those packets will arrive at their destination at the same EXACT 1-second intervals at which they were sent? (If you do, I want some of whatever you're smoking) :-) The network introduces small variations in transit times to and from the server, and jitter is a measure of that error. See RFC 1305 for a discussion of the math; as a mathematician, I can usually count to twenty with my shoes on. :-)
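
For a rough feel for the arithmetic, the toy sketch below takes a handful of offset samples (the numbers are just the offsets from the harpo lines above) and computes an RMS-style spread relative to the first sample. It is only meant to show the flavour of the calculation; the estimator ntpd actually uses is the one spelled out in RFC 1305.

# Toy illustration only: RMS of offset differences relative to the first
# sample, loosely in the spirit of peer jitter.  Offsets are in ms.
echo "1.033 4.115 -0.553" | awk '{
    ref = $1; sum = 0
    for (i = 2; i <= NF; i++) { d = $i - ref; sum += d * d }
    printf "jitter-ish: %.3f ms\n", sqrt(sum / (NF - 1))
}'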


