Joe,

On 19/03/14 11:55, Joe Gwinn wrote:
In article <5328aaa6.70...@rubidium.dyndns.org>, Magnus Danielson
<mag...@rubidium.dyndns.org> wrote:

On 18/03/14 01:36, Joe Gwinn wrote:
In article <5327757e.5040...@rubidium.dyndns.org>, Magnus Danielson
<mag...@rubidium.dyndns.org> wrote:

Is that formal enough for you?

It may be.  This I did know, and it would seem to suffice, but I recall a
triumphant comment from Dr. Mills in one of his documentation pieces,
which I cannot recall well enough to find.  It may be the above
analysis that was being referred to, or something else.

I can't recall. The above I came up with myself some 10 years ago or so.


When I awoke the day after writing the above, I saw two problems with
that analysis.

First is that with added message-exchange volleys, one does not get
added variables and equations; one instead gets repeats of the
equations one already has.  If there is no noise, the added volleys
convey no new information.  If there is noise, multiple volleys allow
one to average random noise out.

True. What does happen over time is:
1) Clocks drift away from each other due to systematics and noise.
2) The path delay shifts, sometimes because of physical distance changes,
but also with time of day and season.

These require continuous tracking to handle; a toy sketch of such a
steering loop follows.
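
To make "continuous tracking" concrete, here is a toy PI-style steering
loop in Python, disciplining a drifting local clock from periodic offset
measurements. All gains and numbers are invented for illustration; real
NTP and PTP servos are considerably more elaborate.

# Toy steering loop: a PI servo disciplining a drifting local clock from
# periodic (here noise-free) two-way offset measurements. Invented numbers.
kp, ki    = 0.3, 0.05     # proportional / integral gains (illustrative only)
freq_corr = 0.0           # accumulated frequency correction (s/s)
phase_err = 20e-6         # start 20 us off
drift     = 1e-7          # local oscillator runs 0.1 ppm fast
interval  = 1.0           # one measurement per second

for step in range(15):
    phase_err += (drift + freq_corr) * interval  # clock wanders between updates
    measured   = phase_err                       # what a two-way exchange would report
    freq_corr -= ki * measured                   # integral term learns the frequency error
    phase_err -= kp * measured                   # proportional term pulls the phase in
    print(f"t={step:2d}s  residual {phase_err*1e6:7.2f} us  "
          f"freq corr {freq_corr*1e6:+.3f} ppm")

The point of the sketch is only that both the phase error and the
frequency term have to be tracked continuously; a one-shot measurement
goes stale as soon as either of the above effects moves.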

Second is that what is proven is that a specific message-exchange
protocol cannot work, not that no possible protocol can work.

The above analysis only assumes some way of measuring a signal. The same
equations are valid for TWTFT as for NTP, PTP or anything else that uses
two-way time transfer. What differs is the way they convey the
information and the noise sources they see.
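
For reference, in the usual four-timestamp notation (t1 = request sent
by A, t2 = received by B, t3 = reply sent by B, t4 = received back at A),
the relations are

  offset = ((t2 - t1) + (t3 - t4)) / 2
  delay  = (t4 - t1) - (t3 - t2)

regardless of whether the messages travel over a satellite link or a
packet network. A throwaway Python sketch (all numbers invented) of the
two points above: extra volleys average out the random part of the noise,
but the asymmetry term (d_AB - d_BA)/2 never averages away.

import random

true_offset = 50e-6            # B's clock is 50 us ahead of A's (invented)
d_AB, d_BA  = 1.0e-3, 1.3e-3   # asymmetric one-way delays (invented)
jitter      = 100e-6           # mean of random queueing noise per leg

def volley():
    # One two-way exchange; timestamps t1, t4 on A's clock, t2, t3 on B's.
    t1 = 0.0
    t2 = t1 + d_AB + random.expovariate(1.0 / jitter) + true_offset
    t3 = t2 + 10e-6                                   # B's turnaround time
    t4 = t3 + d_BA + random.expovariate(1.0 / jitter) - true_offset
    return ((t2 - t1) + (t3 - t4)) / 2                # standard offset estimate

for n in (1, 10, 100, 10000):
    est = sum(volley() for _ in range(n)) / n
    print(f"{n:6d} volleys: estimate {est*1e6:8.1f} us  (true {true_offset*1e6:.1f} us)")
# The scatter shrinks with n, but the bias (d_AB - d_BA)/2 = -150 us remains.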

Will see if I can find Dave's reference.

I hit pay dirt yesterday, while searching for data on outliers in 1588
systems.   Dave's reference may well be in the references of the
following article.

"Fundamental Limits on Synchronizing Clocks Over Networks", Nikolaos M.
Freris, Scott R. Graham, and P. R. Kumar, IEEE Trans on Automatic
Control, v.56, n.6, June 2011, pages 1352-1364.

Sounds like an interesting article. It is always interesting to see different people's views of fundamental limits.

I also took the next step, which is to treat d_AB and d_BA as random
variables with differing means and variances (due to interference from
asymmetrical background traffic), and trace this to the effect on clock
sync.  It isn't pretty on anything like a nanosecond scale.  The
required level of isolation between PTP traffic and background traffic
is quite stringent.
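
A rough back-of-the-envelope version of that step, just to put numbers
on it (the traffic model and all values are invented): model each one-way
delay as a fixed part plus an exponentially distributed queueing part,
with different means in the two directions.

import random

def one_way(base, queue_mean):
    # fixed wire/stack delay plus exponentially distributed queueing delay
    return base + random.expovariate(1.0 / queue_mean)

N = 100_000
errs = []
for _ in range(N):
    d_ab = one_way(5e-6, 2e-6)      # A->B: lightly loaded direction
    d_ba = one_way(5e-6, 10e-6)     # B->A: shares a link with heavy traffic
    errs.append((d_ab - d_ba) / 2)  # residual error of the two-way offset estimate

print(f"mean sync error: {sum(errs)/N*1e9:.0f} ns")   # about -(10 - 2)/2 us = -4000 ns

Averaging more volleys shrinks the scatter, but the mean error, half the
mean delay asymmetry, never goes away: to hold about 1 ns, the mean
asymmetry has to stay below roughly 2 ns.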

It's even worse when you get into packet networks, as the delays contain
noise sources of variable mean and variable deviation, besides being
asymmetrical. NTP combats some of that, but doesn't get far enough
because of its low packet rate. PTP may do it, but it's not in the
standard, so it comes down to proprietary algorithms. The PTP standard is
a protocol framework; the ITU has spent time filling in more of the empty
spots.
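
One common mitigation, not mandated by the PTP standard, is to trust
only the lowest-delay exchanges within a window, since those packets saw
the least queueing; NTP's clock filter does a variant of this over its
last eight samples. A minimal sketch:

def min_delay_filter(samples):
    # samples: (offset_estimate, round_trip_delay) pairs from recent volleys;
    # keep the offset from the exchange with the smallest round-trip delay.
    return min(samples, key=lambda s: s[1])[0]

window = [(+3.2e-6, 240e-6), (+0.4e-6, 52e-6), (-5.1e-6, 610e-6), (+0.5e-6, 55e-6)]
print(f"filtered offset: {min_delay_filter(window)*1e6:.1f} us")  # picks the 52 us sample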

Yes.  In closed networks, the biggest cause of asymmetry I've found is
interference between NTP traffic and heavy background traffic in the
operating system kernels of the hosts running application code.
Another big hitter was background backups via NFS (Network File
System).  The network switches were not the problem.  What greatly
helps is to have a LAN for the heavy application traffic, and a
different LAN for NTP and the like, forcing different paths in the OS
kernel to be taken.

If you can get your NIC to hardware-timestamp your NTP packets, you will clean things up a lot.
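
For what it's worth, a rough Linux-flavoured sketch of asking for
hardware timestamps on a UDP socket. The numeric constants are copied
from asm-generic/socket.h and linux/net_tstamp.h (SO_TIMESTAMPING is 37
on most architectures); whether your Python build, driver and NIC
actually support this, and the SIOCSHWTSTAMP configuration the driver
needs first (check capabilities with "ethtool -T ethX"), is another
matter.

import socket

SO_TIMESTAMPING               = 37        # asm-generic/socket.h
SOF_TIMESTAMPING_TX_HARDWARE  = 1 << 0    # linux/net_tstamp.h
SOF_TIMESTAMPING_RX_HARDWARE  = 1 << 2
SOF_TIMESTAMPING_RAW_HARDWARE = 1 << 6

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, SO_TIMESTAMPING,
                SOF_TIMESTAMPING_TX_HARDWARE |
                SOF_TIMESTAMPING_RX_HARDWARE |
                SOF_TIMESTAMPING_RAW_HARDWARE)
# The hardware timestamps then arrive as SCM_TIMESTAMPING ancillary data
# (three struct timespec values) on recvmsg(); parsing them is left out here.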

Cheers,
Magnus