What is the correct error bound between client and server? The RFC
documentation says one thing and many threads say another. I'm
confused, so please help me. Which is correct?

1) The documentation in RFC 1305, p. 102, says that the true offset
between client and server must lie somewhere in the correctness
interval, defined by
I = [theta - delta/2 - epsilon, theta + delta/2 + epsilon]

2) In threads and on websites I usually see the statement that the
error is bounded by half the roundtrip, and it certainly looks that
way when scatter diagrams (wedge plots) of offset as a function of
rtt are viewed (see the small sketch below).
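
For what it's worth, here is a tiny Python sketch of how I read
alternative 1); the variable names and numbers are made up for
illustration only, standing in for the RFC 1305 quantities:

    # Hypothetical sample values in seconds (not real measurements).
    theta   = 0.012    # measured offset
    delta   = 0.080    # roundtrip delay (rtt)
    epsilon = 0.005    # dispersion

    # Correctness interval from alternative 1):
    lo = theta - delta/2 - epsilon
    hi = theta + delta/2 + epsilon
    print("true offset lies in [%g, %g]" % (lo, hi))

    # If epsilon were zero, the half-width would be just delta/2, i.e.
    # half the roundtrip, which is the bound usually quoted in threads
    # and visible in the wedge plots.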

-------
To bound the error relative to the root of the synchronization subnet,
upper-case variables are used for quantities taken relative to the
primary reference source(s), i.e., accumulated via a peer up to the
root of the synchronization subnet.

Since offset, dispersion and delay (rtt) are all additive, you can
simply sum these variables along the path from the primary server down
to server i and obtain OFFSET sub i, ROOT DISPERSION sub i and ROOT
DELAY sub i relative to the root.
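
Something like this sketch, with invented (offset, delay, dispersion)
triples for each hop from the primary down to server i:

    # Per-hop (offset, delay, dispersion) from the primary down to
    # server i; the numbers are invented for illustration only.
    hops = [(0.002, 0.010, 0.001),
            (0.005, 0.030, 0.002),
            (-0.001, 0.020, 0.002)]

    # Offset, delay and dispersion are additive along the path.
    OFFSET          = sum(o for o, d, e in hops)
    ROOT_DELAY      = sum(d for o, d, e in hops)
    ROOT_DISPERSION = sum(e for o, d, e in hops)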

The synchronization distance, sometimes called the root distance, is
calculated as DELTA/2 + EPSILON (half the root delay plus the root
dispersion) and represents the maximum statistical error. So the true
offset relative to a primary reference server must be contained in the
interval [OFFSET - SYNC.DIST, OFFSET + SYNC.DIST].
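
Continuing the same sketch (reusing the sums computed above), the
synchronization distance and the interval would then be:

    # Half the root delay plus the root dispersion gives the root
    # distance; OFFSET, ROOT_DELAY, ROOT_DISPERSION come from the
    # previous sketch.
    SYNC_DIST = ROOT_DELAY/2 + ROOT_DISPERSION
    lo = OFFSET - SYNC_DIST
    hi = OFFSET + SYNC_DIST
    print("true offset relative to the primary lies in [%g, %g]" % (lo, hi))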

If alternative 1) is correct, everything else would be much more
consistent.

Thanks in advance
