Thanks Jason. Root Dispersion sounds like the one, although it seems to be a bit conservative. Please let me know when you find out whether it is a double-sided (+/-) bound or not.
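In case it helps, here is a rough sketch of the kind of external poller you suggest below. It is only a sketch under a few assumptions: it shells out to ntpq (whose readvar output names the variable "rootdispersion" on the version I have here, "rootdisp" on newer ones, and reports it in milliseconds as far as I can tell), it takes the worst-case error to be rootdisp + rootdelay/2, which is how I read the RFC-1305 error budget, and the limit and poll interval are arbitrary placeholders.

#!/usr/bin/env python
# Rough sketch, not production code: poll the local ntpd via ntpq and
# warn when the worst-case absolute error looks too large.
#
# Assumptions to verify against your setup:
#   - ntpq is installed and "ntpq -c rv" prints rootdelay and
#     rootdisp/rootdispersion in milliseconds;
#   - the maximum error is rootdisp + rootdelay/2, which is how I read
#     the RFC-1305 error budget (corrections welcome);
#   - LIMIT_MS and POLL_SECONDS are placeholders for your own bounds.

import re
import subprocess
import time

LIMIT_MS = 40.0      # your accuracy bound, in milliseconds (placeholder)
POLL_SECONDS = 64    # how often to check (placeholder)

def read_system_variables():
    """Run 'ntpq -c rv' and return its key=value pairs as a dict."""
    out = subprocess.check_output(["ntpq", "-c", "rv"], universal_newlines=True)
    return dict(re.findall(r"(\w+)=([^,\s]+)", out))

def max_error_ms(sysvars):
    """Worst-case offset bound: root dispersion plus half the root delay."""
    # Older ntpq prints "rootdispersion", newer ones "rootdisp".
    rootdisp = float(sysvars.get("rootdisp", sysvars.get("rootdispersion")))
    rootdelay = float(sysvars["rootdelay"])
    return rootdisp + rootdelay / 2.0

if __name__ == "__main__":
    while True:
        try:
            err = max_error_ms(read_system_variables())
            if err > LIMIT_MS:
                print("WARNING: max error %.3f ms exceeds limit %.1f ms" % (err, LIMIT_MS))
            else:
                print("OK: max error %.3f ms" % err)
        except Exception as exc:
            # ntpq missing, ntpd not running, output not parsable, etc.
            print("could not query ntpd: %s" % exc)
        time.sleep(POLL_SECONDS)

Swapping the prints for syslog, SNMP traps, or whatever your monitoring uses is straightforward; talking NTP mode 6 directly would avoid the ntpq dependency, but parsing ntpq output is the quick way.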
Juyong. "Jason Rabel" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Taken from RFC-1305: > > Root Dispersion is the number indicating the maximum error relative to the > primary reference source at the root of the synchronization subnet, in > seconds. > > So depending on your stratum, it *should* (from how I read it) add up all > the errors from you and up the chain of servers to the stratum 1 server. > I'll have to verify this when I get home tonight. > > One thing I'm not sure about is for instance: if your root dispersion is > 40ms, I'm not sure if it mean +/- 40ms (a total of 80ms) or +/- 20ms (a > total of 40ms). Dr. Mills or someone on the NTP development team can > probably clear this up. > > You can poll for root dispersion quite easily via another program, and if > it > exceeds your bounds then you can do whatever. > > Jason > > > >>I guessed so since it wouldn't be possible under the network environments > as >>you pointed out. >> >>Then, my question is how to detect a case with absolute time inaccuracy >>beyond a certain limit? Is the Root Dispersion the best indicator? How > about >>Offset and RTT? Also, I'm wondering whether there is any way to find out > the >>status of network connection such as software and hardware delays---are >>there any parameters about them? >> >>Basically, I'm trying to find or combine parameters to detect a certain >>outage case where absolute time inaccuracy exceeds my limit. Let me know >>if >>anyone has experience on this. >> >>Thanks, >>Juyong > > _______________________________________________ > questions mailing list > [email protected] > https://lists.ntp.isc.org/mailman/listinfo/questions > _______________________________________________ questions mailing list [email protected] https://lists.ntp.isc.org/mailman/listinfo/questions
