Matthew Dillon wrote:
:>     I'm going to do some work on dntpd to try to correct two unrelated
:>     issues.  First I'll try to have it detect a bad time source when
:>     several are available.  Second, I'll have it re-run the DNS lookup
:>     if a server stops responding and I'll have it detect duplicate IPs.
    I don't think you can quantify the accuracy of a median offset versus
a statistically good offset returned by a single server, so no.

I don't understand you there.  What is the definition of a "good offset"?  
Without knowledge of the real time, we have a set of offsets, in the best case all about 
the same.  In that case the median does not hurt.  If the offsets are more spread out, 
which one should we choose?  Probably one not on the outer edges of the distribution -- 
the median again gives good results.

Or maybe I am misunderstanding what you're telling me.

Ah, I've read your code now.  I see we might be talking about different metrics for 
"good offsets":

1. stable offset from the server, i.e. little jitter
2. correct time from the server

You are checking for (1) in client_check().  The quorum check might be sufficient for 
(2); however, when declaring a server "insane", we might be losing 
information.  Of course the +/- 30 seconds tolerance right now is way too high; 
sub-second would be best.

The point is, the best jitter-free time source is worth nothing if it is off by 
one second.  Yes, when running an ntpd which also does frequency corrections, I 
want to have an exact time: otherwise, I could simply run ntpdate every hour 
from cron.

So how do we select the best time source?  First, it needs to report the correct 
offset, and that offset needs to be jitter-free.  So what I think we could do is the 
following:

1. Strip insane servers, i.e. those that are way off the average.
2. Select the median offset of the remaining servers' samples (not averages).
3. Now, however, we might have selected a jittery source, so search up
   and down to find a "better" server: one which has the best samples.  I'd try
   a sum of squared differences to the tentative best offset from (2)
   to select the best server (jittery servers will more likely drop out unless
   they are significantly closer to the offset).
4. Using this selected server, I'd take the median (never the average -- averages
   smear errors) of that server's samples.

Maybe we should run some traces on the received packets and then evaluate 
different algorithms (best tracking the real time using a radio clock in 
parallel).

cheers
 simon

--
Serve - BSD     +++  RENT this banner advert  +++    ASCII Ribbon   /"\
Work - Mac      +++  space for low €€€ NOW!1  +++      Campaign     \ /
Party Enjoy Relax   |   http://dragonflybsd.org      Against  HTML   \
Dude 2c 2 the max   !   http://golden-apple.biz       Mail + News   / \
