On Fri, Dec 5, 2014 at 9:37 AM, Charles Swiger <cswi...@mac.com> wrote:

> I also make sure that my
> timeservers are running in temperature-controlled environments so that
> such daily drifts you mention are minimized.


I'm starting to think that people answering questions are unsure of the
real question, so they make a number of assumptions.  If you care about
sub-millisecond time then you need to say so, and the question should be
answered in that context.  I suspect most of the questions here refer to
sub-second accuracy, and most of the elaboration is unneeded.  If all your
external clocks fail, I suspect the typical user can depend on the
disciplined virtual clock for days.
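
To put a number on that (this is just a back-of-envelope sketch, assuming a
Linux/glibc box, which nobody in this thread actually specified): the
kernel's disciplined clock can be queried read-only with ntp_adjtime(), and
a residual frequency error of 1 ppm free-runs at about 86 ms per day, so
sub-second accuracy should survive for several days with no reference at
all.

#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    /* modes = 0 means "query only": read the kernel clock discipline
       state without changing anything. */
    struct timex tx = { .modes = 0 };
    int state = ntp_adjtime(&tx);

    /* tx.freq is the applied frequency correction in ppm scaled by 2^16;
       esterror/maxerror are the kernel's error estimates in microseconds. */
    printf("clock state    : %d%s\n", state,
           state == TIME_ERROR ? " (unsynchronized)" : "");
    printf("freq correction: %.3f ppm\n", tx.freq / 65536.0);
    printf("est. error     : %ld us\n", tx.esterror);
    printf("max. error     : %ld us\n", tx.maxerror);

    /* Back-of-envelope holdover: a hypothetical 1 ppm residual frequency
       error accumulates 86400 s/day * 1e-6 = ~86 ms of drift per day. */
    double residual_ppm = 1.0;
    printf("free-run drift : ~%.1f ms/day at %.1f ppm residual\n",
           residual_ppm * 86400.0 * 1e-3, residual_ppm);
    return 0;
}

The 1 ppm residual is my assumption, not a measured figure; real holdover
depends on how well the frequency was trained and how much the temperature
moves, which is where your temperature-controlled environment comes in.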

> For almost all of human history, the sun or the "fixed celestial heavens"
> have provided the most accurate time reference available.  Even today,
> we add (or subtract, in theory) leap seconds in order to keep UTC and UT1
> aligned to better than a second courtesy of IERS.
>
> Yes, the USNO, CERN, and so forth now do have sufficiently high-quality
> atomic clocks, which have better timekeeping precision than celestial
> observations.
>

I think there's some confusion here: no single laboratory's clock is the
ultimate reference; TAI/UTC come from the BIPM's post-processed "paper
clock", a weighted average of several hundred clocks worldwide.  Search for
"BIPM paper clock" or read <
http://www.ggos-portal.org/lang_en/GGOS-Portal/EN/Topics/Services/BIPM/BIPM.html
>


> Such a point is orthogonal to the notion of how to measure a local clock
>

I think this is an interesting question.  How does one get high-resolution
measurements of the error in the virtual clock maintained with NTP (or
Chrony)?  I thought it was done with purpose-built systems.  I don't expect
a random version of Linux on generic hardware to be able to maintain the
clock at nanosecond scale.
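
For what it's worth, here is one way to get such a measurement without a
purpose-built system; this is a sketch of an approach, not something from
the earlier posts, and it assumes a PPS source (GPS, etc.) on /dev/pps0 and
the RFC 2783 API from timepps.h.  The pulse arrives on the reference's top
of second, so the sub-second part of the kernel timestamp taken at the
pulse edge is a direct measurement of the system clock's offset, at
whatever resolution the interrupt/timestamping path allows:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/timepps.h>

int main(void)
{
    /* Hypothetical device node for a PPS source (GPS, etc.). */
    int fd = open("/dev/pps0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    pps_handle_t handle;
    if (time_pps_create(fd, &handle) < 0) { perror("time_pps_create"); return 1; }

    /* Make sure the rising edge gets timestamped. */
    pps_params_t params;
    time_pps_getparams(handle, &params);
    params.mode |= PPS_CAPTUREASSERT;
    time_pps_setparams(handle, &params);

    for (int i = 0; i < 10; i++) {
        pps_info_t info;
        struct timespec timeout = { 3, 0 };   /* give up after 3 s */
        if (time_pps_fetch(handle, PPS_TSFMT_TSPEC, &info, &timeout) < 0) {
            perror("time_pps_fetch");
            break;
        }
        /* The fractional part of the kernel timestamp at the pulse edge
           is the clock error; fold it into +/- 0.5 s so late and early
           offsets both show up with the right sign. */
        long ns = info.assert_timestamp.tv_nsec;
        long offset_ns = (ns > 500000000L) ? ns - 1000000000L : ns;
        printf("edge %lu: clock offset %ld ns\n",
               (unsigned long)info.assert_sequence, offset_ns);
    }

    time_pps_destroy(handle);
    close(fd);
    return 0;
}

Whether the numbers you see are actually meaningful at the nanosecond level
is another matter; on generic hardware the interrupt latency and its jitter
are usually in the microsecond range, which is really your point.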
_______________________________________________
questions mailing list
questions@lists.ntp.org
http://lists.ntp.org/listinfo/questions
