On Thursday 06 December 2007 17:38, Carlos E. R. wrote:
> The Thursday 2007-12-06 at 17:26 -0800, Randall R Schulz wrote:
> > On Thursday 06 December 2007 17:16, Carlos E. R. wrote:
> >> ...
> >>
> >> Please, remember that the system time does not use the cmos clock
> >> and battery at all. ...
> >
> > "... at all ...?" I don't think this is really true, is it?
> >
> > When the system starts up, ...
>
> I know that. I actually wrote a howto on that ;-)
>
> What I mean is that during normal system use it is not used at all.
> It is read on boot, and written on halt (and I think on NTP stop, by
> the script, not the daemon).
>
> > Thereafter, the Linux kernel updates its time based on a timer
> > interrupt, also generated by local hardware, of course. These
> > timers are, as has been noted, not particularly accurate and often
> > exhibit considerable drift over even moderate real-time intervals.
>
> Not really. I have been using this same machine without permanent
> network, and thus, no NTP, for years, and the clock drift was about a
> second or two per day.

Then you were lucky enough to get a pretty good crystal; that's purely 
the luck of the draw. One second per day is an error of about 0.0000116, 
i.e. 0.00116% (roughly 11.6 parts per million).
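
For reference, here is that arithmetic as a small Python sketch (nothing 
in it but the numbers from this thread):

    # Drift of one second per day, as a fraction, a percentage and in ppm.
    seconds_per_day = 24 * 60 * 60             # 86400
    drift_seconds = 1.0                        # observed gain/loss per day
    fraction = drift_seconds / seconds_per_day
    print(f"fraction: {fraction:.7f}")         # 0.0000116
    print(f"percent:  {fraction * 100:.5f}%")  # 0.00116%
    print(f"ppm:      {fraction * 1e6:.1f}")   # 11.6 ppm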


> > Likewise, if the system cannot contact an NTP server, it has a
> > reasonable guess as to the current time, and it makes do with that.
>
> It should be able to keep accurate time for hours, even days. This
> was so with previous suse versions, but not with 10.3. It drifts
> minutes in half an hour. This is unthinkable!

Clearly this represents a gross hardware failure or a similarly extreme 
software problem.

You might want to try to quantify the error to a few decimal places. I 
had an early unit of the very first Macintosh, and it had a similarly 
excessive clock-drift problem. I don't remember the details now (it was 
twenty-some years ago), but I worked out that it was adding a second 
roughly every 2^8 seconds. Since 2^8 seconds is only 4 minutes 16 
seconds, that was a very blatant error!
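
If you want to put an actual number on the drift, something like this 
rough Python sketch would do. The function name and the sample figures 
are mine; the offsets would come from a trusted external reference (a 
radio clock, a watch, or the offset reported by "ntpdate -q"):

    # Quantify clock drift from two offset measurements taken against a
    # trusted external reference, 'interval' seconds apart.
    def drift_rate(offset1, offset2, interval):
        """Seconds gained per second of wall time (dimensionless)."""
        return (offset2 - offset1) / interval

    # Example: a clock that gains 120 s over 1800 s, i.e. "minutes in
    # half an hour" as reported above.
    rate = drift_rate(0.0, 120.0, 1800.0)
    print(f"rate: {rate:.4f} = {rate * 100:.2f}% = {rate * 1e6:.0f} ppm")
    print(f"one extra second every {1 / rate:.1f} s")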

But the real point is that a single-bit glitch on a predictable interval 
was clearly responsible for the error I observed.
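
If the error really is a periodic single-bit glitch, the implied 
interval should land close to a power of two. Again just a sketch, with 
a helper name of my own invention:

    import math

    # If one spurious second is added every 2**n seconds, the drift rate
    # is 1 / 2**n. Find the nearest such n for a measured rate.
    def nearest_power_of_two_interval(rate):
        n = round(math.log2(1.0 / rate))
        return n, 2 ** n

    # My old Macintosh gained about one second every 256 s:
    n, interval = nearest_power_of_two_interval(1.0 / 256.0)
    print(f"2**{n} = {interval} s")   # 2**8 = 256 s, i.e. 4 min 16 s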


> --
> Cheers,
>         Carlos E. R.


Randall Schulz