On Thu, 8 May 2014, Alan Somers wrote:

On Wed, May 7, 2014 at 9:39 PM, Bruce Evans <b...@optusnet.com.au> wrote:
On Wed, 7 May 2014, Jilles Tjoelker wrote:

On Wed, May 07, 2014 at 12:10:31PM -0600, Alan Somers wrote:

On Tue, May 6, 2014 at 9:47 PM, Bruce Evans <b...@optusnet.com.au> wrote:

On Tue, 6 May 2014, Alan Somers wrote:
[snip]

+       st.start = tv.tv_sec + tv.tv_nsec * 1.0e-9;
}


The floating point addition starts losing nanosecond precision after
8388608 seconds (2**23 seconds, slightly more than 97 days, a plausible
uptime for a server): past that point, adjacent doubles are more than
1 ns apart.  It is better to subtract the timespecs first to avoid this
issue.

No, it is better to use floating point for results that only need to
be approximate.  Especially when the inputs are approximate and the
final approximation doesn't need to be very accurate.

Floating point is good for all timespec and timeval calculations,
except in the kernel where it is unavailable.  timespecs and timevals
are mostly used for timeouts, and the kernel isn't very careful about
exact timeouts.  Short timeouts have inherent large inaccuracy due
to interrupt granularity and latency.  Long timeouts can be relatively
more accurate, but only if the kernel is careful about them.  It is
only careful in some places.

No, Jilles is right.  The problem isn't that dd uses doubles; it's
that dd converts longs to doubles _before_ subtracting the values.
That causes rounding if the tv_sec values are large.  If the
implementation of CLOCK_MONOTONIC ever changed to measure time since
the Epoch, or something similar, then the rounding error would be
extremely significant.  Better to subtract the timespecs, then convert
to double.
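
Something like this minimal sketch (ts_elapsed and its argument names
are mine, not dd's):

        #include <time.h>

        static double
        ts_elapsed(const struct timespec *begin, const struct timespec *end)
        {
                time_t dsec = end->tv_sec - begin->tv_sec;
                long dnsec = end->tv_nsec - begin->tv_nsec;

                /* Borrow a second if the nanosecond part went negative. */
                if (dnsec < 0) {
                        dsec--;
                        dnsec += 1000000000L;
                }
                /* The difference is small, so the conversion is exact. */
                return (dsec + dnsec * 1.0e-9);
        }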

Nowhere near a problem.  Conversion from long to double is exact up
to 2**53 seconds or 285 megayears.  So it works for seconds since the
POSIX Epoch although not for seconds since the big bang.  It would be
a problem for floats.
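
E.g., a throwaway check of the boundary:

        #include <stdio.h>

        int
        main(void)
        {
                long long big = 1LL << 53;      /* 2**53 */

                /* Every integer up to 2**53 converts exactly... */
                printf("%.1f\n", (double)(big - 1));    /* ...91.0 */
                printf("%.1f\n", (double)big);          /* ...92.0 */
                /* ...but 2**53 + 1 rounds back down to 2**53. */
                printf("%.1f\n", (double)(big + 1));    /* ...92.0 */
                return (0);
        }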

Actually, I didn't notice the fragile conversion order.

With microseconds, the precision of a double is sufficient for 272
years, so that calculation is probably acceptable.

Actually 285 years.  A measly 0.285 years in nanoseconds before exactness
is lost.  The subtraction problem also affects tv_nsec, but is not
very serious, since it wouldn't hurt to ignore the nanoseconds part of
a runtime of 0.285 years.  We are 44 years from the Epoch now.  That is
a bit more than 0.285, so we lose a significant part of the nanoseconds
precision.  But we get microseconds precision for 285 years since the
Epoch.  About 6.5 times that now.
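
Spelled out (a throwaway check, assuming 365.25-day years):

        #include <stdio.h>

        int
        main(void)
        {
                double limit = 9007199254740992.0;      /* 2**53 */
                double year = 365.25 * 86400;

                printf("%.3f years of exact ns\n", limit * 1.0e-9 / year);
                printf("%.1f years of exact us\n", limit * 1.0e-6 / year);
                return (0);
        }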

...
Bugs in the boot time can be fixed more easily than by micro-adjusting
the monotonic clock.  Just keep the initial boot time (except adjust it
when it was initially local instead of UTC) and frob the real time
using a different variable.  Export both variables so that applications
can compensate for the frobbing at the cost of some complexity.  E.g.,
in uptime(1):

        struct timespec ts;
        struct timeval boottime, frobbed_boottime;
        time_t uptime;

        clock_gettime(CLOCK_UPTIME, &ts);
        /*
         * Actually, do the compensations in the kernel for CLOCK_UPTIME.
         * It doesn't need to be monotonic.  But suppose it is the same
         * as the unfixed CLOCK_MONOTONIC and compensate here.
         *
         * Also fix the bogus variable name 'tp'.
         */
        sysctl_mumble(&boottime);
        sysctl_mumble(&frobbed_boottime);
        /* The compensation's sign depends on which way the frobbing went. */
        uptime = ts.tv_sec + (boottime.tv_sec - frobbed_boottime.tv_sec);

Note that the compensation may go backwards, so this method doesn't work
in general for monotonic times.  However, it can be used if the compensation
is non-negative or only slightly negative.  dd could use this method.
It already has to fix up zero times and still has parts of the old
method that fixed up negative times.  Note that the compensation may
be very large across a suspension.  You might start dd, SIGSTOP it, suspend
the system and restart everything a day later.  The compensation would be
about 1 day.  The average from this wouldn't be very useful, but it would
be the same as if dd was stopped for a day but the system was not suspended.
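
The fixup is just a clamp.  A sketch (the variable names are
illustrative, not dd's actual ones):

        /* Clamp a nonpositive elapsed time so the average stays finite. */
        if (secs <= 0.0)
                secs = 1.0e-6;          /* pretend 1 microsecond elapsed */
        (void)fprintf(stderr,
            "%ju bytes transferred in %.6f secs (%.0f bytes/sec)\n",
            (uintmax_t)st.bytes, secs, st.bytes / secs);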

Wouldn't it be simpler just for the kernel to adjust CLOCK_MONOTONIC
to add suspension time?

That works for forward jumps like the ones for suspension.

BTW, I do a similar adjustment for "suspension" that is actually sitting
in ddb.  I only step the real time.  My accuracy is better than 10
ppm, good enough for an ntp server.  The stepping is done in rtcintr()
on seconds-rollover interrupts, 3 seconds after ddb has exited and
remained inactive, according to measurements made at every 64th
seconds-rollover interrupt (the delay is to prevent adjustments while
single stepping).

Nothing much notices this stepping, but for monotonic time you have
to do something about scheduled timeouts.  Strictly, stepping the
monotonic clock forward by a lot should cause many timeouts to expire.
This is the correct behaviour, but it won't happen automatically, and
it would cause thundering herds.  Cron should have similar scheduling
problems after the realtime clock is stepped, but I don't use it much.

Bruce
_______________________________________________
svn-src-head@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/svn-src-head
To unsubscribe, send any mail to "svn-src-head-unsubscr...@freebsd.org"
