On 2005-08-16 12:39, "Brano Tich‡" <[EMAIL PROTECTED]> wrote:
> A related question:
> I think it was stated, that the time will be some floating-point number.
> Will its precision be predetermined or will it be system-dependent?
> (Or maybe the precision is no-issue -- it could be important in comparisons,
> but one can argue one should always specify the smallest unit when comparing
> times. Only issues left are intervals; I vaguely remember something about
> losing precision when subtracting two close floating-point numbers.)
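The subtraction worry is real but bounded by the resolution at that magnitude; a quick Python sketch (the specific offset of 1e-7 seconds is just illustrative):

```python
# Two instants ~1e9 seconds from the epoch, nominally 0.1 microsecond apart.
t1 = 1.0e9
t2 = t1 + 1e-7   # 1e-7 is below the ~1.2e-7 resolution at this magnitude,
                 # so t2 rounds to the nearest representable double
diff = t2 - t1
print(diff)      # 2**-23 ~= 1.19e-7, not the 1e-7 we "added"
```

The subtraction itself is exact (the two values are within a factor of 2 of each other); the error crept in earlier, when t2 was rounded to the nearest representable value.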

Floating point has the advantage of great range without great storage
requirements: you can represent times far in the past or future, but the
further you get from time 0, the less resolution you have.  That trade-off is
usually fine for real-life applications.

For example, given a 64-bit IEEE double-precision floating-point
representation with 1-second units, you can represent times up to about
10^308 seconds (3x10^300 years) before and after the epoch - about 10^290
times the oldest estimates of the age of the universe.  But at those
extremes, the gap *between* adjacent representable points is a whopping
2x10^292 seconds (6x10^284 years).
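You can check that spacing directly in Python (3.9+, which added math.ulp); a minimal sketch:

```python
import math

# Gap between adjacent representable doubles near the top of the range,
# treating the value as seconds since the epoch.
huge = 1.0e308
print(math.ulp(huge))   # ~2e292 seconds to the next representable instant
```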

At the other extreme, you have precision down to 1/10^308 of a second on
either side of the epoch itself.
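That end is just as easy to verify (again assuming IEEE doubles and math.ulp from Python 3.9+):

```python
import math

# Spacing of representable instants close to the epoch itself.
print(math.ulp(1.0))     # 2**-52 ~= 2.2e-16 s, one second from the epoch
print(math.ulp(1e-300))  # finer still for tiny offsets from time 0
```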

More generally, the numbers are quite reasonable.  For example, for about 30
years on either side of the epoch you have resolution of roughly 0.1
microsecond (one step at ~10^9 seconds is 2^-23 s, about 0.12 microseconds).
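For instance (math.ulp, Python 3.9+; 30 years taken as roughly 9.5e8 seconds):

```python
import math

# Step between adjacent representable instants ~30 years from the epoch.
thirty_years = 30 * 365.25 * 24 * 3600   # ~9.5e8 seconds
print(math.ulp(thirty_years))            # 2**-23 ~= 0.12 microseconds
```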
