I vote for double-precision floating-point. Double precision is good to about one part in 10^15, which allows times to be specified to a precision of about 3 microseconds over the next century, and about 30 microseconds over the next millennium. Anyone who wants more precision than that has probably already implemented his/her own library. Further, the trade-off between distance from the epoch and time resolution goes in the right direction -- nobody wants microsecond precision when considering times 1,000,000 years in the past. Finally, floating point nicely encapsulates the practical nonexistence of simultaneity ("$t1 == $t2" makes no sense without a measure of the allowed slop; integer times force an implied slop scale).
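As a rough illustration of those numbers, here is a minimal Python sketch (Python is used here only as a calculator; the epoch offsets and the 1 ms slop are arbitrary, assumed values, not part of any proposal). It measures the spacing between adjacent doubles at various distances from the epoch -- which also bounds how finely a difference of two such times can be resolved -- and shows what a slop-based comparison looks like.

    import math

    SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 s (Julian year)

    # Spacing between adjacent doubles (one ULP) at various distances from
    # the epoch: the best achievable resolution for times at that distance,
    # and therefore also for differences between two nearby times there.
    for years in (100, 1_000, 1_000_000):
        t = years * SECONDS_PER_YEAR
        print(f"{years:>9,} years from epoch: resolution ~{math.ulp(t):.1e} s")

    # "$t1 == $t2" only makes sense with an explicit slop:
    t1 = 100 * SECONDS_PER_YEAR
    t2 = t1 + 1e-7    # 100 ns later: below the ~0.5 us resolution here
    t3 = t1 + 1e-5    # 10 us later: representable as a distinct double
    print(t1 == t2)                                          # True  -- indistinguishable
    print(t1 == t3)                                          # False -- exact equality is brittle
    print(math.isclose(t1, t3, rel_tol=0.0, abs_tol=1e-3))   # True  -- equal within a 1 ms slop

(math.ulp needs Python 3.9 or later. The measured spacing comes out a little finer than the 10^-15 back-of-envelope figure above, since a double actually carries roughly 16 significant decimal digits.)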

On Aug 16, 2005, at 10:39 AM, Brano Tichý wrote:

<delurk>

A related question:
I think it was stated that the time will be some floating-point number.
Will its precision be predetermined, or will it be system-dependent?
(Or maybe precision is a non-issue -- it could matter in comparisons, but one can argue that one should always specify the smallest unit when comparing times. The only issues left are intervals; I vaguely remember something about losing precision when subtracting two close floating-point numbers.)

I ask because I stumbled across uuu time (http://www.unununium.org/articles/uuutime) when I was looking for an explanation of UTC/TAI/*J*D et al. It is counted from 2000-01-01 TAI in microseconds and stored in a signed 64-bit integer.


brano

</delurk>
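For comparison with the floating-point numbers above, the uuu-time layout mentioned in the quoted message (microseconds since 2000-01-01 TAI in a signed 64-bit integer) trades the coarsening resolution of a double for a hard range limit. A quick sketch of the arithmetic, using the same Julian-year constant as above (nothing here comes from the uuu spec beyond the 64-bit microsecond layout itself):

    # Range of a signed 64-bit count of microseconds (the uuu-time layout):
    MICROSECONDS_PER_YEAR = 365.25 * 24 * 3600 * 1_000_000
    max_us = 2**63 - 1
    print(max_us / MICROSECONDS_PER_YEAR)   # ~292,271 years representable on each side of 2000-01-01
    # i.e. a fixed 1 us resolution over roughly +/- 292,000 years, versus a
    # double's resolution, which coarsens with distance from the epoch.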


