On Wed, Mar 26, 2003 at 09:54:11AM -0500, John Siracusa wrote:
> On 3/26/03 12:33 AM, Dave Rolsky wrote:
> 
> > But if it's another attribute it needs a precision.  I don't want to call
> > it "fractional seconds" and let each user decide, because that kills
> > inter-operability.
> 
> Fractional seconds can simply be the most general form of the attribute.
> You can then truncate/pad (or round) that value to produce all
> fixed-precision "derived" attributes: milliseconds, nanoseconds, etc.  But
> "fractional seconds" should simply be an integer (or BigInt, if necessary)
> value that comes after the decimal point.  That's the most general
> implementation, and doesn't close any doors for the future, as far as I can
> tell.  You may also think it's the most "useless" implementation, but it'd
> help me, at least :)

Me too.

And it reminds me of this (possibly irrelevant) precedent...

LIBRARY
     Standard C Library (libc, -lc)
 
SYNOPSIS
     #include <sys/time.h>
 
     int
     gettimeofday(struct timeval *tp, struct timezone *tzp);
 
     int
     settimeofday(const struct timeval *tp, const struct timezone *tzp);
 
DESCRIPTION
     Note: timezone is no longer used; this information is kept outside the kernel.
 
     The system's notion of the current Greenwich time and the current time
     zone is obtained with the gettimeofday() call, and set with the
     settimeofday() call.  The time is expressed in seconds and microseconds
     since midnight (0 hour), January 1, 1970.  The resolution of the system
     clock is hardware dependent, and the time may be updated continuously or
     in ``ticks''.  If tp or tzp is NULL, the associated time information will
     not be returned or set.
 
     The structures pointed to by tp and tzp are defined in <sys/time.h> as:
 
     struct timeval {
             long    tv_sec;         /* seconds since Jan. 1, 1970 */
             long    tv_usec;        /* and microseconds */
     };

The gettimeofday interface was designed when a microsecond was a
long time, and for system clocks microsecond resolution is still quite
reasonable.

For DateTime it seems reasonable to adopt the highest resolution
that'll fit in a 32-bit int.  I think I'm right in saying that's
nanoseconds: the fractional part runs up to 999,999,999, comfortably
under 2**31 - 1, whereas picoseconds would overflow.

Tim.
