I've heard roughly the same story from another Murray Hill-ite. What
I find surprising about it is that most of the people involved in
Unics, later Unix, had worked on Multics. Multics, at least by the
1980s when I was using it, represented time as a FIXED BIN(71)
(i.e. a double-word quantity), counting microseconds from 00:00
1-1-1900. It was explicitly signed, so it could handle dates tens
of millions of years in each direction. It needed microsecond
resolution, if not accuracy, because the clock was interlocked so
that no two processes could obtain the same timestamp, which meant
it could be used directly to produce unique strings. For this
reason the clock also had to be monotonic, which made for
entertainment every autumn: although the OS supported per-process
timezones, it didn't handle summer time correctly.
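As a back-of-the-envelope check on that range, here is a minimal
C sketch (not Multics code; the 71 magnitude bits and a 365.25-day
year are the only inputs):

#include <math.h>
#include <stdio.h>

int main(void)
{
        /* FIXED BIN(71): a sign bit plus 71 magnitude bits of microseconds. */
        double max_us      = ldexp(1.0, 71);          /* 2^71 microseconds */
        double us_per_year = 1e6 * 86400.0 * 365.25;  /* microseconds per year */

        /* Prints roughly +/- 74.8 million years. */
        printf("range: +/- %.1f million years\n", max_us / us_per_year / 1e6);
        return 0;
}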
ian
Multics ran on a GE/Honeywell 36-bit machine and the PDP-11
was a 16-bit machine. Both used a double word for time.
There was no microsecond resolution possible on UNIX; the
minimum quantum of time was 1/60th of a second. At least be
thankful time was 32 bits in UNIX and not 16 (see the sketch
after the declarations below); there were only a few places in
the kernel where a long was even used, and time was one of them:
int  cputype;   /* type of cpu =40, 45, or 70 */
int  lbolt;     /* time of day in 60th's, not in time */
long time;      /* time in sec from 1970 */
long tout;      /* time of day of next sleep */
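Those widths matter. A rough calculation (my own arithmetic, not
anything from the V6 source) shows why 16 bits would have been
hopeless for a seconds counter:

#include <stdio.h>

int main(void)
{
        /* Ranges of signed seconds counters of each width. */
        double max16 = 32767.0;         /* 2^15 - 1 seconds */
        double max32 = 2147483647.0;    /* 2^31 - 1 seconds */

        printf("16-bit time: about %.1f hours\n", max16 / 3600.0);
        printf("32-bit time: about %.1f years\n", max32 / (86400.0 * 365.25));
        return 0;
}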
Everything else was an int or register. In those days the UNIX
kernel code for the time system call was all of this:
gtime()
{
        u.u_ar0[R0] = time.hiword;
        u.u_ar0[R1] = time.loword;
}
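For the curious, here is what that hiword/loword split amounts to
in modern C: a 32-bit seconds count handed back as two 16-bit
halves, the way the PDP-11 returned it in R0 and R1. The
timestamp value is a made-up example, not anything from V6:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* Hypothetical timestamp, standing in for the kernel's 32-bit time. */
        uint32_t now = 123456789;

        uint16_t hiword = now >> 16;     /* what gtime() put in R0 */
        uint16_t loword = now & 0xffff;  /* what gtime() put in R1 */

        /* A caller reassembles the two halves into the full 32-bit value. */
        uint32_t t = ((uint32_t)hiword << 16) | loword;
        printf("hi=%u lo=%u time=%u\n",
               (unsigned)hiword, (unsigned)loword, (unsigned)t);
        return 0;
}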
The extent of timekeeping was that the lbolt (as in lightning
bolt) variable was incremented on each tick of the 60 Hz
line-frequency clock interrupt:
if (++lbolt >= HZ) {
        lbolt =- HZ;    /* old-style C assignment op: lbolt -= HZ */
        ++time;
}
That's all there was to timekeeping in UNIX; 3 lines of code.
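Translated into modern C and wrapped in just enough scaffolding
to run, the whole mechanism looks like the sketch below; tick()
is my stand-in name for the clock interrupt handler, not a V6
identifier:

#include <stdio.h>

#define HZ 60                 /* line-frequency ticks per second */

static int  lbolt;            /* ticks accumulated within the current second */
static long time_s;           /* seconds since the epoch ("time" in V6) */

/* Stand-in for the clock interrupt: the V6 fragment above,
 * with =- spelled -= as modern C requires. */
static void tick(void)
{
        if (++lbolt >= HZ) {
                lbolt -= HZ;
                ++time_s;
        }
}

int main(void)
{
        int i;
        for (i = 0; i < 150; i++)   /* simulate 2.5 seconds of ticks */
                tick();
        printf("time=%ld lbolt=%d\n", time_s, lbolt);  /* time=2 lbolt=30 */
        return 0;
}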
/tvb