>   Ah.  Well.  Obviously, I didn't "get it".  Maybe the joke's been
> made too often, and isn't funny anymore.  Or maybe I like jokes, but
> when they're used in place of serious discussion, instead of along
> with it, I get irritated.  Meh.

I am truly sorry to have irritated you.

>   In the case of the C library, the intent was to have the library
> routine return the last two digits of the year.  It did that not to
> save storage, but because *that's how people think*.
> 

Well, I was not standing right beside Ken and Dennis when they created
the code (if it even was them; it could have been someone else), so I
do not know what they were thinking.  But I was writing code on
4-kiloword computers at Western Electric in 1969, and later on 1 MB
"mainframe" computers at Aetna Life and Casualty in 1973, and I can
tell you that a main consideration in using just the last two digits of
the year was to save storage.  It sounds stupid in retrospect, and in
retrospect it probably was, but repeating the "19" of "19xx" millions
of times in records was considered "wasteful".  Yes, I remember issues
with the century mark coming up even back then, because there were
people born in "65", and with only two digits the computer assumed that
meant 1965 and that they were just 10 years old and dying of
Alzheimer's in 1975.  As it turned out, they were actually born in 1865
and were dying of Alzheimer's at 110.  So people knew, even in 1973,
that the century mark would be an issue.  But they still stubbornly
used the two digits to save space.  It was not just "how they thought".
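
Just to make the failure mode concrete, the arithmetic looks something
like this (a made-up fragment in C, not code from any real system of
the time):

#include <stdio.h>

/* Hypothetical illustration: a birth year stored as just "65" to save
 * two characters is assumed by the program to mean 1965, so a person
 * born in 1865 shows up as 10 years old in 1975 instead of 110. */
int main(void)
{
    int two_digit_birth = 65;      /* all that was stored in the record */
    int current_year    = 1975;

    int assumed_birth = 1900 + two_digit_birth;   /* what the program assumed */
    int actual_birth  = 1800 + two_digit_birth;   /* what was really meant    */

    printf("assumed age: %d\n", current_year - assumed_birth);   /* 10  */
    printf("actual age:  %d\n", current_year - actual_birth);    /* 110 */
    return 0;
}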

I explained why I thought Dennis and Ken might have ignored the issue,
and perhaps another reason they ignored it was that they never expected
Unix to take off the way it did.  Unix was a research vehicle for them
and a "hobby".  They were neither looking forward to 2000 nor back to
1900, and most of the calculations that they were doing did not depend
on those time frames.  Certainly it was not needed for "Space Wars" or
"Hunt the Wumpus", nor for troff, etc.

Lotus 1-2-3, on the other hand, was written as a commercial product
from the get-go, and it was also not the original spread-sheet.
"Visicalc" and "Supercalc" and a bunch of other ones were out there,
and were being used to do things like calculate 30-year mortgages, even
in 1983.  But the Lotus programmer probably looked at the year 2000,
saw that it was a leap year, and assumed that 1900 was too.  Again, I
do not know what went on in the mind of the programmer who wrote the
code.  But by 1983, they definitely knew the consequences of the
"multi-century spread".  And they should have known how to calculate
leap years.
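
For reference, the full Gregorian rule fits in a couple of lines of C
(a small sketch, not anything taken from Lotus):

#include <stdio.h>

/* Gregorian leap-year rule: divisible by 4, except that century years
 * are not leap years unless they are also divisible by 400.  So 2000
 * is a leap year, but 1900 is not. */
static int is_leap_year(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main(void)
{
    printf("1900 is %sa leap year\n", is_leap_year(1900) ? "" : "not ");
    printf("2000 is %sa leap year\n", is_leap_year(2000) ? "" : "not ");
    return 0;
}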

Yet I agree with you that this is all "water over the dam".  And since
we do not know what really went on in the minds of the programmers and
product managers at that fateful spread-sheet company, we can only
collectively shake our fingers at them.

>   It actually sounds like OOXML isn't as bad as the C standard,
> because at least OOXML gives you the *option* of not incorporating the
> bug.
> 

I believe that you are looking at it from the viewpoint of the user,
and yes, they have the option of which behavior they use.

From the viewpoint of the implementer of a future product, the product
now has to implement both "options" to be considered supporting the
"standard".  This is, of course, a "Long stupid debate on OOXML and the
year 1900".  But it is actually a larger debate between "compatibility
with the past" and the way we should handle things in the future.
Should a new standard actually propagate something that is so
inherently wrong?  Or should that "wrongness" be handled another way?
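
To make that burden concrete, here is a rough sketch (in C, with
made-up function names, and assuming the usual convention that serial
day 1 is 1 January 1900) of what a new reader ends up carrying around:
one code path for the real calendar and one for the compatibility mode
that includes the 29 February 1900 that never existed:

#include <stdio.h>

static int is_leap(int y)
{
    return (y % 4 == 0 && y % 100 != 0) || (y % 400 == 0);
}

/* Convert a spread-sheet serial day number to year/month/day.  With
 * compat == 1 the phantom 29 February 1900 is assumed to exist, so
 * every serial number after it is shifted back by one real day.
 * (Serial 60, the phantom day itself, would need its own special
 * case; it is left out of this sketch.) */
static void serial_to_date(long serial, int compat, int *yy, int *mm, int *dd)
{
    static const int mlen[] = {31,28,31,30,31,30,31,31,30,31,30,31};
    long days = serial - 1;            /* serial 1 == 1 January 1900 */

    if (compat && serial > 60)
        days--;                        /* skip the day that never existed */

    int y = 1900;
    for (;;) {
        int ylen = is_leap(y) ? 366 : 365;
        if (days < ylen)
            break;
        days -= ylen;
        y++;
    }

    int m = 0;
    for (;;) {
        int len = mlen[m] + (m == 1 && is_leap(y));
        if (days < len)
            break;
        days -= len;
        m++;
    }

    *yy = y;  *mm = m + 1;  *dd = (int)days + 1;
}

int main(void)
{
    int y, m, d;
    serial_to_date(61, 0, &y, &m, &d);
    printf("strict: serial 61 = %04d-%02d-%02d\n", y, m, d);  /* 1900-03-02 */
    serial_to_date(61, 1, &y, &m, &d);
    printf("compat: serial 61 = %04d-%02d-%02d\n", y, m, d);  /* 1900-03-01 */
    return 0;
}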

Another recurring theme in the ECMA standard was that parts were
specified by "as it happens in" some existing product.  Should a new
standard simply say "it should be the way that it happens in 'Lotus
1-2-3'", or should it spell out what happens in 'Lotus 1-2-3' so that
anyone can implement it without having to have the source code for
Lotus?

I would prefer that a standard reflect the way the calendar really
works, and the path forward, and that particular products interested in
compatibility implement extensions or compatibility modes to accept
legacy documents.  These implementations would flag such "non-standard"
extensions the same way that non-standard extensions have been flagged
in language implementations.  Coders (or in this case spread-sheet
writers) could then stay away from them.

It might result in the same thing at first: products that would take in
both the "correct" implementation and the "incorrect" one.  But it
would tend to give direction for producing new documents over time.

There might also then be a movement to flag documents that were not
standard and convert them to something standard.

Maybe someday someone will create an update to the standard for Unix
which will have six digits for the year, and then we will be set for a
good long time.
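
As a small aside on that, the C library's struct tm already stores the
year as an int counting from 1900 (tm_year), so a six-digit year is
representable today; whether everything downstream copes with it is
another question.  A quick sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct tm t = {0};
    t.tm_year = 100000 - 1900;   /* the year 100000 */
    t.tm_mon  = 0;               /* January */
    t.tm_mday = 1;

    char buf[64];
    /* %Y prints the year with century; most libcs will happily print
     * all six digits here, though nothing else in the system is
     * guaranteed to cope with a date that far out. */
    strftime(buf, sizeof buf, "%Y-%m-%d", &t);
    printf("%s\n", buf);
    return 0;
}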

md


