Larry,

I was not belittling the effort it took to make sure Y2K was not a disaster. I 
was making the comparison that you could start simulating the problem in 
advance, just as you can easily test older Python code against newer interpreters.

My work in those days was in places where we did not do much of what you point 
out, such as hardcoding a "19" or using a condensed date format. A cousin of 
mine died recently at 105 and his brother is now 103; clearly they would not fit 
well in a two-digit year field, as they might be placed in kindergarten. Weird 
assumptions were often made because saving space in memory or on tape was 
considered essential.

The forthcoming UNIX 2038 problem will, paradoxically, happen on January 19. I 
wonder what they will do long before then. Will they just add a byte, or four, 
or 256, and make a date measurable in picoseconds? Or will they start using a 
number format that can easily deal with 1 million B.C. and 5 billion A.D., just 
in case we escape Earth before it co-locates with the expanding sun?
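
Just to pin the date down, here is a small sketch of mine showing where the 
signed 32-bit second counter runs out:

from datetime import datetime, timezone

last_tick = 2**31 - 1   # largest value a signed 32-bit time_t can hold
print(datetime.fromtimestamp(last_tick, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -- one second later the counter wraps negative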

This may be a place where Python's unlimited integers could be useful, but with 
some hidden gotchas, as I can think of places where longer ints are not 
trivially supported.
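
A tiny illustration of my own of both sides (not pointing at any particular 
library): the int itself has no fixed width, but other layers still impose 
their own limits.

from datetime import MAXYEAR

seconds = 2**62                 # a Python int never overflows; it just grows
print(seconds.bit_length())     # 63
print(MAXYEAR)                  # 9999 -- datetime itself cannot go past this,
                                # so a huge timestamp cannot become a datetime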

You actually are sort of making my point clearer. The Y2K problem was 
"partially" due to the fact that code people wrote was not expected to be 
around very long, and yet some sort of inertia kept plenty of it around; even 
when new code came along, it tried to emulate the old. I can well understand 
people holding on to what they know, especially when the new stuff is likely to 
keep changing. Some of that legacy code is probably very un-Pythonic, a 
relatively minimal translation from how it was done in C, when that is 
possible. Arguably such code might be easier to port, as it may use just 2% of 
what Python makes available. No functional programming, no classes/objects, 
just ASCII text and while loops. 😊

Remember when file names were very limited? DOS was 8 by 3. UNIX was once 14 
characters, with the optional period allowed anywhere, including at the 
beginning. People often made somewhat compressed names, and a two-character 
year was all you had room for. I have data I sometimes work with containing 
hundreds of variables with 8-character names that drive me nuts, as they are 
not meaningful to me. I often read them in and rename them. How would you 
encode, say, a test score taken on December 22, 2013? "12222013" would use up 
the entire allotment.
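
Just to show the squeeze with a quick sketch of mine (nothing to do with any 
real dataset):

from datetime import date

d = date(2013, 12, 22)
print(d.strftime("%m%d%Y"))   # 12222013 -- all eight characters spent on the date
print(d.strftime("%y%j"))     # 13356    -- two-digit year plus day-of-year, five chars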

Back to Python, I see lots of mission creep mixed in with nice features, to the 
point where it stopped being a particularly simple language many releases ago. 
Some of that may be part of what makes upgrading some code harder. Many 
additions include seemingly endless dunder variables that enable some 
protocols. I mean, __add__ may seem a reasonable way to define what happens 
when a binary "+" operation is applied, but then you have the right-hand 
version (__radd__) and the in-place version (__iadd__), as well as symbols with 
similar and sometimes confusing meanings, and many that do not correspond to 
symbols in the language at all, such as __iter__ and __next__, whose presence 
drives an added iteration protocol. The zoo can be confusing, and new stuff 
keeps being added, including in modules outside the formal language. It is 
great and can separate the women from the girls, so to speak.
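
As a sketch of my own (a toy class, not anything from the standard library), 
here is how a few of those dunder hooks interlock:

class Bag:
    def __init__(self, items):
        self.items = list(items)

    def __add__(self, other):          # bag + other
        return Bag(self.items + list(other))

    def __radd__(self, other):         # other + bag, when other gives up
        return Bag(list(other) + self.items)

    def __iadd__(self, other):         # bag += other, in place
        self.items.extend(other)
        return self

    def __iter__(self):                # makes for-loops and list() work
        return iter(self.items)

b = Bag([1, 2])
b += [3]                  # uses __iadd__
print(list(b + [4]))      # [1, 2, 3, 4] via __add__
print(list([0] + b))      # [0, 1, 2, 3] via __radd__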

Lots of nice, convenient and powerful functionality can be added by placing 
such methods or variables in objects directly or in associated objects. But I 
see it as a tad clearer in a language like R, where you can name an operator in 
some form of quotes. I do not recommend redefining the plus operator as I am 
about to do, but note that base R does not concatenate strings with "+"; you 
generally use the paste() function. (Ignore the prompt of "> " in the example below.)

> "hello" + "world"
Error in "hello" + "world" : non-numeric argument to binary operator
> '+' <- function(x, y) paste(x, y, sep="+++")
> "hello" + "world"
[1] "hello+++world"
> "one" + "two" + "three"
[1] "one+++two+++three"

The point is you know what is being changed, rather than having to identify 
what a "word" like __iadd__ means, or know that when it is absent, the regular 
__add__ is used. And you can extend the language this way quite a bit, such as 
by making operators between percent signs like "%*%" or "%matrixmult%" or 
whatever. Heck, there are many versions of pipes made using "%>%" or other such 
invented symbols that make writing some code easier and clearer.
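
The closest Python counterpart I can sketch (my own toy subclass, nothing 
standard) has to go through a wrapper type, since you cannot rebind "+" itself:

class Plussy(str):
    def __add__(self, other):
        return Plussy(str(self) + "+++" + str(other))

print(Plussy("hello") + "world")        # hello+++world
print(Plussy("one") + "two" + "three")  # one+++two+++three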

I am not saying the language is overall better, or that some of these features 
would be a good idea to graft on now. Just that some things may be more 
intuitive, and as Python keeps adding to a namespace, the meaning of a class 
keeps getting more amorphous and sometimes has so much loaded into it that it 
fills multiple pages. I would not show most beginners too much of Python at 
first, or I would ask them to take some things on faith for a while. A language 
that initially claimed to be designed to do things pretty much ONE way has 
failed miserably at that, as I can do almost anything a dozen ways. That is NOT 
a bad thing. As long as the goal is for someone to learn how to do something in 
any one way, it is great. If you want them to be able to read existing code and 
modify it, it can be a headache, especially when people abuse language 
features. And yes, I am an abuser in that sense.

-----Original Message-----
From: Python-list <python-list-bounces+avigross=verizon....@python.org> On 
Behalf Of Larry Martell
Sent: Wednesday, January 16, 2019 10:46 PM
To: Python <python-list@python.org>
Subject: Re: Pythonic Y2K

On Wed, Jan 16, 2019 at 9:35 PM Avi Gross <avigr...@verizon.net> wrote:
>
> Chris,
>
> The comparison to Y2K was not a great one. I am not sure what people 
> did in advance, but all it took was to set the clock forward on a test 
> system and look for anomalies. Not everything would be found but it gave some 
> hints.

Clearly you did not live through that. I did and I got over 2 years of real 
work from it. Companies hired me to check their code and find their Y2K 
exposures. Things like a hard coded '19' being added to a 2 digit year. Or code 
that only allocated 2 bytes for the year. I could go on and on. At one client I 
had I found over 4,000 places in their code that needed to be modified. And 
there was no widespread use of VMs that you could easily and quickly spin up 
for testing. It was a real problem but because of many people like me, it was 
dealt with.
Now the next thing to deal with is the Jan. 19, 2038 problem. I'll be
80 then, but probably still writing code. Call me if you need me.
--
https://mail.python.org/mailman/listinfo/python-list

