On 17/01/19 4:45 PM, Larry Martell wrote:
On Wed, Jan 16, 2019 at 9:35 PM Avi Gross <avigr...@verizon.net> wrote:

Chris,

The comparison to Y2K was not a great one. I am not sure what people did in
advance, but all it took was to set the clock forward on a test system and
look for anomalies. Not everything would be found, but it gave some hints.

Clearly you did not live through that. I did, and I got over 2 years of
real work from it. Companies hired me to check their code and find
their Y2K exposures. Things like a hard-coded '19' being added to a
2-digit year. Or code that only allocated 2 bytes for the year. I could
go on and on. At one client I found over 4,000 places in their
code that needed to be modified. And there was no widespread use of
VMs that you could easily and quickly spin up for testing. It was a
real problem, but because of many people like me, it was dealt with.
Now the next thing to deal with is the Jan. 19, 2038 problem. I'll be
80 then, but probably still writing code. Call me if you need me.
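
For anyone who never met these bugs in the wild, both patterns fit in a few lines of Python. This is only an illustrative sketch - the function names are invented for the example, not taken from any real Y2K codebase - but it shows the hard-coded '19' century, the kind of windowing fix many remediations applied, and the signed 32-bit wrap behind the 2038 date:

    import struct
    from datetime import datetime, timezone

    # Hypothetical legacy-style routine: the year is stored as two digits
    # and a hard-coded "19" is prepended whenever four digits are needed.
    def full_year_legacy(yy):
        return "19" + yy                     # fine until 2000, then wrong

    # A windowing fix of the sort many Y2K remediations used: two-digit
    # years at or above the pivot are read as 19xx, the rest as 20xx.
    def full_year_windowed(yy, pivot=70):
        return ("19" if int(yy) >= pivot else "20") + yy

    print(full_year_legacy("99"))            # 1999 - correct
    print(full_year_legacy("02"))            # 1902 - the classic Y2K bug
    print(full_year_windowed("02"))          # 2002

    # The 2038 problem in miniature: a Unix timestamp kept in a signed
    # 32-bit field wraps one second after 03:14:07 UTC on 19 Jan 2038.
    t = 2 ** 31                              # one second past the limit
    wrapped, = struct.unpack("<i", struct.pack("<I", t))
    print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
    # -> 1901-12-13 20:45:52+00:00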


Same.

The easy part was finding the hardware, configuring identical systems, and changing the date-era. Remember that this work pre-dated TDD, so we pretty much re-designed entire testing suites! The most difficult work was with the oldest systems - for which there was no, little, or worthless documentation, and usually no dev staff with 'memory'.

Then there were the faults in the OpSys and systems programs on which we could supposedly rely - I still have a couple of certificates somewhere, for diagnosing faults which MSFT had not found... Multi-layer fault-finding is an order of magnitude more difficult than Python debugging alone!

I'm told there are fewer and fewer COBOL programmers around, and those that survive can command higher rates as a consequence. Would going 'back' to that be regarded as "up" skilling?

Does this imply that there might one day be a premium chargeable by Py2.n coders?

--
Regards =dn
