I'd like to second that statement.

I recall only two times when there were significant problems with new compilers and when we ran large tests on new compiler versions. One was a new version of one of the first C/370 compilers (when we discovered some questionable pieces of our code that needed to be fixed, because the new compiler no longer accepted them), and the other was the introduction of the first Enterprise PL/1 compiler, which replaced OS PL/1 V2.3, IIRC. There were even some discussions with IBM, because some semantics differed between the two PL/1 compilers; IBM fixed this after some negotiations. But that was only on the first migration; we had no problems with later EP PL/1 migrations.

All other compiler version changes worked without problems. In the last few years, we have changed the C compilers with the z/OS releases without even testing them separately; the coverage comes from the normal regression tests of our insurance math package, which we run every week.
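
Just to illustrate the idea (a minimal sketch, not our actual harness; the routine insurance_premium() and all the numbers are made up), such a regression check boils down to rerunning fixed cases and comparing against baselines recorded under the previous compiler level:

#include <stdio.h>
#include <math.h>

/* Made-up premium formula, standing in for one routine of the real package. */
static double insurance_premium(int age, double sum_insured)
{
    return sum_insured * (0.01 + age * 0.0002);
}

struct baseline {
    int    age;
    double sum_insured;
    double expected;   /* result recorded under the previous compiler level */
};

static const struct baseline cases[] = {
    { 30, 100000.0, 1600.0 },   /* illustrative values only */
    { 45, 250000.0, 4750.0 },
};

int main(void)
{
    int failures = 0;
    for (int i = 0; i < (int)(sizeof cases / sizeof cases[0]); i++) {
        double got = insurance_premium(cases[i].age, cases[i].sum_insured);
        /* Flag any deviation from the stored baseline beyond a tiny tolerance. */
        if (fabs(got - cases[i].expected) > 1e-9) {
            printf("case %d: expected %.2f, got %.2f\n",
                   i, cases[i].expected, got);
            failures++;
        }
    }
    return failures ? 1 : 0;
}

If a new compiler level changes any computed result, even subtly, the weekly run flags it long before the users do.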

Kind regards

Bernd



On 03.06.2013 15:44, Joel C. Ewing wrote:
Our experience over multiple decades was that actual compiler or language run-time bugs, or undocumented changes that affected our code, were so rare that they were the last thing to consider when application development was having code problems. Subtle misuse of the language or plain logic bugs accounted for 99.99%+ of the problems encountered. I recall a few subtle undocumented changes over three decades in either the compiler or run-time environment which caused some minor grief to applications until they were resolved or the code was modified, but the only cases which required or justified a massive recompile of everything were a very few cases where significant documented changes in compiler syntax and semantics made it clearly necessary.
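
To illustrate the kind of subtle misuse meant here (a made-up C fragment, not one of the actual cases): the order in which function arguments are evaluated is unspecified in C, so code with side effects in the arguments can compile cleanly under two compiler levels and yet print different results under each:

#include <stdio.h>

static int counter = 0;

/* Each call has a side effect, so the result printed below depends on
   the order in which the arguments are evaluated -- and the C standard
   leaves that order unspecified. */
static int next(void)
{
    return ++counter;
}

int main(void)
{
    /* Both "1 2" and "2 1" are conforming output; a new compiler level
       may legitimately switch from one to the other. */
    printf("%d %d\n", next(), next());
    return 0;
}

Both behaviors are standard-conforming, which is exactly why this class of problem looks like a "compiler bug" when a new version happens to surface it.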

It is IBM's job, not that of individual installations, to do regression testing of the compiler and language run-time environments across version changes and maintenance that is not intended to change syntax or semantics, and at least in our experience any bugs that escaped their testing were so subtle that UNLESS you re-compiled everything you would have a low probability of encountering them before someone else hit them and a resolution was available. We tested new compiler versions to validate basic functionality, but that was always just to be sure there wasn't some obvious installation configuration error on our part. Recompiling everything for each new compiler version only proves a clean compile is possible, not that the semantic behavior of the code has not been changed in some subtle way that may not surface until six months later. It always seemed to us that the much saner approach with a new compiler level was to force its use only for new development and on-going maintenance, so that any problems would be manifested in code that was already being closely monitored for unexpected behavior; any subtle issues that weren't immediately apparent would then be confined to that code, rather than unnecessarily exposing all in-house applications.

One of the big advantages of MVS has always been the deliberate design for upward compatibility, precisely so you don't have to recompile and retest all applications every time there is major maintenance to the Operating System or its component parts, which would increase programming costs by a significant factor. It is absolutely essential to have adequate program management in place to guarantee that for every application program running you also have the matching program source. It is not essential, and is just asking for problems, to attempt to recompile everything whenever there is a compiler version or maintenance change, unless that change has documented compatibility issues with old source code and previously compiled code.

