From: "Keith Moe" <ke...@sbcglobal.net>
Sent: Tuesday, January 30, 2018 11:08 AM


One of the downsides to such great optimization is the added difficulty in 
debugging.

Such optimisations are rarely requested during debugging,
when all the facilities of the compiler - such as subscript bounds checking,
checks for uninitialized variables, and so on - are employed.
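
For example, a minimal sketch (the exact set of checkable conditions, and
which of them are enabled by default, varies by compiler):

    /* SUBSCRIPTRANGE and STRINGRANGE are normally disabled; the     */
    /* condition prefix enables the checks for this procedure.       */
    (SUBSCRIPTRANGE, STRINGRANGE):
    TESTPGM: PROCEDURE OPTIONS(MAIN);
       DECLARE A(10) FIXED BINARY (31);
       DECLARE I     FIXED BINARY (31);
       I = 11;
       A(I) = 0;      /* raises SUBSCRIPTRANGE instead of silently   */
                      /* corrupting storage                          */
    END TESTPGM;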

Programs will often have code that leaves footprints and saves various values
in work areas for diagnostic reasons.

They might, but at run time, should an error occur in a PL/I program,
the error-recovery unit (the ON-unit) can print the values of any requested variables.
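
Something along these lines, as a rough sketch (the classic ON ERROR idiom;
the variable names are just illustrative):

    DEBUGDEMO: PROCEDURE OPTIONS(MAIN);
       DECLARE (X, Y) FIXED DECIMAL (7,2);
       ON ERROR
          BEGIN;
             ON ERROR SYSTEM;       /* avoid recursion if the PUT    */
                                    /* itself raises ERROR           */
             PUT SKIP DATA (X, Y);  /* prints X= ... Y= ...;         */
          END;
       X = 5;  Y = 0;
       X = X / Y;                   /* ZERODIVIDE escalates to ERROR */
                                    /* and the ON-unit prints X, Y   */
    END DEBUGDEMO;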

Also, for diagnostic purposes (either during debugging or during production),
values can be written to a file.
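
For instance (a sketch only; TRACE here is a made-up file name that would
have to be associated with a real data set, e.g. via a DD statement):

    TRACED: PROCEDURE OPTIONS(MAIN);
       DECLARE TRACE   FILE STREAM OUTPUT PRINT;
       DECLARE STEP_NO FIXED BINARY (31)   INIT (0);
       DECLARE TOTAL   FIXED DECIMAL (9,2) INIT (0);

       STEP_NO = 1;  TOTAL = 100.25;
       PUT FILE (TRACE) SKIP DATA (STEP_NO, TOTAL);
          /* writes  STEP_NO= 1  TOTAL= 100.25;  to the trace file   */
    END TRACED;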

Many optimization algorithms will detect
that these areas are "never referenced" after being set and eliminate
the code that sets or stores these values.
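
Something like this illustrates the concern (a sketch; whether the stores
really disappear depends on the compiler and the optimization requested):

    DEMO: PROCEDURE OPTIONS(MAIN);
       DECLARE LAST_STEP CHARACTER (8);    /* footprint meant only   */
                                           /* for readers of a dump  */
       DECLARE TOTAL     FIXED BINARY (31) INIT (0);
       DECLARE I         FIXED BINARY (31);

       LAST_STEP = 'LOOP';      /* stored but never referenced again, */
       DO I = 1 TO 100;         /* so a dead-store pass may remove    */
          TOTAL = TOTAL + I;    /* both assignments to LAST_STEP      */
       END;
       LAST_STEP = 'DONE';
       PUT LIST (TOTAL);
    END DEMO;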

See previous note.

Another optimization that makes debugging difficult is the inlining of
subroutines that are called from only one place, to save the overhead of
the linkage.  But then the generated mapping of source statements to
machine code/assembler no longer matches the original source.
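
For instance (a sketch; whether BUMP is actually expanded in line depends
on the compiler and the optimization level requested):

    MAINP: PROCEDURE OPTIONS(MAIN);
       DECLARE COUNT FIXED BINARY (31) INIT (0);

       CALL BUMP;               /* the only call site                 */
       PUT LIST (COUNT);

       BUMP: PROCEDURE;         /* called from exactly one place, so  */
          COUNT = COUNT + 1;    /* the optimizer may expand it in     */
       END BUMP;                /* line and drop the CALL/RETURN      */
                                /* linkage entirely                   */
    END MAINP;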

I suppose it is possible, but the PL/I optimising compilers clearly
show where any moved code is placed.  This is shown in the
assembler listing.

Sure, there are various tricks that can be used to prevent such optimization,

Such as not requesting optimising.

but that partially defeats the value of using a high-level language when you have to think about how to work around it.

When you spend a lot of time debugging problems occurring in customer production environments, life can be difficult.

See earlier note about printing values from the error-recovery unit.

Optimization is great until it isn't!

Keith Moe
Lead Developer
BMC Software, Inc.

