Timothy -

Two points if I may.

The compiler has no way of knowing whether it is spending $10 to save $0.01,
absent a parameter saying "we intend to run this program 'n' times before it
is next compiled." Is this a year-end program, a one-shot fix, or a frequent
CICS transaction? I think that, given OPT(max possible valid value), a
compiler should do the best job it can.

And "memory is managed in "4 KB, 1 MB, or 2 GB page sizes anyway" cuts both
ways. One implication is that *one* extra byte in an optimized program could
cost 2 GB. I think you have to operate on the basis that one byte costs one
byte, even if what that means is that statistically one byte has a one in
two billion chance of costing 2 GB.
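
(Back-of-envelope, to make the statistical claim concrete: 2 GB is
2,147,483,648 bytes, so if one extra byte carries a 1 in 2,147,483,648
chance of forcing an extra 2 GB page, its expected cost is
(1/2,147,483,648) x 2,147,483,648 bytes -- exactly one byte.)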

Charles

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Timothy Sipples
Sent: Thursday, October 02, 2014 10:19 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Enterprise COBOL v5.1 Implemented?

>MOVE "BUBBA" to WS-DESC.

How about this example instead:

MOVE "Version 6.3.2. Copyright 2014 BUBBA Coders Inc. Licensed to GUMP Inc."
to WS-DESC.

With the stipulation that this second example may not be brilliant, should
the optimizer remove that "unreachable" "watermarking" literal?
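
For concreteness, here is a minimal sketch of the pattern I mean -- the
surrounding program is invented, and I've shortened the literal to fit the
line; only WS-DESC comes from your example:

       WORKING-STORAGE SECTION.
       01  WS-DESC        PIC X(80) VALUE SPACES.
       PROCEDURE DIVISION.
      *    WS-DESC is written here but never read afterwards, so a
      *    dead-store eliminator may legally drop the MOVE -- and the
      *    "watermark" literal disappears from the load module with it.
           MOVE "Copyright 2014 BUBBA Coders Inc." TO WS-DESC.
           GOBACK.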

Keep in mind also that optimizers have their own performance characteristics
to respect. If you're spending $10 to save $0.01, that's
(also) a waste of resources. Much like compression algorithms, optimizers
ought not do everything possible. They should only do what's "prudent,"
"wise," and "reasonable" to do -- no more. There's also some non-zero risk
that if an optimizer takes a comparatively low-reward action it could do so
improperly -- that there's an optimizer bug, in other words. Bugs can be
expensive, too.

A new optimizer is about the present and future. Today's COBOL compiler is
no longer tasked with optimizing for System/370 machines. It's a fair
generalization that reducing compiled code size by X bytes is much, much
less important in 2014 than it was in 1982, where X is constant, ceteris
paribus. It's also a fair assumption that it will get even less important
over time. And you've also got to consider that everything gets rounded up
to 4 KB, 1 MB, or 2 GB page sizes anyway (and minimum storage block sizes,
relatedly) with the smaller page sizes progressively falling by the wayside
over time. Bytes as bytes "don't matter" to a first, second, and probably
third order approximation. (Maybe the optimizer only takes action if the
next page increment is approached, and then only enough to remain within the
lowest reasonably achievable page count but no more?)
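
(To make the rounding concrete: with 4 KB pages, a 4,000-byte module and a
4,096-byte module both occupy one page, while 4,097 bytes takes two. Between
boundaries, bytes saved buy nothing; exactly at a boundary, one byte saved
buys back a whole page.)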

To net it out, for at least a few reasons I don't think this example
demonstrates that an optimizer isn't properly doing its job. In fact, if the
optimizer were *always* taking preemptive action here you might have
discovered a poor optimizer! The run-time world is nuanced, complicated, and
certainly not static. A sensible optimizer knows when *not* to take action
just as much as when to act, and this one might be such an example.
"Minimize the compiled code's memory footprint, in bytes" isn't actually a
sensible priority objective for a (modern) optimizer with the possible
exception of an optimizer targeting a comparatively memory constrained
environment, e.g. an embedded "smart card" processor.
