This most interesting discussion of the variability of the CPU consumed by
the same unit of work prompts me to ask the following question:

How is the normal application programmer who is tasked with reducing the
CPU consumed by an application process (let's say a complex batch
process for the sake of argument) supposed to judge whether he has
improved the process or not?

Granted, there are frequently obvious performance design flaws that will
significantly impact CPU consumption (sequential searches on large
in-storage tables vs. binary searches, to take one simple example; see
the sketch below), but once the obvious improvements have been made, how
does a normal programmer accurately measure a change in the performance
of a process when CPU variability may completely mask any effect that
source-code changes may or may not have had?
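(For concreteness, here is the sort of flaw I mean, sketched in C rather
than in any particular shop's language; the table layout, key format, and
sizes are all invented for illustration. The sequential scan costs O(n)
compares per lookup, the binary search O(log n) on the same key-sorted
table.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define TABLE_SIZE 100000

    /* Hypothetical table entry: an 8-character key and a payload. */
    struct entry {
        char key[9];      /* null-terminated for strcmp */
        int  payload;
    };

    /* Sequential scan: about n/2 compares per successful lookup. */
    static const struct entry *seq_find(const struct entry *tab,
                                        size_t n, const char *key)
    {
        for (size_t i = 0; i < n; i++)
            if (strcmp(tab[i].key, key) == 0)
                return &tab[i];
        return NULL;
    }

    /* Binary search on a key-sorted table: about log2(n) compares. */
    static const struct entry *bin_find(const struct entry *tab,
                                        size_t n, const char *key)
    {
        size_t lo = 0, hi = n;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            int cmp = strcmp(key, tab[mid].key);
            if (cmp == 0) return &tab[mid];
            if (cmp < 0)  hi = mid;
            else          lo = mid + 1;
        }
        return NULL;
    }

    int main(void)
    {
        static struct entry tab[TABLE_SIZE];
        /* Build a table of synthetic keys, already in sort order. */
        for (int i = 0; i < TABLE_SIZE; i++) {
            snprintf(tab[i].key, sizeof tab[i].key, "K%07d", i);
            tab[i].payload = i;
        }
        const struct entry *a = seq_find(tab, TABLE_SIZE, "K0099999");
        const struct entry *b = bin_find(tab, TABLE_SIZE, "K0099999");
        printf("seq: %d  bin: %d\n",
               a ? a->payload : -1, b ? b->payload : -1);
        return 0;
    }

On a 100,000-entry table that is the difference between roughly 50,000
compares and about 17 per average lookup, which will show up in CPU
consumed no matter how noisy the measurement.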

More to the point, how does a programmer show management what the
improvement will be from a given set of changes when the results may be
completely unrepeatable, depending on system load?  Application
programmers do not (usually) have the luxury of dedicated machines on
which to run performance tests.

Add to that the fact that developers' test environments rarely have the
same priority as production environments, and so are subject to even
more delays and therefore cache misses, along with all the other
performance killers mentioned in this thread.  How can a programmer
accurately predict, for management approval, what the production
performance improvement will be when the test environment is so much
more poorly served than production?  When even successive runs on the
same test machine at the same time of day produce large and
unexplainable differences in performance?
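(To make the question concrete: even a crude harness like the one below,
which simply runs the same unit of work repeatedly and reports the
spread of its CPU times, routinely shows noticeable run-to-run variation
on a shared system. This is a sketch in standard C; the workload, the
repetition count, and the use of clock() as a CPU timer are placeholders,
not a recommendation.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define RUNS 20   /* arbitrary repetition count */

    static volatile long n_iter = 5000000L;  /* opaque to the optimizer */
    static volatile long sink;

    /* Stand-in for the unit of work being measured. */
    static void unit_of_work(void)
    {
        long sum = 0;
        for (long i = 0; i < n_iter; i++)
            sum += i % 7;
        sink = sum;
    }

    static int cmp_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        double samples[RUNS];

        for (int r = 0; r < RUNS; r++) {
            clock_t t0 = clock();    /* CPU time, not wall-clock */
            unit_of_work();
            clock_t t1 = clock();
            samples[r] = (double)(t1 - t0) / CLOCKS_PER_SEC;
        }

        qsort(samples, RUNS, sizeof samples[0], cmp_double);
        printf("min %.4fs  median %.4fs  max %.4fs  spread %.1f%%\n",
               samples[0], samples[RUNS / 2], samples[RUNS - 1],
               100.0 * (samples[RUNS - 1] - samples[0]) / samples[0]);
        return 0;
    }

Comparing medians over many runs at least makes the variability visible,
but as this thread suggests, two sets of medians taken on differently
loaded systems may still not be comparable.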

Please correct me if I am wrong, but much of this discussion leads me to
believe that we are saying there is essentially no reasonable way to
predict or model the performance of a given application process.  That
just seems so *wrong* to me.  Measuring and accurately reporting an
application performance improvement should NOT require an advanced
degree in hardware engineering and the ability to design around the
impact of missed cache lines.

I am hoping I am wrong about my interpretation of this information.
Counter-examples and/or corrections to my understanding of the subject
gratefully accepted.

Peter