Ted MacNEIL wrote:
Also, for the poster that asked about CPU usage.
Who cares? This entire CICS sub-system is using less than 5% of the processor.
The only one being impacted is this sub-system.
CICS cannot sustain that rate very long without response implications.

We need to know the cost per I/O to size the work effort to repair or remove.

the science center had done a whole lot of work on performance ...
http://www.garlic.com/~lynn/subtopic.html#545tech

lots of work on instrumenting much of the kernel activity and collecting the statistics at regular intervals (every ten minutes, 7x24 ... eventually with historical data that spanned decades).
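a minimal sketch (python, in modern terms) of that style of interval data collection ... the counter names, the ten-minute interval constant, and the jsonl history file are illustrative stand-ins, not the actual kernel counters or record format:

import json
import time

INTERVAL_SECONDS = 600  # ten-minute intervals, as in the original collection

def read_activity_counters():
    # placeholder: read cumulative activity counters (cpu time, i/o
    # counts, paging, ...) from wherever the system exposes them
    return {"cpu_seconds": 0.0, "io_count": 0, "page_reads": 0}

def collect(history_path="activity_history.jsonl"):
    prev = read_activity_counters()
    while True:
        time.sleep(INTERVAL_SECONDS)
        cur = read_activity_counters()
        # keep per-interval deltas, timestamped, for later analysis
        rec = {k: cur[k] - prev[k] for k in cur}
        rec["timestamp"] = time.time()
        with open(history_path, "a") as f:
            f.write(json.dumps(rec) + "\n")
        prev = cur

the accumulated history is the sort of interval activity data that the regression analysis further down operates on.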

it also did a lot of event modeling for characterizing system operation

and it did the apl analytical system model that eventually turned into the "performance predictor" on hone.

hone was an online (cp/cms based) time-sharing system that supported all the field, sales and marketing people in the world (major replicas of hone were positioned all around the world).
http://www.garlic.com/~lynn/subtopic.html#hone

eventually all system orders had to be processed by some hone application or another. the "performance predictor" was a "what-if" type application for sales ... you entered some amount of information about customer workload and configuration (typically in machine readable form) and asked "what-if" questions about changes in workload or configuration (the type of stuff that is normally done by spreadsheet applications these days).
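the actual model behind the performance predictor isn't described here; the following is only a toy analytic "what-if" sketch (python, simple m/m/1-style approximation, made-up numbers) of the idea of entering a workload description and asking what a change does to response:

def predicted_response(cpu_per_txn, txn_rate):
    # toy analytic model: response time blows up as utilization
    # approaches 1.  the real performance predictor was a far more
    # detailed apl model; this only illustrates the "what-if" idea.
    utilization = cpu_per_txn * txn_rate
    if utilization >= 1.0:
        return float("inf")  # saturated
    return cpu_per_txn / (1.0 - utilization)

# baseline: 20ms cpu per transaction, 30 transactions/sec
base = predicted_response(0.020, 30.0)
# what-if: workload grows 20%
grown = predicted_response(0.020, 36.0)

print("baseline response: %.1f ms" % (base * 1000))
print("after 20%% growth: %.1f ms" % (grown * 1000))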

a variation of the "performance predictor" was also used for selecting the next configuration/workload in the automated benchmark process that I used for calibrating/validating the resource manager before product ship (one sequence involved 2000 benchmarks that took 3 months elapsed time to run)
http://www.garlic.com/~lynn/subtopic.html#benchmark
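a rough sketch (python, with hypothetical workload parameters and stub functions) of the benchmark loop idea ... sweep configuration/workload points, run each benchmark, and compare measured results against the model's prediction:

import itertools

def run_benchmark(n_users, working_set_pages):
    # placeholder: drive a synthetic workload with this configuration
    # and return measured results
    return {"cpu_util": 0.0, "throughput": 0.0}

def model_prediction(n_users, working_set_pages):
    # placeholder: the analytic model's prediction for the same point
    return {"cpu_util": 0.0, "throughput": 0.0}

results = []
for n_users, ws in itertools.product([10, 20, 40, 80], [100, 200, 400]):
    measured = run_benchmark(n_users, ws)
    predicted = model_prediction(n_users, ws)
    # large prediction errors flag regions of the configuration/workload
    # space worth additional benchmark points
    results.append((n_users, ws, measured, predicted))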

the performance monitoring, tuning, and management work also evolved into capacity planning.

there also were some number of instruction address sampling tools ... to approximate where processes were spending much of their time. One such tool was done in support of deciding what kernel functions to drop into microcode for endicott's microcode assists. the guy in palo alto that had originally done the apl microcode assist for the 370/145 ... did a psw instruction address sampling application in 145 microcode, and we used it to help select what parts of the kernel pathlength to drop into microcode for the kernel performance assist. minor reference
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
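a minimal sketch (python, with made-up sample addresses) of what such an instruction address sampling tool boils down to ... bucket the sampled addresses and report where the time concentrates:

from collections import Counter

BUCKET = 0x100  # group sampled addresses into 256-byte ranges

def hottest_ranges(sampled_addresses, top=10):
    # given a stream of sampled instruction addresses (e.g. the psw
    # instruction address captured on a timer tick), bucket them and
    # report the ranges where most of the time is being spent
    counts = Counter(addr & ~(BUCKET - 1) for addr in sampled_addresses)
    total = sum(counts.values())
    return [(hex(base), n / total) for base, n in counts.most_common(top)]

# made-up samples; a real tool records the interrupted instruction
# address on every sampling interrupt
samples = [0x12344, 0x12360, 0x12380, 0x45000, 0x12350, 0x12390]
print(hottest_ranges(samples))

the address ranges accounting for most of the samples map back (via the load map) to the kernel routines worth considering for microcode.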

However, another technology we used was to take interval activity data and run it thru multiple regression analysis ... assuming we accounted for the major activity variables, we were able to come up with pretty accurate cpu time correlations with the different activities. Doesn't take a whole lot of effort ... just a fair amount of recorded activity over a range of activity levels, fed into a good MRA routine. At the time, we primarily used the MRA routine from the scientific subroutine library. There are some free packages you can find on the web ... but to get one that can handle both a large number of variables and a large number of data points, you may have to consider a commercial package.
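a minimal sketch of the same idea (python/numpy least squares rather than a full MRA package with significance statistics; the interval data and activity names here are made up): each row is one measurement interval's activity counts, and the fitted coefficients are the per-activity cpu costs:

import numpy as np

# each row is one measurement interval: counts of the major activity
# variables (hypothetical names) plus a constant column for fixed overhead
# columns: [transactions, disk i/o, page faults, 1]
activity = np.array([
    [1200,  800, 150, 1],
    [2300, 1500, 400, 1],
    [ 900,  600, 100, 1],
    [3100, 2100, 650, 1],
    [1800, 1200, 300, 1],
], dtype=float)

# total cpu seconds observed in each interval
cpu_seconds = np.array([14.2, 27.5, 10.8, 37.9, 21.4])

# least-squares fit: cpu ~ c_txn*txns + c_io*ios + c_flt*faults + overhead
coef, _, _, _ = np.linalg.lstsq(activity, cpu_seconds, rcond=None)
c_txn, c_io, c_flt, overhead = coef

print("cpu per transaction: %.2f ms" % (c_txn * 1000))
print("cpu per disk i/o:    %.2f ms" % (c_io * 1000))
print("cpu per page fault:  %.2f ms" % (c_flt * 1000))
print("fixed overhead:      %.2f cpu sec/interval" % overhead)

with enough intervals over a wide enough range of load, a coefficient like "cpu per disk i/o" is exactly the kind of cost-per-i/o number asked about earlier in the thread.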

In subsequent years, I've used MRA to find anomalous correlations in extremely large applications ... unusual activity correlations that conventional wisdom maintained couldn't possibly be correct ... but actually accounted for major resource consumption. This is analogous to one of the other postings in this thread about finding things happening in a large application that the responsible application people couldn't believe was happening.

trivial things MRA may turn up are cpu use and i/o activity by major function, cpu use per i/o, etc.
