Background: Traditionally we have reported CPU usage for our box normalized to 100%, using the TYPE70PR SMF records. Unfortunately, when the model of the box changes via CoD (Capacity on Demand, or whatever it is called these days), that view cannot even *show* that the capacity has grown. Using total CPU seconds consumed instead, you can at least see that more CPU seconds were used in an interval of the same length, whether because the number of processors increased or because the processor speed increased. So I have been thinking of doing the 'CPU usage per box' graphic using total CPU seconds consumed in each interval. Which raises the question: what is the limit of CPU seconds per interval? That is the number I need in order to show management how much more capacity we would have needed.
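To make the two views concrete, here is roughly the calculation I have in mind, as a small Python sketch. It assumes the per-interval CPU seconds have already been summed from the SMF records (field extraction omitted), and the interval length, CP count, and the "naive" ceiling are example values; the ceiling itself is exactly what I question below.

# Two views of the same interval: normalized % vs. raw CPU seconds.
# Assumes cpu_seconds_used was already summed from the SMF 70 data.

INTERVAL_SECONDS = 600          # 10-minute RMF interval
N_GENERAL_CPS = 10              # general-purpose CPs online (example value)

def naive_ceiling(interval_seconds: int, n_cps: int) -> float:
    """Upper bound on CPU seconds per interval IF every CP could be
    100% busy at full speed: one CPU second per CP per wall-clock second."""
    return interval_seconds * n_cps

def views(cpu_seconds_used: float) -> tuple[float, float]:
    """Return (normalized utilization %, raw CPU seconds)."""
    limit = naive_ceiling(INTERVAL_SECONDS, N_GENERAL_CPS)
    return 100.0 * cpu_seconds_used / limit, cpu_seconds_used

pct, raw = views(4500.0)        # e.g. 4500 CPU seconds consumed in the interval
print(f"{pct:.1f}% of a {naive_ceiling(INTERVAL_SECONDS, N_GENERAL_CPS):.0f}s ceiling, {raw:.0f} CPU seconds")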
We are running on a sub-capacity machine (and our new one will also be sub-capacity, meaning slower processors). So I obviously cannot use 60s * 10 * number-of-CPs to determine the limit, since on sub-capacity general CPs we will not achieve 600 CPU seconds per CP in a 10-minute interval. I think. In addition, I wanted to avoid converting to MSUs or MIPS, since I am always telling my management that those numbers are meaningless. However, for the new machine IBM ran zPCR against our workload, and in the comparison the nominal MIPS of several z196 models were reduced to account for LPAR overhead and workload mix (IBM calls the result zPCR MIPS). That seems to confirm my thinking above. So my questions are: a) is my thinking above correct or flawed (and please set me straight if it is flawed)? And b) how do I determine the maximum number of CPU seconds I can get in any 10-minute interval at 100% load on the general CPs? (I did search the archives, but did not find anything that seemed relevant.)

Thanks for reading,
Barbara
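P.S. For concreteness, here is the sub-capacity version of the same sketch: the naive ceiling scaled down by the ratio of the zPCR MIPS figures. The MIPS values below are made-up placeholders, not our real zPCR results (only the ratio matters), and the scaling step itself is exactly the assumption I am asking about in a).

# Hypothetical sub-capacity ceiling derived from a zPCR-style ratio.
# All MIPS figures are placeholders, NOT real zPCR output.

INTERVAL_SECONDS = 600
N_GENERAL_CPS = 10

ZPCR_MIPS_FULL_SPEED = 1200.0   # hypothetical full-speed model, per CP
ZPCR_MIPS_SUBCAP = 700.0        # hypothetical sub-capacity model, per CP

def subcap_ceiling() -> float:
    """Naive ceiling scaled by the sub-capacity/full-speed ratio --
    valid only if sub-capacity really reduces attainable CPU seconds,
    which is the part of my reasoning I am unsure about."""
    factor = ZPCR_MIPS_SUBCAP / ZPCR_MIPS_FULL_SPEED
    return INTERVAL_SECONDS * N_GENERAL_CPS * factor

print(f"assumed ceiling: {subcap_ceiling():.0f} CPU seconds per interval")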