Kees,

>- Is the WLM PI number the real problem, or is the performance actually
>bad? A bad report about a well running application is not the end of the
>world. 
We started looking at this when we got complaints about insufficient throughput and 
bad response times. Before that, we didn't even realize how bad the PI was.

>- If they all pop up at about the same time, how would you like to see a
>2 CP Lpar handle 65 tasks at (about) the same moment??? WLM or no WLM,
>here you have a real problem I think, even if the Lpar Weight were high
>enough.
Exactly my point: I think *someone* needs to take a really hard look at how this 
product was ported to z/OS. (I didn't mention that it was ported from the 'open 
world', did I?) I also think that, to 'play nice' in a z/OS world, it needs design 
changes, one of them being the use of the WLM macros to actually define the start 
and the end of a transaction, just so I can use a response time goal. :-) 
In addition, there need to be much better guidelines on how to tune this 
application, covering the obvious things - like storage usage in LE, like what 
effect committing messages has (there were also lots of SSRBs, suspended 
somewhere in RRS/LOGR processing) and how often to commit. I am sure my trace 
table, which covers only about 1.5 seconds, merely scratches the surface. Besides, 
a trace table is not the right tool for looking at a problem like this.
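
To make the first point more concrete, here is a rough sketch in C of the kind of 
instrumentation I mean. It is purely illustrative: the wlm_* helpers are made-up 
stand-ins (stubs) for the real WLM services that classify work and create/delete 
an enclave around it, not the actual macro or C interfaces, and the transaction 
name is invented. The point is only the shape of the change - the server tells WLM 
when each request starts and ends, so the work could be managed against a response 
time goal:

    /* Illustrative sketch only: wlm_begin_transaction/wlm_end_transaction
     * are hypothetical stand-ins for the real WLM classify/enclave
     * services; nothing here is the actual z/OS interface. */
    #include <stdio.h>

    typedef long wlm_enclave_t;              /* hypothetical handle */

    /* hypothetical: classify the arriving request, start a transaction */
    static wlm_enclave_t wlm_begin_transaction(const char *tran_name)
    {
        printf("begin transaction %s\n", tran_name);
        return 1L;                           /* stub token */
    }

    /* hypothetical: tell WLM the transaction has completed */
    static void wlm_end_transaction(wlm_enclave_t enclave)
    {
        printf("end transaction %ld\n", enclave);
    }

    /* what each of the 65 worker tasks would do per request */
    static void serve_request(const char *tran_name)
    {
        wlm_enclave_t e = wlm_begin_transaction(tran_name);
        /* ... real work: get the message, process it, commit ... */
        wlm_end_transaction(e);
    }

    int main(void)
    {
        serve_request("SAMPLE01");           /* invented transaction name */
        return 0;
    }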

>did you define them as CPU CRITICAL?
No, on the grounds that I can *see* that the DP of these 65 tasks is 
definitely higher than anything else on that LPAR (aside from SYSSTC and the 
obvious supporting address spaces). Also, giving it one more processor didn't 
help the PI. Should I test CPU critical anyway? 

My biggest concern with the high PI is actually that WLM will only try to help 
the service class every third pass through its policy adjustment (so I was told); 
WLM doesn't look at it for two intervals. And the intervals are long anyway (10s?). 
The WLM developer basically told me that it doesn't matter that WLM will not help 
the class for that long. This is what really got my hackles up: we just don't have 
a continuous workload, we have extreme spikes, and we are supposed to deliver good 
response times even in those spike situations.
In order to get continuous help for this class, I would have to set the 
velocity artificially low, to 1% or less, just to influence the calculation - if I 
can even define it that low. That would probably buy me a whole lot of other 
problems. I have also been told (by a completely different person in WLM) that it 
is necessary to define a resource group with a very high guaranteed minimum 
consumption whenever I go below 30% for an exvel goal. So far I have managed to 
avoid that by differentiating the work mostly through importance and keeping the 
exvel at 31 or higher. That doesn't give me much room before I hit the 
'unachievable goal' range. (Oh, I *was* told that Imp1, exvel 40% is too ambitious!)
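
Just to spell out the arithmetic I am wrestling with (my understanding of the 
formulas, with invented numbers): execution velocity is using samples divided by 
using-plus-delay samples, and for a velocity goal the PI is goal velocity divided 
by achieved velocity. A tiny C illustration:

    /* Back-of-the-envelope only; the sample counts are invented. */
    #include <stdio.h>

    int main(void)
    {
        double using_samples = 50.0;    /* CPU-using samples in one spike  */
        double delay_samples = 950.0;   /* delay samples in the same spike */

        double achieved = 100.0 * using_samples
                        / (using_samples + delay_samples);
        double goal = 31.0;             /* the lowest exvel I dare to set  */

        printf("achieved exvel = %.1f, PI = %.1f\n",
               achieved, goal / achieved);
        /* prints: achieved exvel = 5.0, PI = 6.2 - 'way off goal',       */
        /* even if the LPAR is only swamped for a few seconds.            */
        return 0;
    }

So even with the goal at the bottom of my comfortable range, one short spike blows 
the PI up, and then WLM still takes its time before it reacts.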

Does that make my dilemma clearer?

Best regards and thanks, Barbara