You're absolutely right; however, there are a few additional points
worth considering when examining TSO.

>1.  Duration is the amount of service that a period should consume 
>before going on to the next period.  This is NOT service units per 
>second, but is total service units consumed.  Thus, your 750 service 
>units do not equate to clock seconds in any regard.  The 750 service 
>units are composed of CPU (SRB and TCB) service units, plus I/O 
>service units, plus (potentially) MSO service units.  These basic 
>categories of service are adjusted by the service coefficients (CPU, 
>IOC, MSO, SRB).  Those resulting service unit measures are basically 
>unrelated to elapsed clock time.

I don't believe that MSO has any legitimate value, so I would discourage
its use, and the CPU and SRB service coefficients should be set to 1.
If you also use 0.5 for IOC, then the primary service consumption will
be based on CPU (for TSO first period).
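Just to make that concrete, here is a rough arithmetic sketch (Python)
of how the weighted service adds up with those coefficients; the raw
unit counts below are invented purely for illustration, not measurements
from any system:

    # Weighted service units with the suggested coefficients
    # (CPU=1, SRB=1, IOC=0.5, MSO effectively ignored).
    # Raw counts are made-up example numbers.
    cpu_su, srb_su, io_su, mso_su = 600, 50, 400, 100_000
    coeff = {"CPU": 1.0, "SRB": 1.0, "IOC": 0.5, "MSO": 0.0}
    total = (coeff["CPU"] * cpu_su
             + coeff["SRB"] * srb_su
             + coeff["IOC"] * io_su
             + coeff["MSO"] * mso_su)
    print(total)  # 850.0 -- dominated by the CPU component, as intended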

In addition, given the short response times involved, the DUR can also
be interpreted (although I know this isn't the definition) as a service
rate.  In other words, with DUR = 750 and a response time goal of 0.5
seconds, the effective service rate shouldn't be appreciably different
from 1500 SU/sec.  While this approach would certainly be more skewed
for longer-running workloads, the short response times of first-period
TSO allow this liberty to be taken.  As an example, consider a DUR of
30K with a response time goal of 1 second.  If we neglect MSO and IOC,
the implied rate of 30,000 SU/sec would be larger than the CP delivery
capability of most processor models.  In this case, the goal defined by
using this duration could never be reached, because the consumption rate
could never be high enough to meet the response time objective.  This
definition would therefore leave a dramatically larger number of
transactions in first period and skew the response time.  I would
venture that the goal of 1 second would be routinely missed, and if a
percentile were used it would need to be rather large.  The problem is
not that the goal is unachievable, but rather that too many
longer-running transactions are being grouped into first period.

I'm not disagreeing with your assessment, but simply attempting to
provide another view.


>2.  There is not a direct relationship between service units consumed 
>and elapsed time of the transaction (consider a CPU burner versus 
>someone scrolling a PDS).
>
>For that matter, the delays to TSO transactions often are more a 
>function of other workloads (especially workloads running at a higher 
>Goal Importance) than anything inherent in the TSO transactions 
>themselves.


Once again, I agree completely; however, part of my point is that if a
significant number of transactions are "clumped" together and some other
group lands in a substantially different grouping (e.g., 65% < 0.5 sec
and 30+% > 4 sec), then there are only two ways of interpreting the
data: either the service consumption is the same and the latter group is
experiencing significantly larger delays, or the latter group is
consuming more service than the "faster" group.  That consumption is
allowed because of the long duration, which is why I suggested reducing
the duration to force a transition into second period.  Just as in the
previous example, the duration makes sense only if all the transactions
are consuming service at that level.  However, if there is enough
variation in the service consumed, then you will see this "clumping"
into different groups, which (to me) suggests different workload
characteristics.
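As a rough illustration of that point, here is a small sketch (Python)
of how a long duration keeps two very different populations in first
period, while a shorter one ages the heavy transactions into second
period.  The service demands, delivery rate, and transaction counts are
invented numbers chosen only to show the shape of the effect:

    # Two hypothetical populations of TSO transactions (service demand in SU)
    light = [300] * 65      # trivial transactions
    heavy = [8_000] * 30    # much larger transactions
    rate = 1_500            # assumed service delivery rate, SU/sec

    for dur in (30_000, 750):
        ends_in_p1 = [su for su in light + heavy if su <= dur]
        worst = max(su / rate for su in ends_in_p1)
        print(f"DUR={dur}: {len(ends_in_p1)} txns end in period 1, worst ~{worst:.1f} sec")
    # DUR=30000: 95 txns end in period 1, worst ~5.3 sec  (clumped, goal missed)
    # DUR=750:   65 txns end in period 1, worst ~0.2 sec  (heavy work ages to period 2)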

Adam

