Maybe the question is not relevant to you, but it is to us. Since our
guests on the big z/VM system are TPF systems that are driven at machine
speeds by other virtual machines, a CPU utilization of 100% is never too
far from reality. In that case, the batch model is more realistic. The
only systems that use IFLs are either low-utilization Linux workloads
(z/TPF development) or ones that have a batch-like nature (driven by TPF
in an adjoining LPAR, frequently at full throttle for extended periods).
The low-utilization systems get the response time they need from three
IFLs shared between two LPARs; they are not a problem. The others drive 7
or more dedicated IFLs at or near their limit. So, yes, I am more
concerned about the MP effect in two different environments that much
more closely resemble a batch environment than they do the response time
model of which you speak.
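
To put rough numbers on that concern, here is a minimal sketch of the
throughput arithmetic I have in mind; the MP scaling factors in it are
purely hypothetical placeholders, not measured values for any particular
machine:

    # Hypothetical illustration of the MP effect on throughput-bound
    # ("batch") work. The scaling factors below are made-up placeholders,
    # not measured or published (e.g. LSPR) figures.
    MP_SCALING = {1: 1.00, 2: 0.95, 4: 0.90, 8: 0.85}   # assumed efficiency

    def effective_engines(n):
        """Usable capacity, in single-engine equivalents, for n engines."""
        return n * MP_SCALING[n]

    # If n tasks saturate one engine, two engines handle roughly 1.9n
    # (not 2n) under these assumed factors, and eight dedicated IFLs
    # driven flat out deliver about 6.8 engines' worth of work.
    for n in (1, 2, 4, 8):
        print(n, "engines ->", round(effective_engines(n), 2),
              "single-engine equivalents")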


Regards, 
Richard Schuh 

 

> -----Original Message-----
> From: The IBM z/VM Operating System 
> [mailto:ib...@listserv.uark.edu] On Behalf Of Barton Robinson
> Sent: Wednesday, February 04, 2009 8:20 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: Correcting Statements From Marketing
> 
> If you build a response time model for processors, AND you have a
> target response time not to be exceeded, it is easy to show that one
> processor responds worse at 80% than two at 80%. Equivalent response
> time is expected when the two processors are at 90%. So the source of
> the question is really the "batch" mentality vs. the "response time"
> mentality.
> 
> The MP effect comes from the batch mentality, where throughput was the
> only measure. The batch mentality will always challenge this; the
> response time mentality should understand... If you care about response
> time in the Linux/z/VM world, you don't run at 100% most of the time.
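
For what it's worth, those 80%/90% figures agree with a simple M/M/1 vs.
M/M/2 queueing calculation. The sketch below is only a toy model (Poisson
arrivals, exponential service times, nothing z-specific):

    # Toy M/M/c check of the response-time claim above. Assumes Poisson
    # arrivals and exponential service times; real workloads will differ.
    from math import factorial

    def mmc_response(util, c, service=1.0):
        """Mean response time of an M/M/c queue at per-server utilization."""
        lam = util * c / service                    # arrival rate
        a = lam * service                           # offered load (= c * util)
        erlang_c = (a**c / (factorial(c) * (1 - util))) / (
            sum(a**k / factorial(k) for k in range(c))
            + a**c / (factorial(c) * (1 - util)))   # probability a task queues
        wait = erlang_c / (c / service - lam)       # mean time spent queued
        return service + wait

    print(mmc_response(0.80, 1))   # one CPU at 80%:  ~5.0 service times
    print(mmc_response(0.80, 2))   # two CPUs at 80%: ~2.8 service times
    print(mmc_response(0.90, 2))   # two CPUs at 90%: ~5.3, close to 1 CPU at 80%
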
> 
> So the only time the MP Effect question is relevant is when 
> both processors are running at 100%, which makes the question 
> not relevant on IFLs. From an accounting perspective, I guess 
> you could use the z/OS numbers, which would likely 
> under-charge the Linux user for CPU consumed, since using 
> those numbers a CPU second consumed is not charged as a full 
> CPU second.
> 
> Schuh, Richard wrote:
> > I would expect that some would challenge your conclusion based on the
> > idea that the MP effect does not even appear unless you are running at
> > or near capacity. If I have two CPUs or IFLs and 1.1 CPUs' worth of
> > demand, will I notice the MP effect? Probably not. I will probably see
> > a better service level than when I was trying to service the same
> > demand with only one CPU. The question is: if n tasks cause a single
> > engine to run at 100%, will two engines be able to service 2n tasks as
> > well as one serviced n? I think that under normal circumstances, the
> > answer is that the two-engine machine will only be able to service
> > somewhat less than 2n.
> > 
> > Regards,
> > Richard Schuh
> > 
> >  
> > 
> >> -----Original Message-----
> >> From: The IBM z/VM Operating System
> >> [mailto:ib...@listserv.uark.edu] On Behalf Of Barton Robinson
> >> Sent: Tuesday, February 03, 2009 10:57 AM
> >> To: IBMVM@LISTSERV.UARK.EDU
> >> Subject: Re: Correcting Statements From Marketing
> >>
> >> OK, here's some heresy that I've presented to IBM and that maybe was
> >> communicated to their sales folks. From a capacity planning and
> >> service level perspective, adding a CPU gives you MORE than 100%, not
> >> less than 100%.
> >> Really, BUT ONLY if you actually care about service levels.
> >>
> >> From a service level perspective, I know that I can provide on ONE
> >> IFL a given service at 80% CPU utilization. If I ADD an IFL, and
> >> more work of a similar nature, I now have TWO IFLs, and I know that I
> >> can provide that SAME service at 180% CPU utilization.
> >>
> >> So, I went from ONE IFL to TWO IFLs, and increased my target CPU
> >> utilization by 1.25 times.
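
Purely illustrative, reusing the 80%/90% figures from the toy M/M/c sketch
further up: at an equal response-time target, one IFL is usable to about
80% busy while each of two IFLs is usable to about 90%, which matches the
1.25 figure:

    # Illustrative only -- same toy M/M/c assumption as the sketch above.
    one_ifl  = 1 * 0.80   # one IFL held to 80% busy for the response-time target
    two_ifls = 2 * 0.90   # two IFLs can each run ~90% for the same response time
    print(two_ifls - one_ifl)               # the added IFL contributes ~1.0 engine
    print((two_ifls - one_ifl) / one_ifl)   # ~1.25x the usable capacity of the first
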
> >>
> >> On z/OS, if you just run at 100% all the time and run batch to soak
> >> up cycles, then you can add a CPU and not get 100% of one CPU more
> >> work done. That is the only time MP factors should matter.
> >>
> >> And this heresy is why it is much easier to deal with installations
> >> running multiple IFLs, because the performance will be better at
> >> higher utilizations than single IFLs at lower utilizations. Adding a
> >> second IFL more than doubles your usable capacity. Adding a 3rd or
> >> 4th is less dramatic.
> >>
> >> From a historical perspective, we used to have the MASTER PROCESSOR
> >> effect, where adding a CPU added much less capacity. Installations
> >> today do not see this impact.
> >>
> >>
> >> Schuh, Richard wrote:
> >>> This got no response when posted under a different topic:
> >>>
> >>> "Yikes, we have someone from IBM Marketing now making the statement,
> >>> "I have confirmed...no MP factor with IFLs....". That is the entire
> >>> statement, all of the dots included. I did not replace anything with
> >>> ellipses. Somehow, that does not ring true. I mentioned that the
> >>> rating of an IFL is the same as that of an ordinary CPU, and someone
> >>> went to marketing for "the real answer". Perhaps they should have
> >>> said, "No different MP factor for IFLs than for regular CPUs; they
> >>> are the same in that regard." That would make more sense.
> >>> Anyone from IBM care to comment - you will probably be quoted."
> >>>
> >>> I am not considered an authority on the topic, especially when I
> >>> disagree with an interpretation of a statement made by IBM marketing.
> >>> I need to disabuse someone of their notion because it will affect the
> >>> capacity planning process. They do not seem to believe that running
> >>> the same OS on two systems, one with n standard CPUs and the other
> >>> with the same number of IFLs, will produce a result of equal MP
> >>> effect.
> >>> Barton, you are also invited to respond. At least one of the people
> >>> on the other side of the fence will take your word for it.
> >>>
> >>> Regards,
> >>> Richard Schuh
> >>>
> >>>
> > 
> > 
> 
