We have a separation policy here. Development and test cannot touch the
production network and vice versa. In one case the separation is only by
LPAR: there is one production LPAR running on a machine that is otherwise
development. Other than that, the separation is by CPU serial number. We
do not separate by data center, though. Our big, important apps have
their own playgrounds and are separated by miles of land and even oceans
from each other. The development box shares a data center with 2 big
apps plus some number of z/OS systems.

Regards, 
Richard Schuh 


-----Original Message-----
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Marcy Cortes
Sent: Monday, May 21, 2007 3:22 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Performance Rules of Thumb ?

Some very good points, Alan!

Some workload is just fine at 100%.  One thing to note about 100% is how
far over it you are :) (and your perf monitor might help you figure that
out!).  Something is needing more when you are at 100 -- whether it puts up
with getting less than what it needs is certainly app dependent.  You can
control some of that with priorities, and then maybe set SHARE ABSOLUTE
LIMITHARD for the one that goes into a tight loop... if you are on top of
that... or maybe you could automate that...
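
(If you haven't played with that before, it's just CP SET SHARE -- something
along these lines, where the userids and numbers are made-up placeholders,
not anything from our system:

    CP SET SHARE LNXTEST1 ABSOLUTE 5% LIMITHARD
    CP SET SHARE BIGAPP01 RELATIVE 500
    CP QUERY SHARE LNXTEST1

LIMITHARD means the guest is capped at that absolute share even when the
processors would otherwise have spare cycles -- that's what keeps a looping
test guest from eating the box.)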

We have a big, important app that is very CPU intensive but also very
sensitive to CPU delay.  If it isn't able to process its tran within a
certain amount of time, that tran is stood in for by another system and
screens go red, the world is paged, and long-running whose-fault conference
calls start.  We certainly don't want Joe User's leaky Java test program to
cause that to happen.  So those guys are on different z/VM LPARs (actually,
different data centers, but that's a whole 'nother isolate-test-from-production
story -- more the z/OS side's fuss than ours, but we play within the rules :).

The 60% is because we have 3 LPARs, and they want the workload of 1 to fit
on the other 2 if one should go down (yes, VM systems do go down, and HW
could go down too :).  Someone here wants 6 LPARs; then we could run them at
75-80% and still have whitespace for 1.  Not sure that extreme is a good
idea either, at least for the VM sysprogs, who have enough to do as it is.
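
(Rough arithmetic behind those numbers, assuming the load spreads evenly
over the surviving LPARs: with N LPARs each running at utilization u,
losing one leaves the other N-1 carrying N*u worth of work, so you need
u <= (N-1)/N.  For N=3 that works out to about 67%, so 60% leaves a little
margin; for N=6 it's about 83%, which is where 75-80% comes from.)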

I'd be curious to know whether others also use extreme separation, or
whether they find they can get test/dev to play well with prod?


Marcy Cortes


"This message may contain confidential and/or privileged information.
If
you are not the addressee or authorized to receive this for the
addressee,
you must not use, copy, disclose, or take any action based on this
message
or any information herein.  If you have received this message in error,
please advise the sender immediately by reply e-mail and delete this
message.  Thank you for your cooperation."


-----Original Message-----
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Alan Ackerman
Sent: Monday, May 21, 2007 2:42 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: [IBMVM] Performance Rules of Thumb ?

On Mon, 21 May 2007 11:26:20 -0700, Lionel B. Dyck <[EMAIL PROTECTED]>
wrote:

>Thanks - your points on YMMV are very apropos - I was just looking for
>starting points.
>
>Lionel B. Dyck, Consultant/Specialist
>Enterprise Platform Services, Mainframe Engineering
>KP-IT Enterprise Engineering, Client and Platform Engineering Services (CAPES)
>925-926-5332 (8-473-5332) | E-Mail: [EMAIL PROTECTED]
>AIM: lbdyck | Yahoo IM: lbdyck
>Kaiser Service Credo: "Our cause is health. Our passion is service. We're
>here to make lives better."

I think rules of thumb are a bad starting point. They either give you a
false sense of security, or you waste time trying to track down non-existent
problems.

We have run VM systems at 100% CPU utilization without problems -- but it
was because we had enough low-priority work to sop up the cycles. With
a "lumpier" workload (such as Linux guests), you might get in trouble
somewhere between 80-100%. The 60% that someone cited seems extreme, but
if you need to hold cycles in reserve, then you do.

The sustainable I/O and paging rates depend largely on the capability of
the I/O subsystem. But long before you approach that capacity, you will
start seeing problems due to queuing on a device. Adding more paging device
addresses may help, for example. Some queuing is hidden, though -- it exists
but you will never see it.
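
(A back-of-the-envelope illustration of why queuing bites well before a
device is "full", using the textbook M/M/1 queue as a stand-in for a paging
device: the average time a request spends waiting in the queue, measured in
service times, is u/(1-u), where u is device utilization. At 50% busy that
is 1 service time of waiting, at 80% it is 4, at 90% it is 9. Real channel
and device behavior is more complicated, but that shape of curve is why
spreading page I/O across more device addresses helps.)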

Clearly, you cannot run at above 100% of the CPU capacity or the I/O
capacity. But how close you can get to 100% is very workload dependent.
