Barton wrote: 
"Using this measure, what do y'all run?"

Is there a Velocity screen that adds them all up?  I don't want to
resort to Excel :)

What I'm looking for is a fair way to determine who should be allocated
the cost of the memory (not exactly chargeback) based on their impact on
the system.  And second, an objective number for telling management that
this system needs more now.

But even the overcommit ratio isn't really a measure of the impact.
While 4:1 might be perfectly fine at 6pm when everyone but a few has
gone home, at noon it might not be.  Paging rate might be more useful
for determining the pain point, perhaps.



Marcy Cortes 

"This message may contain confidential and/or privileged information. If
you are not the addressee or authorized to receive this for the
addressee, you must not use, copy, disclose, or take any action based on
this message or any information herein. If you have received this
message in error, please advise the sender immediately by reply e-mail
and delete this message. Thank you for your cooperation."


-----Original Message-----
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Barton Robinson
Sent: Tuesday, May 13, 2008 8:20 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: [IBMVM] Overcommit ratio

My use of the term "over-commit" is simpler, with the objective of
setting a target that management understands. I don't include VDISK -
that is a moving target based on tuning and workload, as is the use of
CMM1.  The way I like to use the term is at a much higher level, one
that doesn't change based on workload.

I would use (Defined Guest Storage) / (CENTRAL + EXPANDED).  (And
people that use MDC indiscriminately, or vice versa, need some
performance assistance, but that is part of the tuning.)
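For instance (numbers invented purely to show the arithmetic, nothing
from a real system):

    # Barton's ratio - hypothetical numbers for illustration only
    defined_guest_storage_gb = 60.0   # sum of all guests' defined storage sizes
    central_gb = 16.0                 # central (real) storage
    expanded_gb = 4.0                 # expanded storage
    overcommit = defined_guest_storage_gb / (central_gb + expanded_gb)
    print(overcommit)                 # 3.0, i.e. a 3:1 over-commit ratio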

With this, I have the objective of managing to this target. So using
CMM(1) to reduce storage, and the use of VDISK which increases storage,
is the tuning part.  And then I have a measurement that is comparable
across systems - especially important when virtual technologies are
competing and other virtual platforms don't/can't overcommit.  This is a
serious measure of technology and tuning ability as well. With current
problems in Java/WebSphere, Domino, and some other Tivoli applications,
I've seen the attainable overcommit ratio drop considerably. I used to
expect 3 to 7 attainable; now some installations are barely able to
attain 1.5.  This starts to make VMware, where 1 is a good target, look
better - not in our best interest.

And it gives me a measure of an installation's skill set (or ability to
tune based on tools, of course).  It would be interesting to get the
numbers as I've defined them for installations. Using this measure, what
do y'all run?




MARCY WROTE:

Well, only if the server uses them.

If you have a 1.5G server and it is using 1.5G of swap space in VDISK,
then its impact is 3G virtual, right?  If you have a 1.5G server and it
is not swapping, its impact is 1.5G virtual.

So maybe more like (sum (guest virtual storage sizes) + sum (*used*
vdisk blocks)) / central storage.
Wouldn't that be pretty similar to the number-of-pages-on-DASD method?

Expanded storage?  Add it to central?
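As a sketch of that variant (numbers invented; whether expanded storage
goes into the denominator is exactly the open question above):

    # Marcy's variant - hypothetical numbers for illustration only
    guest_virtual_gb = 60.0   # sum of guest virtual storage sizes
    used_vdisk_gb = 3.0       # only the *used* VDISK blocks, not the defined sizes
    central_gb = 16.0
    expanded_gb = 4.0         # optionally folded into central, per the question above
    ratio = (guest_virtual_gb + used_vdisk_gb) / (central_gb + expanded_gb)
    print(round(ratio, 2))    # 3.15 with these numbers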

Nothing's simple anymore  :)

Marcy Cortes


Rob van der Heij wrote:

> On Tue, May 13, 2008 at 6:02 AM, Robert J Brenneman
> <[EMAIL PROTECTED]> wrote:
> 
> 
>>The problem will be when you've allocated huge vdisks for all your
>>production systems based on the old "Swap = 2X main memory" ROT. In
>>that example - you're basically tripling your overcommit ratio by
>>including the vdisks. This also can have a large cost in terms of CP
>>memory structures to manage those things.
> 
> 
> I think you are confusing some things. In another universe there once
> was a restriction of *max* twice the main memory as swap, but that was
> with another operating system to start with.
> 
> Linux needs swap space to allow over-commit within Linux itself. The 
> amount of swap space is determined by the applications you run and 
> their internal strategy to allocate virtual memory. That space is 
> normally not used by Linux.
> 
> 
>>The current guidance is a smallish vdisk for high priority swap
>>space, and a largish low priority real disk/minidisk for occasional
>>use by badly behaved apps.  Swapping to the vdisk is fine in normal
>>operations, swapping to the real disk should be unusual and rare.
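As a sketch of that two-tier setup (device names are hypothetical; the
pri= values are what make Linux fill the VDISK first and spill to the
real disk only under pressure):

    # /etc/fstab sketch - hypothetical DASD device names
    /dev/dasdb1   swap   swap   pri=10   0 0   # smallish VDISK, used first
    /dev/dasdc1   swap   swap   pri=1    0 0   # largish real disk, overflow only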
> 
> 
> The unused swap disk should only be on real disk when you have no 
> monitoring set up. In that case when Linux does use it, things get so 
> slow that your users will call your manager to inform you about it.
> 
> The VDISK for swap that is being used actively by Linux during peak 
> periods is completely different. That's your tuning knob to 
> differentiate between production and development servers, for example.
> It reduces the idle footprint of the server at the expense of a small 
> overhead during the (less frequent) peak usage. That tuning
> determines the application latency and paging requirements.
> 
> I believe the "over-commit ratio" is a very simplified view of z/VM
> memory management. It does not get much better by adding other
> factors. Just use the sum of virtual machine and VDISK sizes. And
> remember to subtract any other things like MDC from your available
> main storage.
> 
> Rob
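
A minimal sketch of the arithmetic Rob describes (invented numbers
again; MDC comes out of the available main storage):

    # Rob's suggestion - hypothetical numbers for illustration only
    guest_virtual_gb = 60.0   # sum of virtual machine sizes
    vdisk_gb = 12.0           # VDISK space
    central_gb = 16.0
    mdc_gb = 2.0              # minidisk cache, subtracted from main storage
    ratio = (guest_virtual_gb + vdisk_gb) / (central_gb - mdc_gb)
    print(round(ratio, 2))    # ~5.14 with these numbers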
