Re: Overcommit ratio

2008-05-15 Thread Barton Robinson
Stephen, you are doing great. Your workload must be Oracle, and not WAS, DB2 or Domino. If 
it is WAS, it must be old, from before the performance enhancements, so don't upgrade it.


And the metric IS useful: it tells you, if you add 4 more servers, how much more mainframe 
storage you need. And your number gives a reference point to others to show what they 
could be doing if everything worked correctly.




Stephen Frazier wrote:

My overcommit ratio is about 5:1 not counting CMS users. If you count 
them it is more like 15:1. It seems to work fine. I don't think 
overcommit ratio is very useful for anything. It is too dependent on the 
kind of users you have to be meaningful.








Re: Overcommit ratio

2008-05-15 Thread Stephen Frazier
Correct, I do not have WAS, DB2, or Domino. My point exactly. What kind of users you have makes all 
the difference in overcommit ratio. If I added 4 WAS machines I would need a lot more storage than for 4 
more MySQL or Oracle machines.


Barton Robinson wrote:
Stephen, you are doing great. Your workload must be Oracle, and not WAS, 
DB2 or Domino. If it is WAS, it must be old, from before the performance 
enhancements, so don't upgrade it.


And the metric IS useful: it tells you, if you add 4 more servers, how much 
more mainframe storage you need. And your number gives a reference point 
to others to show what they could be doing if everything worked correctly.









--
Stephen Frazier
Information Technology Unit
Oklahoma Department of Corrections
3400 Martin Luther King
Oklahoma City, Ok, 73111-4298
Tel.: (405) 425-2549
Fax: (405) 425-2554
Pager: (405) 690-1828
email:  stevef%doc.state.ok.us


Re: Overcommit ratio

2008-05-13 Thread Rob van der Heij
On Tue, May 13, 2008 at 6:02 AM, Robert J Brenneman [EMAIL PROTECTED] wrote:

 The problem will be when you've allocated huge vdisks for all your production 
 systems based on the old Swap = 2X main memory ROT. In that example - 
 you're basically tripling your overcommit ratio by including the vdisks. This 
 also can have a large cost in terms of CP memory structures to manage those 
 things.

I think you are confusing some things. In another universe there once
was a restriction of *max* twice the main memory as swap, but that was
with another operating system to start with.

Linux needs swap space to allow over-commit within Linux itself. The
amount of swap space is determined by the applications you run and
their internal strategy to allocate virtual memory. That space is
normally not used by Linux.

 The current guidance is a smallish vdisk for high priority swap space, and a 
 largish low priority real disk/minidisk for occasional use by badly behaved 
 apps.  Swapping to the vdisk is fine in normal operations, swapping to the 
 real disk should be unusual and rare.

The unused swap disk should only be on real disk when you have no
monitoring set up. In that case when Linux does use it, things get so
slow that your users will call your manager to inform you about it.

The VDISK for swap that is being used actively by Linux during peak
periods is completely different. That's your tuning knob to
differentiate between production and development servers, for example.
It reduces the idle footprint of the server at the expense of a small
overhead during the (less frequent) peak usage. That tuning determines
the application latency and paging requirements.

I believe the over-commit ratio is a very simplified view of z/VM
memory management. It does not get much better by adding other
factors. Just use the sum of virtual machine and VDISK. And remember
to subtract any other things like MDC from your available main
storage.
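A minimal REXX sketch of that arithmetic, with all figures hypothetical (GB):

  /* Over-commit per Rob's suggestion: guest virtual plus VDISK,     */
  /* divided by main storage after subtracting MDC and similar uses. */
  guest_virtual = 12     /* sum of virtual machine sizes   */
  vdisk_space   = 3      /* sum of VDISK space             */
  central       = 8      /* main storage                   */
  mdc           = 1      /* minidisk cache                 */
  say 'Over-commit =' format((guest_virtual + vdisk_space) / (central - mdc), , 2)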

Rob
-- 
Rob van der Heij
Velocity Software GmbH
http://velocitysoftware.com/


Re: Overcommit ratio

2008-05-13 Thread Barton Robinson
My use of the term over-commit is simpler, with the objective of setting a target 
that management understands. I don't include vdisk - that is a moving target based on 
tuning and workload, as is the use of CMM1.  The way I like to use the term is at a much higher 
level, one that doesn't change based on workload.


I would use (Defined Guest Storage) / (CENTRAL + EXPANDED)
(and people that use MDC indiscriminately or vice versa need some performance assistance, 
but that is part of the tuning)
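
A minimal REXX sketch of that target calculation (the storage figures are made up):

  /* Barton's ratio: defined guest storage over central plus expanded */
  defined_guest = 24     /* sum of defined guest storage, GB - hypothetical */
  central       = 6      /* central storage, GB  */
  expanded      = 2      /* expanded storage, GB */
  say 'Target over-commit =' defined_guest / (central + expanded)   /* -> 3 */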


With this, I have the objective of managing to this target. Using CMM(1), which reduces 
storage, and VDISK, which increases it, is the tuning part.  And then I have a 
measurement that is comparable across systems - especially important when virtual 
technologies are competing and other virtual platforms don't/can't overcommit.  This is a 
serious measure of technology and tuning ability as well. With current problems in 
JAVA/Websphere, Domino and some other Tivoli applications, I've seen the attainable overcommit 
ratio drop considerably. I used to expect 3 to 7 attainable; now some installations 
are barely able to attain 1.5.  This starts to make VMware, where 1 is a good target, look 
better - not in our best interest.


And it gives me a measure of an installation's skill set (or ability to tune based on 
tools, of course).  It would be interesting to get the numbers as I've defined them for 
installations. Using this measure, what do y'all run?





MARCY WROTE:

Well, only if the server uses them.

If you have a 1.5G server and it is using 1.5 Gig of swap space in VDISK
then it is an impact of 3G virtual, right?  If you have a 1.5G server
and it is not swapping, its impact is 1.5G virtual.

So maybe more like (sum (guest virtual storage sizes) + sum (*used*
vdisk blocks) ) / central storage.
Wouldn't that be pretty similar to the number of pages on DASD method?

Expanded storage?  Add it to central?

Nothing's simple anymore  :)

Marcy Cortes




Re: Overcommit ratio

2008-05-13 Thread Huegel, Thomas
My ratio is about 2.6, which reflects a proportionately large number of CMS 
users. 

-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]
Behalf Of Barton Robinson
Sent: Tuesday, May 13, 2008 10:20 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Overcommit ratio




Re: Overcommit ratio

2008-05-13 Thread Rob van der Heij
On Tue, May 13, 2008 at 4:36 PM, Huegel, Thomas [EMAIL PROTECTED] wrote:

 My ratio is about 2.6, which reflects a proportionately large number of CMS 
 users.

That's unfair. CMS users just take what they need, not what you let
them have...   You may be able to over-commit by 1000 when they behave
well.

Rob


Re: Overcommit ratio

2008-05-13 Thread RPN01
I only calculate it for the Linux images, for the reason that Rob states.
Our two systems are currently at 1.9:1 and 1.4:1. I found this number useful
enough that I have a rexx script that calculates it on the fly.

-- 
Robert P. Nix         Mayo Foundation         .~.
RO-OE-5-55            200 First Street SW     /V\
507-284-0844          Rochester, MN 55905    /( )\
                                             ^^-^^
In theory, theory and practice are the same, but
 in practice, theory and practice are different.






Re: Overcommit ratio

2008-05-13 Thread Barton Robinson
Ah yes, CMS is a very different animal - it knows how to work well in a virtual environment. 
I think I remember numbers way above 20, so high nobody bothered to measure.




Huegel, Thomas wrote:

My ratio is about 2.6, which reflects a proportionately large number of CMS users. 




Re: Overcommit ratio

2008-05-13 Thread Marcy Cortes
Barton wrote: 
Using this measure, what do y'all run?

Is there a Velocity screen that adds them all up?  I don't want to
resort to Excel :)

What I'm looking for is a fair way to determine who should be allocated
the cost of the memory (not exactly chargeback) based on their impact to the
system.  And second, an objective number for management to say this
system needs more now.

But even the overcommit ratio isn't really a measure of the impact.
While 4:1 might be perfectly fine at 6pm when everyone has gone home but
a few, at noon it might not be.  Paging rate might be more useful for
determining the pain point, perhaps.



Marcy Cortes 




Re: Overcommit ratio

2008-05-13 Thread Bruce Hayden
A few weeks ago, I put a package on the VM download library called
VIR2REAL, which is an exec that calculates the ratio on your running
system.  I just got an update out there today that gives you the ratio
for all users and also without CMS users.  (I figure a CMS user is
one IPLed with a certain set of NSS names.)  I did not include
expanded storage in the calculation, but it would be pretty easy to
add.  See http://www.vm.ibm.com/download/packages/descript.cgi?VIR2REAL
for the package.




-- 
Bruce Hayden
Linux on System z Advanced Technical Support
IBM, Endicott, NY


Re: Overcommit ratio

2008-05-13 Thread Stephen Frazier
My overcommit ratio is about 5:1 not counting CMS users. If you count them it is more like 15:1. It 
seems to work fine. I don't think overcommit ratio is very useful for anything. It is too dependent 
on the kind of users you have to be meaningful.




--
Stephen Frazier
Information Technology Unit
Oklahoma Department of Corrections
3400 Martin Luther King
Oklahoma City, Ok, 73111-4298
Tel.: (405) 425-2549
Fax: (405) 425-2554
Pager: (405) 690-1828
email:  stevef%doc.state.ok.us


Overcommit ratio

2008-05-12 Thread Marcy Cortes
I keep hearing things like you shouldn't be overcommitted in prod by more than
2:1, or 3:1 or 4:1 in test.

How is that calculated?

Can I just take the (Pageable storage number  + Pages on DASD ) /
pageable storage number?



Marcy



Re: Overcommit ratio

2008-05-12 Thread Robert J Brenneman
Sum up the default virtual storage allocation for each running guest on the
system and divide that by the total amount of central storage.

Can I just take the (Pageable storage number  + Pages on DASD ) /
 pageable storage number?


That just gives you your overcommit ratio at this second - not your worst
case given the current definitions of everything running on the box. Your
calculation will not be pessimistic enough to allow for a system where
everyone decided that they needed all their memory, right now.

-- 
Jay Brenneman


Re: Overcommit ratio

2008-05-12 Thread Marcy Cortes
But what about the vdisk blocks?  Those really need to be taken into
consideration too.  If someone decided on a virtual machine size that turned
out to be too low, those vdisk blocks would be making up the difference, and
the impact to memory could be significant.


Marcy Cortes 


 





Re: Overcommit ratio

2008-05-12 Thread Robert J Brenneman
Errr... yeah - them too...

The problem will be when you've allocated huge vdisks for all your
production systems based on the old Swap = 2X main memory ROT. In that
example - you're basically tripling your overcommit ratio by including the
vdisks. This also can have a large cost in terms of CP memory structures to
manage those things.

The current guidance is a smallish vdisk for high priority swap space, and a
largish low priority real disk/minidisk for occasional use by badly behaved
apps.  Swapping to the vdisk is fine in normal operations, swapping to the
real disk should be unusual and rare.

So - overcommit ratio is calculated as follows:

( Sum ( guest virtual storage sizes ) + Sum ( vdisk sizes ) )  / central
storage

Anything else I've forgotten?
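
A short REXX sketch of that worst-case calculation, with hypothetical guest and VDISK sizes in MB:

  /* Jay's worst case: every guest's defined storage plus every VDISK */
  /* size, divided by central storage - sample data only.             */
  vstor.1 = 1536; vstor.2 = 2048; vstor.3 = 4096   /* guest sizes, MB */
  vdisk.1 =  512; vdisk.2 =  512; vdisk.3 = 1024   /* VDISK sizes, MB */
  total = 0
  do i = 1 to 3
    total = total + vstor.i + vdisk.i
  end
  central = 4096                                   /* central storage, MB */
  say 'Worst-case over-commit =' format(total / central, , 2)   /* -> 2.38 */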

-- 
Jay Brenneman


Re: Overcommit ratio

2008-05-12 Thread Marcy Cortes
Well, only if the server uses them.

If you have a 1.5G server and it is using 1.5 Gig of swap space in VDISK
then it is an impact of 3G virtual, right?  If you have a 1.5G server
and it is not swapping, its impact is 1.5G virtual.

So maybe more like (sum (guest virtual storage sizes) + sum (*used*
vdisk blocks) ) / central storage.
Wouldn't that be pretty similar to the number of pages on DASD method? 

Expanded storage?  Add it to central?  

Nothing's simple anymore :)
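
A REXX sketch of that variant, using the 1.5G example above plus a second, hypothetical server that is not swapping (the central storage figure is also made up):

  /* Marcy's variant: count only the VDISK blocks actually in use. */
  vstor.1 = 1.5;  used_vdisk.1 = 1.5   /* GB - server swapping to its VDISK */
  vstor.2 = 1.5;  used_vdisk.2 = 0     /* GB - server not swapping          */
  central = 2                          /* GB - hypothetical                 */
  impact  = 0
  do i = 1 to 2
    impact = impact + vstor.i + used_vdisk.i
  end
  say 'Over-commit =' impact / central   /* (3 + 1.5) / 2 = 2.25 */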

Marcy Cortes 


 


