Using VMRM

2010-01-07 Thread Sterling James
Hello,
I'm starting to use VMRM to help meet our business goals.
I'm trying to:
1) prioritize workloads/guests: "most-loved", "not as much", "least",
and "bottom-feeders";
2) set VMRM's config file to match the list above.
I'm hoping that the highest-priority workload will get as much as it needs
(within reason), and so on down the list; if CPU resources are sufficient,
all should get enough to accomplish their tasks satisfactorily.
We do not use VMRM for CMM or for setting DASD velocity goals. I am testing
this on a low-activity system (a test LPAR), so the results may be skewed
by the lack of competition for CPU resources.

The following is what I have seen about how VMRM determines when and how to
adjust:

A sample monitor interval of less than 1 minute or more than 5 minutes is 
not recommended.

For CPU goals, all of the following must be true:
The user has a Relative share setting.
The user does not have LIMITHARD specified on the CPU SHARE setting.
The user is not already within 5% of the goal for the user.

The CPU velocity goal is the percentage of time that the workload should
receive CPU resources when it is ready to consume them. It is computed by
dividing the time the users are running by the sum of the time the users
are running or waiting for CPU. The velocity target must be an integer from
1 to 100.

If a user is within a reasonable percentage of the target goal, they will
be considered to have met the goal, and no adjustments will be made for
this user.

– Workloads are selected first based on importance value
– If a workload was selected in the last interval either for improvement 
or degradation, it is skipped and an attempt is made to select another 
– If there are workloads of equal importance, the workload farthest from 
its goal is selected
– Eligible users within a workload will have their SHARE or IOPRIORITY 
adjusted appropriately based on how far they are from the workload goal

• Individual users within selected workload may be adjusted based on 
calculations from monitor data
– User must have Relative Share and I/O Priority settings
– User does not have Limithard specified for CPU Share
– Sum of wait and run deltas is > current sample size of 5
– Sum of I/O and Outprioritized deltas is > current sample size of 5
– CPU actual = run delta / (run delta + wait delta) * 100
– DASD actual = IO delta / (IO delta + outprior delta) * 100
• If the above criteria are met and the user is not within 5% of the goal,
they can be adjusted

How to adjust each user:
– relvalue = (CPU goal / actual) * User current share

The current sample size of 5 is a threshold value. It is used to determine
whether there is enough data on which to base decisions for the user.

For example, with monitor settings of:
INTERVAL 1 MINUTES
RATE 5.00 SECONDS
a user would need to be active in at least 5 of the 12 samples.


From my testing, VMRM does not provide the results that I expected. After
adjusting workloads, goals, and importance, VMRM sets the relative shares
for most-loved and bottom-feeders the same. My view may be tainted by past
experience with WLM on z/OS, but I assume this may be due to VMRM looking
only at the last interval rather than a longer sampling period. Also, I
found the VMRM logs and the Performance Toolkit's displays very lacking in
helpful data in this area.

Has anyone had a more positive experience with VMRM? Or am I missing 
something?
Thanks,



-
Please consider the environment before printing this email and any
attachments.

This e-mail and any attachments are intended only for the
individual or company to which it is addressed and may contain
information which is privileged, confidential and prohibited from
disclosure or unauthorized use under applicable law.  If you are
not the intended recipient of this e-mail, you are hereby notified
that any use, dissemination, or copying of this e-mail or the
information contained in this e-mail is strictly prohibited by the
sender.  If you have received this transmission in error, please
return the material received to the sender and delete all copies
from your system.

Fixed length field alignment degradation

2010-01-07 Thread Gary M. Dennis
The POPS For System z contains the following note:

"
Programming Note: For fixed-field-length operations with field lengths that
are a power of 2, significant performance degradation is possible when
storage operands are not positioned at addresses that are integral multiples
of the operand length. To improve performance, frequently used storage oper-
ands should be aligned on integral boundaries.

"

Does anyone know what "significant performance degradation" means in terms
of true machine overhead?

Is anyone aware of published degradation numbers for non-aligned fixed
length field operations?



--.  .-  .-.  -.--

Gary Dennis
Mantissa Corporation


0 ... living between the zeros... 0


Re: Using VMRM

2010-01-07 Thread Martin, Terry R. (CMS/CTR) (CTR)
Hi James,

 

I just went through this exercise, and along the way talked to one of the
original developers of VMRM. After much testing I found, as you are finding,
that VMRM does not quite cut it as a workload manager. I too wanted to use
it to control the importance of workloads based on my business requirements,
but it never gave me a consistent warm and fuzzy. After speaking with the
developer and hearing his take, which was basically to set Relative and
Absolute SHARE values manually instead of depending on VMRM, I decided to
bag it.

 

I did, however, express my opinion that with the advent of multiple Linux
guests and differing business requirements, there is a definite need for
workload-manager-type software that will allow you to control the
priorities of the workloads.

 

Having a lot of experience with WLM on z/OS, I see this as a must moving
forward, and I expressed that to IBM. It actually goes further than just
z/VM: at some point, being able to control the priorities of individual
z/Linux processes would be great.

 

Thanks,

 

Terry

 

From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Sterling James
Sent: Thursday, January 07, 2010 11:59 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Using VMRM

 



Re: Using VMRM

2010-01-07 Thread David Boyes

> Having a lot of experience with WLM on z/OS this is a must moving forward
> and I expressed that to IBM. Actually it even goes further than just z/VM
> at some point being able to control the priorities of the individual
> z/Linux processes would be great.

Going further, a real workload manager MUST deal at the guest, the
hypervisor, the LPAR, and the box level. IBM cannot omit the LPAR and
hypervisor layers, e.g. z/VM.




Re: Fixed length field alignment degradation

2010-01-07 Thread Brian Nielsen
You might try this:

http://portal.acm.org/ft_gateway.cfm?id=29651&type=pdf

The above, "Mimic: A Fast System/370 Simulator", is also cited in patent
application 5,751,982.

http://www.google.com/patents?hl=en&lr=&vid=USPAT5751982&id=6jAlEBAJ&oi=fnd&dq=related:P7ZYcQqpZikJ:scholar.google.com/&printsec=abstract#

Brian Nielsen



On Thu, 7 Jan 2010 11:37:00 -0600, Gary M. Dennis 
 wrote:

>The POPS For System z contains the following note:
>
>"Programming Note: For fixed-field-length operations with field lengths
>that are a power of 2, significant performance degradation is possible
>when storage operands are not positioned at addresses that are integral
>multiples of the operand length. To improve performance, frequently used
>storage operands should be aligned on integral boundaries."
>
>Does anyone know what "significant performance degradation" means in terms
>of true machine overhead?
>
>Is anyone aware of published degradation numbers for non-aligned fixed
>length field operations?


Re: Using VMRM

2010-01-07 Thread Martin, Terry R. (CMS/CTR) (CTR)
Yes, I agree. Now getting there is the challenge!

 

Terry 

 

From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of David Boyes
Sent: Thursday, January 07, 2010 2:41 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Using VMRM

 

 


> Going further, a real workload manager MUST deal at the guest, the
> hypervisor, the LPAR and the box level. IBM cannot omit the LPAR and
> hypervisor layers, eg z/VM.

 

 



Re: Fixed length field alignment degradation

2010-01-07 Thread Alan Altmark
On Thursday, 01/07/2010 at 02:58 EST, Brian Nielsen 
 wrote:
> You might try this:
> 
> http://portal.acm.org/ft_gateway.cfm?id=29651&type=pdf
> 
> The above, "Mimic: A Fast System/370 Simulator", is also cited in patent

Not everyone has access to ACM's digital library, but the only relevant 
point in the paper (I think) was that alignment was a factor in the number 
of RISC operations required to access an operand.

The warning about alignment-dependent performance has been in the book
since time immemorial, and the assembler has been issuing warnings about it
since Day 2. (I remember my great-grandfather mentioning it to his pet
dinosaur one day.) Read the section in Chapter 5 on "Storage-Operand
Consistency" for some additional details.

But remember that the Principles of Operations describes *architecture*, 
not implementation.  If you obey the alignment rules, then the machine 
will provide the "block consistency" it describes.  For example:  ST 
R1,R1VALUE.  If R1VALUE is on a fullword (S/390) or doubleword 
(z/Architecture) boundary, the machine will ensure that all relevant bytes 
in R1 will appear to have been stored as a single unit.  Likewise for 
fetch operations.

The PoP does NOT say what happens if R1VALUE is NOT on an integral 
boundary.  The results are 'unspecified' and cannot be depended upon.

Alan Altmark
z/VM Development
IBM Endicott