Don't change your setup to use full-pack minidisks.  If you do, your cloning 
process will have to manage the real labels on the disks.  Not fun.
 
IMHO, PAV would not be very helpful in a Linux environment.  If you think 
about it, it helps when you have queuing on a UCB.  If you've got one virtual 
machine doing all the I/O to a volume, you are probably not going to be 
queuing.  Now, if you think you do have an I/O problem hitting one volume too 
hard, you can use LVM to stripe that workload across multiple volumes (be 
sure to understand the back end of the controller too, so you spread it 
across multiple resources in there).
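A striped LVM setup along those lines might look something like this sketch; 
the DASD device names, volume group and logical volume names, stripe size, and 
filesystem are all assumptions for illustration, not from the original post:

```shell
# Two DASD volumes, already low-level formatted (dasdfmt) and
# partitioned (fdasd); the device names here are hypothetical.
pvcreate /dev/dasdb1 /dev/dasdc1

# One volume group spanning both volumes
vgcreate datavg /dev/dasdb1 /dev/dasdc1

# Stripe the logical volume across both physical volumes:
#   -i 2  : two stripes, one per physical volume
#   -I 64 : 64 KiB stripe size
lvcreate -n datalv -i 2 -I 64 -L 12G datavg

# Put a filesystem on the striped LV
mkfs.ext3 /dev/datavg/datalv
```

With `-i 2`, sequential I/O alternates between the two volumes in 64 KiB 
chunks, so a single busy filesystem is spread over two subchannels (and, 
ideally, two different ranks in the controller).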
 
We use non-full-pack minidisks, beginning at cylinder 1, mostly on mod-27 and 
mod-54 volumes.

Marcy 
 

 

________________________________

From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Krueger Richard
Sent: Thursday, October 01, 2009 5:37 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: [IBMVM] HyperPAV use with full pack MDISK allocation


We recently installed z/VM 5.4 and z/Linux (RHEL 5.3), and we are testing 
applications on z/Linux to decide whether we want to move workload to it from 
our current non-mainframe environments.  At this time we only have access to 
ECKD DASD for testing.  We recently installed an EMC DMX4 storage array with 
the HyperPAV feature, and our z/VM and z/Linux volumes are all allocated on 
the DMX4.  If I run CP Q PAV, it correctly shows the base and alias volumes 
we have configured on the DMX4.
 
Using the manual "Linux on System z: How to Improve Performance with PAV" 
(May 2008, SC33-8414-00), I attempted to follow the steps to test HyperPAV on 
z/Linux.  One of the steps says to CP ATTACH a volume to the z/Linux guest.  
We cannot do that because our volumes are attached to SYSTEM and allocated as 
non-full-pack MDISKs for assigning space to z/Linux guests.  I discussed this 
with IBM, and the initial thought was to use COMMAND DEFINE HYPERPAVALIAS in 
the user directory statements.  That does not work, because it requires a 
full-pack MDISK, which they tell me we can create by assigning cylinders 
0 - END in the MDISK statement and then letting the z/Linux guest format the 
volume, with the requirement that z/Linux assign the same volume label as the 
original on cylinder 0.  We also discussed that we can still use the more 
traditional PAV with our current MDISK definitions, using user directory 
DASDOPT / MINIOPT statements, and I still need to try that.
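For reference, the two directory approaches being compared might look roughly 
like this.  This is only a sketch from my reading of the directory statement 
descriptions in CP Planning and Administration; the virtual device numbers, 
extents, and volsers are made up:

```
* Full-pack minidisk (cyl 0 - END) with a HyperPAV alias
* defined at logon via COMMAND:
MDISK 0100 3390 0 END LNX001 MR
COMMAND DEFINE HYPERPAVALIAS 0200 FOR BASE 0100

* Non-full-pack minidisk using traditional PAV via MINIOPT:
MDISK 0101 3390 0001 9874 LNX002 MR
MINIOPT PAVALIAS 0201
```

The point of the distinction: DEFINE HYPERPAVALIAS requires the base to be a 
full-pack minidisk, while the MINIOPT route works with ordinary non-full-pack 
minidisk extents.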
 
At the moment, we create z/Linux test guests by cloning two 3390-9 volumes 
(our gold copy) using FDR on z/OS.  These contain the OS and related files, 
with space reserved the way our z/Linux support team defines it on the 
non-mainframe platforms.  They normally use 10 GB on the non-mainframe 
platforms; with two 3390-9s they get 14 GB.  We allocate minidisks 100 
(CYL 1 142) and 101 (CYL 143 9874) on the first 3390-9, and minidisk 102 
(CYL 1 10016) on the second 3390-9, and they use LVM to manage the space as 
desired.  Any extra space a particular guest might need for whatever 
application is being tested is allocated as MDISKs from portions of 
additional 3390-9 volumes, or in some cases entire 3390-9 volumes.
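The gold-copy layout described above would correspond to directory entries 
roughly like these (the volsers are hypothetical; MDISK extents are a start 
cylinder followed by a cylinder count, and a 3390-9 has cylinders 0-10016):

```
* First 3390-9: boot minidisk plus the main OS extent
MDISK 0100 3390 0001 0142 LNX001 MR
MDISK 0101 3390 0143 9874 LNX001 MR
* Second 3390-9: one large extent, added to the LVM volume group
MDISK 0102 3390 0001 10016 LNX002 MR
```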
 
1.  Should we be content with using traditional PAV and not worry about 
trying to make HyperPAV work?  Does the z/Linux OS kernel on the two cloned 
3390-9 volumes get any benefit from HyperPAV, or would we be more likely to 
see benefit on the volumes allocated to the application databases the guest 
runs?
 
2.  Should we change the way we allocate our gold-copy volumes so that they 
are full-pack MDISKs and we can take advantage of HyperPAV?
 
3.  When we installed the DMX4, we considered allocating many of the larger 
volume sizes (mod 27, 54, 220) but did not, even though we thought we might 
be able to take advantage of them for z/VM MDISK allocation using the 
allocate-from-a-pool concept, where larger volumes would be better.  Knowing 
now that HyperPAV only works with full-pack MDISK allocation, it seems it was 
a good thing we did not do that.
 
4.  Does anyone have any other thoughts on this?  Are there other ways to 
take advantage of HyperPAV?  Which z/Linux applications could really benefit 
from it?  How do you allocate your ECKD DASD space to z/Linux guests, as 
full-pack or non-full-pack MDISKs?
 
 
 
