Re: z/VM 5.3 and PAV

2008-07-07 Thread Imler, Steven J
Eric,
 
Thanks for this information ... I'll review it internally with our
systems guys (maybe the SET CU was what we missed).  
 
If it seems we still have issues contradicting the behavior you've
outlined below I will get in touch with you/IBM to iron this out.
 
Thanks again,
JR
 
JR (Steven) Imler
CA
Senior Software Engineer
Tel:  +1 703 708 3479
Fax:  +1 703 708 3267
[EMAIL PROTECTED]




From: The IBM z/VM Operating System
[mailto:[EMAIL PROTECTED] On Behalf Of Eric R Farman
Sent: Monday, July 07, 2008 08:55 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: z/VM 5.3 and PAV




Re: z/VM 5.3 and PAV

2008-07-07 Thread Eric R Farman
Hi JR,

I'm a little confused by your description of the experiences you've been 
having with PAV, so have interjected my commentary on the matter below in 
hopes of understanding it better...

Imler, Steven J wrote on 07/03/2008 09:32:13 PM:

> Well ... for me ... it's not an issue of what benefit having a PAV
> alias(es) for a given volume might yield.  It's a question of support
> and toleration (for example, will VM:Backup or HiDRO back the BASE up
> and erroneously back the potential PAV alias(es) too?).
> 
> So, up until recently (6 months ago or so [we did a huge DASD
> migration]), each time our storage group got a new disk array (IBM or
> Hitachi, whatever), they would hard-code PAV alias addresses into the
> "DASD subsystem" ... because z/OS needed that.  When this was done, z/VM
> 5.3 could "see and use" the PAV alias(es) (for non-z/OS volumes too).
> 

The only hard-coded PAV aliases I'm aware of are in the "classic" PAV 
mode, where a single base subchannel can have zero or more aliases.  These 
devices and their respective subchannels are all viewed independently of 
one another by the host OS, regardless of whether it's z/OS or z/VM or 
whoever.  What the OS does with those devices is clearly a different 
matter, but from a simplistic stance of whether z/VM is able to "see and 
use" them, this is all true.

> However, now, z/OS does not require that PAV alias(es) be defined to the
> disk array (hard coded in the DASD subsystem) with the newer DASD
> subsystems ... z/OS (I think via WLM) can *dynamically define* PAV
> aliases to alleviate I/O contention ... the PAV aliases do not need to
> be pre-defined or hard-coded in the DASD subsystem/configuration.  (This
> is why our storage group no longer defines the PAV alias addresses to
> the DASD subsystem ... and won't for z/VM.)
> 

If the DASD is still in the "classic" PAV mode, then you're technically 
correct that z/OS (via WLM) can "create" a PAV alias to a device that 
needs one.  But it does so by stealing an alias from a comparatively idle 
base device, while still maintaining a static association of aliases 
mapping to particular bases within the DASD subsystem.  As an example, if 
you were to go back to the DASD subsystem management tool, you might note 
that Alias 816F (corresponding LSS/UA rules apply) is now associated with 
base 8114 instead of 810F as it was originally configured.

Now if you're referring to HyperPAV (which we added support for in z/VM 
5.3), the aforementioned association is dissolved from the operating 
system's point of view, and any base that needs an alias will pick one out 
of a pool associated with the LSS on a per-IO basis.  The association is 
still there in the DASD subsystem, lest the CU be put back into "classic" 
PAV mode (see the z/VM SET CU command), but the different OSes aren't 
going to see it unless that has happened.  Regardless, the devices are 
still defined and should still be visible to z/VM; they just will not 
point to anything useful within the DASD until an alias is actually used.

> Please correct me if I'm wrong, but z/VM still requires the PAV
> alias(es) to be hard-coded/pre-defined to the DASD subsystem ...
> HyperPAV support or whatever in z/VM.  Otherwise, z/VM can not support
> PAV aliases.  This is what we've experienced at our site.  If there's
> something we are missing in the z/VM support, please let me know.
> 
> Thanks,
> JR
> 

Having said all that, I'm still having a hard time getting my head around 
the description of your experience.  Perhaps I'm just missing something 
obvious, but from what I can gather, I don't know why it wouldn't work. If 
you're having problems with it, get a PMR opened with us.  If we're just 
talking different languages (entirely possible; I've been told I don't 
speak English), contact me off list, as it sounds like there are 
opportunities in the links that Bill provided the other day...

Regards,
Eric

Eric Farman
z/VM I/O Development
IBM Endicott, NY
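[As a rough illustration of the commands Eric mentions: QUERY PAV is the CP command for inspecting PAV/HyperPAV devices, and SET CU switches a control unit's PAV mode.  The operands below are sketched from memory, not verified syntax; consult the CP Commands and Utilities Reference before use.]

```
QUERY PAV ALL      List the PAV/HyperPAV base and alias devices CP can see
QUERY PAV 810F     Check a single base device and its current alias state
SET CU ...         Switch a control unit between HyperPAV and classic PAV
                   (operands omitted here; see the CP reference)
```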


Re: z/VM 5.3 and PAV

2008-07-07 Thread Dave Yarris
Thanks for the reply, Bill.  Unfortunately, any numbers I have now are not 
any indication of where we will be.  We have been in the early stages of 
development of our first application of Linux on z/VM for the last two 
years, and I have yet to get the project folks to give me much in the way 
of what workload to expect.  I anticipated combining some of the 3390-3s 
dedicated to specific filesystems onto an M9 just to make the servers more 
manageable.  Other than that, we were looking at the possibility of 
needing PAV on M27 volumes that will hold a DB2 database.  The numbers we 
were given were that, when fully implemented, the DB would be accessible 
to 30k+ users.

 



Bill Bitner <[EMAIL PROTECTED]> 
Sent by: The IBM z/VM Operating System 
07/03/2008 04:51 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: z/VM 5.3 and PAV

Are you concerned you'll need PAV as you consolidate current
volumes? or as you grow current workloads? If the former, I'd
suggest looking at some statistics like I/O rate per GB of
disk storage. For example if you are doing 15 I/Os a second on
a 2.7GB disk, that's 5.6 I/Os/GB. If you are then going to
configure your new volumes as 24GB volumes then on average
you'd have 134.4 I/Os per second for the volumes. Ask your
vendor about whether that would be significant enough to
warrant PAV. It's been my experience that most VM shops do
not need PAV for large number of volumes in the same way
as they do in z/OS shops. If you do need PAV, a couple
sources of additional information include:
http://www.vm.ibm.com/storman/pav/index.html
http://www.vm.ibm.com/devpages/farman/WAVVPAV.PDF
http://www.vm.ibm.com/perf/reports/zvm/html/530hpav.html
http://www.vm.ibm.com/perf/reports/zvm/html/520pav.html

Bill Bitner



Re: z/VM 5.3 and PAV

2008-07-03 Thread Imler, Steven J
Well ... for me ... it's not an issue of what benefit having a PAV
alias(es) for a given volume might yield.  It's a question of support
and toleration (for example, will VM:Backup or HiDRO back the BASE up
and erroneously back the potential PAV alias(es) too?).

So, up until recently (6 months ago or so [we did a huge DASD
migration]), each time our storage group got a new disk array (IBM or
Hitachi, whatever), they would hard-code PAV alias addresses into the
"DASD subsystem" ... because z/OS needed that.  When this was done, z/VM
5.3 could "see and use" the PAV alias(es) (for non-z/OS volumes too).

However, now, z/OS does not require that PAV alias(es) be defined to the
disk array (hard coded in the DASD subsystem) with the newer DASD
subsystems ... z/OS (I think via WLM) can *dynamically define* PAV
aliases to alleviate I/O contention ... the PAV aliases do not need to
be pre-defined or hard-coded in the DASD subsystem/configuration.  (This
is why our storage group no longer defines the PAV alias addresses to
the DASD subsystem ... and won't for z/VM.)

Please correct me if I'm wrong, but z/VM still requires the PAV
alias(es) to be hard-coded/pre-defined to the DASD subsystem,
HyperPAV support or not.  Otherwise, z/VM cannot support
PAV aliases.  This is what we've experienced at our site.  If there's
something we are missing in the z/VM support, please let me know.

Thanks,
JR

JR (Steven) Imler
CA
Senior Software Engineer
Tel:  +1 703 708 3479
Fax:  +1 703 708 3267
[EMAIL PROTECTED]


> -Original Message-
> From: The IBM z/VM Operating System 
> [mailto:[EMAIL PROTECTED] On Behalf Of Bill Bitner
> Sent: Thursday, July 03, 2008 04:52 PM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: z/VM 5.3 and PAV
> 
> Are you concerned you'll need PAV as you consolidate current
> volumes? or as you grow current workloads? If the former, I'd
> suggest looking at some statistics like I/O rate per GB of
> disk storage. For example if you are doing 15 I/Os a second on
> a 2.7GB disk, that's 5.6 I/Os/GB. If you are then going to
> configure your new volumes as 24GB volumes then on average
> you'd have 134.4 I/Os per second for the volumes. Ask your
> vendor about whether that would be significant enough to
> warrant PAV. It's been my experience that most VM shops do
> not need PAV for large number of volumes in the same way
> as they do in z/OS shops. If you do need PAV, a couple
> sources of additional information include:
> http://www.vm.ibm.com/storman/pav/index.html
> http://www.vm.ibm.com/devpages/farman/WAVVPAV.PDF
> http://www.vm.ibm.com/perf/reports/zvm/html/530hpav.html
> http://www.vm.ibm.com/perf/reports/zvm/html/520pav.html
> 
> Bill Bitner
> 
> 


Re: z/VM 5.3 and PAV

2008-07-03 Thread Bill Bitner
Are you concerned you'll need PAV as you consolidate current
volumes? or as you grow current workloads? If the former, I'd
suggest looking at some statistics like I/O rate per GB of
disk storage. For example if you are doing 15 I/Os a second on
a 2.7GB disk, that's 5.6 I/Os/GB. If you are then going to
configure your new volumes as 24GB volumes then on average
you'd have 134.4 I/Os per second for the volumes. Ask your
vendor about whether that would be significant enough to
warrant PAV. It's been my experience that most VM shops do
not need PAV for large number of volumes in the same way
as they do in z/OS shops. If you do need PAV, a couple
sources of additional information include:
http://www.vm.ibm.com/storman/pav/index.html
http://www.vm.ibm.com/devpages/farman/WAVVPAV.PDF
http://www.vm.ibm.com/perf/reports/zvm/html/530hpav.html
http://www.vm.ibm.com/perf/reports/zvm/html/520pav.html

Bill Bitner
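[Bill's rule-of-thumb arithmetic above can be sketched in a few lines of Python, using the figures from his example; whether ~134 I/Os/sec justifies PAV is still a question for your vendor.]

```python
# Bill's I/O-density estimate: measure I/Os per second per GB on today's
# volumes, then project that density onto the larger volume you plan to use.
def ios_per_gb(io_rate: float, size_gb: float) -> float:
    """I/O operations per second, per GB of volume."""
    return io_rate / size_gb

density = round(ios_per_gb(15, 2.7), 1)   # 15 I/Os/sec on a 2.7GB disk
projected = round(density * 24, 1)        # same density on a 24GB volume

print(density, projected)                 # 5.6 134.4
```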


Re: z/VM 5.3 and PAV

2008-07-03 Thread Dave Yarris
Appreciate all of the responses so far.  The Hitachi USP V is relatively 
new on the market and was not ordered with any PAV in the initial buy. 
Yes, it will be FICON attached and serve two z/OS LPARs along with a 
z/VM system with two IFLs. 

I have read the z/VM 5.3 enhancement notes talking about Hyper-PAV 
support.  I just couldn't find anything ... or anyone ... that talked 
about dynamic PAV.  I did find a paper that talked about how to do 
"dynamic" PAV with 5.1 or 5.2, but it looked real messy.  The 5.3 "CP 
Planning and Admin" book outlines PAV pretty well, but seems to be generic 
as far as which type of PAV is being discussed.  We would obviously like 
to do Hyper over anything else for all of the obvious work-intensive 
reasons.

I don't really have any way to test whether we are going to have I/O 
queueing problems, since we can only do 3390-3 right now.  The USP V is 
still in the install process, and the planned servers after that.  
Thinking it through, running larger volumes with, for example, a DB2 
database on an M9 or M27 accessible to 30k+ users might be a problem 
without PAV.  We would like to do it up front instead of catching up 
later. 

Re: z/VM 5.3 and PAV

2008-07-03 Thread Kris Buelens
Be sure to carefully read what Alan wrote: I/O queuing in CP on the RDEV.

That is: if you have a guest (like Linux, z/OS, or even SFS and
DB2/VM), it will start only one I/O at a time to a device (the
architecture doesn't allow more).  The guest can have a queue too (on
its virtual device, that is); CP's PAV support can't help here.  The
queue in the guest cannot be seen by CP's performance monitor.  To
profit from PAV, guests need PAV support themselves, and you have to
attach base plus alias addresses to the guest.

Furthermore: when dataspaces are used, CP's paging routines perform the
I/O, and CP's paging doesn't use PAV...
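[Kris's last point, attaching base plus alias addresses to the guest, might look roughly like this in the user directory.  The device numbers are invented for illustration; check the CP Planning and Administration book for the actual requirements.]

```
* Hypothetical user directory fragment (device numbers invented):
* a PAV base and two of its aliases dedicated to one guest.  The
* guest OS must itself support PAV to drive I/O through the aliases.
DEDICATE 0200 8200
DEDICATE 0201 8201
DEDICATE 0202 8202
```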

2008/7/3 Bruce Hayden <[EMAIL PROTECTED]>:
>
> The z/VM 5.3 announcement letter states that it includes HyperPAV
> support for IBM System Storage DS8000.  So - if you have a DS8000,
> then you have the dynamic PAV support.  However, as far as I know, 5.3
> does not virtualize HyperPAV, so a guest can't use it directly,
> although it can take advantage of VM's use of it for minidisk I/O.
>
> On Thu, Jul 3, 2008 at 10:01 AM, Imler, Steven J <[EMAIL PROTECTED]> wrote:
> > Alan,
> >
> > I suppose the reason you say "Before you go to all that work ..." is
> > because z/VM (unfortunately) does *not* support dynamic PAV.  Which
> > means the only way you can leverage or take advantage of PAV on z/VM is
> > to hard code the PAV aliases in the DASD subsystem.
> >
> > (This is the reason we no longer have access to PAV volumes on our z/VM
> > systems ... because no one wants to do the work configure the DASD
> > subsystem when z/OS will do this all dynamically.)
> >
> > JR (Steven) Imler
> > CA
> > Senior Software Engineer
> > Tel:  +1 703 708 3479
> > Fax:  +1 703 708 3267
> > [EMAIL PROTECTED]
> >
>
> --
> Bruce Hayden
> Linux on System z Advanced Technical Support
> IBM, Endicott, NY



--
Kris Buelens,
IBM Belgium, VM customer support


Re: z/VM 5.3 and PAV

2008-07-03 Thread Bruce Hayden
The z/VM 5.3 announcement letter states that it includes HyperPAV
support for IBM System Storage DS8000.  So - if you have a DS8000,
then you have the dynamic PAV support.  However, as far as I know, 5.3
does not virtualize HyperPAV, so a guest can't use it directly,
although it can take advantage of VM's use of it for minidisk I/O.

On Thu, Jul 3, 2008 at 10:01 AM, Imler, Steven J <[EMAIL PROTECTED]> wrote:
> Alan,
>
> I suppose the reason you say "Before you go to all that work ..." is
> because z/VM (unfortunately) does *not* support dynamic PAV.  Which
> means the only way you can leverage or take advantage of PAV on z/VM is
> to hard code the PAV aliases in the DASD subsystem.
>
> (This is the reason we no longer have access to PAV volumes on our z/VM
> systems ... because no one wants to do the work configure the DASD
> subsystem when z/OS will do this all dynamically.)
>
> JR (Steven) Imler
> CA
> Senior Software Engineer
> Tel:  +1 703 708 3479
> Fax:  +1 703 708 3267
> [EMAIL PROTECTED]
>

-- 
Bruce Hayden
Linux on System z Advanced Technical Support
IBM, Endicott, NY


Re: z/VM 5.3 and PAV

2008-07-03 Thread Marcy Cortes
I don't know anything about the HDS storage; presumably your new box is
FICON?  If so, you should see a world of difference there.
 
We run mod 3s, 9s, 27s, 54s.   All without PAV.  All on DS8300.  We
don't see any performance issues at all.  The performance problem that
PAV solves is queueing on the UCB.  Now, I probably wouldn't stick all
300 CMS users going after the same focus database on 1 mod 54, but
for a Linux server, I think they make sense.   How many I/Os could 1
guest manage to get queued to that volume?  
 
I think I'll just be having them add all new stuff as 54s (65,520 cyl),
except paging volumes, which we'll probably keep as 3s.
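[For reference, the model sizes tossed around in this thread follow from standard 3390 geometry: 56,664 bytes per track, 15 tracks per cylinder, and 3,339 / 10,017 / 32,760 / 65,520 cylinders for the mod 3/9/27/54.  A quick sketch:]

```python
# Capacity of common 3390 models from standard device geometry.
BYTES_PER_TRACK = 56_664
TRACKS_PER_CYL = 15

def capacity_gb(cylinders: int) -> float:
    """Decimal gigabytes for a 3390 volume of the given cylinder count."""
    return cylinders * TRACKS_PER_CYL * BYTES_PER_TRACK / 1e9

models = {"3390-3": 3_339, "3390-9": 10_017,
          "3390-27": 32_760, "3390-54": 65_520}
for name, cyls in models.items():
    print(f"{name}: {capacity_gb(cyls):.1f} GB")
```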
 
Marcy Cortes 
 

 



From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Dave Yarris
Sent: Thursday, July 03, 2008 6:39 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: [IBMVM] z/VM 5.3 and PAV

We are currently running a single z/VM 5.3 (701) system on Hitachi 7700E
disk, set up as 3390-3, connected to a z9BC via ESCON.  This has worked
well, but the limited disk storage has kept us from growing past the 15
RH Linux guests.  We are in the process of installing a new Hitachi USP
V that will give us the storage capability to use some M9 and M27 for
some of the new servers that are planned.  With those volume sizes I am
anticipating that we will need to utilize PAV for some applications.  We
don't have any prior experience or knowledge other than what the manuals
tell us.  Everything I have found on the subject so far has been pretty
generic.  We were looking at Hyper-PAV or possibly dynamic.  Will
dynamic PAV work for z/VM 5.3?  Can anyone share any experiences or
their particular application?  (Volumes?  Minidisks? Particular
applications i.e. DB2, etc?)  Appreciate any and all input. 

Dave


Re: z/VM 5.3 and PAV

2008-07-03 Thread Imler, Steven J
Alan,

I suppose the reason you say "Before you go to all that work ..." is
because z/VM (unfortunately) does *not* support dynamic PAV.  Which
means the only way you can leverage or take advantage of PAV on z/VM is
to hard code the PAV aliases in the DASD subsystem.

(This is the reason we no longer have access to PAV volumes on our z/VM
systems ... because no one wants to do the work to configure the DASD
subsystem when z/OS will do this all dynamically.)

JR (Steven) Imler
CA
Senior Software Engineer
Tel:  +1 703 708 3479
Fax:  +1 703 708 3267
[EMAIL PROTECTED]
 

> -Original Message-
> From: The IBM z/VM Operating System 
> [mailto:[EMAIL PROTECTED] On Behalf Of Alan Altmark
> Sent: Thursday, July 03, 2008 09:49 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: z/VM 5.3 and PAV
> 
> On Thursday, 07/03/2008 at 09:39 EDT, Dave Yarris 
> <[EMAIL PROTECTED]> wrote:
> > We are currently running a single z/VM 5.3 (701) system on Hitachi 
> > 7700E disk, set up as 3390-3, connected to a z9BC via ESCON.  This has 
> > worked well, but the limited disk storage has kept us from growing 
> > past the 15 RH Linux guests.  We are in the process of installing a 
> > new Hitachi USP V that will give us the storage capability to use some 
> > M9 and M27 for some of the new servers that are planned.  With those 
> > volume sizes I am anticipating that we will need to utilize PAV for 
> > some applications.  We don't have any prior experience or knowledge 
> > other than what the manuals tell us.  Everything I have found on the 
> > subject so far has been pretty generic.  We were looking at Hyper-PAV 
> > or possibly dynamic.  Will dynamic PAV work for z/VM 5.3?  Can anyone 
> > share any experiences or their particular application?  (Volumes? 
> > Minidisks?  Particular applications i.e. DB2, etc?)  Appreciate any 
> > and all input. 
> 
> Before you go to all that work, verify that you have a problem that 
> PAV will solve: I/O queuing in CP on the RDEV.  Your performance 
> monitor will tell you that.
> 
> Alan Altmark
> z/VM Development
> IBM Endicott
> 
> 


Re: z/VM 5.3 and PAV

2008-07-03 Thread Alan Altmark
On Thursday, 07/03/2008 at 09:39 EDT, Dave Yarris 
<[EMAIL PROTECTED]> wrote:
> We are currently running a single z/VM 5.3 (701) system on Hitachi 7700E 
> disk, set up as 3390-3, connected to a z9BC via ESCON.  This has worked 
> well, but the limited disk storage has kept us from growing past the 15 
> RH Linux guests.  We are in the process of installing a new Hitachi USP 
> V that will give us the storage capability to use some M9 and M27 for 
> some of the new servers that are planned.  With those volume sizes I am 
> anticipating that we will need to utilize PAV for some applications.  We 
> don't have any prior experience or knowledge other than what the manuals 
> tell us.  Everything I have found on the subject so far has been pretty 
> generic.  We were looking at Hyper-PAV or possibly dynamic.  Will 
> dynamic PAV work for z/VM 5.3?  Can anyone share any experiences or 
> their particular application?  (Volumes?  Minidisks?  Particular 
> applications i.e. DB2, etc?)  Appreciate any and all input. 

Before you go to all that work, verify that you have a problem that PAV 
will solve: I/O queuing in CP on the RDEV.  Your performance monitor will 
tell you that.

Alan Altmark
z/VM Development
IBM Endicott


z/VM 5.3 and PAV

2008-07-03 Thread Dave Yarris
We are currently running a single z/VM 5.3 (701) system on Hitachi 7700E 
disk, set up as 3390-3, connected to a z9BC via ESCON.  This has worked 
well, but the limited disk storage has kept us from growing past the 15 RH 
Linux guests.  We are in the process of installing a new Hitachi USP V 
that will give us the storage capability to use some M9 and M27 for some 
of the new servers that are planned.  With those volume sizes I am 
anticipating that we will need to utilize PAV for some applications.  We 
don't have any prior experience or knowledge other than what the manuals 
tell us.  Everything I have found on the subject so far has been pretty 
generic.  We were looking at Hyper-PAV or possibly dynamic.  Will dynamic 
PAV work for z/VM 5.3?  Can anyone share any experiences or their 
particular application?  (Volumes?  Minidisks? Particular applications 
i.e. DB2, etc?)  Appreciate any and all input. 

Dave