Re: PAV and VSE guest

2007-03-20 Thread Dieltiens Geert
Dave,

I would concentrate on MDC instead of PAV for VSE full packs. When we
started using MDC for VSE DASD, we saw a real boost in throughput even
though our real DASD already had quite a large cache.

Once MDC has been activated, you can use a monitor (or the QMDC EXEC,
written by K. Buelens; it should be on the VM download pages) to get an
idea of the MDC hit ratio per minidisk. Keep MDC on for full packs with
good hit ratios, and turn it off for the really bad ones. You could even
rearrange your VSE files onto mostly-read full packs vs. mostly-write
minidisks.
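
Turning MDC off per minidisk is a directory change: a MINIOPT statement
right after the MDISK statement. A minimal sketch, with the address and
volser invented for illustration (check the MINIOPT syntax on your VM
level):

  MDISK 0201 3390 0001 3338 VSEPK1 MW
  MINIOPT NOMDC
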
We use about 800 MB for MDC (no XSTORE). Using more storage didn't
really help.
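
The MDC size itself can be capped with SET MDCACHE; a minimal sketch,
using our 800 MB as the example figure (operands from memory, so check
the CP command reference for your level):

  CP SET MDCACHE STORAGE 0M 800M
  CP QUERY MDCACHE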

A good example of MDC success is the full pack where DOSRES and SYSWK1
reside: it has an average MDC hit ratio of 96%. Some of our database
(IDMS) full packs get 90% as well...

Just give it a try.

Bye,
Geert.  


Re: PAV and VSE guest

2007-03-20 Thread Dave Reinken
OK, I am going to recap here what I am hearing, so that anyone can point
out any flaws.

1) VSE is going to queue the I/O, therefore simply changing everything
from full packs to minidisks and adding PAV is not going to get me
anything.

2) A way to trick VSE into not queuing the I/O would be to take my full
pack and, instead of making it a single minidisk, make it (say) three
minidisks. This would have the effect of causing VSE not to queue I/Os
among those three packs, and allow VM to do its PAV magic. The problem
I see with this is that with our predominantly sequential processing,
VSE is still probably going to queue on each of the three minidisks on
the physical volume serially, most likely with the end effect of not
buying me anything.

3) The most promising performance increase, especially for a sequential
read workload such as ours, would be to convert the full packs to
minidisks and use spare memory (which we do have) to run a decently
large (800MB-1GB?) cache against the minidisks. This should, however,
be measured and reality-checked by looking at read/write ratios and
cache hits by device (see the sketch after this list), which would lead
to turning the cache off for volumes that are not getting any benefit.

4) PAV and MD cache don't play nicely together; since MD cache may
benefit me and PAV likely will not, I should forget PAV for now,
although with future system updates it may be something to revisit.
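
For the reality check in 3), a first approximation could come from a
plain CP command: INDICATE LOAD reports system-wide MDC rates and a hit
ratio, while per-minidisk numbers need a monitor product or the QMDC
EXEC mentioned elsewhere in the thread (command from memory, so verify
locally):

  CP INDICATE LOAD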

Thank you for your time so far, Eric, Catherine, Kris, Rob, Bill, and
Geert.


Re: PAV and VSE guest

2007-03-20 Thread Dave Reinken
 From: Eric Schadow [EMAIL PROTECTED]
 ESCON or FICON?

We are currently on ESCON. I think we would consider moving to FICON,
but since we aren't even saturating the ESCON channels yet, it seems
futile. More likely we will move to FICON when we upgrade from the
2105.


PAV and VSE guest

2007-03-19 Thread Dave Reinken
I was recently reviewing this:
http://www.vm.ibm.com/storman/pav/pav2.html
at the behest of my manager. He is looking to extend the life of and
better utilize our current hardware. We are running z/VM 5.2 on a z800,
with a single z/VSE 3.1.2 guest, using Shark 2105-F20 disk. We currently
use DEDICATED volumes for z/VSE. I am not necessarily against changing
these volumes to minidisks if there is a performance benefit to be
gained. However, from my reading of the above referenced article, it
appears to me that converting them to minidisks and running PAV is
going to gain me about ZERO, since all I have accessing the disks is a
single z/VSE guest.

Is this true, or am I missing something and should look into PAV and
minidisks for my single z/VSE guest? It looks to me that multiple z/VSE
guests sharing volumes on minidisk _may_ benefit from PAV under VM, but
a single one won't.


Re: PAV and VSE guest

2007-03-19 Thread Eric Schadow
Dave

If you make the DASD minidisks instead of DEDICATED, you can try VM
minidisk caching.
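
For example, in the CP directory that means replacing the DEDICATE with
a full-pack minidisk; a minimal sketch, with addresses and volser
invented here for illustration:

  * was:   DEDICATE 0200 D200
  MDISK 0200 3390 0000 END VSE001 MW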

I am pretty sure PAV is z/OS only...

All the regular tuning things can be reviewed also:
- VSAM buffer tuning
- Sequential file blocking
- Application s/w tuning
- VSE or VM paging?

etc.



Eric

Eric Schadow
Mainframe Technical Support
www.davisvision.com

Re: PAV and VSE guest

2007-03-19 Thread McBride, Catherine
Dave, you may want to cross-post this one on VSE-L... it could generate
some good dialogue.



Re: PAV and VSE guest

2007-03-19 Thread Dave Reinken
Well, I know that VSE can't do PAV, but z/VM 5.2 with APAR VM63952 can
do it for VSE, provided that the volumes are minidisks. I just don't
think it is going to get me much (if anything) with a single guest.
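
If we do go down the PAV road later, my understanding from the APAR
writeup is that the support includes a QUERY PAV command to show base
and alias volumes; syntax from memory, so it needs checking on 5.2:

  CP QUERY PAV ALL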

Would VM minidisk caching help throughput in a large batch environment?
The manager is looking at the performance monitors and seeing less than
full usage of the Shark's channels; I'm thinking we need to look more
closely at the COBOL programs running there than at the Shark...



Re: PAV and VSE guest

2007-03-19 Thread Kris Buelens

I guess that what you found is that VSE adheres to the old rules: it is
useless to send an I/O to a disk that is busy, so VSE queues. VM can't
change that.

Minidisk cache can help or hurt VSE: CP's MDC will change the I/O to
make it a full-track read. So if you work sequentially, it will help.
If you work randomly, the full-track read will probably not help. With
high I/O rates it will even hurt due to the larger data transfers.
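
To put a rough number on the transfer size: a 3390 track holds about
56K (56,664 bytes), so turning a 4K read into a full-track read moves
56664 / 4096, i.e. roughly 14 times the data per cache miss. Sequential
work gets the rest of the track for free; random work just pays for it.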

--
Kris Buelens,
IBM Belgium, VM customer support


Re: PAV and VSE guest

2007-03-19 Thread Rob van der Heij

On 3/19/07, Kris Buelens [EMAIL PROTECTED] wrote:


I guess that what you found is that VSE adheres to the old rules: it is
useless to send an I/O to a disk that is busy, so VSE queues. VM can't
change that.


In that case, it might help to give VSE its disk space in
less-than-full-pack minidisks, so that there are more virtual
subchannels per GB (this only makes sense if the workload in VSE is
such that you would end up with I/O spread over those smaller disks).
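
A minimal directory sketch of that split, one 3390-3 (3,339 cylinders,
cylinder 0 left for the label) carved into three minidisks; addresses,
volser, and split points are invented for illustration:

  MDISK 0201 3390 0001 1113 VSE001 MW
  MDISK 0202 3390 1114 1113 VSE001 MW
  MDISK 0203 3390 2227 1112 VSE001 MW

Each minidisk has its own virtual subchannel, so VSE can start I/O to
one while another is still busy.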


Minidisk cache can help or hurt VSE: CP's MDC will change the I/O to
make it a full-track read. So if you work sequentially, it will help.
If you work randomly, the full-track read will probably not help. With
high I/O rates it will even hurt due to the larger data transfers.


For random I/O with small block sizes you might benefit from
record-level MDC, but you would need your data sufficiently grouped to
do that. For full-track cache, you probably will not notice the larger
data transfers if you're on FICON, but there is also a CPU cost
associated with MDC, and if you don't save I/O with MDC then spending
the CPU cycles and memory makes less sense.

If the z800 has a lot of available memory (quite possible with one
z/VSE guest), then MDC could help reduce some I/O and maybe speed some
things up. Again, for this it could also help to spread your datasets
over different minidisks. That way you can still read data out of MDC
while waiting for a write to complete.

Rob
--
Rob van der Heij
Velocity Software, Inc
http://velocitysoftware.com/


Re: PAV and VSE guest

2007-03-19 Thread Bill Bitner
Dave, you are probably correct in being cautious about making
a sweeping change without looking for evidence first. For more
details on the PAV support that went out with the APAR you
referenced, see http://www.vm.ibm.com/perf/reports/zvm/html/520pav.html.
Remember that if there is no queueing on I/O in VM, there is no benefit
from this PAV support. So if the I/O is queued in VSE, this won't help.
The minidisk cache aspect is interesting. I imagine at one time you
dedicated the volumes, perhaps for V=R and I/O Assist. Since that
doesn't apply anymore, there might be value in looking at MDC. I'd
start by getting a feel for the read/write ratios, since only reads
have a chance to benefit from MDC. Since this is VSE, it would be
limited to the full-track cache of MDC; record-level MDC does not
apply here.
We also learned recently that PAV and MDC do not mix as well as
we would like. Watch this space.

Bill Bitner - VM Performance Evaluation - IBM Endicott - 607-429-3286