Re: Moving on

2007-03-19 Thread Peter Carrier
Hi Richard,

We at MIT wish to thank you for your dedication and support for MailBook
over the years.  While we're down to only a handful of users, it is still
appreciated.

Peter, and the folks at MIT


Re: Moving on

2007-03-19 Thread William Munson

Richard,

Good luck!

I enjoyed working with you.
I also enjoyed the MAILBOOK product and all it could do.
Thanx for all the help you gave me and all the others
you helped in the VM community.

thanx again

Bill Munson
IT Specialist
Office of Information Technology
State of New Jersey
(609) 984-4065

President MVMUA
http://www.marist.edu/~mvmua



Richard A. Schafer wrote:
After over 22 years of working on VM and my MailBook email software, I 
have decided to stop.


 

I recently ran across the following email that I sent out in 1989(!) 
that I think is worth repeating here one last time:


 


==

Date: Tue, 5 Dec 89 20:49:40 CST

Reply-To: MAIL/MAILBOOK subscription list [EMAIL PROTECTED]

Sender: MAIL/MAILBOOK subscription list [EMAIL PROTECTED]

From: Richard A. Schafer [EMAIL PROTECTED]

Subject: History

To: MAIL/MAILBOOK Mailing list [EMAIL PROTECTED]

I suddenly realized I had let something slip by without notice. (Those 
of you with the full commented source to the code, go take a look at 
line 11 of MAILBOOK XEDIT.) For the rest of you, that line says that the 
original date of the code is October 16, 1984. That means I've now been 
working on this package for

*** 5 YEARS ***!

For those of you who haven't been around that long, you might find a 
little history of the project interesting. The predecessor of the 
current code was written at MIT by a fellow named Dave Burleigh. That's 
why you may occasionally see a reference to MIT MAIL. What follows is 
a note Dave wrote me in 1986 when I was working on a 
presentation to SHARE on the package. (If anyone cares, I think this 
would have been SHARE 66; the presentation was published in Volume I of 
the proceedings.)


Subject: Re: MAIL history
To: Richard Schafer [EMAIL PROTECTED]
In-Reply-To: Your message of Tue 11 Mar 1986 16:59:36 CDT

Glad to. I'd like a copy of your presentation, if possible, too. I began 
working on MIT MAIL's earliest ancestor sometime in 1981, as soon as I 
had a full screen terminal in my office. I was dissatisfied with the 
line-oriented MAIL program then in use at MIT, and wrote a simple 
EXEC/XEDIT macro to let me prepare my mail message in full-screen mode 
and send it out. That program displayed incoming mail in line mode, and 
didn't include much power for processing incoming mail. Since other 
staff members were used to finding little goodies on my disk, my mail 
program began to find its way onto other people's disks, and soon became 
the standard mail program used by the CMS programming staff. I began to 
get suggestions/requests/demands for extensions. I had been doing a lot 
with the Display Management System (DMS) at the time, and decided to put 
it to work in MAIL for displaying incoming mail in full-screen mode. I 
added subcommands for replying, forwarding, logging, etc. After 
realizing that the MIT-Multics machine could act as a gateway to 
Arpanet, I studied up on the RFC822 standard for mail headers and 
altered MAIL to recognize and build standard headers.


And then there was BITNET. While the BITNET standard for mail headers 
was practically undefined at the time, the work involved in being able 
to handle incoming BITNET mail and Arpanet mail was starting to 
overwhelm me. I decided it was time to make MAIL into a legitimate 
programming project, and approached my supervisor about scheduling some 
time for me to tackle it (it had been a spare-time project 
heretofore). Since he was the author of the line-oriented mail program I 
was replacing, it was very big of him to endorse the project. Although 
REXX had just arrived on the scene, my initial test with it suggested 
that a REXX version of MAIL would be slow and expensive to run, and I 
decided to reprogram in EXEC2. I regret that decision now, but we were 
starting to hear complaints about how much MAIL was costing to run, and 
I knew that the added functionality I had in mind for MAIL was going to 
make it even more expensive.


I began rewriting MAIL in August 1983 as a Read/Send/Menu mode system, 
much as it appears now. It was installed for public use in late 
September or early October 1983, was well received, and soon began to 
make its way out to other BITNET sites. It underwent a great many 
revisions in response to suggestions and bug reports, mostly from BITNET 
users. Particularly helpful and influential in MAIL's early development 
were Richard Schafer (RICE), Bill Rubin (CUNY), and Hank Nussbacher 
(CUNY and WEISMANN).


In January 1984 I left MIT for a contract programming venture. I 
continued to work on MAIL during most of that year, but found I could 
commit less and less time to it. I finally had to pass the baton to 
Richard Schafer, author of MAILBOOK, who courageously agreed to take on 
the project.


***

I didn't know what form you wanted this history to take. This is quick 
and dirty, off the 

HCPCLS174E Paging I/O error; IPL failed - on second level z/VM

2007-03-19 Thread Stefan Raabe
Hello List, 


I was working on a second-level VM (z/VM 5.2, 5203 RSU applied)
when I got these messages after IPL while trying to log on users:

HCPCLS174E Paging I/O error; IPL failed 
HCPCLS059E AUTOLOG failed for AUTOLOG1 - IPL failed 

I was running with 384 MB, so I increased it to 512 MB, and after IPL
everything worked fine again.

After a few IPLs (I don't know exactly how many, 3-5) I got the same problem
again. This time I drained the page devices, and once I had drained them all
I was able to log on users.

Do I have a bad DASD pack, as described in the HCPCLS174E documentation?
Or is this somehow caused by the combination of main and auxiliary (paging)
storage sizes?

Can someone shed some light on this?

Thanks + Regards, Stefan







Re: TCPNJE

2007-03-19 Thread Schuh, Richard
What happened? Did you forget a NOT operator somewhere? The link
stays alive with KEEPALIV=NO and drops with KEEPALIV=YES. Reminds me of
the OS/2 dialog boxes: "You told me to do X. Reply YES to not do it; NO
to go ahead and do it."


Regards, 
Richard Schuh 


-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Les Geer (607-429-3580)
Sent: Friday, March 16, 2007 6:46 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: TCPNJE

Not yet. I haven't been able to get in contact with anyone in the MVS 
or Network groups since Wednesday morning - it must be the cursed 
caller ID thing - and could not pry the answer from them prior to that.


Regards,
Richard Schuh


Just for the heck of it, try using KEEPALIV=NO and see if that makes a
difference.

Best Regards,
Les Geer
IBM z/VM and Linux Development


Re: HCPCLS174E Paging I/O error; IPL failed - on second level z/VM

2007-03-19 Thread RPN01
Did you format the entire paging space? It needs to be pre-formatted into 4K
page blocks before it can be used.

CMS minidisk volumes can be defined with ICKDSF by formatting just cylinder 0
to contain the correct label, and each user formats their own small piece of
the world when the time comes, but paging space needs to be completely
formatted via ICKDSF before its first use.
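
For anyone who hasn't prepared a paging pack before, the sequence from a CMS
user looks roughly like the following. This is a sketch from memory, not part
of Robert's note: the device address 0A80, the label PAG001, and the 3390-3
cylinder numbers are placeholders, and the exact ICKDSF operands should be
checked against the Device Support Facilities User's Guide:

   ATTACH 0A80 *                  (attach the real device to yourself)
   ICKDSF                         (then enter the commands at its prompt)
   CPVOLUME FORMAT UNITADDRESS(0A80) NOVERIFY VOLID(PAG001)
   CPVOLUME ALLOCATE UNITADDRESS(0A80) NOVERIFY TYPE((PERM,0,0)(PAGE,1,3338))

The volume then still has to appear in the CP_Owned list in SYSTEM CONFIG
before CP will page to it.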


-- 
   .~.      Robert P. Nix      Mayo Foundation
   /V\      RO-OC-1-13         200 First Street SW
  / ( ) \   507-284-0844       Rochester, MN 55905
   ^^-^^
 In theory, theory and practice are the same, but
 in practice, theory and practice are different.




On 3/19/07 10:12 AM, Stefan Raabe [EMAIL PROTECTED]
wrote:

 
 Hello List, 
 
 
 i was working on a a second level VM (z/VM 5.2, 5203RSU applied)
 when i got these messages after IPL when trying to log on users:
 
 HCPCLS174E Paging I/O error; IPL failed
 HCPCLS059E AUTOLOG failed for AUTOLOG1 - IPL failed
 
 I was running with 384 Meg, so i made it 512meg and after ipl everything
 worked fine again.
 
 After some ipl's (dont know exactly, 3-5) i got the same problem again.
 This time i drained the page devices, and after i drained them all
 i was able to log on users.
 
 Do i have a bad dasd pack as described in the HCPCLS174E documentation?
 Or is this somehow caused by a main / auxilliary page storage constellation?
 
 Is someone able to put some light on this?
 
 Thanks + Regards, Stefan
 
 
 
 
 




PAV and VSE guest

2007-03-19 Thread Dave Reinken
I was recently reviewing this:
http://www.vm.ibm.com/storman/pav/pav2.html
at the behest of my manager. He is looking to extend the life of and
better utilize our current hardware. We are running z/VM 5.2 on a z800,
with a single z/VSE 3.1.2 guest, using Shark 2105-F20 disk. We currently
use DEDICATED volumes for z/VSE. I am not necessarily against changing
these volumes to minidisks if there is a performance benefit to be
gained. However, from my reading of the above-referenced article, it
appears to me that converting them to minidisks and running PAV is
going to gain me about ZERO, since all I have accessing the disks is a
single z/VSE guest.

Is this true, or am I missing something, and should I look into PAV and
minidisks for my single z/VSE guest? It looks to me like multiple z/VSE
guests sharing volumes on minidisk _may_ benefit from PAV under VM, but
a single one won't.
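
For context, the directory-level difference being weighed here is between a
dedicated volume in the z/VSE guest's entry and a full-pack minidisk on the
same real device. A minimal sketch, with 0200 and 0A80 as placeholder virtual
and real device numbers (not taken from the article):

   DEDICATE 0200 0A80

versus

   MDISK 0200 3390 DEVNO 0A80 MW

The minidisk form is what lets CP function such as the PAV support described
in the article come into play.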


Re: PAV and VSE guest

2007-03-19 Thread Eric Schadow
Dave

If you make the DASD minidisks instead of DEDICATED, you can try VM minidisk
caching.

I am pretty sure PAV is z/OS only...

All the regular tuning things can be reviewed also:
- VSAM buffer tuning
- Sequential file blocking
- Application s/w tuning
- VSE or VM paging?

etc.
etc.



Eric 



At 04:18 PM 3/19/2007, you wrote:
I was recently reviewing this:
http://www.vm.ibm.com/storman/pav/pav2.html
at the behest of my manager. He is looking to extend the life of and
better utilize our current hardware. We are running z/VM 5.2 on a z800,
with a single z/VSE 3.1.2 guest, using Shark 2105-F20 disk. We currently
use DEDICATED volumes for z/VSE. I am not necessarily against changing
these volumes to minidisks if there is a performance benefit to be
gained. However, from my reading of the above referenced article, it
appears to me that converting them to minidisks and running PAV is
going to gain me about ZERO, since all I have accessing the disks is a
single z/VSE guest.

Is this true, or am I missing something and should look into PAV and
minidisks for my single z/VSE guest? It looks to me that multiple z/VSE
guests sharing volumes on minidisk _may_ benefit from PAV under VM, but
a single one won't.

Eric Schadow
Mainframe Technical Support
www.davisvision.com 








Re: PAV and VSE guest

2007-03-19 Thread McBride, Catherine
Dave, you may want to cross-post this one on VSE-L ... it could generate some
good dialogue.

-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]
Behalf Of Dave Reinken
Sent: Monday, March 19, 2007 3:19 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: PAV and VSE guest


I was recently reviewing this:
http://www.vm.ibm.com/storman/pav/pav2.html
at the behest of my manager. He is looking to extend the life of and
better utilize our current hardware. We are running z/VM 5.2 on a z800,
with a single z/VSE 3.1.2 guest, using Shark 2105-F20 disk. We currently
use DEDICATED volumes for z/VSE. I am not necessarily against changing
these volumes to minidisks if there is a performance benefit to be
gained. However, from my reading of the above referenced article, it
appears to me that converting them to minidisks and running PAV is
going to gain me about ZERO, since all I have accessing the disks is a
single z/VSE guest.

Is this true, or am I missing something and should look into PAV and
minidisks for my single z/VSE guest? It looks to me that multiple z/VSE
guests sharing volumes on minidisk _may_ benefit from PAV under VM, but
a single one won't.


Re: PAV and VSE guest

2007-03-19 Thread Dave Reinken
Well, I know that VSE can't do PAV itself, but z/VM 5.2 with APAR VM63952 can
do it for VSE, provided that the volumes are minidisks. I just don't
think that it is going to get me much (if anything) with a single
guest.

Would VM minidisk caching help throughput in a large batch environment?
The manager is looking at the performance monitors and seeing less than
full usage on the Shark's channels; I'm thinking we need to look more closely
at the COBOL programs running there than at the Shark...

  Original Message 
 Subject: Re: PAV and VSE guest
 From: Eric Schadow [EMAIL PROTECTED]
 Date: Mon, March 19, 2007 4:27 pm
 To: IBMVM@LISTSERV.UARK.EDU
 
 Dave
 
 If you made the DASD mini-disks instead of DEDICATED you can try VM mini disk 
 caching. 
 
 I am pretty sure PAV is z/OS only...
 
 All the regular tuning things can reviewed also
 - VSAM buffer tuning -
 - Sequential file blocking
 - Application s/w tuning
 - VSE or VM paging?
 
 etc
 etc
 
 
 
 Eric 
 
 
 
 At 04:18 PM 3/19/2007, you wrote:
 I was recently reviewing this:
 http://www.vm.ibm.com/storman/pav/pav2.html
 at the behest of my manager. He is looking to extend the life of and
 better utilize our current hardware. We are running z/VM 5.2 on a z800,
 with a single z/VSE 3.1.2 guest, using Shark 2105-F20 disk. We currently
 use DEDICATED volumes for z/VSE. I am not necessarily against changing
 these volumes to minidisks if there is a performance benefit to be
 gained. However, from my reading of the above referenced article, it
 appears to me that converting them to minidisks and running PAV is
 going to gain me about ZERO, since all I have accessing the disks is a
 single z/VSE guest.
 
 Is this true, or am I missing something and should look into PAV and
 minidisks for my single z/VSE guest? It looks to me that multiple z/VSE
 guests sharing volumes on minidisk _may_ benefit from PAV under VM, but
 a single one won't.
 
 Eric Schadow
 Mainframe Technical Support
 www.davisvision.com 


Re: PAV and VSE guest

2007-03-19 Thread Kris Buelens

I guess that what you found is that VSE adheres to the old rules: it is
useless to send an I/O to a disk that is busy, so VSE queues. VM can't
change that.

Minidisk cache can help or hurt VSE: CP's MDC will change the I/O to make it
a full-track read.  So if you work sequentially, it will help. If you work
randomly, the full-track read will probably not help.  With high I/O rates it
will even hurt due to the larger data transfers.

--
Kris Buelens,
IBM Belgium, VM customer support


Re: PAV and VSE guest

2007-03-19 Thread Rob van der Heij

On 3/19/07, Kris Buelens [EMAIL PROTECTED] wrote:


I guess that what you found is that VSE adheres to the old rules: it is
useless to send an I/O to a disk that is busy, so VSE queues. VM can't
change that.


In that case, it might help to give VSE its disk space as less-than-full-pack
minidisks so that there are more virtual subchannels per GB (this only makes
sense if the workload in VSE is such that you would end up with the I/O
spread over those smaller disks).
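
As a sketch of what that could look like in the VSE guest's directory entry,
assuming a 3390-3 (3,339 cylinders) split three ways; the cylinder extents,
volume label, and device numbers are only illustrative:

   MDISK 0200 3390 0001 1112 VSE001 MW
   MDISK 0201 3390 1113 1112 VSE001 MW
   MDISK 0202 3390 2225 1114 VSE001 MW

Each minidisk is a separate virtual device, so the guest has three subchannels
to drive concurrent I/O with, instead of one for the whole pack.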


Minidisk cache can help or hurt VSE: CP's MDC will change the IO to make it
a fulltrack read.  So if you work sequentially, it will help. I you work
randomly the fulltrack read will probaly not help.  With high I/O rates it
will even hurt due to the larger data transfers.


For random I/O with small block sizes you might benefit from record-level MDC,
but you would need your data sufficiently grouped to do that. For full-track
cache, you probably will not notice the larger data transfers if
you're on FICON, but there is also a CPU cost associated with MDC, and
if you don't save I/O with MDC then spending the CPU cycles and memory makes
less sense.

If the z800 has a lot of available memory (quite possible with one
z/VSE guest) then MDC could help reduce some I/O and maybe speed up
some things. Again, for this it could also help if you spread your
datasets over different minidisks. That way you can still read data
out of MDC while waiting for a write to complete.

Rob
--
Rob van der Heij
Velocity Software, Inc
http://velocitysoftware.com/


Re: TCPNJE

2007-03-19 Thread Alan Altmark
On Monday, 03/19/2007 at 11:50 MST, Schuh, Richard [EMAIL PROTECTED] 
wrote:
 What happened? Did you forget about a NOT operator somewhere? The link
 stays alive with KEEPALIV=NO and drops with KEEPALIV=YES. Reminds me of
 the OS2 dialog boxes. You told me to do x. Reply YES to not do it; NO
 to go ahead and do it.

:-) So, back in the Dark Ages, we used autodial modems with *real* 
inactivity timers.  If you wanted to keep the modem going and, hence, your 
connection, you used keepalive packets.  These days their only purpose is 
to verify that the path is 'alive' and to tell your application if it is not.

I've got good news and bad news.

The good news is that we have keepalive packets.  The bad news is that we 
have keepalive packets.  Now you see first hand why they are not generally 
accepted into all TCP implementations. 

It also tells me you have only a single route between your VM and MVS 
system.  NG.  This is the worst environment in which to use keepalive 
packets.  With two or more paths, a failed keepalive indicates a *major* 
problem, not just a glitch as someone fiddles with a port definition 
somewhere.

Alan Altmark
z/VM Development
IBM Endicott


Re: TCPNJE

2007-03-19 Thread Schuh, Richard
This is a first cut talking to a lab MVS system to verify that their
TCPNJE is viable. The final configuration has not been established. It
will probably include from 3 to 10 MVS systems and 2 VMs. 

KEEPALIV=YES is the default for the PARM statement (in the CONFIG file) and
for the DEFINE command for a TCPNJE link. Obviously z/OS decided to do
something different.
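
For reference, the RSCS side of such a link would be defined along these
lines. This is a sketch from memory, so the operand names other than KEEPALIV
may not be exact, and the link name, host name, and port numbers are
placeholders:

   LINKDEFINE MVSLAB TYPE TCPNJE
   PARM MVSLAB HOST=mvslab.example.com LPORT=175 RPORT=175 KEEPALIV=NO

The same KEEPALIV= operand can also be supplied on the PARM text of a dynamic
DEFINE and START of the link.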


Regards, 
Richard Schuh 


-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Alan Altmark
Sent: Monday, March 19, 2007 3:30 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: TCPNJE

On Monday, 03/19/2007 at 11:50 MST, Schuh, Richard [EMAIL PROTECTED]
wrote:
 What happened? Did you forget about a NOT operator somewhere? The link

 stays alive with KEEPALIV=NO and drops with KEEPALIV=YES. Reminds me 
 of the OS2 dialog boxes. You told me to do x. Reply YES to not do it;

 NO to go ahead and do it.

:-) So, back in the Dark Ages, we used autodial modems with *real*
inactivity timers.  If you want to keep the modem going and, hence, your
connection, you used Keepalive packets.  These days their only purpose
is to verify that the path is 'alive' and to tell your application if
not.

I've got good news and bad news.

The good news is that we have keepalive packets.  The bad news is that
we have keepalive packets.  Now you see first hand why they are not
generally accepted into all TCP implementations. 

It also tells me you have only a single route between your VM and MVS
system.  NG.  This is the worst environment in which to use keepalive
packets.  With two or more paths, a failed keepalive indicates a *major*
problem, not just a glitch as someone fiddles with a port definition
somewhere.

Alan Altmark
z/VM Development
IBM Endicott


Adding help files

2007-03-19 Thread O'Brien, Dennis L
I'd like to add a few thousand help files (for VM:Manager) to MAINT 19D.
What's the proper way to do this?  If I just COPYFILE them to the disk,
HELP works, but PUT2PROD chokes the next time it tries to build the help
segment.  Could the segment be getting full, or is there a VMSES command
that should be issued to add the files, so that all the proper VMSES
control files get updated?

   Dennis O'Brien

Lindsay Lohan, Drink Canada Dry is a slogan, not a dare.  -- Bill
Maher


Re: Adding help files

2007-03-19 Thread Graeme Moss
Hi Dennis,

I have a feeling that after I follow my cheatsheet below, the segment
rebuilds fail later, when applying maintenance, due to storage already
being in use. I haven't tested it, but maintenance goes on fine on systems
without the enlarged HELPSEG.

=====================================================================
 Install procedure for CA help files, z/VM 5.1

 Below are the CA help file installation procedures. For additional
 information, refer to the VM:Manager Installation Guide, Rel 7.2a.
=====================================================================

I. Load files from cart
   1.  Logon to VMRMAINT

   2.  Have cart mounted on virtual 181

   3.  VMIMAINT
 tab to Load Help Files and press enter
 select all files

II. Place files on system help file disk
   1.  Logon to MAINT

   2.  Access source and target disks
 ACCESS 19D M
 VMLINK VMRMAINT 29D * N

   3.  Copy files
 COPY * * N = = M (OLDD

III.  Re-save HELPSEG.
  This segment contains a copy of the mdisk directory.

   1. Logon to MAINT
  Increase virtual machine storage
DEF STOR 256M
IPL

  Issue command
vmfbld ppf segbld esasegs segblist helpseg ( wild

  if you get msg VMFBDS1965E with return code of 32
  increase size of segment and repeat step 1.

   2. Increase size of HELPSEG
vmfsgmap segbld esasegs segblist
page forward to find HELPSEG
place cursor on word HELPSEG and press pf4
change the DEFPARMS from C00-CFF SR to a larger range
press PF5 to return to map screen
press PF5 to file changes
  Note: For z/VM 5.1 the size was increased from C00-CFF to C00-D4F.
  12dec2005 GAM: size increased to C00-D7F.



Cheers Graeme


- Original Message - 
From: O'Brien, Dennis L 
Dennis.L.O'[EMAIL PROTECTED]
To: IBMVM@LISTSERV.UARK.EDU
Sent: Tuesday, March 20, 2007 9:55 AM
Subject: Adding help files


I'd like to add a few thousand help files (for VM:Manager) to MAINT 19D.
What's the proper way to do this?  If I just COPYFILE them to the disk,
HELP works, but PUT2PROD chokes the next time it tries to build the help
segment.  Could the segment be getting full, or is there a VMSES command
that should be issued to add the files, so that all the proper VMSES
control files get updated?

   Dennis O'Brien

Lindsay Lohan, Drink Canada Dry is a slogan, not a dare.  -- Bill Maher


Re: PAV and VSE guest

2007-03-19 Thread Bill Bitner
Dave, you are probably correct in being cautious about making
a sweeping change without looking for evidence first. For more
details on the PAV support that went out with the APAR you
referenced, see http://www.vm.ibm.com/perf/reports/zvm/html/520pav.html.
Remember that if there is no queueing of I/O in VM, there is no benefit
from this PAV support. So if the I/O is being queued in VSE, this won't
help.

The minidisk cache aspect is interesting. I imagine at one time you
used dedicated volumes, perhaps for V=R and I/O Assist. Since that doesn't
apply anymore, there might be value in looking at MDC. I'd start
first by just getting a feel for the read/write ratios, since
only reads have a chance to benefit from MDC. Since this is VSE,
it would be limited to the full-track cache form of MDC; record-level
MDC does not apply here.

We also learned recently that PAV and MDC do not mix as well as
we would like. Watch this space.
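
As a concrete starting point for that evaluation (not from Bill's note, and
the names are from memory, so verify them against the CP command and directory
references): CP QUERY MDCACHE shows the system-wide minidisk cache settings
and usage, and a write-mostly minidisk can be kept out of MDC with a
MINIOPT NOMDC statement placed immediately after its MDISK statement in the
user directory, for example:

   MDISK 0201 3390 1113 1112 VSE001 MW
   MINIOPT NOMDC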

Bill Bitner - VM Performance Evaluation - IBM Endicott - 607-429-3286