Re: PDS I/O Performance Improvement

2016-05-16 Thread Ted MacNEIL
That's the way PDSs work: the bigger the directory, the more connect time. Member 
access itself is trivial.

-teD
  Original Message  
From: Kreiter IBM-Main
Sent: Thursday, April 28, 2016 08:33
To: IBM-MAIN@LISTSERV.UA.EDU
Reply To: IBM Mainframe Discussion List
Subject: PDS I/O Performance Improvement

Hello, 

I'm looking for some suggestions on how to possibly improve I/O performance to 
a PDS. A user is running a job that is reading a large parmlib (through PROJCL 
I believe). I think the access is random rather than sequential. The parmlib 
has ~180,000 members and has an LRECL of 80/BLKSIZE of 27,920. The performance 
team has reviewed and found ~6 ms response time to the volume that houses the PDS, 
with most of the time being connect time.

Thanks, 
Chuck 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: PDS I/O Performance Improvement

2016-05-03 Thread ronjhawk...@sbcglobal.net
Chuck, 
It does not sound like VIO was ever an option to improve this utility.
I think you identified early on that directory search is the main issue with 
this PDS, so your options are PDSE V2, PDSE, or LLA add/remove.
VIO would be an option if the PDS was smaller, but reading all the members 
(1400 Cyls) into VIO just to touch some portion of the members makes no sense.
Ron


Original message From: Chuck Kreiter  
Date: 5/2/2016  13:48  (GMT-08:00) To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: 
[IBM-MAIN] PDS I/O Performance Improvement 
VIO might not be an option as the dataset is 1400 cylinders.  



Re: PDS I/O Performance Improvement

2016-05-02 Thread Jesse 1 Robinson
I put this question to a SHARE buddy who has a lot more experience in the area 
than I do. He strongly recommended a PDSE-2 if at all possible in order to 
avoid limitations in the original, venerable design of PDSE. 
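For anyone wanting to try that, allocating the copy as a version 2 PDSE is mostly a 
DSNTYPE change in the allocation JCL. A minimal sketch, assuming z/OS 2.1 or later 
for DSNTYPE=(LIBRARY,2); all data set names here are made up:

```jcl
//TOPDSE2  EXEC PGM=IEBCOPY
//SYSPRINT DD  SYSOUT=*
//INPDS    DD  DISP=SHR,DSN=YOUR.BIG.PARMLIB
//OUTPDSE  DD  DSN=YOUR.BIG.PARMLIB.V2,DISP=(NEW,CATLG),
//             DSNTYPE=(LIBRARY,2),UNIT=SYSALLDA,
//             SPACE=(CYL,(1400,140)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
//SYSIN    DD  *
  COPY OUTDD=OUTPDSE,INDD=INPDS
/*
```

A PDSE needs no directory-block count in SPACE, and its directory is indexed rather 
than searched sequentially, which is where the connect time is going.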

.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-302-7535 Office
robin...@sce.com



Re: PDS I/O Performance Improvement

2016-05-02 Thread Edward Finnell
This is a typical CICS tuning opportunity. What to keep and what to make
pageable are very installation dependent. If there are vendors involved, they
should be able to provide guidance.

https://www.ibm.com/support/knowledgecenter/SSGMCP_5.1.0/com.ibm.cics.ts.applicationprogramming.doc/topics/dfhp3c0074.html
 
 


Re: PDS I/O Performance Improvement

2016-05-02 Thread R.S.

On 2016-05-02 at 22:48, Chuck Kreiter wrote:

VIO might not be an option as the dataset is 1400 cylinders.

Wild question: is it absolutely necessary to work with *all* the members?
I can imagine a "best of" library created just for the job. It could be a 
PDSE or VIO (not both), or something else, but the most important thing is 
that it's small.


--
Radoslaw Skorupka
Lodz, Poland








Re: PDS I/O Performance Improvement

2016-05-02 Thread Chuck Kreiter
VIO might not be an option as the dataset is 1400 cylinders.  



Re: PDS I/O Performance Improvement

2016-05-02 Thread Martin Packer

Boy this takes me back - to the Summer of 1990. :-)

At the time what I call "The Coffee Table Book" aka the Data In Memory
Performance Studies WSC Orange Book was freshish out.

One of the studies was VIO to Expanded Storage. It showed CPU cost across
the range of data set sizes, relative to disk.

Now, it's 26 years on, and VIO was reimplemented in Central Storage a while
back.

But I worry it's still CPU-wise expensive.

On the other hand memory is MUCH cheaper now.

So I doubt customers are really sending all small temp data sets to VIO in
CS. (Maybe to VIO.)

In the late 1980s we did a lot of work with my customer (Lloyds Bank) to
manage VIO with DFSMS and VIOMAXSIZE. And also with Expanded Storage
Criterion Ages (ESCTVIO).

Out of it came my first widely-propagated presentation: "VIO To Expanded
Storage".

Sadly I don't have a copy. I'd be pleasantly surprised if someone had a
copy (and sent it to me).

Much of what I wrote then appeared in 1995 Redbook "Parallel Sysplex Batch
Performance" (SG24-2557).

Happy days. :-)

Cheers, Martin

Sent from my iPad

Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU




Re: PDS I/O Performance Improvement

2016-05-02 Thread Ron Hawkins
Skip,

VIO is still a great way to handle small, temp data sets. I've used the method 
mentioned below where VLF was no help, and LLA Freeze a hindrance, and it works 
surprisingly well. At less than 1Mb per CYL it’s far from being expensive. Then 
again, I was a big fan of VFETCH too... Not unlike building a custom LSR pool 
just for one problematic file.

I'm pretty sure the majority of shops are using DFSMS to limit and direct small 
allocations to VIO. None of that nasty IO - in and out like the Flash. The best 
IO is the one you don't do, so why bother with all that VTOC IO just to create 
and delete a one track data set that you may or may not write to?

20MB (~25 Cyls) is a fairly reasonable max VIO size limit, but being a lab I 
have coded special cases in the ACS routines where I let 0.5GB into VIO. It's 
nice when I'm in control of 100% of what is running.

Ron





Re: PDS I/O Performance Improvement

2016-05-02 Thread Chuck Kreiter
Thanks.  I will check in to see if that is an option.  



Re: PDS I/O Performance Improvement

2016-05-02 Thread Ron Burr
I haven't seen it mentioned, so I thought I would bring up the issue of "user 
data" in the PDS directory entries.
We weren't told whether the PDS does or does not carry 'user data' for its 
members, but if the PDS is being maintained using ISPF with STATS ON (the 
default), then there will be a performance hit in the retrieval of a given 
member.
Each directory block can contain roughly three times as many member names 
without 'user data' as one maintained with ISPF STATS ON (a basic entry is 12 
bytes; ISPF statistics add roughly 30 more bytes of user data per entry).
In a PDS with 180,000 members, that can make a huge difference.

Ron



Re: PDS I/O Performance Improvement

2016-04-29 Thread Ron Hawkins
Chuck,

I think the copy-to-VIO suggestion is one of the best here, but you will likely 
be copying more members than you actually touch, along with the directory.
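A minimal sketch of that copy step, for anyone who wants to test it anyway (data 
set names are made up; the directory-block count assumes ISPF statistics at roughly 
six entries per 256-byte directory block, and the installation's VIO limits must 
allow an allocation this size):

```jcl
//COPYVIO  EXEC PGM=IEBCOPY
//SYSPRINT DD  SYSOUT=*
//INPDS    DD  DISP=SHR,DSN=YOUR.BIG.PARMLIB
//VIOPDS   DD  DSN=&&PARMTMP,DISP=(NEW,PASS),UNIT=VIO,
//             SPACE=(CYL,(1400,100,30000)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
//SYSIN    DD  *
  COPY OUTDD=VIOPDS,INDD=INPDS
/*
```

Later steps would then point at &&PARMTMP instead of the catalogued PDS.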

Is the environment one where the command interface, or automation, could add 
the PDS to LLA FREEZE in the step before the job runs, and then remove it when 
the job completes?

That would mean a one-time read of the directory only, eliminate all those long 
key searches for each member, and then just read the members the utility 
touches. I think this would be the most efficient solution.
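A sketch of that add/remove flow, assuming a pair of CSVLLAxx parmlib members and 
operator (or automation) commands; the member suffixes and data set name are made 
up:

```
/* CSVLLAF1 - add the library to LLA and freeze its directory */
LIBRARIES(YOUR.BIG.PARMLIB)
FREEZE(YOUR.BIG.PARMLIB)

/* CSVLLAF0 - drop it again after the job */
REMOVE(YOUR.BIG.PARMLIB)
```

Before the job: F LLA,UPDATE=F1. After it completes: F LLA,UPDATE=F0. As noted 
elsewhere in the thread, any member updates made while the directory is frozen are 
invisible until a refresh.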

Ron





Re: PDS I/O Performance Improvement

2016-04-28 Thread Jesse 1 Robinson
Since DASD has become so fast, many shops--including ours--long ago dropped VIO 
processing. A VIO request simply goes to a SYSALLDA volume. In any case, VIO 
would be a very expensive way to improve performance of a very large data set.

.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-302-7535 Office
robin...@sce.com



Re: PDS I/O Performance Improvement

2016-04-28 Thread Martin Packer
Right. But still valid for today.

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Cloud & Systems Performance, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker

Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker

Podcast Series (With Marna Walle): 
https://developer.ibm.com/tv/category/mpt/





Re: PDS I/O Performance Improvement

2016-04-28 Thread John Eells

David Betten wrote:

You might consider testing with PDSE. In z/OS 2.1 there were a number of
performance improvements made to PDSE processing, including improved
directory processing. If you do a Google search, you'll find a number of
SHARE presentations on the PDSE improvements.



I think Dave is suggesting you *copy* the data set to a PDSE before 
processing it.  Data sets in the parmlib concatenation must not be 
PDSEs, because the code to open and read PDSEs lives in LPA and the LPA 
list must be retrieved from the parmlib concatenation for CLPA processing.


Of course, if it's a "parmlib" in the generic sense, and not "a data set 
in the system's parmlib concatenation," then convert away...


--
John Eells
IBM Poughkeepsie
ee...@us.ibm.com



Re: PDS I/O Performance Improvement

2016-04-28 Thread Dana Mitchell
Wow! flashback from the 80's!

We had a CICS region, seriously storage constrained with huge COBOL programs; 
it would do storage compressions multiple times a minute. A temporary 
performance boost came from copying the main loadlib into a VIO dataset at startup.

Dana
 
On Thu, 28 Apr 2016 14:22:37 +0100, Martin Packer  
wrote:

>If it's a matter of repeated reading why not copy to VIO in Central  
>Storage and read from there? 




Re: PDS I/O Performance Improvement

2016-04-28 Thread Vernooij, CP (ITOPT1) - KLM
If the library is frequently updated, LLA is indeed no choice. 
LLA freezes the directory in storage until it is refreshed. This means that 
every update will be invisible until LLA (or all LLAs sharing the library) has 
been refreshed. A truly unworkable situation, I suppose.

Kees.




Re: PDS I/O Performance Improvement

2016-04-28 Thread Lizette Koehler
I forgot to add.

Have you checked with the vendor to see if there is anything they can do to 
help? (I think PROJCL is ASG)

Lizette


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Lizette Koehler
> Sent: Thursday, April 28, 2016 6:17 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: PDS I/O Performance Improvement
> 
> Since PROJCL is a JCL checker, is this process doing a large number of JCL
> members at one time, or is this feeding the PROJCL database for
> cross-reference processing?
> 
> Which part of the process is being performed?
> 
> 
> Lizette
> 
> >
> > Kreiter IBM-Main wrote:
> >
> > >I'm looking for some suggestions on how to possibly improve I/O
> > >performance
> > to a PDS.
> >
> > You already got good suggestions including moving to PDSE and
> > compressing of that PDS.
> >
> >
> > >A user is running a job that is reading a large parmlib (through
> > >PROJCL I
> > believe). I think the access is random rather than sequential. The
> > parmlib has
> > ~180,000 members is has an LRECL of 80/BLKSIZE of 27,920.
> >
> 



Re: PDS I/O Performance Improvement

2016-04-28 Thread Kreiter IBM-Main
Thanks everyone for your suggestions.  I'll test some of these suggestions with 
the user and see what the results are.  

I'm going to start with a PDSE test as I think that is the quickest thing to 
try.  The parmlib is updated frequently so caching the directory with LLA 
probably isn't the best plan.  

As I understand the user's description of the job, they are using PROJCL to 
generate some kind of reporting that is expanding JCL and pulling in the parms 
from this PDS.  


- Original Message -
From: "Kreiter IBM-Main" 
To: IBM-MAIN@LISTSERV.UA.EDU
Sent: Thursday, April 28, 2016 8:32:56 AM
Subject: PDS I/O Performance Improvement

Hello, 

I'm looking for some suggestions on how to possibly improve I/O performance to 
a PDS. A user is running a job that is reading a large parmlib (through PROJCL 
I believe). I think the access is random rather than sequential. The parmlib 
has ~180,000 members and has an LRECL of 80/BLKSIZE of 27,920. The performance 
team has reviewed and found ~6ms response time to the volume that houses the PDS 
with most of the time being connect time. 

Thanks, 
Chuck 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PDS I/O Performance Improvement

2016-04-28 Thread Andrew Rowley
I have a memory from 15-20 years ago that PDS directory search was done 
connected, which meant high connect time indicated a directory search.


I don't know if this has changed since then - I would have assumed it 
had, but perhaps not? In which case something that caches the directory 
should help. I have a memory of a similar vintage that it could be done 
with VLF, but I can't remember the details.


Sorry if this is too vague to be useful :-)

Andrew Rowley
Black Hill Software


On 28/04/2016 10:32 PM, Kreiter IBM-Main wrote:

Hello,

I'm looking for some suggestions on how to possibly improve I/O performance to 
a PDS. A user is running a job that is reading a large parmlib (through PROJCL 
I believe). I think the access is random rather than sequential. The parmlib 
has ~180,000 members and has an LRECL of 80/BLKSIZE of 27,920. The performance 
team has reviewed and found ~6ms response time to the volume that houses the PDS 
with most of the time being connect time.

Thanks,
Chuck

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PDS I/O Performance Improvement

2016-04-28 Thread Martin Packer
If it's a matter of repeated reading, why not copy it to VIO in Central 
Storage and read from there?

The OP said "read". I'm construing that as "read only".

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Cloud & Systems Performance, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker

Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker

Podcast Series (With Marna Walle): 
https://developer.ibm.com/tv/category/mpt/



From:   "Vernooij, CP (ITOPT1) - KLM" 
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   28/04/2016 14:04
Subject:Re: PDS I/O Performance Improvement
Sent by:IBM Mainframe Discussion List 



VLF doesn't do anything by itself. It will cache objects handed to it by 
the managers of those objects, like LLA, TSO CLIST processor, CAS address 
space etc. Which VLFCLASS should I specify for this case?

LLA can cache the directory of non-Loadmodule PDSes, this could help, like 
PDSEs directory caching will.

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
Behalf Of Staller, Allan
Sent: 28 April, 2016 14:57
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: PDS I/O Performance Improvement


It is a parmlib, I don't think the members will be staged to VLF (who 
would stage them)?
Considering the I/O times and the size of the directory, I suppose the 
directory is the problem. Maybe LLA member caching might help if a BLDL is 
done to locate a member. Otherwise converting it to PDSE would help with 
its member caching.


Just tell VLF to do the caching. It will handle it. 

However, VLF will most likely be limited in its effectiveness, due to the 
fact this is a parmlib, not a loadlib.

Without getting too exotic, I would try (in order):
LLA management of the PDS (same as VLF. Just tell LLA to manage it).
PDSE


> I'm looking for some suggestions on how to possibly improve I/O 
> performance to a PDS. A user is running a job that is reading a large 
> parmlib (through PROJCL I believe). I think the access is random 
> rather than sequential. The parmlib has ~180,000 members and has an 
> LRECL of 80/BLKSIZE of 27,920. The performance team has reviewed and 
> found ~6ms response time to the volume that houses the PDS with most 
> of the time being connect time.


My guess is that this is PDS directory search. Do some googling for the 
historical performance problems with PDS directory searches. The CCW 
commands used were SEARCH KEY EQUAL (don't have the hex value handy).
Reducing the number of members in the PDS will most likely help. 

Compressing the PDS *might* help.



HTH,


This email - including attachments - may contain confidential 
information. If you are not the intended recipient, do not copy, 
distribute or act on it. Instead, notify the sender immediately and delete 
the message.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

For information, services and offers, please visit our web site: 
http://www.klm.com. This e-mail and any attachment may contain 
confidential and privileged material intended for the addressee only. If 
you are not the addressee, you are notified that no part of the e-mail or 
any attachment may be disclosed, copied or distributed, and that any other 
action related to this e-mail or attachment is strictly prohibited, and 
may be unlawful. If you have received this e-mail by error, please notify 
the sender immediately by return e-mail, and delete this message. 

Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its 
employees shall not be liable for the incorrect or incomplete transmission 
of this e-mail or any attachments, nor responsible for any delay in 
receipt. 
Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch 
Airlines) is registered in Amstelveen, The Netherlands, with registered 
number 33014286

 


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PDS I/O Performance Improvement

2016-04-28 Thread John McKown
On Thu, Apr 28, 2016 at 7:32 AM, Kreiter IBM-Main <
kreiter_ibm-m...@centurylink.net> wrote:

> Hello,
>
> I'm looking for some suggestions on how to possibly improve I/O
> performance to a PDS. A user is running a job that is reading a large
> parmlib (through PROJCL I believe). I think the access is random rather
> than sequential. The parmlib has ~180,000 members and has an LRECL of
> 80/BLKSIZE of 27,920. The performance team has reviewed and found ~6ms
> response time to the volume that houses the PDS with most of the time being
> connect time.
>
> Thanks,
> Chuck
>
>
​This is probably a stupid idea. I get the impression that you can't change
the program. So my logic is this:

IEBCOPY is very fast at copying a PDS.
Memory is faster than DASD.
Thought: use IEBCOPY to make a temporary DSN copy of the parmlib on VIO
(in-memory buffering) before the step doing the high I/O to the parmlib,
and use this temporary DSN instead. This assumes that your VIO setup is
large enough to accommodate the DSN.
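A rough JCL sketch of that idea (the data set name, space quantities, and directory-block count are all hypothetical; a 180,000-member directory needs on the order of 9,000 or more 256-byte directory blocks, so size that from the real library):

```jcl
//COPYVIO  EXEC PGM=IEBCOPY
//SYSPRINT DD  SYSOUT=*
//SYSUT1   DD  DISP=SHR,DSN=PROD.BIG.PARMLIB
//* Temporary copy on VIO, passed to the later high-I/O step
//SYSUT2   DD  DSN=&&PARMVIO,DISP=(NEW,PASS),UNIT=VIO,
//             SPACE=(CYL,(1400,100,9000)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
//* SYSUT1/SYSUT2 with no control statements requests a full copy
//SYSIN    DD  DUMMY
```

The later step would then point its parmlib DD at &&PARMVIO, with DISP=(OLD,DELETE) on the last use.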


-- 
The unfacts, did we have them, are too imprecisely few to warrant our
certitude.

Maranatha! <><
John McKown

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PDS I/O Performance Improvement

2016-04-28 Thread Lizette Koehler
Since PROJCL is a JCL checker, is this process checking a large number of JCL 
members at one time, or is it feeding the PROJCL database for cross-reference 
processing?

What part of the process is being performed?


Lizette

> 
> Kreiter IBM-Main wrote:
> 
> >I'm looking for some suggestions on how to possibly improve I/O performance
> to a PDS.
> 
> You already got good suggestions including moving to PDSE and compressing of
> that PDS.
> 
> 
> >A user is running a job that is reading a large parmlib (through PROJCL I
> believe). I think the access is random rather than sequential. The parmlib has
> ~180,000 members and has an LRECL of 80/BLKSIZE of 27,920.
> 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PDS I/O Performance Improvement

2016-04-28 Thread Blaicher, Christopher Y.
Search key equal can be a multi-track search, which even in a cached situation 
takes time.  All PDS directory blocks are 256 bytes and a search for a member 
in a PDS always starts at the beginning of the directory.  My guess is you 
would be better off with multiple proclibs and putting a JCLLIB card in the job 
for the appropriate proclib.
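For example (job and library names hypothetical), a JCLLIB statement right after the JOB card limits the proclib search to the listed libraries, in order:

```jcl
//CHUCKJOB JOB (ACCT),'PROCLIB TEST',CLASS=A,MSGCLASS=X
//* Search these libraries, in order, ahead of the system proclibs
//         JCLLIB ORDER=(PROD.PROCLIB.GRP1,PROD.PROCLIB.GRP2)
//STEP1    EXEC MYPROC
```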

Chris Blaicher
Technical Architect
Mainframe Development
Syncsort Incorporated
50 Tice Boulevard, Woodcliff Lake, NJ 07677
P: 201-930-8234  |  M: 512-627-3803
E: cblaic...@syncsort.com

www.syncsort.com





-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Staller, Allan
Sent: Thursday, April 28, 2016 8:57 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: PDS I/O Performance Improvement


It is a parmlib, I don't think the members will be staged to VLF (who would 
stage them)?
Considering the I/O times and the size of the directory, I suppose the 
directory is the problem. Maybe LLA member caching might help if a BLDL is done 
to locate a member. Otherwise converting it to PDSE would help with its member 
caching.


Just tell VLF to do the caching. It will handle it.

However, VLF will most likely be limited in its effectiveness, due to the fact 
this is a parmlib, not a loadlib.

Without getting too exotic, I would try (in order):
LLA management of the PDS (same as VLF. Just tell LLA to manage it).
PDSE


> I'm looking for some suggestions on how to possibly improve I/O
> performance to a PDS. A user is running a job that is reading a large
> parmlib (through PROJCL I believe). I think the access is random
> rather than sequential. The parmlib has ~180,000 members and has an
> LRECL of 80/BLKSIZE of 27,920. The performance team has reviewed and
> found ~6ms response time to the volume that houses the PDS with most
> of the time being connect time.


My guess is that this is PDS directory search. Do some googling for the 
historical performance problems with PDS directory searches. The CCW commands 
used were SEARCH KEY EQUAL (don't have the hex value handy).
Reducing the number of members in the PDS will most likely help.

Compressing the PDS *might* help.



HTH,


This email - including attachments - may contain confidential information. If 
you are not the intended recipient, do not copy, distribute or act on it. 
Instead, notify the sender immediately and delete the message.

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN





ATTENTION: -

The information contained in this message (including any files transmitted with 
this message) may contain proprietary, trade secret or other confidential 
and/or legally privileged information. Any pricing information contained in 
this message or in any files transmitted with this message is always 
confidential and cannot be shared with any third parties without prior written 
approval from Syncsort. This message is intended to be read only by the 
individual or entity to whom it is addressed or by their designee. If the 
reader of this message is not the intended recipient, you are on notice that 
any use, disclosure, copying or distribution of this message, in any form, is 
strictly prohibited. If you have received this message in error, please 
immediately notify the sender and/or Syncsort and destroy all copies of this 
message in your possession, custody or control.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PDS I/O Performance Improvement

2016-04-28 Thread Elardus Engelbrecht
Kreiter IBM-Main wrote:

>I'm looking for some suggestions on how to possibly improve I/O performance to 
>a PDS. 

You already got good suggestions including moving to PDSE and compressing of 
that PDS.


>A user is running a job that is reading a large parmlib (through PROJCL I 
>believe). I think the access is random rather than sequential. The parmlib has 
>~180,000 members and has an LRECL of 80/BLKSIZE of 27,920. 

Do you need *all* those members for that single job? Grouping your members into 
several smaller PDSes / PDSEs may help.

Perhaps you should redesign that job to read the contents sequentially instead 
of randomly.
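Splitting the library could be sketched with IEBCOPY SELECT, feeding it explicit member lists (all names here are hypothetical; generating the SYSIN from the real member list is a small REXX or ISPF exercise):

```jcl
//SPLIT    EXEC PGM=IEBCOPY
//SYSPRINT DD  SYSOUT=*
//BIGLIB   DD  DISP=SHR,DSN=PROD.BIG.PARMLIB
//GRPA     DD  DISP=SHR,DSN=PROD.PARMLIB.GRPA
//SYSIN    DD  *
  COPY OUTDD=GRPA,INDD=BIGLIB
  SELECT MEMBER=(MEMA001,MEMA002,MEMA003)
/*
```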


>The performance team has reviewed and found ~6ms response time to the volume 
>that houses the PDS with most of the time being connect time.

Is that volume cached? If all these members are just being read, caching may 
help marginally. If that volume is heavily used, consider moving your PDS to an 
inactive volume.
Ask your storage admin about this.

Groete / Greetings
Elardus Engelbrecht

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PDS I/O Performance Improvement

2016-04-28 Thread Vernooij, CP (ITOPT1) - KLM
VLF doesn't do anything by itself. It will cache objects handed to it by the 
managers of those objects, like LLA, TSO CLIST processor, CAS address space 
etc. Which VLFCLASS should I specify for this case?

LLA can cache the directory of non-Loadmodule PDSes, this could help, like 
PDSEs directory caching will.
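A sketch of that LLA approach (the CSVLLAxx suffix and data set name are hypothetical; LIBRARIES puts the PDS under LLA management and FREEZE tells LLA to resolve directory searches from its in-storage copy):

```
/* CSVLLAxx: LLA-manage the parmlib directory */
LIBRARIES(PROD.BIG.PARMLIB)
FREEZE(PROD.BIG.PARMLIB)
```

Because the directory is then served from LLA's frozen copy, an operator F LLA,UPDATE=xx (or a refresh) would be needed after members are added, deleted, or replaced.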

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Staller, Allan
Sent: 28 April, 2016 14:57
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: PDS I/O Performance Improvement


It is a parmlib, I don't think the members will be staged to VLF (who would 
stage them)?
Considering the I/O times and the size of the directory, I suppose the 
directory is the problem. Maybe LLA member caching might help if a BLDL is done 
to locate a member. Otherwise converting it to PDSE would help with its member 
caching.


Just tell VLF to do the caching. It will handle it. 

However, VLF will most likely be limited in its effectiveness, due to the fact 
this is a parmlib, not a loadlib.

Without getting too exotic, I would try (in order):
LLA management of the PDS (same as VLF. Just tell LLA to manage it).
PDSE


> I'm looking for some suggestions on how to possibly improve I/O 
> performance to a PDS. A user is running a job that is reading a large 
> parmlib (through PROJCL I believe). I think the access is random 
> rather than sequential. The parmlib has ~180,000 members and has an 
> LRECL of 80/BLKSIZE of 27,920. The performance team has reviewed and 
> found ~6ms response time to the volume that houses the PDS with most 
> of the time being connect time.


My guess is that this is PDS directory search. Do some googling for the 
historical performance problems with PDS directory searches. The CCW commands 
used were SEARCH KEY EQUAL (don't have the hex value handy).
Reducing the number of members in the PDS will most likely help. 

Compressing the PDS *might* help.



HTH,


This email - including attachments - may contain confidential information. If 
you are not the intended recipient, do not copy, distribute or act on it. 
Instead, notify the sender immediately and delete the message.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PDS I/O Performance Improvement

2016-04-28 Thread Staller, Allan

It is a parmlib, I don't think the members will be staged to VLF (who would 
stage them)?
Considering the I/O times and the size of the directory, I suppose the 
directory is the problem. Maybe LLA member caching might help if a BLDL is done 
to locate a member. Otherwise converting it to PDSE would help with its member 
caching.


Just tell VLF to do the caching. It will handle it. 

However, VLF will most likely be limited in its effectiveness, due to the fact 
this is a parmlib, not a loadlib.

Without getting too exotic, I would try (in order):
LLA management of the PDS (same as VLF. Just tell LLA to manage it).
PDSE


> I'm looking for some suggestions on how to possibly improve I/O 
> performance to a PDS. A user is running a job that is reading a large 
> parmlib (through PROJCL I believe). I think the access is random 
> rather than sequential. The parmlib has ~180,000 members and has an 
> LRECL of 80/BLKSIZE of 27,920. The performance team has reviewed and 
> found ~6ms response time to the volume that houses the PDS with most 
> of the time being connect time.


My guess is that this is PDS directory search. Do some googling for the 
historical performance problems with PDS directory searches. The CCW commands 
used were SEARCH KEY EQUAL (don't have the hex value handy).
Reducing the number of members in the PDS will most likely help. 

Compressing the PDS *might* help.



HTH,


This email - including attachments - may contain confidential information. If 
you are not the intended recipient, do not copy, distribute or act on it. 
Instead, notify the sender immediately and delete the message.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PDS I/O Performance Improvement

2016-04-28 Thread Kreiter IBM-Main
It is not.  That was about the only thing I could come up with but wanted to 
run this by the experts.  

- Original Message -
From: "Mark Jacobs - Listserv" 
To: IBM-MAIN@LISTSERV.UA.EDU
Sent: Thursday, April 28, 2016 8:36:11 AM
Subject: Re: PDS I/O Performance Improvement

Is it defined to VLF?

> Kreiter IBM-Main <mailto:kreiter_ibm-m...@centurylink.net>
> April 28, 2016 at 8:32 AM
> Hello,
>
> I'm looking for some suggestions on how to possibly improve I/O 
> performance to a PDS. A user is running a job that is reading a large 
> parmlib (through PROJCL I believe). I think the access is random 
> rather than sequential. The parmlib has ~180,000 members and has an 
> LRECL of 80/BLKSIZE of 27,920. The performance team has reviewed and 
> found ~6ms response time to the volume that houses the PDS with most 
> of the time being connect time.
>
> Thanks,
> Chuck
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
>
> Please be alert for any emails that may ask you for login information 
> or directs you to login via a link. If you believe this message is a 
> phish or aren't sure whether this message is trustworthy, please send 
> the original message as an attachment to 'phish...@timeinc.com'.
>

-- 

Mark Jacobs
Time Customer Service
Technology and Product Engineering

The standard you walk past is the standard you accept.
Lt. Gen. David Morrison


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PDS I/O Performance Improvement

2016-04-28 Thread Vernooij, CP (ITOPT1) - KLM
It is a parmlib, I don't think the members will be staged to VLF (who would 
stage them)?
Considering the I/O times and the size of the directory, I suppose the 
directory is the problem. Maybe LLA member caching might help if a BLDL is done 
to locate a member. Otherwise converting it to PDSE would help with its member 
caching.

Kees.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Mark Jacobs - Listserv
Sent: 28 April, 2016 14:36
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: PDS I/O Performance Improvement

Is it defined to VLF?

> Kreiter IBM-Main <mailto:kreiter_ibm-m...@centurylink.net>
> April 28, 2016 at 8:32 AM
> Hello,
>
> I'm looking for some suggestions on how to possibly improve I/O 
> performance to a PDS. A user is running a job that is reading a large 
> parmlib (through PROJCL I believe). I think the access is random 
> rather than sequential. The parmlib has ~180,000 members and has an 
> LRECL of 80/BLKSIZE of 27,920. The performance team has reviewed and 
> found ~6ms response time to the volume that houses the PDS with most 
> of the time being connect time.
>
> Thanks,
> Chuck
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
>
> Please be alert for any emails that may ask you for login information 
> or directs you to login via a link. If you believe this message is a 
> phish or aren't sure whether this message is trustworthy, please send 
> the original message as an attachment to 'phish...@timeinc.com'.
>

-- 

Mark Jacobs
Time Customer Service
Technology and Product Engineering

The standard you walk past is the standard you accept.
Lt. Gen. David Morrison


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PDS I/O Performance Improvement

2016-04-28 Thread David Betten
You might consider testing with PDSE.  In z/OS 2.1 there were a number of
performance improvements made to PDSE processing including improved
directory processing.  If you do a google search, you'll find a number of
Share presentations on the PDSE improvements.
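A minimal IEBCOPY conversion sketch for that test (names and space are hypothetical; DSNTYPE=LIBRARY allocates the target as a PDSE, which needs no directory-block quantity in SPACE):

```jcl
//TOPDSE   EXEC PGM=IEBCOPY
//SYSPRINT DD  SYSOUT=*
//IN       DD  DISP=SHR,DSN=PROD.BIG.PARMLIB
//OUT      DD  DSN=PROD.BIG.PARMLIB.PDSE,DISP=(NEW,CATLG),
//             DSNTYPE=LIBRARY,SPACE=(CYL,(1400,100)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
//SYSIN    DD  *
  COPY OUTDD=OUT,INDD=IN
/*
```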


Have a nice day,
Dave Betten
z/OS Performance Specialist
Cloud and Systems Performance
IBM Corporation
email:  bet...@us.ibm.com


IBM Mainframe Discussion List  wrote on
04/28/2016 08:32:56 AM:

> From: Kreiter IBM-Main 
> To: IBM-MAIN@LISTSERV.UA.EDU
> Date: 04/28/2016 08:33 AM
> Subject: PDS I/O Performance Improvement
> Sent by: IBM Mainframe Discussion List 
>
> Hello,
>
> I'm looking for some suggestions on how to possibly improve I/O
> performance to a PDS. A user is running a job that is reading a
> large parmlib (through PROJCL I believe). I think the access is
> random rather than sequential. The parmlib has ~180,000 members and
> has an LRECL of 80/BLKSIZE of 27,920. The performance team has
> reviewed and found ~6ms response time to the volume that houses the
> PDS with most of the time being connect time.
>
> Thanks,
> Chuck
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: PDS I/O Performance Improvement

2016-04-28 Thread Mark Jacobs - Listserv

Is it defined to VLF?


Kreiter IBM-Main 
April 28, 2016 at 8:32 AM
Hello,

I'm looking for some suggestions on how to possibly improve I/O 
performance to a PDS. A user is running a job that is reading a large 
parmlib (through PROJCL I believe). I think the access is random 
rather than sequential. The parmlib has ~180,000 members and has an 
LRECL of 80/BLKSIZE of 27,920. The performance team has reviewed and 
found ~6ms response time to the volume that houses the PDS with most 
of the time being connect time.


Thanks,
Chuck

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Please be alert for any emails that may ask you for login information 
or directs you to login via a link. If you believe this message is a 
phish or aren't sure whether this message is trustworthy, please send 
the original message as an attachment to 'phish...@timeinc.com'.




--

Mark Jacobs
Time Customer Service
Technology and Product Engineering

The standard you walk past is the standard you accept.
Lt. Gen. David Morrison


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


PDS I/O Performance Improvement

2016-04-28 Thread Kreiter IBM-Main
Hello, 

I'm looking for some suggestions on how to possibly improve I/O performance to 
a PDS. A user is running a job that is reading a large parmlib (through PROJCL 
I believe). I think the access is random rather than sequential. The parmlib 
has ~180,000 members and has an LRECL of 80/BLKSIZE of 27,920. The performance 
team has reviewed and found ~6ms response time to the volume that houses the PDS 
with most of the time being connect time. 

Thanks, 
Chuck 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN