>if I remember correctly, then the directory structure of PDSEs was
>designed to speed up finding specific members as opposed to listing
>the entire directory.

In my opinion, waiting 90 seconds for a sequential read is too long, no matter 
what the design objective was. Considering that PDSEs are the only dataset type 
that can have more than 65,535 tracks in the primary allocation, it is NOT 
acceptable. 

And our heavy user (IBM's Fault Analyzer, for its sidefiles) regularly gets the 
'there may be a PDSE problem' message, and once you issue the command, it shows 
that it is one of those large ones. Besides, Fault Analyzer stops interactive 
analysis after being stuck for 10 minutes, and whenever I looked at the 
resulting dump, FA was stuck in I/O operations, which are supposedly 
benefitting from the hierarchical structure. They aren't.

>The bad directory listing performance probably stems from the index
>pages being scattered across the entire PDSE instead of being nicely
>ordered at the beginning of the dataset. Does copying and thus
>defragmenting the library also reorganize the index pages?

No. It used to: a few years back I routinely recopied the large datasets, and 
that boosted performance. That was *before* BUFFER_BEYOND_CLOSE. These days, 
a copy of one of the large ones takes up to 90 minutes, and the dataset is not 
one second faster after the copy. Using ISPF 3.4 to measure and test is just 
a nice, reproducible test case.
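For anyone who wants to reproduce the recopy test, this is roughly the kind of 
IEBCOPY job I mean — a sketch only, with made-up dataset names (YOUR.OLD.PDSE 
and YOUR.NEW.PDSE are placeholders for your own libraries):

```
//RECOPY   JOB (ACCT),'PDSE RECOPY',CLASS=A,MSGCLASS=X
//COPY     EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//IN       DD DISP=SHR,DSN=YOUR.OLD.PDSE
//* LIKE= picks up the attributes of the input library
//OUT      DD DISP=(NEW,CATLG),DSN=YOUR.NEW.PDSE,
//            LIKE=YOUR.OLD.PDSE
//SYSIN    DD *
  COPY OUTDD=OUT,INDD=IN
/*
```

Time the job and then compare an ISPF 3.4 member list against the old and the 
new library; as noted above, with BUFFER_BEYOND_CLOSE active I no longer see 
any difference after the copy.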

Regards, Barbara Nitz

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
