> Well, they do have at least one other advantage: they can store
> program objects, which allows entry points with long, case-sensitive
> names, which is sometimes handy.

>But based on a thread last year (or was it two years ago), there
>seems to be precious little management interest in, or support for,
>developing new applications using z/OS UNIX, and the systems folks
>on ibm-main are certainly not big fans (for the most part, anyway),
>eh Barbara?

okay, I'll bite:

>Long entry points.
Have you ever seen a dump of an address space running some sort of 
WebSphere? Almost *all* of those lmod names are 'long', usually 
called 'specialname'. And I'm not even talking about Java here.

> How do Unix directories compare?
> o They don't need to be compressed.
> o Multiple members can be written concurrently.
> o Members can be appended or updated in place (with a granularity
>   of byte.).
> o They support long case-sensitive names.
> o They allow a mixture of program objects and other member types.

You're all aware that the original HFSs are based on PDSE code, right? In fact, 
they use the same lower levels, such as Media Manager, to do the actual I/O, 
with IGW.. (i.e. PDSE) modules on top of that, and only then does it start to 
differ. As far as I know, everything listed above is a characteristic of PDSEs 
that HFSs just happen to inherit. And I bet it wasn't too hard (given that 
Media Manager uses 4K blocks anyway) to move the HFS functionality into zFS, 
which is a VSAM linear dataset using 4K blocks.

> Questions:
> o How do performance and reliability compare with PDS[E]?  I suppose
>   there might be four answers, separate for PDS vs. PDSE and for
>   HFS vs. zFS.
Take a look at my performance problems with (not-even-*that*-) large 
PDSEs. At least one other installation has the same issue, and I bet there are 
more out there. Performance of larger PDSEs is abysmal, but the limits of PDSs 
(they cannot be allocated larger than 64K tracks) force us to use them. 
PDS outperforms PDSE for allocations below 64K tracks. And that can be 
directly attributed to the design of PDSEs - their 'directory' is not in one 
place like a PDS's, but scattered throughout the dataset. So to read the full 
directory of the PDSE (10,000 members), in our case it takes more than 10,000 
I/Os (verified by looking at the SMF records). The long time (more than 
90 seconds) to get that list can be directly attributed to the time it takes 
for those I/Os to come back - 10,000 I/Os at roughly 9 ms each works out to 
about 90 seconds. The remedy is to have a program that keeps the dataset open 
artificially, but that only helps for roughly the first 15 minutes after the 
first real access.
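
For what it's worth, here's roughly the shape of that 'just-do-an-open' 
workaround. This is only a minimal sketch (z/OS XL C, which lets fopen() 
take MVS dataset names); the dataset and member names are placeholders, 
not our real ones. The idea is simply to hold the PDSE open from a 
long-running job step so its directory pages stand a chance of staying 
cached:

/* Minimal sketch: keep a large PDSE open so its directory pages stay
 * cached.  HLQ.BIG.PDSE(SOMEMEM) is a placeholder name.  Assumes the
 * z/OS XL C convention of fopen("//'dsn(member)'", ...) for opening
 * an MVS dataset member.                                             */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Opening any member keeps the dataset connected for this job.   */
    FILE *fp = fopen("//'HLQ.BIG.PDSE(SOMEMEM)'", "rb");
    if (fp == NULL) {
        perror("fopen");
        return 8;
    }

    /* Sit and do nothing; run this as a long-running step or task.   */
    for (;;)
        sleep(300);

    /* never reached */
    fclose(fp);
    return 0;
}

As noted above, even with something like this running, the benefit only 
seems to last for about the first 15 minutes after the first real access.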

I have also mentioned this before: Back when we ran Lotus Notes on z/OS, we 
migrated those HFSs to zFS at the first chance we got (on IBM's urgent 
recommendation, when performance was bad). Fat lot of good it did. After 
about 6 months we went back to HFS, and it was *faster* than zFS, because 
in those days the HFS/PDSE directory must have been stored differently. 
A 'recopy' or 'reorg' back then made directory access faster, and then 
performance of the HFS was better than via zFS. That was more than 
5 years ago. (And Lotus Notes now runs on zLinux.) Some design change 
in PDSE took away that 'reorg' capability (the one that boosted performance), 
so recopying these days has no effect whatsoever. That's when I invented 
my 'just-do-an-open' program.

And the funny thing is, when PDSEs came out around MVS 4.3 (or was it 4.2 
and the corresponding SMS version?), I really liked them. Still do, for *small* 
datasets.

But I also think that if PDSEs hadn't been foisted on us, IBM would have 
just as happily found a way to give PDSs the long names and all the new 
functionality that these days is possible only via PDSE. As I understand it, 
back then customers complained about the need for reorganization. And using 
PDSE code for HFSs (now 'functionally stabilized') was just a marketing ploy 
to force the world to use PDSEs, IMO. Just like RRS's (or CICS's) use of the 
logger was a way to get system logger more widely accepted. Just as 'giving 
away' a WAS underneath z/OSMF ('at no cost') is a way to enforce usage of WAS 
on z/OS. And new functionality will be forced into z/OSMF, just to get 
customers who need that functionality to use z/OSMF.

Barbara
