Hi everybody,
occasionally I wonder how SMS compression (COMPACTION=YES in the data
class) works, and how much it changes a step's execution in terms of
elapsed and CPU time.
In my shop I usually observe the obvious increase in CPU usage and a (maybe
less obvious) longer elapsed time compared to writing to tape.
In this real example I ran the same program with the same input (an SMS
compressed dataset), writing the output to an SMS compressed dataset, to a
DUMMY DD, and to a virtual tape dataset.
The output dataset has more than 930 million records. The final
compression ratio is more than 70%.
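A note on how I read that figure: I take "more than 70%" to mean space
saved, i.e. the compressed dataset occupies less than 30% of the original.
A quick sketch of the arithmetic (the byte counts below are illustrative,
not taken from this job):

```python
# Illustrative numbers only: "70% compression ratio" read as space saved.
original_bytes = 100_000_000
compressed_bytes = 28_000_000

ratio = 1 - compressed_bytes / original_bytes  # fraction of space saved
print(f"compression ratio: {ratio:.0%}")
```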
Looking at the step statistics:
Output - Compressed Dataset
STEPNAME PROCSTEP RC EXCP CPU SRB CLOCK SERV PG PAGE SWAP
ST040 00 1247K 8.65 .79 23.77 38592K 0 0 0
Output - DUMMY
STEPNAME PROCSTEP RC EXCP CPU SRB CLOCK SERV PG PAGE SWAP
ST040 00 201K 1.10 .04 2.24 4725K 0 0 0
Output - Virtual Tape Dataset
STEPNAME PROCSTEP RC EXCP CPU SRB CLOCK SERV PG PAGE SWAP
ST040 00 3730K 1.19 .20 8.86 7476K 0 0 0
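To put the three runs side by side, here is how I read those numbers (a
small Python sketch; the values are copied from the step statistics above,
and I'm assuming the usual units in our shop's output, CPU/SRB in seconds
and CLOCK in minutes — the derived "busy" column is my own arithmetic):

```python
# Step statistics for the three runs, as printed above.
# Assumed units: CPU and SRB in seconds, CLOCK in minutes, EXCPs in thousands.
runs = {
    "SMS compressed": {"excp_k": 1247, "cpu": 8.65, "srb": 0.79, "clock_min": 23.77},
    "DUMMY":          {"excp_k": 201,  "cpu": 1.10, "srb": 0.04, "clock_min": 2.24},
    "Virtual tape":   {"excp_k": 3730, "cpu": 1.19, "srb": 0.20, "clock_min": 8.86},
}

for name, r in runs.items():
    elapsed_s = r["clock_min"] * 60
    busy = (r["cpu"] + r["srb"]) / elapsed_s  # fraction of elapsed spent on CPU
    print(f"{name:15s} CPU+SRB={r['cpu'] + r['srb']:5.2f}s  "
          f"elapsed={elapsed_s:7.1f}s  busy={busy:.2%}")
```

Even in the compressed-output run the CPU busy fraction comes out under
1% of elapsed, which is exactly why the elapsed growth puzzles me.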
What is not so clear to me is why the elapsed time grows so much even
though there is no CPU constraint.
Given the lower EXCP count, I was expecting the elapsed time to be lower,
or at least comparable.
An A.P.A. trace on the job while writing the compressed dataset shows that
more than 80% of the TCB time is in Compression Services (of course this
depends on the fact that the application itself uses very little CPU).
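That 80% figure matches what any compression scheme tends to show: when
the application work per record is trivial, the compression call dominates
TCB time. A rough analogy using Python's zlib (which is NOT what DFSMS
uses — generic DFSMS compression is a CMPSC dictionary scheme, often
hardware-assisted — so this only illustrates the shape of the effect, not
its magnitude):

```python
import time
import zlib

# Analogy only: split CPU time between trivial "application" work
# (building an output block) and the compression call itself.
record = b"A" * 80  # a highly compressible fixed-length record

t0 = time.process_time()
buf = bytearray()
for _ in range(100_000):
    buf += record                      # "application" work per record
app_cpu = time.process_time() - t0

t0 = time.process_time()
compressed = zlib.compress(bytes(buf))  # the compression service call
compress_cpu = time.process_time() - t0

share = compress_cpu / (app_cpu + compress_cpu)
print(f"compression share of CPU: {share:.0%}")
```

The exact share depends on the data and the compressor, but the point is
that adding compression to an otherwise cheap write loop can easily make
it the dominant CPU consumer without ever saturating the processor.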
I wasn't able to find any further information about how DFSMS compression
works.
Finally, when observing other jobs, the behaviour is sometimes not so
consistent.
Thanks in advance for your hints.
Massimo
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN