David,

We are using it - the biggest benefit so far has been with our backups, converting our DFSMSdss (ADRDSSU) backups to use the "ZPREF" option. It reduced backup runtimes significantly (50%+), with some reduction in CPU cycles as well. We are also using it in other scenarios.

You do need to test, though - I think the biggest areas of concern are vendor products that don't use standard access methods; SAS and NOMAD are two of note, and I'm sure there are many others. Feel free to contact me at sest...@gmail.com - I'm a big proponent of using it. I think I have a white paper somewhere, but I need to track it down.

One of the reasons we went this route is that we are starting to exploit pervasive encryption, which for the most part requires extended format datasets (also required for zEDC). Best practice is to compress before you encrypt.

One other area we exploited was DFHSM - we activated compression there, so it now compresses migrated and backed-up datasets. The improvement is hard to measure, but I'm sure it is much more efficient than our prior settings.

I know that large VSAM files see benefits as well - just be sure you test before going too far in that direction. One other point - zFS has the ability to compress as well, which can be advantageous, especially when vendors send large zFS packages, as SAS does.

Areas we haven't exploited yet, but that are intriguing, are JES spool compression and SMF data compression.

I've put a few sketches below of what these pieces look like - take them as illustrations only, with made-up names and numbers, not our exact production setup.
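For the backup piece, here's a minimal sketch of the DFSMSdss JCL - this assumes "ZPREF" corresponds to the ZCOMPRESS(PREFERRED) keyword on the DUMP command, and all dataset names and space figures are made up:

//ZDUMP    EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//* Output dataset for the dump - name/space are illustrative only
//OUT      DD DSN=BACKUP.PROD.DUMP,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(PROD.APPL.**)) -
       OUTDDNAME(OUT)                 -
       ZCOMPRESS(PREFERRED)
/*

With PREFERRED, DSS uses zEDC when the feature and hardware are available and falls back to an uncompressed dump otherwise; REQUIRED fails the dump instead of falling back.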
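On the extended format point - you would normally pick this up through SMS, but as a sketch, here's the JCL flavor of forcing it on a single allocation (ZEDCC is a hypothetical data class whose compaction attribute requests zEDC):

//* DATACLAS=ZEDCC is a made-up SMS data class with a zEDC
//* compaction setting; DSNTYPE=EXTREQ requires extended format
//NEWDS    DD DSN=PROD.APPL.MASTER,DISP=(NEW,CATLG),
//            DATACLAS=ZEDCC,DSNTYPE=EXTREQ,
//            RECFM=FB,LRECL=200,SPACE=(CYL,(100,50),RLSE)

In practice your storage admin would assign this through the ACS routines rather than coding it on every DD.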
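For the DFHSM piece, the activation is a SETSYS in ARCCMDxx - a sketch, assuming you want zEDC on all four migration/backup paths (this support came in at z/OS 2.3, so check your level):

/* Request zEDC for HSM migration and backup - sketch only */
SETSYS ZCOMPRESS(DASDMIGRATE(YES) TAPEMIGRATE(YES) -
       DASDBACKUP(YES) TAPEBACKUP(YES))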
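And for zFS, compression can be turned on per aggregate from the USS shell - a sketch, assuming z/OS 2.3 or later and a made-up aggregate name:

# Compress an existing zFS aggregate (name is illustrative)
zfsadm compress -aggregate OMVS.VENDOR.SAS.ZFS
# Check the result afterward
zfsadm fsinfo -aggregate OMVS.VENDOR.SAS.ZFS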
One other key suggestion: run a zBNA study (free software from IBM) - it helps you identify good candidate datasets from your batch job cycles. As others have noted, it doesn't make sense to compress things that are relatively small, but do try to apply the 80/20 rule here. Any time you can do less I/O in your batch jobs, it's a good thing, of course - a win-win situation!

All the best,

Steve Estle
Peraton z/OS Systems Programmer