On 03/22/2013 04:31 AM, Ravi Gaur wrote:
We had a very weird situation where a TSO user compressed a PDS with 6000+ 
members, and afterwards the JCL in nearly all of those members ended up almost 
identical -- so it looks as though, during the STOW processing, the same storage 
was overlaid repeatedly or the directory TTRs ended up wrong.

We had no backup of the dataset, so we had to create a new one and populate it 
as best we could from members recovered from 2011.
The challenge now is to figure out what really happened, and whether there is 
any way to restore the library to the state it was in before the compress (we 
have been told that is not possible, but wanted to put it on the table).


Allowing users to compress very large PDS libraries from TSO is always risky. The compress typically takes much longer than the user expects -- especially if the system is heavily loaded -- and sooner or later someone will attention-out with PA1 in the middle of a big compress, or have someone else cancel their session, either of which is guaranteed to trash the PDS and produce all sorts of unpredictable strangeness. If you allow such compresses, you need at the very least to train users to know that interrupting a PDS compress with PA1, or having their session cancelled during a compress, will trash the PDS; to call if they think they are hung, so tech support can determine whether they are really hung or the compress is still running; and to use HSM or some other means to automatically back up such libraries when they change, so that when one does get trashed it is easier to restore and simpler to determine which recent updates have been lost.

Number one rule: any library a TSO user can update is also at risk of being corrupted or destroyed, and if it contains important data you must have some automated process to ensure relatively current backups!
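As one illustration only -- assuming DFSMShsm is installed and the library is eligible for HSM backup; the dataset name is just a placeholder -- a batch TSO step can take an explicit HSM backup immediately before any manual compress:

//HSMBKUP  EXEC PGM=IKJEFT01
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
  HBACKDS 'PROD.JCL.LIB'
/*

Routine protection is better handled automatically, e.g. an HSM management class that backs the library up whenever it changes, so the manual step is only a belt-and-suspenders measure before risky maintenance.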

...and unscheduled, on-demand compresses of important large libraries in batch should probably be restricted to support personnel who do such things regularly, not left to the average user who may not have done one for many months and might not use the best procedure. Wherever possible we always tried to use a batch sequence that would get exclusive control of the dataset for the duration of the job, rename it, allocate a new dataset with the original name, and "compress" by IEBCOPYing the old dataset to the new one -- guaranteeing a current backup if anything went wrong. A rough sketch of that sequence follows.
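Something along these lines (a sketch only -- the dataset names and the assumption of an SMS-managed library are mine; a non-SMS library would need an IEHPROGM RENAME plus recatalog in place of the IDCAMS ALTER):

//COMPRESS JOB (ACCT),'PDS COMPRESS',CLASS=A,MSGCLASS=X
//* Hold a job-level exclusive ENQ on the library for the whole job
//ENQ      EXEC PGM=IEFBR14
//HOLD     DD  DSN=PROD.JCL.LIB,DISP=OLD
//* Rename the current library so it becomes the fallback copy
//RENAME   EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  ALTER PROD.JCL.LIB NEWNAME(PROD.JCL.LIB.OLD)
/*
//* Allocate a new PDS under the original name and copy the members;
//* copying old to new compresses implicitly, since only live members move
//COPY     EXEC PGM=IEBCOPY
//SYSPRINT DD  SYSOUT=*
//OLDLIB   DD  DSN=PROD.JCL.LIB.OLD,DISP=OLD
//NEWLIB   DD  DSN=PROD.JCL.LIB,DISP=(NEW,CATLG,DELETE),
//             LIKE=PROD.JCL.LIB.OLD
//SYSIN    DD  *
  COPY OUTDD=NEWLIB,INDD=OLDLIB
/*

If the copy completes cleanly, PROD.JCL.LIB.OLD can be kept for a while as the backup; if anything goes wrong mid-compress, it still holds the original, untouched library.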

--
Joel C. Ewing,    Bentonville, AR       jcew...@acm.org 
