All of this assumes that you're not taking frequent incremental backups. When 
you have periodic full-volume backups and frequent incremental backups, then 
recovering deleted production DASD data sets is no big deal. Of course, that 
requires that the retention period be adequate.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3

________________________________________
From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Joel C. Ewing [jce.ebe...@cox.net]
Sent: Sunday, July 5, 2020 10:14 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Storage & tape question

One of the major historical functional differences between tape-based
and DASD-based data sets has to do with the ability to recover deleted
data sets later found to be needed.  Delete a data set on DASD, and
odds are very good that within seconds, minutes, or hours something
else overwrites the data, or all knowledge of its location is
destroyed, along with any possibility of recovery.  Delete a data set
on tape, and the physical tape volume may not be re-used for days or
weeks -- or if you realize there is an issue, the physical volume can
be set aside and easily kept in archive indefinitely.

In the old days, an application read a tape master file, did updates,
wrote out a new tape master file with the same name, and operators put
physical labels on the tape volumes and just knew how long to keep the
old physical master volumes and not mount them as output tapes.   That
design evolved so that master files and other files that needed
retention became GDG's with "reasonable" limits, with tape volumes
protected or made eligible for re-use by a tape management system.  In
theory, such GDGs could just as easily be on DASD as on tape.  In
practice one encountered application systems where "temp" data sets
that were originally on tape because of their size probably should
have been GDGs but were not; and where applications that used to run
once a month now ran more frequently or irregularly on user demand,
but GDG limits and data set retention rules had not been increased to
match.  Such errors typically don't get detected until there is a
problem that requires old data for recovery.
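For anyone who hasn't set one up lately, a minimal sketch of defining a GDG base with IDCAMS and then writing/reading generations in a later job (data set names, the LIMIT value, and PGM=MYPGM are all illustrative):

```jcl
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFINE GDG (NAME(PROD.MASTER) LIMIT(30) NOEMPTY SCRATCH)
/*
//* A later application job creates the next generation (+1)
//* while reading the current generation (0):
//STEP1    EXEC PGM=MYPGM
//NEWMAST  DD  DSN=PROD.MASTER(+1),DISP=(NEW,CATLG,DELETE),
//             UNIT=TAPE
//OLDMAST  DD  DSN=PROD.MASTER(0),DISP=SHR
```

Note that with SCRATCH, generations rolled off past the LIMIT are uncataloged and scratched, so the effective retention depends on both the GDG limit and the tape management system's rules -- exactly the pair of settings that tends to go stale when run frequency changes.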

It is not always possible for application systems to anticipate all
possible failures, particularly those caused by bad user input where the
error might not be discovered until much later, or by an operational
error or JCL design error where incorrect job re-starts could cause
premature data-set deletion.  Over the decades I saw a number of
application systems that were able to recover from problems where the
recovery was made possible, or at least easier, by access to tape data
sets that had been logically scratched but were still physically
available.  Even
virtual tape systems still allow for some leeway on the destruction of
logically scratched tape volumes, but typically that retention with
virtual tape was only a matter of days, unless the problem was
recognized in time to mark the volume for retention in the tape
management system.

It is even possible to recover the data in the case of a deleted ML2
data set on tape: if the physical ML2 volume is still intact and you
have backups of the HSM CDS data sets taken before the deletion, an
independent test/recovery z/OS system can be used to recall the data
set and save it in a way that can be ported back to the original system.
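A rough outline of that salvage path, assuming an isolated test LPAR and CDS backups taken before the deletion (all data set names here are illustrative):

```jcl
//* On the test system, after restoring the MCDS/BCDS/OCDS from the
//* pre-deletion CDS backups and starting HSM against the restored
//* copies, recall the data set from the still-intact ML2 tape:
//*
//*   TSO HRECALL 'PROD.DELETED.FILE'
//*
//* Then dump the recalled data set for transport back to production:
//DUMP     EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//XFER     DD  DSN=XFER.DELETED.DUMP,DISP=(NEW,CATLG),UNIT=TAPE
//SYSIN    DD  *
  DUMP DATASET(INCLUDE(PROD.DELETED.FILE)) OUTDDNAME(XFER)
/*
```

The dump data set can then be moved to the original system and restored there with ADRDSSU. This is a sketch of the general approach, not a tested procedure; the details of pointing HSM at restored CDSs vary by installation.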

So yes, in a perfect world you could just eliminate all tape and replace
tape data sets with DASD data sets; but this really needs to be
co-ordinated with a careful review of application systems to be sure
that there is proper retention of all data sets potentially needed for
recovery from data and/or design errors -- and best to err on the side
of excess retention to guard against the unexpected.  For some
application systems it might even make sense to ask, what would it take
to reprocess all data from any starting point in the last x months for
some value of x.
    Joel C. Ewing

On 7/5/20 7:12 AM, kekronbekron wrote:
> Hello List,
>
> Just wondering ... assuming there's a primary storage product out there that 
> can store how-many-ever hoo-haa-bytes, and is a good product in general, it 
> should make sense to begin eliminating all tape (3490/3590) use right?
> First, ML1 & ML2 in HSM, then HSM itself, then rebuild jobs to write to disk, 
> or do SMS/ACS updates to make it all disk reads/writes.
>
> Looking at the current storage solutions out there, this is possible, right?
> What would be the drawbacks (assume that primary storage is super 
> cost-efficient, so there's no need to archive anything).
>
> - KB
>
> ...


--
Joel C. Ewing

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
