Re: ICF Catalog with lots of redundant datasets

2006-03-27 Thread Joel C. Ewing

Mike Baker wrote:

> Hi Ron,
>
> Thanks for your excellent explanation. (PS: I attended one of your VSAM
> courses in Wellington, New Zealand, back in the early 90s).
>
> Just to elaborate on the "lots of redundant datasets and HLQs"... for
> example, we have an HLQ called BUS, and approx 9000 BUS.* datasets, of
> which about 95% have been migrated to tape. The remaining 5% are still
> in use only because people have been too lazy(?) to change a few
> remaining jobs to use a different HLQ. We could safely change the jobs
> that use the remaining 5%, and then delete all BUS.* datasets.
>
> However, seeing as this has not been done, would this have much of an
> overhead performance impact on the Catalog / CAS?
>
> Could you please elaborate on this finer detail?
>
> Thanks very much.

...
That the datasets have been migrated to tape doesn't mean they are
either obsolete or redundant, just that they haven't been accessed
recently.  This is still true even if all your BUS.** datasets are
migrated to tape.  They may still be needed for some types of recovery,
as part of a historical archive, or possibly to satisfy legal
requirements for retention of corporate data.  Someone with knowledge
of the application area will have to judge whether these considerations
apply, what is eligible for permanent deletion, and when.


If you can get the application people to agree that datasets not
accessed since date x may be deleted, then there is no reason those
datasets couldn't be deleted today - no need to wait for the remaining
jobs to be changed.  ISMF can be used to generate a list of datasets
with a last-referenced date prior to x, and that list can then be
massaged by various methods into a sequence of DELETE commands and
executed.  Better yet, if possible make all future datasets SMS-managed
and assign an appropriate SMS management class, so future deletions
occur automatically after an agreed-upon interval of non-reference.  I
would be primarily motivated by the general principle that leaving
truly useless things around is a bad idea, but cleaning things up might
also save some resources, such as the number of migration tapes in use.
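
As a purely hypothetical sketch of where that massaging ends up (the
dataset names below are invented for illustration), the generated
commands could be fed to IDCAMS in a batch step:

   //DELBUS   EXEC PGM=IDCAMS
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD *
     /* One DELETE per dataset from the ISMF-generated list.   */
     /* For datasets still migrated, TSO HDELETE (or letting   */
     /* DFSMShsm handle the delete) avoids a pointless recall. */
     DELETE BUS.ACCT.HIST1998
     DELETE BUS.ACCT.HIST1999
     DELETE BUS.GL.BACKUP.OLD
   /*

On the SMS side, the management class attribute "Expire after Days
Non-usage" (set through ISMF) is the usual way to get the automatic
expiration described above.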


Catalog overhead would not be much of an issue, unless people
frequently display and hunt through lists of all BUS.** datasets rather
than specifying a search argument specific enough to restrict
consideration to the actual datasets of interest.
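
As a hypothetical illustration of the difference: a generic search such
as

   LISTCAT LEVEL(BUS)

has to walk every BUS.* entry in the catalog, while

   LISTCAT ENTRIES('BUS.PAYROLL.MASTER')

(dataset name invented) reads only the one record of interest via a
direct keyed access.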


Another possible issue is that these migrated datasets may be mixed 
among datasets that do get scratched on your migration tapes.  If that 
is the case, you could be wasting resources copying these migrated BUS 
datasets repeatedly as migration tapes are consolidated through recycle 
activity.


That having been said, I have seen cases where it was judged that the
overhead cost of keeping possibly-useless migrated datasets around was
less than the cost of diverting scarce resources to determine
accurately whether the datasets were really needed.

--
Joel C. Ewing, Fort Smith, AR    [EMAIL PROTECTED]



Re: ICF Catalog with lots of redundant datasets

2006-03-26 Thread Ron Ferguson
Hi Mike,

Mike Baker [EMAIL PROTECTED] wrote:
> If we have lots of redundant datasets on the machine, and many HLQs
> (high level qualifiers) which could (also) be completely removed, but
> have not been removed / cleaned up, is this likely to have much of a
> performance degradation effect on the Catalog / CAS?

Mike, I'm not certain I fully understand the description of your
situation.  For example, how many is "lots", and what exactly is a
"redundant dataset"?

Nevertheless, however I interpret the situation, my opinion is that it
does not affect the performance of either the catalog or CAS.  

The catalog is physically a VSAM KSDS, accessed directly by key, and
even if it contains thousands (even tens of thousands) of useless
records, the speed of access to any record in the catalog will not be
affected by the redundant records.  Most (possibly all) of the
catalog's index will/should be in CAS buffers, and therefore, accessing
any record in the catalog's data component will require just a single
I/O (at most).
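
(As a pointer for anyone who wants to see what CAS is actually doing on
their own system: the operator command

   F CATALOG,REPORT,PERFORMANCE

displays cumulative counts of catalog service requests and the I/O
activity behind them.)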

The same answer applies to CAS performance  --  there's no way that
redundant catalog records can affect CAS performance.  Assuming by
"redundant data set" you mean a cataloged data set that doesn't actually
exist, such a redundant record would never be read in the first place, and
would never find its way into CAS.  Since it would never be read, it
wouldn't take up space in CAS buffers, as records are brought into CAS
only by specific request when a task attempts to locate a data set.  By
your question, I'm guessing that you possibly believe all of a catalog's
records are somehow buffered in CAS, regardless of being specifically
requested, and that's not true.  

In my opinion, the single biggest performance benefit you can give your
catalog(s) is to turn on VLF (the Virtual Lookaside Facility) within CAS
(which is specified in the COFVLFxx member of SYS1.PARMLIB, and can be
checked with a MODIFY CATALOG,REPORT,CACHE operator command).  This
topic has been discussed many times on this Listserv, and can be found
in the archives.  There's also very good and extensive information on
this in the z/OS DFSMS: Managing Catalogs manual, SC26-7409.
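
For anyone who hasn't set it up, a minimal COFVLFxx entry looks
something like the sketch below (catalog names are hypothetical, and
MAXVIRT - specified in units of 4K blocks - should be sized for your
own environment):

   CLASS NAME(IGGCAS)           /* catalog records in VLF */
         EMAJ(CATALOG.PRODUCTS) /* one EMAJ per catalog   */
         EMAJ(CATALOG.USERS)
         MAXVIRT(4096)          /* 4096 x 4K = 16M        */

VLF has to be stopped and restarted to pick up changes to the member.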

Having said that, not cleaning up redundant data set records  --  for
example, entire HLQ levels of useless data set entries  --  puts your
catalog at risk of significantly greater problems when/if you have a
catalog problem that requires diagnostic analysis.  Any attempt to run
diagnostics on this catalog will likely identify "lots" (your word) of
useless records that just clutter up the true status of the catalog.
By not cleaning up these entries, you're potentially creating a bigger
problem for yourself at some later time  --  and if that time is when
you have an outage on the catalog and it results in longer recovery
time, you may have critical applications delayed while you struggle
with a dirty catalog.
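
To make that concrete: the diagnostic runs in question would typically
be IDCAMS jobs along these lines (catalog name invented):

   DIAGNOSE ICFCATALOG -
        INDATASET(CATALOG.USERS)
   EXAMINE NAME(CATALOG.USERS) -
        INDEXTEST DATATEST

and a catalog full of orphaned BUS.* entries gives such runs that much
more noise to report while you're trying to find the real damage.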

I hope I've shed some light on your question.  If I'm on a tangent and
don't understand what you're asking, drop me a note (on the Listserv, or
privately).

Ron Ferguson
President and CEO
Mainstar Software Corporation
www.mainstar.com
[EMAIL PROTECTED]



Re: ICF Catalog with lots of redundant datasets

2006-03-26 Thread Mike Baker
Hi Ron,

Thanks for your excellent explanation. (PS: I attended one of your VSAM
courses in Wellington, New Zealand, back in the early 90s).

Just to elaborate on the "lots of redundant datasets and HLQs"... for
example, we have an HLQ called BUS, and approx 9000 BUS.* datasets, of
which about 95% have been migrated to tape. The remaining 5% are still
in use only because people have been too lazy(?) to change a few
remaining jobs to use a different HLQ. We could safely change the jobs
that use the remaining 5%, and then delete all BUS.* datasets.

However, seeing as this has not been done, would this have much of an
overhead performance impact on the Catalog / CAS?

Could you please elaborate on this finer detail?

Thanks very much.



Re: ICF Catalog with lots of redundant datasets

2006-03-26 Thread Ron Ferguson
Hi Mike,

Mike Baker [EMAIL PROTECTED] wrote:
> Just to elaborate on the "lots of redundant datasets and HLQs"... for
> example, we have an HLQ called BUS, and approx 9000 BUS.* datasets, of
> which about 95% have been migrated to tape. The remaining 5% are still
> in use only because people have been too lazy(?) to change a few
> remaining jobs to use a different HLQ. We could safely change the jobs
> that use the remaining 5%, and then delete all BUS.* datasets.

Your explanation of "redundant datasets" still leaves a significant
unanswered question  --  when they've been migrated to tape, are they
still valid data sets under their original names which, if access were
ever necessary, would be located through their catalog entries?  Or are
they in fact orphaned catalog entries, with no valid data set anywhere
behind them?
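
(A quick way to tell the two cases apart, using a hypothetical name: a

   LISTCAT ENTRIES('BUS.ACCT.HIST1998') VOLUME

against one of the entries will show a volume serial of MIGRAT for a
data set DFSMShsm has migrated, whereas an orphan will point at a real
volume on which the data set no longer exists.)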

On the assumption that the former is not the case, and the catalog
entries really are orphaned, your elaboration does not change the
answer  --  no, neither the performance of the catalog itself nor that
of CAS would be adversely affected by having so many useless BUS.* data
sets (95%) scattered around amongst the valid BUS.* data sets (5%).
Granted, with a ratio like that, whenever a valid data set is located,
the CI that contains the record will be brought into a buffer within
CAS, and depending on the CI size of the catalog, one or more of the
useless data set records will also be brought in, thereby wasting some
very-hard-to-quantify amount of CAS buffer area.

Nevertheless, cleaning this up would be a good idea, and organizing a
project to whip these application people into shape would be advisable.
My larger concern would still be the issue of too much of this orphaned
garbage in the catalog: some day, when you least expect it, you'll have
problems with the catalog for some totally unrelated reason, and now
you have 8,000+ data set entries that only complicate the situation.

Take care,
Ron Ferguson
Mainstar Software Corporation
www.mainstar.com
[EMAIL PROTECTED]



Re: ICF Catalog with lots of redundant datasets

2006-03-26 Thread Ted MacNEIL
> we have an HLQ called BUS, and approx 9000 BUS.* datasets, of which
> about 95% have been migrated to tape. The remaining 5% are still in
> use only because people have been too lazy(?) to change a few
> remaining jobs

I think you missed Ron's point.
At the risk of exaggerating: you can have a million datasets that
(while taking up catalogue space) have no impact on performance.
The catalogue is indexed.
It is cached.
But if you don't touch a DSN (or an alias), it is not loaded, opened,
or referenced!


-
-teD

I’m an enthusiastic proselytiser of the universal panacea I believe in!



ICF Catalog with lots of redundant datasets

2006-03-25 Thread Mike Baker
Hi all,

If we have lots of redundant datasets on the machine, and many HLQs
(high level qualifiers) which could (also) be completely removed, but
have not been removed / cleaned up, is this likely to have much of a
performance degradation effect on the Catalog / CAS?

Thanks.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html