Unfortunately I haven't looked up this stuff in a long time, so I might be
wrong. But IIRC, Hiperbatch is intended for sequential access and is
counter-productive for randomly accessed files. Since it uses a Most
Recently Used (MRU) discard algorithm (instead of LRU), the most recently
touched record is the first candidate to be discarded from memory, on the
theory that the most recent access represented the last reader of the data.
The whole point was to avoid having records discarded because of age just
ahead of someone who was reading the file sequentially.
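To illustrate the idea (this is just a sketch of MRU-vs-LRU eviction in general, not IBM's actual Hiperbatch implementation), here is a toy MRU cache. When two readers scan the same file one after the other, MRU eviction churns only one cache slot during the first pass, so most of the file is still in memory when the second reader starts; an LRU cache of the same size would have discarded every record just before it was needed again:

```python
from collections import OrderedDict

class MRUCache:
    """Toy most-recently-used eviction cache (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # order of entries tracks recency

    def access(self, key):
        """Return True on a cache hit; on a miss, load and maybe evict."""
        if key in self.store:
            self.store.move_to_end(key)  # mark as most recently used
            return True
        if len(self.store) >= self.capacity:
            # Evict the MOST recently used entry; an LRU cache would
            # instead do popitem(last=False).
            self.store.popitem(last=True)
        self.store[key] = None
        return False

def hit_rate(cache, accesses):
    hits = sum(cache.access(k) for k in accesses)
    return hits / len(accesses)

# Two sequential readers of a 100-record file, 50-record cache:
scan = list(range(100)) * 2
print(hit_rate(MRUCache(capacity=50), scan))  # 0.25 with MRU; LRU gets 0.0
```

With a pure LRU policy the second scan would hit nothing, because each record is evicted just before it is re-read.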
Another point: the I/O counts are unaffected, since the application is
unaware that it is using Hiperbatch, so that metric tells you little about
whether Hiperbatch is helping.
Anyway ... here's hoping my memory isn't completely gone
Adam
We have a highly used, randomly accessed, read-only VSAM KSDS that is
managed by Hiperbatch during the Production batch window. Unfortunately,
some of the jobs that use it are still seeing unacceptably high I/O counts
and long elapsed times.
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html