Remember there are two types of DLF objects: Hiperbatch Retain and
non-Retain.

Non-Retain is the one that gets deleted when the open count for the dataset
reaches 0.  This is the one intended for sequential use, and you didn't
need a DLF object as large as the dataset, just one large enough that
concurrent sequential readers would benefit.  Reader 1 reads from disk and
places a copy in Hiperbatch; readers 2, 3, 4, etc. read the Hiperbatch
copy.  If the DLF object can hold 100 records, then you just hope the other
readers have all gotten to record 1 before the first one gets to record
101.  Sort of a moving window.
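To make that moving-window idea concrete, here's a rough Python sketch.  The
100-record window, the record naming, and the cache shape are illustrative
assumptions only, not actual Hiperbatch internals:

```python
# Hypothetical sketch of the non-Retain "moving window": a cache holding only
# the most recent WINDOW records, so trailing readers get hits only while they
# stay within WINDOW records of the lead reader.
WINDOW = 100  # records the DLF object can hold (assumption for illustration)

class MovingWindowCache:
    def __init__(self, size):
        self.size = size
        self.records = {}  # record number -> data

    def put(self, recno, data):
        self.records[recno] = data
        # discard records that have slid out of the window
        for old in [r for r in self.records if r <= recno - self.size]:
            del self.records[old]

    def get(self, recno):
        return self.records.get(recno)  # None means a miss: go back to disk

cache = MovingWindowCache(WINDOW)
# Reader 1 reads records 1..150 from disk, placing each in the cache.
for rec in range(1, 151):
    cache.put(rec, f"record-{rec}")

# A trailing reader still inside the 100-record window gets a hit...
assert cache.get(120) == "record-120"
# ...but record 1 has already slid out of the window, so it's a miss.
assert cache.get(1) is None
```

A trailing reader that falls more than 100 records behind goes back to disk,
which is exactly the "hope the other readers keep up" situation described
above.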

Retain is the one that stays there until you explicitly delete it.  This is
good for both sequential and random access, especially if the object is
large enough to fit the entire dataset.  You load the whole thing into
memory once, and then everyone reads the Hiperbatch copy in memory.

Just thought I'd pass on that distinction, since it's important when
discussing whether sequential or random access can benefit.

Have a nice day,
Dave Betten
DFSORT Development, Performance Lead
IBM Corporation
email:  [EMAIL PROTECTED]
1-240-715-4655, tie line 268-1499
DFSORT/MVSontheweb at http://www.ibm.com/storage/dfsort/

IBM Mainframe Discussion List <IBM-MAIN@BAMA.UA.EDU> wrote on 11/09/2007
04:07:56 PM:

> > -----Original Message-----
> > From: Gerhard Adam [mailto:[EMAIL PROTECTED]
> > Sent: Friday, November 09, 2007 1:57 PM
> > To: IBM-MAIN@BAMA.UA.EDU
> > Subject: Re: Performance comparison: Hiperbatch, BLSR, SMB?
> >
> > Unfortunately I haven't looked up this stuff in a long time, so I
> > might be wrong.  But IIRC, Hiperbatch is intended for sequential
> > access and is counter-productive for random files.  Since it uses a
> > Most Recently Used algorithm (instead of LRU), the intent was to
> > ensure that the most recent access to a record was the most eligible
> > for getting discarded from memory (since this represented the last
> > reader of the data).
>
> Well, I don't know about your memory, but the latest version does
> indeed support random access, AFTER the DLF has been loaded
> sequentially.
>
> Don't know what algorithm it uses, but my SMF gurus tell me they can
> prove it is being used, and in fact the batch jobs run longer if
> anything happens to Hiperbatch, so I think it's working.  Our access is
> almost all random.
>
> > The whole point was to avoid having records discarded because of age
> > just ahead of someone that was reading the file sequentially.
> >
> > Also, another point was that the I/O counts were unaffected since the
> > application was unaware that it was using Hiperbatch, so that
> > information is largely irrelevant.
>
> I didn't know that, but it makes sense.  Thanks.
>
> > Anyway ... here's hoping my memory isn't completely gone
>
> Not completely.  :)
>
> Peter
>
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html