<snip>
>Pardon me?
>It builds an in-storage cache of directories and stages modules to VLF.
>Modules are administered per library, so it can manage duplicate modules.
>
>Kees.
Hi Kees,

Thank you very much. Given that duplicate modules exist in separate libraries, and LLA is making a determination based on fetch performance rather than sequential order, do you know how the fetch performance is evaluated? That is, how is one directory chosen over another? Is it a function of the LLA search engine?

I'm trying to understand this scenario: when job 1 is looking for a module, I presume the engine looks at LLA before the LNKLST. Let's say it finds the module and passes the address to the job. Job 2 then comes along looking for a module, and LLA looks for it. Does the engine start at the beginning of the LLA in-storage cache of directories, or does it continue searching from where it left off? (I've put a rough sketch of my current mental model in a P.S. below.)

Please excuse me if this should be obvious; I'm not an MVS Sysprog.

Thanks,
Linda
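P.S. In case it helps show where my confusion is, here is the toy model I have in my head, written out as a small Python sketch. It is only how I'm picturing things, not anything taken from IBM code or the manuals, and the library names, module names, and directory entries are all invented:

# Toy model of my current understanding -- NOT IBM's implementation,
# just how I'm picturing it.  All names below are made up.
lnklst_order = ["SYS1.LINKLIB", "PROD.LOADLIB", "TEST.LOADLIB"]

# LLA as I imagine it: an in-storage copy of each library's directory,
# kept per library, so duplicate module names can coexist.
lla_directory_cache = {
    "SYS1.LINKLIB": {"IEFBR14": "entry-0001"},
    "PROD.LOADLIB": {"MYPROG": "entry-0100"},
    "TEST.LOADLIB": {"MYPROG": "entry-0200"},   # duplicate of the PROD copy
}

def find_module(name):
    """Assumption: each request walks the concatenation from the top;
    nothing is remembered from the previous caller's search."""
    for lib in lnklst_order:
        entry = lla_directory_cache.get(lib, {}).get(name)
        if entry is not None:
            return lib, entry          # directory hit without any DASD I/O
    return None                        # fall through to the rest of the search

print(find_module("MYPROG"))   # job 1 -> ('PROD.LOADLIB', 'entry-0100')
print(find_module("MYPROG"))   # job 2 -> same answer; the search starts over

If each lookup really does start over at the top of the concatenation like this, then I don't see where fetch performance would come into choosing between the two copies of MYPROG, which is the part I'm hoping you can clear up.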