[EMAIL PROTECTED] wrote:
> HASP accessed its SPOOL data this way until users began to complain that
> they had no program that would back up/restore SPOOL volumes to tape. Then
> the HASP team made this record alternation an option. The thought was that
> accessing every other record in sequence would provide a little boost in
> performance. The same technique was (and maybe still is) used in CMS files.
> HASP's record alternation option was removed when HASP was replaced with JES2.
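the record-alternation (interleave) idea in the quote can be sketched in a few lines — this is an illustration of the general technique, not HASP's actual code: with a 2:1 interleave, logically consecutive records are written two physical slots apart, so the software gets one record-time to finish processing before the next logical record comes under the head.

```python
def interleave_layout(slots, factor=2):
    """Map logical record numbers to physical slots on a track.

    With factor=2 ("every other record"), consecutive logical
    records sit two physical slots apart, giving the software one
    record-time of rotation to finish processing before the next
    logical record arrives under the head.
    """
    layout = [None] * slots
    slot = 0
    for logical in range(slots):
        while layout[slot] is not None:      # slot taken, probe forward
            slot = (slot + 1) % slots
        layout[slot] = logical
        slot = (slot + factor) % slots
    return layout

# six records with 2:1 interleave: physical order is 0,3,1,4,2,5
print(interleave_layout(6))
```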
vm370 used 101-byte filler records on 3330 drives for page-formatted areas (paging, spool, chkpt, directory, etc). a 3330 had three 4k page records per track, with 101-byte filler records between the page records ... 57 4k page records per cylinder.

in cp67, the original code did fifo, single-record-at-a-time transfers. i added code so that disk (2314) would do ordered seek queueing, and disk/drum (2314 & 2301) would chain multiple requests (for the same cylinder) in a single i/o. on the 2301, single-record-per-i/o thruput peaked at around 80 records/sec; with chaining, thruput could hit 300 records/sec.

for vm370 on 3330, you would like to transfer 3 pages per revolution ... however, the page requests for the three slots (1,2,3) might not all be on the same track ... which could involve doing a head switch to pick up the next consecutive (rotational) record on a different track. the additional head-switch ccw added end-to-end processing latency (while the disk continued spinning), so the start of the next page record would already have rotated past the head by the time the head-switch processing completed. the dummy filler records were there to increase the rotational delay before the start of the next page record came under the head ... hopefully giving the channel/controller/drive enough time to do the extra ccw processing.

i did a bunch of benchmarking with different filler record sizes, channels, processors ... as well as disks/controllers from different vendors. the default channel processing spec required a 110-byte filler record to cover the extra head-switch ccw processing latency; the standard 3330 track format only had room for 101-byte fillers. 158 & below processors had integrated channels (i.e. the processor engine was time-shared between executing 370 microcode and executing channel microcode). the 168 had external hardware channels with high performance and lower latency.
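the geometry and timing above can be checked with a little back-of-the-envelope arithmetic. a rough sketch — the 3330 figures used here (3600 rpm, ~806 KB/s nominal data rate, 19 data tracks per cylinder) are the commonly quoted ones, so treat the exact microsecond values as approximate:

```python
# back-of-the-envelope numbers for the 3330 filler-record scheme
RPM = 3600
REV_MS = 60_000 / RPM            # one revolution: ~16.7 ms
DATA_RATE = 806_000              # bytes/sec, nominal 3330 transfer rate
PAGES_PER_TRACK = 3
TRACKS_PER_CYL = 19

pages_per_cyl = PAGES_PER_TRACK * TRACKS_PER_CYL   # 57, as in the text

# time the head spends passing over one 101-byte filler record:
# this is the window available for the extra head-switch ccw processing
filler_us = 101 / DATA_RATE * 1e6                  # ~125 microseconds

print(pages_per_cyl, round(REV_MS, 1), round(filler_us))
```

on these numbers, a 101-byte filler buys roughly 125 microseconds of rotational slack, while the 110-byte filler the channel spec called for would buy roughly 137 — which suggests why a slow integrated channel could still occasionally miss the window.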
4341 integrated channels tended to have latency close to the 168 external hardware channels; 158 integrated channels had the highest processing latency. for the 303x, an external channel director box was used. the 303x channel director was actually a 158 processor engine with the 370 microcode removed, leaving only the integrated channel microcode. a 3031 was basically a 158 processor engine with only the 370 microcode (the integrated channel microcode removed), configured to use a 303x channel director (in some sense a 3031 was actually a two-processor smp ... except the two processors were running different microcode loads). a 3032 was a 168-3 configured to use channel directors. a 3033 started out as the 168-3 wiring/logic design mapped to faster chip technology (and configured for channel directors). all of the 303x channel tests showed the same channel i/o processing latency as the 158 channel tests.

originally on cp67, and then ported to vm370, i had done a remap of the cms filesystem to use page-mapped semantics with a high-level virtual machine interface using the kernel paging infrastructure. as an undergraduate, i had created a special i/o interface for cms disk i/o that drastically reduced the processing pathlength (and eventually turned into diag i/o). the page-mapped semantics further reduced the pathlength overhead (since it eliminated various operations needed to simulate a real i/o paradigm in a virtual address space environment) and allowed all sorts of optimization tricks when performing the i/o (a lot of fancy optimization tricks had been done in the kernel paging environment ... which were now free for cms filesystem i/o). for instance, somewhat because of the real i/o paradigm orientation, cms only did chained i/o if the records for a file were allocated sequentially consecutive on disk. the page slot chaining code didn't care what order they were on the disk ... if there were requests for the same cylinder ...
just chain up all pending requests and let it rip (regardless of things like file sequential/consecutive considerations).

misc. collected posts on having done page mapped semantics for the cms filesystem ... originally on cp67 in the early 70s
http://www.garlic.com/~lynn/subtopic.html#mmap

misc. past posts on filler records:
http://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
http://www.garlic.com/~lynn/2001n.html#16 Movies with source code (was Re: Movies with DEC minis)
http://www.garlic.com/~lynn/2002b.html#17 index searching
http://www.garlic.com/~lynn/2003f.html#40 inter-block gaps on DASD tracks
http://www.garlic.com/~lynn/2003f.html#51 inter-block gaps on DASD tracks
http://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

