The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


rfocht...@ync.net (Rick Fochtman) writes:
> We called them "Gap Records" at NCSS and they worked very well for
> paging on 2305 devices. We'd use the first exposure for the first
> page, the second exposure for the third page and so on. They were
> connected to a 370/168 via 2860 Selector Channels. Grant Tegtmeier
> designed the code and computed the sizes of the Gap Records. He also
> installed 3330 and 2305 support code, of his own design, in our
> modified CP67/CMS system. In a long weekend! He was noted for long
> periods of seeming boredom, punctuated by spurts of sheer genius.

re:
http://www.garlic.com/~lynn/2008s.html#51 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#52 Computer History Museum

There were three people who came out from the science center to the
univ. to install cp67 the last week in jan68. One of them left the
science center in june68 to be part of NCSS. He was supposed to teach a
cp67 class for customers the following week (after he gave notice)
... and the science center had to really scramble to find people to
fill in for him.

The initial cp67 code did FIFO, single-operation processing for the
2311s, 2314s, and 2301 (drum). It would get about 80 page transfers/sec
on the 2301. I redid the 2301 support to do chained processing, which
increased peak 2301 page transfers to 300/sec (the 2301 didn't have
multiple request exposures). I also redid the 2311 & 2314 code to
implement ordered seek operation for all queued requests ... both cp
requests and cms requests ... as well as chained requests for page
operations. On heavily loaded CMS systems, the ordered seek queueing
made a big difference ... both more graceful degradation as load
increased and better peak throughput.
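
Just to make the ordered-seek idea concrete for anyone who hasn't run
into it: below is a minimal sketch in C (the real code was 360
assembler working on cp67's own control blocks; the names here --
ioreq, insert_ordered, next_request -- are all made up). Requests are
kept sorted by cylinder and dispatched as a one-direction sweep across
the pack, instead of jerking the arm back and forth in FIFO arrival
order:

  /* hypothetical request node -- cp67's real control blocks differed */
  struct ioreq {
      int cyl;                        /* target cylinder */
      struct ioreq *next;
  };

  /* insert a request into a queue kept sorted by cylinder number;
   * serviced from the current arm position, this yields an
   * ordered-seek (elevator) sweep rather than FIFO arm motion */
  static void insert_ordered(struct ioreq **q, struct ioreq *r)
  {
      while (*q && (*q)->cyl <= r->cyl)
          q = &(*q)->next;
      r->next = *q;
      *q = r;
  }

  /* dispatch: take the next request at or beyond the arm's current
   * cylinder; when none remain ahead, wrap and start a new sweep */
  static struct ioreq *next_request(struct ioreq **q, int arm_cyl)
  {
      struct ioreq **p = q;
      while (*p && (*p)->cyl < arm_cyl)
          p = &(*p)->next;
      if (!*p)
          p = q;                      /* wrap to low end of pack */
      struct ioreq *r = *p;
      if (r)
          *p = r->next;               /* unlink the chosen request */
      return r;                       /* NULL if queue was empty */
  }

The chained-request part is the analogous trick one level down, in the
channel program: instead of one page transfer per SIO, multiple
transfers are command-chained into a single channel program, so (on
the 2301) several queued pages can move per revolution ... which is
where the 80/sec to 300/sec jump came from.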

I also redid a whole bunch of the kernel pathlengths. This old
post:
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

contains part of a presentation that I made at the '68 SHARE meeting
in Boston.

I had been doing heavy optimization of OS MFT system generations ...
carefully reordering all the STAGE2 output (from STAGE1) so that the
result would optimally place os/360 system files and PDS members on disk
(in order to minimize avg. arm seek distance). For the univ. student
workload, I would get about a factor of three throughput improvement.
This would degrade over time as PTF maintenance was applied ...
affecting high-use system components. I would then have to periodically
rebuild the system in order to restore the carefully ordered placement
of files and PDS members.
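
The placement intuition goes roughly like this (toy C illustration
only ... the actual optimization was done by reordering the STAGE2
sysgen deck, not by running a program like this; the dataset names and
use counts below are made up). Put the highest-use data in the middle
cylinders, organ-pipe style, alternating outward by decreasing
frequency, so the expected arm travel between references stays short:

  #include <stdio.h>
  #include <stdlib.h>

  struct dsinfo {
      const char *name;
      long uses;              /* observed reference count            */
      int cyl;                /* assigned starting cylinder (output) */
  };

  static int by_uses_desc(const void *a, const void *b)
  {
      long ua = ((const struct dsinfo *)a)->uses;
      long ub = ((const struct dsinfo *)b)->uses;
      return (ua < ub) - (ua > ub);
  }

  /* organ-pipe placement: hottest dataset at the middle cylinder,
   * next hottest alternating just above/below it, and so on */
  static void place(struct dsinfo *ds, int n, int ncyl)
  {
      qsort(ds, n, sizeof *ds, by_uses_desc);
      for (int i = 0; i < n; i++) {
          int off = (i + 1) / 2;              /* 0,1,1,2,2,... */
          ds[i].cyl = (i % 2) ? ncyl/2 - off : ncyl/2 + off;
      }
  }

  int main(void)
  {
      struct dsinfo ds[] = {                  /* made-up numbers */
          { "SYS1.LINKLIB", 900, 0 },
          { "SYS1.SVCLIB",  700, 0 },
          { "SYS1.PROCLIB", 300, 0 },
          { "SYS1.MACLIB",  100, 0 },
      };
      place(ds, 4, 200);                      /* 200-cyl pack */
      for (int i = 0; i < 4; i++)
          printf("%-14s cyl %d\n", ds[i].name, ds[i].cyl);
      return 0;
  }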

I also got to do a lot of work rewriting the cp67 kernel ... besides
redoing the i/o stuff, I reworked a lot of the pathlengths ... in some
cases getting a factor of 100 improvement.

As mentioned in the presentation, the original unmodified cp67 kernel
had 534 cpu seconds of overhead for running an MFT14 workload that took
322 seconds elapsed time. In the period between Jan68 and Aug68, I was
able to get that cp67 kernel virtual machine overhead down from 534 cpu
seconds to 113 cpu seconds (by rewriting several parts of the cp67
kernel) ... i.e. roughly a 4.7x reduction, eliminating nearly 79% of
the hypervisor overhead.

I normally had classes during the week ... so much of my maintenance
and support work for OS/360 MFT, and work on cp67, occurred on
weekends. The univ. typically shut down the datacenter from 8am Sat.
until 8am Monday ... during which time I could have the whole place for
my personal use. Monday classes were sometimes a problem after having
been up for 48 hrs straight.

I had also done a dynamic adaptive resource manager and my own page
replacement algorithm and thrashing controls for cp67 ... lots of that
stuff IBM picked up and shipped in the product while I was still an
undergraduate at the univ (including the TTY/ASCII terminal support
mentioned in an earlier post in this thread).
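
(None of the actual replacement code is reproduced here; purely as an
illustration of the general family -- reference-bit approximations to
LRU -- here's a minimal C sketch of a clock/second-chance scan over a
frame table. The structure, names, and all policy details are
hypothetical, not the cp67 code.)

  #include <stdbool.h>
  #include <stddef.h>

  struct frame {
      bool in_use;
      bool referenced;        /* reference bit for this frame */
      /* ... owner, backing slot, etc. ... */
  };

  static struct frame frames[4096];
  static size_t hand;         /* clock hand sweeping the frame table */

  /* pick a victim: any referenced frame gets a second chance (clear
   * the bit and move on); steal the first unreferenced frame found */
  static struct frame *select_victim(void)
  {
      for (;;) {
          struct frame *f = &frames[hand];
          hand = (hand + 1) % (sizeof frames / sizeof frames[0]);
          if (!f->in_use)
              return f;       /* free frame: no replacement needed */
          if (f->referenced)
              f->referenced = false;
          else
              return f;
      }
  }

Thrashing control is then a separate knob on top of replacement: when
too many tasks are competing for real storage and everything is
page-waiting, suspend some of them rather than letting the replacement
scan churn.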

This recent post:
http://www.garlic.com/~lynn/2008s.html#17 IBM PC competitors

mentioned that I continued to do various cp67 things ... but much of it
was dropped in the product morph from cp67 to vm370. The above has
references/pointers to some old email regarding migrating various
pieces from cp67 to vm370 (after the science center finally replaced
its 360/67 with a 370/155-II).

some of the old email 
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

... included these posts from a couple years ago:
http://www.garlic.com/~lynn/2006v.html#36
http://www.garlic.com/~lynn/2006w.html#7
http://www.garlic.com/~lynn/2006w.html#8

Before the decision was made to release some of it in the standard
vm370 product ... they let me build, distribute, and support highly
modified vm370 (aka csc/vm) systems for a large number of internal
systems. At one point I would joke with the people on the 5th flr that
my number peaked at about the same as the total number that they were
supporting ... recent reference here:
http://www.garlic.com/~lynn/2008s.html#48 New machines code

-- 
40+yrs virtualization experience (since Jan68), online at home since Mar70
