charl...@mcn.org (Charles Mills) writes:
> Culture is a key here.
>
> IBM's background was in punched cards. IBM's strength in punched card
> tabulating is what transferred over to their success in computer data
> processing. They never forgot that.
>
> Many other computer systems' analog of the punched card was punched paper 
> tape.
>
> We see that legacy today. z/OS's model of a file is one of discrete
> records with "hard" boundaries. UNIX's model of a file is a continuous
> undifferentiated stream of characters.
>
> BASIC and FORTRAN both used sequence numbers as "labels" but they were
> on the left, not the right, correct?
>
> Speaking of not portable program formats, didn't Symbolic Optimal
> Assembly Program (SOAP) optimize code speed by scattering instructions
> around a drum such that the next instruction to be executed was just
> coming under the read head?


re:
http://www.garlic.com/~lynn/2013e.html#52 32760?

this chronicles that the 360 was supposed to be an ASCII machine ... but
because of some scheduling constraints and being able to leverage old
BCD card machines ... it was supposedly temporarily done as EBCDIC
... "one of the biggest computer goofs ever":
http://www.bobbemer.com/P-BIT.HTM

CP67/CMS was distributed from the science center to customers in full
source form ... and maintenance was distributed as CMS "UPDATE" files
... that relied on sequence numbers (and the process was heavily used by
customers for their own changes/fixes).

At the university, I was making so many source changes ... that I wrote
a preprocessor for the CMS UPDATE program that figured out the sequence
numbers to put into cols. 73-80 ... which were then inserted into the
working source file ... so that sequence numbers proceeded in increasing
order (and I didn't have to manually type in the sequence numbers on
every insert).
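
a minimal sketch of the renumbering idea (python purely for
illustration ... the original was a CMS-era program, and all the names
here are hypothetical): inserted lines have a blank cols. 73-80 field,
and get numbers interpolated between the neighboring numbered lines:

  def seqno(line):
      # numeric sequence field from cols 73-80, or None if blank
      field = line[72:80].strip()
      return int(field) if field.isdigit() else None

  def number_inserts(lines):
      # assign sequence numbers to unnumbered (inserted) lines by
      # interpolating between the surrounding numbered lines
      out = list(lines)
      i = 0
      while i < len(out):
          if seqno(out[i]) is not None:
              i += 1
              continue
          j = i                      # run of unnumbered lines i..j-1
          while j < len(out) and seqno(out[j]) is None:
              j += 1
          lo = seqno(out[i - 1]) if i > 0 else 0
          hi = seqno(out[j]) if j < len(out) else lo + (j - i + 1) * 1000
          step = max((hi - lo) // (j - i + 1), 1)
          for k in range(i, j):
              text = f"{out[k]:<72.72}"        # pad/truncate to col 72
              out[k] = text + f"{lo + step * (k - i + 1):08d}"
          i = j
      return out

(a real tool also has to handle the case where the gap between
neighboring numbers is too small and the whole file needs
resequencing ... elided here)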

Later, the cp67/cms multi-level source update process was added: a
wrapper cms exec file that iteratively applied a series of source
updates, in sequential order, to the working source file before assembly.
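
the multi-level idea in a nutshell (again python as pseudo-code; the
delta format below is a simplified stand-in, not the actual UPDATE
control-card syntax):

  # source is a list of (seqno, text); a delta is a list of directives:
  #   ("I", seq, [lines])   insert lines after the record numbered seq
  #   ("D", seq1, seq2)     delete records numbered seq1..seq2
  def apply_update(source, delta):
      work = list(source)
      for d in delta:
          if d[0] == "I":
              _, seq, new = d
              pos = next(i for i, (n, _) in enumerate(work) if n == seq) + 1
              work[pos:pos] = [(None, t) for t in new]
          elif d[0] == "D":
              _, s1, s2 = d
              work = [(n, t) for (n, t) in work
                      if n is None or not (s1 <= n <= s2)]
      return work

  def build(base, updates):
      # multi-level: iterate the single-level apply over the whole
      # list, most fundamental update first; the result gets assembled
      work = base
      for delta in updates:
          work = apply_update(work, delta)
      return work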

I had archived all of it ... and when Melinda was looking for the early
history of the multi-level source update process ... I was able to pull
off the original files. This turned out to be fortunate timing ... since
even tho I had archived on multiple tape files ... all the tapes were in
the Almaden datacenter tape library ... and a couple months later they
went through a period where operations were pulling random tapes from
the tape library for use as scratch tapes (and destroyed lots of my
files from the 60s up through the late 70s).

some old email with Melinda
http://www.garlic.com/~lynn/2006w.html#email850906
http://www.garlic.com/~lynn/2006w.html#email850906b
http://www.garlic.com/~lynn/2006w.html#email850908

later, support was added to the CMS editor ... so that edits to the base
source file were stored as a source update file ... as opposed to
storing a replacement for the original file.

melinda's history tomes here:
http://www.leeandmelindavarian.com/Melinda/

a UNIX convention (rather than "updates") ... was to do "downdates"
... the most recent file is taken as current, and to get earlier
versions, "downdates" are applied to regress to the earlier version(s).

the CMS convention was to have a stable base ... and then to apply some
number of update changes.
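
the contrast, reusing the apply_update sketch from above (the deltas
are just forward or reverse edits over the same simplified format):

  # CMS convention: keep the stable base, roll *forward* to a level
  def cms_version(base, updates, level):
      work = base
      for delta in updates[:level]:
          work = apply_update(work, delta)
      return work

  # "downdate" convention: keep the latest, roll *backward* to get an
  # older version
  def old_version(current, downdates, steps_back):
      work = current
      for delta in downdates[:steps_back]:
          work = apply_update(work, delta)
      return work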

in the SOAP case, the drum continued to rotate while the fetched
instruction was executed (before the processor was ready to fetch the
next instruction) ... the objective was to place the next instruction so
it was just coming under the read head when the fetch was issued, rather
than waiting most of a full rotation (as simple sequential placement
would require).
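
a toy model of the placement calculation (assuming the usual account of
the 650: a 2,000-word drum in bands of 50 words, rotational position =
address mod 50, and each instruction naming the address of its
successor ... all the code names are hypothetical):

  DRUM_WORDS = 2000
  BAND = 50      # rotational position of address a is a % BAND

  def optimal_next(addr, exec_word_times, occupied):
      # occupied: set of in-use drum addresses. pick a free address
      # whose rotational slot arrives just as the processor finishes
      # executing the instruction at addr
      target = (addr + 1 + exec_word_times) % BAND
      for extra in range(BAND):            # fall back to later slots
          slot = (target + extra) % BAND
          for base in range(0, DRUM_WORDS, BAND):
              cand = base + slot
              if cand not in occupied:
                  occupied.add(cand)
                  return cand
      raise MemoryError("drum full")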

vm370 paging did something similar, but different, on DASD starting with
the 3330. 3330 paging & spool areas were formatted three 4k records per
track with 19 tracks per cylinder. vm370 tried to maximize queued record
i/o transfers (for the same cylinder) per revolution with a single
channel program. the idea was that if you had a queued request for the
1st record on one track and another for the 2nd record on a different
track ... insert a seek head between the channel command for the 1st
record and the channel command for the 2nd record.
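
roughly like this (CCWs shown as tuples purely for illustration ... a
real channel program is CCWs in storage, and the search/TIC details are
elided):

  def build_channel_program(requests):
      # requests: (track, record) pairs for one cylinder, record 1..3;
      # sorting by record number approximates rotational order, so all
      # the requests can be serviced in (close to) one revolution
      ccws, cur_track = [], None
      for track, record in sorted(requests, key=lambda r: r[1]):
          if track != cur_track:
              ccws.append(("SEEK HEAD", track))  # head switch, same cyl
              cur_track = track
          ccws.append(("SEARCH ID EQ", track, record))
          ccws.append(("READ DATA", 4096))       # or WRITE for page-out
      return ccws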

The problem is that the fetch and execution of a seek head has latency
while the disk continues to rotate (and the 360, 370, etc. channel
architecture precluded channel command word prefetch). The solution was
to format tracks with short "dummy" records between the 4k data records
... increasing the rotational time between the end of the previous data
record transfer and the start of the following data record (allowing
time for the seek head channel command to be fetched and executed).

turns out the 370 channel timing architecture required a 110-byte short
dummy record to provide the latency for fetch & execution of a 3330 seek
head ... but 3330 tracks only had space for three 4k data records plus
101-byte short dummy records. Turns out the 145, 148, 165, and 168 with
the 3830 controller actually could do the head switch in the 101-byte
latency (and many OEM disk controllers could do the head switch
operation in the latency of a 50-byte short dummy record) ... only the
158 integrated channels were so slow that they required the latency of
the full 110-byte short dummy record to perform the head switch
operation.
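
back-of-envelope for how much time those gaps buy (assuming the 3330's
nominal 806KB/sec data rate at 3600rpm ... so bytes passing the head
translate directly to microseconds):

  RATE = 806_000                  # bytes/second past the head
  for gap in (110, 101, 50):
      print(f"{gap:3d}-byte dummy ~= {gap / RATE * 1e6:3.0f} usec")

  # 110 bytes ~= 136 usec, 101 ~= 125 usec, 50 ~= 62 usec ... the
  # window the channel has to fetch & execute the seek head before the
  # next record arrives under the heads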

and what was used for all 303x machines?

channels for all 3031, 3032, and 3033 machines were the channel
director. the channel director was a 370/158 engine with only the
integrated channel microcode and w/o the 370 instruction microcode.

a 3031 was two 370/158 engines ... one dedicated as the channel director
(with only the integrated channel microcode) and one with only the 370
instruction microcode.

a 3032 was a 370/168 configured to work with the 303x channel director.

a 3033 was 168 logic remapped to faster chips.

all 303x i/o therefore suffered from using the slowest of all the 370
channels (the 158 integrated channel engine).

-- 
virtualization experience starting Jan1968, online at home since Mar1970
