re:
http://www.garlic.com/~lynn/2011o.html#33 Data Areas?

stage1 os/360 sysgen was a 40-100 card assembler (macro) program that
was assembled and produced possibly 1000-2000 cards as stage2. the
stage1 "macros" didn't generate any machine code ... it was all "punch"
statements that generated their own card images. stage2 was a large
number of job steps, mostly iebcopy & iehmove ... that took mostly PDS
members from libraries on the "starter system" disk to the target
production disks.
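
as a rough sketch of the flavor (hypothetical macro, dataset and member
names ... not the actual sysgen macros, which were far more elaborate,
using conditional assembly to pick what got punched from the options
coded in the stage1 deck), a stage1 macro generated no machine code at
all, just PUNCH statements whose card images were themselves stage2
JCL:

         MACRO
         GENLIB &MEMBER,&FROM=STARTER,&TO=PRODLIB
* no machine code ... every model statement is a PUNCH whose operand
* is a stage2 card image (here, an IEBCOPY job step)
         PUNCH '//COPYLIB EXEC PGM=IEBCOPY'
         PUNCH '//SYSPRINT DD SYSOUT=A'
         PUNCH '//IN       DD DSN=&FROM,DISP=SHR'
         PUNCH '//OUT      DD DSN=&TO,DISP=OLD'
         PUNCH '//SYSIN    DD *'
         PUNCH '  COPY OUTDD=OUT,INDD=IN'
         PUNCH '  SELECT MEMBER=&MEMBER'
         PUNCH '/*'
         MEND
* the stage1 deck is then just a series of such macro calls ...
* assembling it punches the stage2 job steps
         GENLIB IEFBR14
         GENLIB IEBGENER
         END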

starting with sysgen for release11 ... I would take the stage2 card
output and rework it so I could run it in the production jobstream (with
HASP, instead of in the starter system) and also significantly
re-organize the steps and the member copy/move. this was at a univ with
a large number of student jobs ... predating watfor. The univ. had been
running student fortran jobs on IBSYS/709 tape->tape with a 1401
front-end that did tape<->unit record (tapes manually moved between the
1401 drives and the 709 drives). On the 709, student jobs ran in a
second or two elapsed time. With the initial move to os/360 on a
360/65, the elapsed time increased to more like 100 seconds (without
HASP). Adding HASP got the elapsed time down to almost 30 seconds.
Extensive hand-crafting of stage2 ... so that files and members were
ordered for optimal disk arm seek ... got the elapsed time down to
approx. 13 seconds (almost three times faster). For small student
fortran G 3-step jobs, the majority of the time was spent doing job
scheduler related stuff that was extraordinarily disk arm intensive.

part of an old presentation I gave at the fall '68 SHARE meeting
http://www.garlic.com/~lynn/94.html#18

The machine was actually a 360/67 but ran nearly all the time as a
360/65, although I was allowed to play with cp67 on the weekends. The
presentation includes measurements of the extensive rewrite of cp67
code that I did during the spring and summer of 1968, in addition to
the performance of os/360 running under cp67 and of os/360 running on
the bare machine (as well as the extensive work I did on os/360 sysgen
for optimal arm seek operation).

One of the problems was that most PTFs were PDS member replacements
... a replaced member gets written in new space at the end of the
dataset, so replacements could totally destroy the optimal arm seek
ordering over a period of several months (throughput degraded to
almost that of an unoptimized system, and a periodic regen could be
required just to re-establish optimal throughput).

all of cp67 started out being purely source; it wasn't until later that
an object/pre-assembled option was added. many customers defaulted to
assembling it. besides virtual machines and a bunch of online stuff,
the internal network also came out of the science center ... misc
past posts mentioning science center
http://www.garlic.com/~lynn/subtopic.html#545tech

GML was also invented at the science center in 1969 ... and GML tag
processing was added to CMS script (the dot-command formatting
processor ... a port from ctss runoff). A decade after invention, GML
morphed into the ISO international SGML standard ... and after another
decade SGML morphed into HTML (at CERN). misc. past posts mentioning
GML, SGML, etc
http://www.garlic.com/~lynn/submain.html#sgml

misc. past posts mentioning internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

one of the early uses of the network technology was distributed
development between Cambridge and Endicott to add virtual 370 machine
support to cp67. Part of the effort also resulted in development of
the cms-based multi-level source update process. The Cambridge 360/67
ran the cp67 "L" system. A 360/67 virtual machine would run the cp67
"H" system (updates to add support for a 370 virtual machine as an
alternative to a 360/67 virtual machine; the issue was to not expose
base cp67 users ... which included non-employees/students from
universities in the Boston area ... to 370 virtual memory, which hadn't
been announced yet). In an "H" system 370 virtual machine ran the CP67
"I" system (i.e. cp67 code changed to run with the 370 hardware
definition rather than the 360/67 hardware definition). CP67 "I" was
running in normal operation a year before the first 370 engineering
machine with virtual memory support was operational. Later, as an
increasing number of 370 machines became available internally, most of
them ran cp67I (also well before the vm370 virtual machine product
became available).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

