[EMAIL PROTECTED] (Clark Morris) writes:
> How do the various source maintenance packages for other platforms
> such as Unix handle the problem?  I'm thinking of CVS and the various
> Integrated Development Environments.  There are differential upgrades
> and other techniques.  I am not familiar with them but realize that I
> am not familiar with most of the tools in the non-MVS environment.

rcs, cvs, etc ... tend to be "down-dates" ... you have the complete
source for the current version ... with control information on how to
regress to earlier versions.
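
as an illustration only, a small python sketch of the reverse-delta
("down-date") idea; this is a guess at the general mechanism, not
rcs's actual file format or algorithm:

  import difflib

  def reverse_delta(new_lines, old_lines):
      # control information needed to regress from the current
      # version back to the older one: just the non-equal opcodes
      # plus the older text they restore
      sm = difflib.SequenceMatcher(a=new_lines, b=old_lines)
      return [(tag, i1, i2, old_lines[j1:j2])
              for tag, i1, i2, j1, j2 in sm.get_opcodes()
              if tag != "equal"]

  def regress(new_lines, delta):
      # rebuild the older version from the complete current source
      out, pos = [], 0
      for tag, i1, i2, older_text in delta:
          out.extend(new_lines[pos:i1])   # unchanged region, copied as-is
          out.extend(older_text)          # restore the older text
          pos = i2
      out.extend(new_lines[pos:])
      return out

the repository then keeps only the complete current source plus a
chain of such deltas back through the earlier versions.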

cms had an "update" command from the mid-60s ... which applied an
update control file to the source, producing a "temporary" updated
file. recent refs:
http://www.garlic.com/~lynn/2006o.html#14 SEQUENCE NUMBERS
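
for flavor, a rough python sketch of the basic mechanism ... my
simplification, not the real command; actual cms update had more
statement types and strict card-column rules, and the (seqno, text)
pairs here just stand in for the sequence number columns:

  def apply_update(source, update):
      # source: list of (seqno, text) cards; update: "./" control
      # cards mixed with the new text cards that follow them.
      # only I(nsert), D(elete), R(eplace) are sketched.
      out = list(source)
      i = 0
      while i < len(update):
          card = update[i].split()            # e.g. "./ R 300 400"
          op, first = card[1].upper(), int(card[2])
          last = int(card[3]) if len(card) > 3 else first
          i += 1
          new = []
          while i < len(update) and not update[i].startswith("./"):
              new.append((None, update[i]))   # new cards: no seqno yet
              i += 1
          at = next(n for n, (s, _) in enumerate(out) if s == first)
          if op == "I":                       # insert after card 'first'
              out[at + 1:at + 1] = new
          else:                               # D/R: drop the seqno range,
              end = at                        # R splices the new text in
              while (end < len(out) and out[end][0] is not None
                     and out[end][0] <= last):
                  end += 1
              out[at:end] = new if op == "R" else []
      # the result is the "temporary" updated file
      return out

e.g. apply_update([(100, "a"), (200, "b")], ["./ R 200", "b-changed"])
yields the updated deck [(100, "a"), (None, "b-changed")].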

this provides a short description of the evolution of the CMS
update command into multi-level source maintenance updates
http://www.garlic.com/~lynn/2006n.html#45 sorting

one of the things that fell by the wayside was an application
that attempted to merge potentially parallel update activity.

during the early days at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

while evolving the cms multi-level source maintenance process ... an
application was written that attempted to merge and resolve parallel
update/maintenance operations.
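
that merge application is long gone, but the basic conflict test is
easy to sketch (python, assuming the same simplified "./" update
format as above; the real application presumably did considerably
more to actually resolve overlaps):

  def touched_seqnos(update):
      # set of base-source sequence numbers an update's control
      # cards insert after, delete, or replace
      touched = set()
      for line in update:
          if line.startswith("./"):
              card = line.split()
              first = int(card[2])
              last = int(card[3]) if len(card) > 3 else first
              touched.update(range(first, last + 1))
      return touched

  def can_auto_merge(update_a, update_b):
      # parallel updates combine cleanly only when they touch
      # disjoint parts of the base source; any overlap needs a
      # human to resolve
      return touched_seqnos(update_a).isdisjoint(touched_seqnos(update_b))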

the infrastructure evolved out of a joint project between cambridge
and endicott to add 370 virtual machine support to cp67. cp67 provided
virtual 360 and virtual 360/67 (i.e. virtual memory) virtual machines
... but 370 was going to announce virtual memory (it was something
like two years away). the 370 virtual memory definition had various
differences from 360/67. the idea was to quickly implement 370 virtual
machines (with 370-defined virtual memory hardware tables) under cp67
(running on 360/67).
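
the general trick (shadow tables) is easy to state, though the sketch
below uses made-up structures ... not the actual 360/67 or 370 table
formats: the host walks the guest's virtual memory tables and builds
equivalent tables in its own hardware's format.

  def build_shadow(guest_page_table, guest_real_to_host_real):
      # guest_page_table: guest-virtual page -> guest-"real" page,
      # in the guest's (370) format; the host composes it with its
      # own mapping of guest-"real" pages to host-real pages,
      # producing a table the (360/67) hardware can actually use
      shadow = {}
      for vpage, greal in guest_page_table.items():
          shadow[vpage] = guest_real_to_host_real[greal]
      return shadow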

the multi-level initially consisted of (see the stacking sketch after
the list)

1) normal set of updates and enhancements built on base cp67 source,
("cp67l" system)

2) set of updates applied to normal cp67 that added support for 370
virtual machine option ("cp67h" system)

3) set of updates that modified cp67 kernel to run on 370 hardware
(rather than 360/67 hardware; "cp67i" system)
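
i.e. the levels are cumulative ... each system is the base source
plus every update set up through its level. a hedged python sketch
(the names and stack layout are illustrative; the real process was
driven by cms control files):

  UPDATE_STACKS = {
      "cp67l": ["l-updates"],
      "cp67h": ["l-updates", "h-updates"],
      "cp67i": ["l-updates", "h-updates", "i-updates"],
  }

  def build_system(base_source, system, read_update, apply_update):
      # apply each update set in the stack, in order, to the base
      # source; apply_update could be the sketch shown earlier
      src = base_source
      for name in UPDATE_STACKS[system]:
          src = apply_update(src, read_update(name))
      return src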

part of the issue was that the cambridge cp67 system hosted some
number of students (mit, bu, harvard, etc) and other non-employees in
the boston area. since 370 virtual memory hadn't been announced yet,
it was being treated as super sensitive corporate information and
there was no desire for it to leak to non-employees.

as a result, only the #1 kernel typically ran on the real hardware.
the #2 kernel would run in a 360/67 virtual machine, isolated from the
prying eyes of the students and other non-employees. for testing, the
#3 kernel would then run in a 370 virtual machine (under the #2
kernel, running in a 360/67 virtual machine under the #1 kernel, which
ran on the real machine).

so a potential problem was that new updates might be introduced at
the "#1 level" (earlier in the update sequence) that impacted updates
applied later in the update sequence (i.e. development on the base
system was going on independently of the changes supporting 370
virtual machines).

as an aside, "cp67i" was up and running in normal operation a year
before the first engineering 370 machine with virtual memory support
was operational. then as real 370 machines with virtual memory support
became available internally (still well before first customer ship),
"cp67i" was the standard operating system running on those (real)
machines ... at least until the vm370 morph became available (and some
of the other operating system development got far enough along to move
from testing in virtual machines to real machine operation).
