Re: Before the PC: IBM invents virtualisation (Cambridge skunkworks)

2011-07-16 Thread Anne Lynn Wheeler

On 07/14/11 23:16, Boyes wrote:

http://www.theregister.co.uk/2011/07/14/brief_history_of_virtualisation_part_2/

Somebody who actually gets it that there was a world before the PC. Few minor
nits, but overall an actually decent article on the role VM played in 
prefiguring
virtualization before VMWare.  Recommended reading for the glossy
magazine set.


there is forum/blog for the article:
http://forums.theregister.co.uk/forum/1/2011/07/14/brief_history_of_virtualisation_part_2/

I've added several ... also archived here:
http://www.garlic.com/~lynn/2011i.html#63

--
virtualization experience starting Jan1968, online at home since Mar1970


HILLGANG (history) Presentation

2011-03-19 Thread Anne Lynn Wheeler

Earlier this week I converted an old OCT86 SEAS (european share) presentation on VM History &
Performance from foildoc to powerpoint ... with a couple additions ... and
presented it at the 16Mar HILLGANG meeting. Exported PDF copy here:
http://www.garlic.com/~lynn/hill0316g.pdf

a little discussion of the foildoc (presentation) in this recent a.f.c. post
http://www.garlic.com/~lynn/2011e.html#20

some of the period also discussed in this (linkedin) MainframeZone group 
discussion:
http://lnkd.in/ku-thX

--
virtualization experience starting Jan1968, online at home since Mar1970


Re: Central vs. expanded storage

2010-09-25 Thread Anne Lynn Wheeler

Kris Buelens wrote:

There is handshaking between Linux and VM, and even more than one flavor

The reason that z/VM still likes to have some expanded storage is that the
management of central and expanded storage are different:
For expanded, CP has a time stamp and knows exactly how old each page is.
For central storage there is only the reference bit, thus CP can only know
whether the page was referenced since the last scan.


re:
http://www.garlic.com/~lynn/2010n.html#39 Central vs. expanded storage

least-recently-used approximation replacement (whether a page was used since
the last scan or not) ... is based on the assumption that recency of use is a
predictor of the probability that the page will be needed in the near future.

as pages age past a certain point ... differences in their age (since
last use) become a less reliable differentiator (as to higher or lower
probability of being used in the near future) ... and some sort of
pseudo-random selection can actually outperform strict ordering (some of these
are scenarios where LRU devolves to FIFO ... and random performs better
than straight FIFO).

in the 70s, there were a number of places that looked at multiple bits
... effectively one per scan, possibly one hardware bit and one or
more software bits, where RRB (reset reference bit) becomes more like a
logical shift instruction.

One of the issues is that if it takes too long to do a complete scan ...
then there is little differentiation being made between pages.
Splitting memory into regular storage and extended storage ... makes regular
storage smaller and therefore the scan goes faster. Having multiple
bits (more history) also tends to make the scan go faster ... since
the additional history tends to require the scan to look at more
pages each time.
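a rough sketch (purely illustrative python, mine ... not actual CP or VM
code) of the multiple-bit history idea ... each scan shifts the hardware
reference bit into a per-page software history register, RRB-as-logical-shift
style:

    HISTORY_BITS = 8

    def scan(ref_bits, history):
        # one pass: fold each hardware reference bit into the software history
        for i, used in enumerate(ref_bits):
            history[i] = (history[i] >> 1) | (used << (HISTORY_BITS - 1))
            ref_bits[i] = 0                  # reset, like the 370 RRB
        return history

    def victim(history):
        # smallest history value == least-recently-used approximation
        return min(range(len(history)), key=history.__getitem__)

    ref, hist = [1, 0, 1, 0], [0, 0, 0, 0]
    scan(ref, hist)                          # pages 0 and 2 touched this interval
    print(victim(hist))                      # page 1 ... unreferenced the longest

pages carrying several scans' worth of history sort into many more recency
classes than a single used/not-used bit allows.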

Again from the early 70s, another approach is to offset the testing of the
reference bit from the resetting of the reference bit ... say by 1/2 or 1/4
the number of pages. This gave rise to the name "clock" in the early 80s
... i.e. two hands rotating around storage pages ... one resetting and
the other testing ... rather than a single hand resetting and
testing simultaneously; while I had done something similar in the late
60s and early 70s ... clock was a stanford phd thesis in the early 80s.
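a minimal sketch (mine, illustrative only ... not CP67/VM370 or the stanford
code) of the two-hand idea ... a resetting hand leading a testing hand by a
fixed offset; a page whose bit is still clear when the testing hand arrives
hasn't been referenced since the resetting hand passed:

    class TwoHandedClock:
        def __init__(self, nframes, offset):
            self.ref = [False] * nframes        # hardware reference bits
            self.page = [None] * nframes        # which page occupies each frame
            self.nframes = nframes
            self.reset_hand = offset % nframes  # leads the test hand
            self.test_hand = 0

        def touch(self, frame):
            self.ref[frame] = True              # hardware sets bit on access

        def replace(self, new_page):
            while True:
                self.ref[self.reset_hand] = False           # leading hand resets
                self.reset_hand = (self.reset_hand + 1) % self.nframes
                victim = self.test_hand                     # trailing hand tests
                self.test_hand = (self.test_hand + 1) % self.nframes
                if not self.ref[victim]:                    # not used since reset
                    self.page[victim] = new_page
                    return victim

the offset (1/2 or 1/4 of the pages) controls how much time a page has to
prove it is still in use before the testing hand reaches it.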

At the time of the stanford phd ... there was some academic opposition
to giving a degree in that specific area ... and I was asked to
provide some of my supporting studies from more than a decade
earlier. An old post
http://www.garlic.com/~lynn/2006w.html#46
with copy of communication from the time:
http://www.garlic.com/~lynn/2006w.html#email821019

Besides doing VM, GML, and a bunch of online & conversational stuff, the
science center had done a lot of work on performance monitoring,
workload & configuration profiling, and stuff that would turn into
capacity planning. This included various kinds of system simulators
and analytical modeling. One of the system simulators used
instruction/storage traces for simulating a variety of page replacement
algorithms ... including exact LRU (aka maintaining exact LRU
ordering of *every* page based on each & every reference). In the
early 70s, I had come up with a variation on clock which would always
beat exact LRU (coming closer to Belady's OPT ... which, given
foreknowledge of program execution, would always choose the page for
replacement that resulted in the fewest total page faults).
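a hedged sketch of that kind of trace-driven comparison (illustrative python,
not the actual simulators) ... exact LRU maintaining full recency order on
every reference, vs belady's OPT which, given foreknowledge of the trace,
always evicts the page whose next use is farthest in the future:

    def lru_faults(trace, nframes):
        frames, faults = [], 0
        for p in trace:
            if p in frames:
                frames.remove(p)            # move to most-recent position
            else:
                faults += 1
                if len(frames) == nframes:
                    frames.pop(0)           # evict least recently used
            frames.append(p)
        return faults

    def opt_faults(trace, nframes):
        frames, faults = set(), 0
        for i, p in enumerate(trace):
            if p not in frames:
                faults += 1
                if len(frames) == nframes:
                    def next_use(q):        # position of next reference
                        try:
                            return trace.index(q, i + 1)
                        except ValueError:
                            return float('inf')
                    frames.remove(max(frames, key=next_use))
                frames.add(p)
        return faults

    trace = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(lru_faults(trace, 3), opt_faults(trace, 3))   # 10 7 ... OPT never loses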


Re: Central vs. expanded storage

2010-09-24 Thread Anne Lynn Wheeler

On Thu, Sep 23, 2010 at 2:14 AM, O'Brien, Dennis L
dennis.l.o'br...@bankofamerica.com wrote:

I heard from a couple of performance people at SHARE that we should have
20% to 25% of the total storage in an LPAR configured as expanded
storage.  Naturally, that's a guideline and the proper amount varies by
workload.  What should I look at to determine if we have enough expanded
storage?  We use Velocity's ESALPS suite.  The systems that I'm most
concerned about have a Linux guest workload.  One of them is all WAS,
and the other is a mix of WAS, Oracle, and some other things.

I've heard that WAS isn't the best choice for System z, but that's not
the focus of my concern.  We have the workload that we have, and I just
want to make it run as well as it can.


expanded store was originally done for the 3090 because of physical
packaging problems ... it was not possible to locate all the memory
they needed for the configuration within the latency of the standard
memory bus ... so they created the expanded store bus, which was wider &
longer ... and used software control to move 4k pages back&forth
between regular storage and expanded store. a synchronous instruction
was provided for moving the data back&forth.

the expanded store bus was also used to attach HIPPI (100mbyte/sec)
channels/devices ... since the standard 3090 i/o interface couldn't
handle the data-rate. However, since the bus didn't support channel
programs ... there was a peculiar (pc-like) peek/poke convention used
(i.e. i/o control was done by moving 4k blocks to/from special
reserved addresses on the bus).

moving forward (after physical packaging was no longer an issue)
... the expanded store paradigm has been preserved because of software
storage management and/or storage addressing deficiencies.

effectively, the expanded store paradigm is used to partition real storage
into different classes ... however, going back at least 40yrs
... there is a large body of data showing that a single large store is
more efficient than partitioning the same amount of storage (assuming
that there aren't other storage management issues/shortcomings).

the simple scenario: equal numbers of regular storage pages and expanded
storage pages ... all occupied; when there is a requirement for a page that is
in expanded storage, it is swapped with a page in regular storage
(incurring some software overhead). The alternative is one large block
of twice as many pages ... all directly executable ... which doesn't require
swapping any pages between expanded store and regular store.
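a toy model (an illustration of mine, not measured data) of the swap overhead
in the partitioned case ... every reference to a page resident in expanded
storage costs a synchronous 4k swap, while a single unified store of the same
total size would run every resident page directly with zero swaps:

    def swaps_partitioned(trace, central, expanded):
        swaps = 0
        for p in trace:
            if p in expanded:                 # page not directly executable
                q = central.pop(0)            # displace some central page
                expanded.remove(p)
                central.append(p)
                expanded.append(q)
                swaps += 1                    # software-overhead page swap
        return swaps

    central = list(range(0, 4))               # 4 directly-executable pages
    expanded = list(range(4, 8))              # 4 pages needing a swap to use
    print(swaps_partitioned([5, 6, 5, 7, 4, 5], central, expanded))  # 4

one unified store of all 8 pages: the same reference string, zero swaps.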

One of the issues is dealing with applications and/or operating
systems that perform their own caching/paging using some
sort of LRU mechanism (i.e. replacing their own pages/records using
some approximation to least-recently-used). This is characteristic of
large DBMS infrastructures that manage records in their own cache as
well as operating systems that support virtual memory. There is a
pathological scenario if the virtual operation doesn't have all its
own dedicated storage (like in LPARs); VM will be managing virtual
pages using an LRU methodology (least-recently-used pages are the ones
selected for replacement) ... at the same time the virtual guest/DBMS
is also managing (what it thinks is real storage) with an LRU
methodology. If both are operating simultaneously ... it is possible
for VM to replace what it thinks is the least-recently-used page
(the virtual page least likely to be used) ... at the same time the
virtual guest/DBMS has decided that same page is exactly the next page
it wants to use.

Executing LRU replacement algorithms in a virtual guest/DBMS ... where
its storage is also being managed via an LRU replacement algorithm
... can invalidate the assumption underlying LRU replacement
algorithms ... that the least-recently-used page is the least likely
to be used (a virtual guest/DBMS doing its own LRU algorithm is
likely to select the least-recently-used page as the next page most likely
to be used).
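a small illustrative sketch (my construction, not from any actual
measurement) of the pathology ... a guest cycling thru its pages LRU-fashion,
so the page the hypervisor's LRU policy picks as victim (least recently used)
is exactly the page the guest touches next:

    def host_lru_faults(trace, host_frames):
        resident, faults, clock = {}, 0, 0
        for p in trace:
            clock += 1
            if p not in resident:
                faults += 1
                if len(resident) == host_frames:
                    victim = min(resident, key=resident.get)  # host LRU choice
                    del resident[victim]
            resident[p] = clock                # record last-use time
        return faults

    guest_trace = [0, 1, 2, 3] * 8             # guest round-robins 4 pages
    print(host_lru_faults(guest_trace, 3))     # 32 faults for 32 references

every single guest reference faults ... random replacement would do better on
average here, since LRU's victim is always precisely the guest's next
reference.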

misc. past posts mentioning expanded store
http://www.garlic.com/~lynn/2000c.html#61 TF-1
http://www.garlic.com/~lynn/2001k.html#73 Expanded Storage?
http://www.garlic.com/~lynn/2001k.html#74 Expanded Storage?
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2004e.html#2 Expanded Storage
http://www.garlic.com/~lynn/2004e.html#3 Expanded Storage
http://www.garlic.com/~lynn/2004e.html#4 Expanded Storage
http://www.garlic.com/~lynn/2006.html#13 VM maclib reference
http://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
http://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
http://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006b.html#16 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006b.html#17 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006b.html#18 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006b.html#34 Multiple address spaces

3090 service processor & 370 virtual memory (long winded x-over from ibm-main)

2009-12-17 Thread Anne Lynn Wheeler

re:
http://www.garlic.com/~lynn/2009r.html#18 Portable data centers

the modifications for vm370 release 6 to be the service processor started
well before the 3090 came out. it was a copy of standard vm370 release 6
(predates vm/sp), frozen (with respect to the standard product), and then
various enhancements were added ... like interfaces to all
the diagnostic hardware that was going to be in the 3090. i provided
some number of tools and other stuff supporting the effort. Some of the
stuff was things I had done for the disk engineering & product test labs
... to eliminate a large class of failures (they had been running
stand-alone with a trivial monitor ... after having tried to use MVS and
experienced 15min MTBF with a single testcell):
http://www.garlic.com/~lynn/subtopic.html#disk

At the time they started the effort ... they gathered up as much
stuff as they could ... anticipating that vm370 release 6 would not
stay current thru the lifetime of the 3090 (TROUT); some past posts with
other old email from the TROUT period:
http://www.garlic.com/~lynn/2006j.html#27 virtual memory
http://www.garlic.com/~lynn/2006j.html#31 virtual memory

i was blamed for online computer conferencing on the internal network
in the late 70s and early 80s. The POK engineering manager that headed
up the 3090 service processor ... had observed all the issues with the
3081 service processor ... having to write everything from scratch, and
was a big proponent of using readily available tools as much as possible
(i.e. the 3090 service processor screens were actually CMS IOS3270). In any
case, the guy heading up the effort also became somewhat active in some
of the computer conferencing and took some amount of hits for the
activity (although not nearly as much as I did). He also took hits on
scope-creep in the effort and growing demand for people and resources.

The DOS Emulator feature had base/bound (significantly simpler than all the
segment and page table stuff; just check the address against the bound
... and then add in the base) ... not for paging but for address
translation ... like some LPAR implementations ... it still required a
contiguous amount of real memory.
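a minimal sketch of base/bound translation as described (names illustrative,
not from any IBM manual):

    def translate(vaddr, base, bound):
        if vaddr >= bound:
            raise MemoryError('address exceeds bound')   # protection check
        return base + vaddr                              # relocation

    # a guest given 512k of contiguous real storage starting at 1mbyte
    print(hex(translate(0x1234, base=0x100000, bound=0x80000)))  # 0x101234

compare that one compare-and-add per access against walking segment and page
tables.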

I was an undergraduate in the 60s ... but still doing a lot of work on both
os/360 (responsible for the academic and administrative systems at the
univ. ... including doing a highly customized os/360 system with careful
placement of datasets and PDS members to optimize arm seek ... getting
approx. three times thruput improvement for student jobs) and cp67 ... where
I was allowed to rewrite lots of the kernel code.

Anycase, recent posts about Boeing trying to move all of their
dataprocessing into BCS (the fledgling BCS started out in the boeing
corporate hdqtrs administrative area ... which had a single 360/30
for payroll):
http://www.garlic.com/~lynn/2009r.html#43 Boeings New Dreamliner Ready For 
Maiden Voyage
http://www.garlic.com/~lynn/2009r.html#44 Boeings New Dreamliner Ready For 
Maiden Voyage
http://www.garlic.com/~lynn/2009r.html#45 Boeings New Dreamliner Ready For 
Maiden Voyage

and I got dragged into it. I was con'ed into giving a one-week class
(during spring break, '69) to the fledgling BCS technical staff (and the
ibm technical support team). I was then brought in as a full-time BCS
employee for the summer of '69. Part of the responsibility was installing
cp67 operation in the corporate hdqtrs machine room (which until then just
had the 360/30 for payroll). Part of BCS was to take over the renton
datacenter (a little corporate internal politics) ... which was the
largest operation I had seen ... summer of '69 there were always pieces
for 2-3 360/65s staged around in the hallways ... because 360/65s were
arriving faster than they could be installed.

In any case, after 370s were available ... but not yet with virtual
memory support ... one of the IBM SEs on the boeing account did a
hacked version of cp67 to use the 370 DOS Emulator (aka address
base+bound, contiguous real storage) ... again much more like LPAR
support. He did do complete swaps of a virtual machine address space
(i.e. the virtual machine size had to match the base+bound contiguous area)
... so it could run more virtual machines than there was total real storage
available.

As mentioned in the above ... summer of '69 ... Boeing also moved the
360/67 multiprocessor from Huntsville to Seattle (this was separate from
the 360/67 uniprocessor installed in corporate hdqtrs). Huntsville had
been using it to run a highly modified version of MVT release 13. The
problem was that MVT had a significant problem with storage fragmentation
for long running jobs. Huntsville had a large collection of 2250s with a
long running graphics application. The hack to MVT was to use the 360/67
address translation hardware to re-arrange real storage to appear
contiguous (no paging) ... this was different from the early 370 hack
to cp67 that used base+bound (instead of full address translation) and
real contiguous storage for primitive virtual machine swapping.

Before virtual 

Re: spool file data

2009-12-01 Thread Anne Lynn Wheeler

Another application by the VMSG author was parasite/story. It used the PVM logical device/3270
facility and had a HLLAPI-type scripting language (before the advent of the ibm/pc) ... it was
done after REX was available internally. Another remarkable thing was that the executable
was small enough to fit in the CMS transient area.

some old stories ... including automatic logging into RETAIN and retrieving PUT
buckets:
http://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
http://www.garlic.com/~lynn/2001k.html#36 Newbie TOPS-10 7.03 question

for other topic drift ... in the early 80s I did a re-implementation of the
spool file system, written in vs/pascal and running in a virtual machine. a problem
was that the HSDT project was putting in multiple T1 (and higher speed) links ...
with aggregate thruput requirements of a megabyte/sec or more. This was
effectively impossible with RSCS and the existing spool file system. The spool
file interface was a synchronous transfer of one 4k block at a time. On a system with
competing transfers, an RSCS transfer might be stuffed in queue with 3-4 other
operations ... giving RSCS possibly 4-5 4kbyte transfers/sec; I needed possibly
200-300 4kbyte transfers/sec for RSCS.
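a quick back-of-envelope check of those numbers (my arithmetic, purely
illustrative):

    blk = 4 * 1024
    print(5 * blk // 1024, 'kbytes/sec')    # ~20kbytes/sec ... existing spool path
    print(250 * blk // 1024, 'kbytes/sec')  # ~1000kbytes/sec ... HSDT requirement

i.e. the existing interface was roughly a factor of 50 short of the
megabyte/sec target.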

I had my page-mapped filesystem for CMS with things like contiguous allocation and
efficient multiple block transfers; while CMS might view the interface as
synchronous ... playing games with page table entries allowed for
asynchronous transfers (a CMS application with moderate filesystem use had
something like three times the thruput using the page-mapped filesystem compared to
the normal CMS filesystem running on exactly the same hardware). i could leverage that
interface for the virtual machine spool file implementation ... taking the
spool file assembler kernel implementation, redoing it as a virtual machine
pascal implementation, and getting at least 100 times increased thruput.

The additional challenge was that since RSCS transferred the whole 4k spool file
block ... all of the high-thruput changes & modifications had to be done so
that they were transparent and interoperated with RSCS running on unmodified vm370
systems.

old posts on the subject in this mailing list
http://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device 
(was Re: removal of paging device)
http://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX

other posts on the subject:
http://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
http://www.garlic.com/~lynn/94.html#29 CP spooling & programming technology
http://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long 
post warning)

misc. past HSDT posts
http://www.garlic.com/~lynn/subnetwork.html#hsdt

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970


Re: CMS IPL ( other misc)

2009-07-15 Thread Anne Lynn Wheeler

On 07/14/2009 01:00 AM, IBMVM automatic digest system wrote:

Version 3 was the first CMS that could not be IPL'ed on the iron, I
think.  Someone should ask Lynn Wheeler.


re:
http://www.garlic.com/~lynn/2009j.html#67 DCSS
http://www.garlic.com/~lynn/2009j.html#68 DCSS addenda

CMS started out with 256kbyte (virtual) machine operation (on a real 360/40).

the original virtual machine system (at the science center) was cp/40, done on a
(256kbyte) 360/40 that had a special hardware modification to support virtual memory.

while cp/40 was being developed ... cms was also being developed ... running on the
bare 360/40 in non-virtual memory mode.

when the science center replaced the 360/40 with a 360/67 (standard product, basically a
360/65 with hardware modifications to support virtual memory) ... cp40 morphed into
cp67.

when 3 people from the science center came out to install cp67 at the univ the last week
of jan68 ... all source was kept on cards, loaded into os/360, and assembled under os/360
producing physical TXT decks ... which were combined together in a card tray with
a modified BPS loader in the front. The physical cp67 deck was loaded into the 2540 card
reader and IPL'ed. after the BPS loader got the CP67 TXT decks into memory ... it would
transfer to the last program ... CPINIT (in vm370, DMKCPI) ... which would write
the core image to a specified disk location and write the IPL CCW sequence to the
IPL disk.

Distribution was os/360 tapes.

CMS would run in a 256kbyte virtual machine or on the bare hardware.

Part of the issue was that both CP40 (and then CP67) and CMS were being developed in
parallel ... with the original source, compiles, etc ... all being done on os/360.

Sometime by summer 68, the science center had moved to having source as CMS files
and assembling on CMS to produce TXT decks. Physical TXT decks were still being
kept in a card tray and physically IPL'ed to build a new IPL'able kernel.

By that summer, I had done a lot of CP67 kernel pathlength work ... especially
targeted at OS/360 running in a CP67 virtual machine. Old post with part of the presentation
I gave at the Aug68 SHARE meeting (held in Boston) ... lots of the changes
were picked up by the science center for standard cp67 and shipped
http://www.garlic.com/~lynn/94.html# CP/67 & OS MFT14

I was also doing very carefully crafted OS/360 stage2 sysgens. I originally would get
the stage2 card deck output from the OS/360 stage1 assembly ... and reorder
all the statements to achieve a careful ordering of the resulting generated system
files on disk (to optimize arm seek operation).

Later in 68, I looked at doing some pathlength enhancements for the CMS environment
(as well as starting on dynamic adaptive resource management, new page replacement
algorithms, new scheduling algorithms and other stuff). Lots of CMS
operation was simplified (compared to OS/360) ... so the major (cp67) pathlength
overhead was in CMS disk I/O channel program (CCW) translation. I originally
defined a new CCW op-code where a single CCW specified all the parameters
for the seek/search/tic/read/write operation ... drastically reducing the
channel program translation overhead. I also noticed that CMS didn't do
any multitasking ... it just did a serialized wait for the disk i/o to complete.
So I gave this new CCW op-code serialized semantics (i.e. it actually
returned to the virtual SIO after the I/O had completed, with CC=1, CSW stored).

I got a lot of push back from the science center about having violated the
virtual machine architecture (a CCW that wasn't defined in any hardware
manual). They explained that the appropriate way to violate the 360
principles of operation was with the diagnose instruction ... which
was defined as having a model-dependent implementation. The fiction was
then to define a virtual machine hardware model ... where the
operation of the diagnose instruction was according to the virtual
machine (model) specification.

CMS was modified to use a diagnose instruction at startup to determine
whether it was running in a virtual machine or (instruction failed)
on a real machine. If running in a virtual machine, it would be set up
to use the diagnose instruction for disk i/o (semantics about the
same as my special CCW); otherwise SIO (& interrupts) for disk I/O.

In the initial conversion to VM370 (release 1), CMS (cambridge monitor
system) was renamed CMS (conversational monitor system), and
the test for running in a virtual machine was removed, as well
as the code to use SIO (& interrupts) for disk I/O ... eliminating
CMS's ability to run on bare hardware.

In cp67 there was a facility for saving named virtual memory
pages and ipl-by-name of those saved pages. The NAME specifications
were part of a kernel module (renamed DMKSNT for VM370). In cp67,
the named tables specified the range of virtual pages to be saved
(and the disk location where they were to be saved). The ipl-by-name
would modify the virtual memory tables to point to the specified disk
location (with the RECOMP bit ... i.e. the disk location was R/O to
the page replacement algorithm ... 

Re: IBMVM Digest - 12 Jul 2009 to 13 Jul 2009 (#2009-188)

2009-07-15 Thread Anne Lynn Wheeler

Jeff wrote:

I'm definitely no substitute for Sir Lynn, but I remember DCSS and DMKSNT in
VM/370 Release 3 PLC 8, which is where I started with VM.

In fact, I used CMSAMS and CMSVSAM then for Unnatural Practices, or at least
not for the purposes for which they were created. I was porting the CP/67
port of LISP/MTS to VM/370, and needed something to replace the named
segment used under CP/67 for LISP's pushdown stack. Instead of checking the
stack pointer for the end of the stack, it would just push onto the stack
and take the program check when it ran off the end. I simulated that by
using DIAG x'64' to attach CMSAMS and CMSVSAM, and then set the protect key
to user key for all but the last 2K (remember 2K pages?) page.

A LISP interpreter written entirely in BAL, with self-modifying code and
almost out of base register addressability... that was quite an interesting
piece of code.


re:
http://www.garlic.com/~lynn/2009j.html#67
http://www.garlic.com/~lynn/2009j.html#68
http://www.garlic.com/~lynn/2009j.html#76

and post in similar thread that I started in linkedin mainframe discussion
http://www.garlic.com/~lynn/2009j.html#73

I mentioned that Cambridge had done a port of apl\360 to (cp67) cms for
cms\apl. This then became one of the main vehicles for delivering
sales & marketing support on (virtual machine based) HONE (first
cp67 and then moved to vm370 ... eventually with HONE clones all
over the world) ... some past HONE posts

a quick named version was done by getting APL started, halting it
at a certain point, and saving a named system ... that
was not only CMS but also APL. Then when IPL'ed ... the machine
was placed immediately at a point in APL (a special place chosen
so it would do some last-minute housekeeping and setup). One of the
univ. did something similar for an ipl-by-name version of os/360.

for vm370 ... palo alto science center did a lot of additional
stuff for apl\cms (including the apl microcode assist on the 370/145).

For early vanilla vm370, HONE started out with ipl-by-name
apl\cms ... with the addition that the cms shared segment was
defined, as well as most of the APL executable module and even
some APL workspace. An early problem: HONE had some non-APL applications,
and there was an issue with trying to explain to salesmen (who were mostly
hardly computer literate users) how to IPL CMS ... to execute
non-APL applications, and then IPL APL ... to get back into the
normal (APL-based) sales & marketing environment.

When I started distributing CSC/VM with the enhancements,
HONE was one of the major internal clients (they even con'ed
me into doing several of the early clone installations
around the world). With the paged-mapped filesystem and
the enhanced changes ... most sales & marketing people could
IPL normal CMS and have their profile set up to immediately
execute APL (a cms executable from the S or Y disk that happened
to be in paged-mapped format) ... and all the page mapping
and shared segment setup was done as part of normal CMS
program loading.

Then it was possible to have APL processes that would invoke
and execute non-APL applications ... even placing the user
in the non-APL application environment w/o having to explain
to the user about the IPL command or some of the other non-APL
details (there was a large sales & marketing APL application called SEQUOIA
that actually hid nearly all APL & CMS characteristics from the
sales & marketing people ... it was even possible that many
sales & marketing people never realized that they were using
APL and/or CMS).

I've several times told the story that from the middle 70s until
at least the middle 80s ... every couple yrs there would be a promotion of some
sales/marketing person to head of the dataprocessing business
unit that included HONE. They would be startled to find that
HONE was VM370/CMS based and figure that they could make their
career by forcing HONE to be ported to MVS. This would consume
the HONE organization for possibly 12 months until it was extremely
evident that it wasn't practical. Then things would almost return
to normal until the next person was promoted into the position.

As I've mentioned ... a small subset of the (shared/named) capabilities
was shipped in vm370 release 3 as DCSS. It was then possible
for (normal) customers to define APL as a named system w/o
requiring the paged mapped filesystem support.

One of the early issues with the port of APL\360 for CMS\APL ...
there was a big performance issue with how APL did storage
allocation and (periodic) garbage collection. The problem
wasn't noticed in the real workspace environment of APL\360,
where the whole (small) workspace was swapped as a single unit.
CMS\APL opened the workspace up to nearly the full virtual
machine size (which might be 16mbytes) ... and the garbage
collection performed terribly in a virtual memory paged
environment ... and had to be significantly redone.
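an illustrative sketch (my toy model, not the actual APL internals) of why
the APL\360-style allocator hurt under demand paging ... every assignment
took fresh storage, marching thru the whole workspace before garbage
collecting, so the working set became the entire workspace:

    PAGE = 4096

    def pages_touched(workspace_size, value_size, n_assignments):
        touched, cursor = set(), 0
        for _ in range(n_assignments):
            if cursor + value_size > workspace_size:
                cursor = 0                     # garbage collect, start over
            touched.add(cursor // PAGE)        # first page of the new value
            touched.add((cursor + value_size - 1) // PAGE)  # last page
            cursor += value_size
        return len(touched)

    # a 32k swapped workspace touches 8 pages; a 16mbyte demand-paged
    # workspace touches hundreds of pages for the same computation
    print(pages_touched(32 * 1024, 256, 10_000))         # 8
    print(pages_touched(16 * 1024 * 1024, 256, 10_000))  # 625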

the port to CMS\APL also added APL functions that
could access CMS system services ... including ability
to read  

Re: More named/shared systems

2009-07-15 Thread Anne Lynn Wheeler

Jeff wrote:

I'm definitely no substitute for Sir Lynn, but I remember DCSS and DMKSNT in
VM/370 Release 3 PLC 8, which is where I started with VM.

In fact, I used CMSAMS and CMSVSAM then for Unnatural Practices, or at least
not for the purposes for which they were created. I was porting the CP/67
port of LISP/MTS to VM/370, and needed something to replace the named
segment used under CP/67 for LISP's pushdown stack. Instead of checking the
stack pointer for the end of the stack, it would just push onto the stack
and take the program check when it ran off the end. I simulated that by
using DIAG x'64' to attach CMSAMS and CMSVSAM, and then set the protect key
to user key for all but the last 2K (remember 2K pages?) page.

A LISP interpreter written entirely in BAL, with self-modifying code and
almost out of base register addressability... that was quite an interesting
piece of code.


re:
http://www.garlic.com/~lynn/2009j.html#67
http://www.garlic.com/~lynn/2009j.html#68
http://www.garlic.com/~lynn/2009j.html#76

and post in similar thread that I started in linkedin mainframe discussion
http://www.garlic.com/~lynn/2009j.html#73

I mentioned that Cambridge had done a port of apl\360 to (cp67) cms for
cms\apl. This then became one of the main vehicles for delivering
sales & marketing support on (virtual machine based) HONE (first
cp67 and then moved to vm370 ... eventually with HONE clones all
over the world) ... some past HONE posts

a quick named version was done by getting APL started, halting it
at a certain point, and saving a named system ... that
was not only CMS but also APL. Then when IPL'ed ... the machine
was placed immediately at a point in APL (a special place chosen
so it would do some last-minute housekeeping and setup). One of the
univ. did something similar for an ipl-by-name version of os/360.

for vm370 ... palo alto science center did a lot of additional
stuff for apl\cms (including the apl microcode assist on the 370/145).

For early vanilla vm370, HONE started out with ipl-by-name
apl\cms ... with the addition that the cms shared segment was
defined, as well as most of the APL executable module and even
some APL workspace. An early problem: HONE had some non-APL applications,
and there was an issue with trying to explain to salesmen (who were mostly
hardly computer literate users) how to IPL CMS ... to execute
non-APL applications, and then IPL APL ... to get back into the
normal (APL-based) sales & marketing environment.

When I started distributing CSC/VM with the enhancements,
HONE was one of the major internal clients (they even con'ed
me into doing several of the early clone installations
around the world). With the paged-mapped filesystem and
the enhanced changes ... most sales & marketing people could
IPL normal CMS and have their profile set up to immediately
execute APL (a cms executable from the S or Y disk that happened
to be in paged-mapped format) ... and all the page mapping
and shared segment setup was done as part of normal CMS
program loading.

Then it was possible to have APL processes that would invoke
and execute non-APL applications ... even placing the user
in the non-APL application environment w/o having to explain
to the user about the IPL command or some of the other non-APL
details (there was a large sales & marketing APL application called SEQUOIA
that actually hid nearly all APL & CMS characteristics from the
sales & marketing people ... it was even possible that many
sales & marketing people never realized that they were using
APL and/or CMS).

I've several times told the story that from the middle 70s until
at least the middle 80s ... every couple yrs there would be a promotion of some
sales/marketing person to head of the dataprocessing business
unit that included HONE. They would be startled to find that
HONE was VM370/CMS based and figure that they could make their
career by forcing HONE to be ported to MVS. This would consume
the HONE organization for possibly 12 months until it was extremely
evident that it wasn't practical. Then things would almost return
to normal until the next person was promoted into the position.

As I've mentioned ... a small subset of the (shared/named) capabilities
was shipped in vm370 release 3 as DCSS. It was then possible
for (normal) customers to define APL as a named system w/o
requiring the paged mapped filesystem support.

One of the early issues with the port of APL\360 for CMS\APL ...
there was a big performance issue with how APL did storage
allocation and (periodic) garbage collection. The problem
wasn't noticed in the real workspace environment of APL\360,
where the whole (small) workspace was swapped as a single unit.
CMS\APL opened the workspace up to nearly the full virtual
machine size (which might be 16mbytes) ... and the garbage
collection performed terribly in a virtual memory paged
environment (LISP had something similar) ... and had to
be significantly redone.

the port to CMS\APL also added APL functions that
could access CMS system services 

Timeline: The evolution of online communities

2009-07-15 Thread Anne Lynn Wheeler

A history related post copied from a.f.c. ng:

Timeline: The evolution of online communities
http://www.computerworld.com/s/article/9135308/Timeline_The_evolution_of_online_communities

from above:

E-mail discussion lists, chat rooms, BBSs, Usenet groups and more all
played a role in the development of online communities as we know them
today.

... snip ...

cp67 & vm370 had real time messages on the same real system, then
supported by rscs/vnet which would forward such messages between remote
systems ... the internal network was larger than the arpanet/internet from just
about the beginning until sometime late '85 or early '86.
http://www.garlic.com/~lynn/subnetwork.html#internalnet

Tymshare supported online conferencing in the early 70s ... and made the
facility free to the SHARE VM group in Aug76:
http://vm.marist.edu/~vmshare/

in the late 70s and early 80s I got blamed for online computer
conferencing on the internal network, doing semi-automated mailing
list operation ... recent reference in this n.g.
http://www.garlic.com/~lynn/2009e.html#26 Microminiaturized Modules

the above was some major motivation behind the official effort that resulted
in the internal TOOLSRUN ... which could simultaneously operate somewhat
similar to USENET and a mailing list (i.e. somebody could subscribe as a
mailing list ... or set up a TOOLSRUN client that would subscribe and
maintain a local repository of posts).

Listserv on BITNET, misc. past posts
http://www.garlic.com/~lynn/subnetwork.html#bitnet

Listserv was somewhat an effort to duplicate at least part of the TOOLSRUN
function, discussed here (starting in Paris, 1985 ... i.e. the EARN part
of BITNET)
http://www.lsoft.com/products/listserv-history.asp

related email (from Paris) regarding setting up
EARN
http://www.garlic.com/~lynn/2001h.html#email840320
in this post
http://www.garlic.com/~lynn/2001h.html#65

and looking for network-oriented applications for the educational
institution users.

There is also an example of the distributed evolution of the REX language
implementation in the late 70s and early 80s (leveraging the internal
network) ... some discussion here:
http://www-01.ibm.com/software/awdtools/rexx/library/rexxhist.html
and here:
http://web.archive.org/web/20020506063424/http://computinghistorymuseum.org/ieee/af_forum/read.cfm?forum=10&id=21&thread=7

the author of rexx ... had also done a multi-user space war game for cms
(on 3270s) that used the rscs/vnet forwarding interface to extend the
game into a distributed environment across multiple machines in the
network.

somewhat as the result of getting blamed for online computer conferencing
on the internal network in the late 70s and early 80s ... there was a
researcher that was paid to sit in the back of my office for nine months
and take notes on how I communicated (as well as go with me to meetings).
They also got copies of all my incoming and outgoing email as well as
logs of all my instant messages. The result was an internal corporate
report ... but also a Stanford PhD thesis (joint between language and computer
AI) as well as some number of papers and books ... misc. past posts
mentioning computer mediated conversation



--
40+yrs virtualization experience (since Jan68), online at home since Mar1970


Re: DCSS

2009-07-13 Thread Anne Lynn Wheeler

Chip Davis wrote:


... when shared segments were implemented in VM.

It seems to me that it predated the VM/370 SEPP/BSEPP days when I started, but
there's been many a synapse lost since then.

Google, Wikipedia, ibm.com, and even Melinda's wonderful work have not been
revealing, so I thought perhaps there might be an old gray-beard like myself (with a
better memory) still reading this list.

Any help?


CP67 had named systems ... basically a page image was saved to a reserved
location and the IPL command would map the saved portion of virtual memory to those
saved pages on disk. Used originally for CMS. 360/67 segment (sharing) only offered 1mbyte
segments ... and CMS was much smaller than 1mbyte ... in fact standard CMS virtual
machines were 256kbytes and the CMS kernel (low core address) was something like the first
18 pages. So somewhat as a result ... CMS had 3 shared pages ... that were
locked into real storage ... every virtual memory page table (for named CMS)
had the same 3 virtual page table entries pointing to the same (locked) real pages.

To provide read-only protection of those three pages, CP67 played special games
with the 360 storage protect keys.
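a toy sketch of that arrangement (illustrative python, obviously not cp67
data structures) ... each virtual machine gets its own page table, but the
first three (locked, shared) CMS pages map to the same real frames in every
table:

    SHARED_FRAMES = [0, 1, 2]      # locked real frames with the shared CMS pages

    def make_named_cms_page_table(private_frames):
        # virtual pages 0-2 -> shared frames; the rest -> per-machine frames
        return SHARED_FRAMES + private_frames

    vm1 = make_named_cms_page_table([10, 11, 12])
    vm2 = make_named_cms_page_table([20, 21, 22])
    assert vm1[:3] == vm2[:3]      # same real pages for every named-CMS user

the storage-key games then make those three frames effectively R/O to the
virtual machines.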

Moving to 370, the original 370 virtual memory architecture (defined in the 370
red book ... the red book was a cms script file; command line options
would print either the full architecture book ... or just the principles of operation
subset) had 64kbyte segment options and 1mbyte segment options. For 370,
CMS was restructured to have the 1st 64k as non-shared code & data, and the 2nd
64k shared ... using the 370 64kbyte shared segment facility. The original
370 virtual memory architecture also had an R/O segment protect facility ...
a bit defined in each virtual memory segment table entry which would provide R/O
segment protection. vm370 was initially implemented to use this facility
for protecting shared pages. The mechanism was still the defined named
systems, invoked/used via the ipl-by-name facility.

the retrofit of virtual memory hardware to the 370/165 ran into delays and
at one point there was a suggestion to drop a lot of the 370 virtual memory
features in order to buy back six months in the schedule (and not slip
the 370 virtual memory announcement by six months). One of the features
that got dropped was segment protect. As a result, all the other
hardware implementations had to go back and remove the features
dropped by the 165 implementation ... and vm370 had to return to the
(kludge) r/o page protection mechanism using the 360 key protect mechanism
(from cp67) ... but for whole segments.

I was at the science center ... past posts mentioning science center
http://www.garlic.com/~lynn/subtopic.html#545tech

and we were still running with a 360/67 and
doing lots of enhancements to cp67. One of the features was a page-mapped
filesystem facility for cms. This eliminated a whole lot of I/O simulation
overhead and pathlength (even compared to diagnose I/O ... a form of which
I had originally done as an undergraduate) and opened up the ability to do
a whole lot more interesting things using virtual memory (basically
allowing page-mapped views of anything done as part of standard cms
file operations ... not just restricted to ipl-by-name). Misc. past
posts mentioning the page-mapped work for the cms filesystem
http://www.garlic.com/~lynn/subtopic.html#mmap

Eventually, the science center was slated for getting a 370/155 and
I had to look at moving lots of my cp67 work to vm370 ... old memo
on the subject
http://www.garlic.com/~lynn/lhwemail.html#email731212

and a couple describing having done the work (and what
was in the csc/vm distribution system)
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

one of my hobbies had been providing highly modified cp67 systems
to internal locations (sort of my own product distribution). that
dropped off as some number of internal locations moved from
cp67 to vm370 ... but really took off again once I had moved from cp67
to vm370.

One of my major hobby/customers was the HONE system ... lots of
past posts
http://www.garlic.com/~lynn/subtopic.html#hone

HONE had been created after the 23jun69 unbundling announcement
... originally cp67 virtual machine systems targeted
at giving branch office SEs hands-on experience with operating systems
running in virtual machines. The HONE system even got
special CP67 modifications that simulated the initial
new instructions in 370 ... allowing running/testing of
370 operating systems that used the new instructions (i.e.
allowing them to run in a virtual machine under cp67 on a
360/67).

The science center had also ported apl\360 to cp67 cms for
cms\apl. A lot of sales & marketing support applications were
developed in APL and started to be offered to sales & marketing.
Eventually that use came to dominate HONE activity and the
virtual machine experience for branch office SEs evaporated.

APL had been restructured for shared memory operations and
originally HONE had a special 

Re: DCSS addenda

2009-07-13 Thread Anne Lynn Wheeler

re:
http://www.garlic.com/~lynn/2009j.html#67 DCSS

Some of the other stuff in CSC/VM was released in my resource manager
(which appeared with vm370 release 3 plc9)

the 23jun69 unbundling announcement started charging for (application)
software and se services (but they managed to make the case that
kernel software should still be free). some posts mentioning unbundling
http://www.garlic.com/~lynn/submain.html#unbundle

When I was an undergraduate ... I had added tty/ascii terminal support
to cp67 ... and tried to make the 2702 do something it couldn't quite
do. that was somewhat the motivation behind the univ. starting a project
to build a clone controller using an interdata/3 ... discussed some in this
recent post
http://www.garlic.com/~lynn/2009j.html#60 A Complete History Of Mainframe 
Computing

four of us got written up as being responsible for the clone controller business.
some posts mentioning clone controllers
http://www.garlic.com/~lynn/subtopic.html#360pcm

The clone controller business has been attributed as the motivation for
the FS project.

http://www.ecole.org/Crisis_and_change_1995_1.htm

quote from above:

IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead that
the competition would never be able to keep up, and to have such a high
level of integration that it would be impossible for competitors to
follow a compatible niche strategy. However, the project failed because
the objectives were too ambitious for the available technology.  Many of
the ideas that were developed were nevertheless adapted for later
generations. Once IBM had acknowledged this failure, it launched its
'box strategy', which called for competitiveness with all the different
types of compatible sub-systems. But this proved to be difficult because
of IBM's cost structure and its R&D spending, and the strategy only
resulted in a partial narrowing of the price gap between IBM and its
rivals.

... snip ...

old post with somebody taking FS quotes from the Ferguson & Morris book on IBM
http://www.garlic.com/~lynn/2001f.html#33 IBM's VM for the PC c.1984??


Now, allowing the 370 product pipelines to dry up (during FS) is claimed to have given
the clone processors a foothold in the market ... and the success of
the clone processors was a major motivation for the decision to start
(also) charging for kernel software. My resource manager
got chosen to be the guinea pig for kernel software charging
... and as a result ... I had to spend some amount of
time with the business people & lawyers on policies
regarding software charging.

another mad rush to get products back into the 370 product
pipeline was the 303x stuff ... recent discussion
http://www.garlic.com/~lynn/2009j.html#59 A Complete History Of Mainframe 
Computing

basically, after FS was killed, work on the 3081 was started, but that
was going to take 6-7 yrs ... and they needed something on a
much shorter cycle ... so the 3031 was a repackaged 370/158, the 3032
was a repackaged 370/168, and the 3033 started out as the 168 wiring
diagram remapped to newer chips that were 20% faster.

Now one of the things that was in the page-mapped filesystem
stuff was location independence support. Carefully crafted
executable code could be loaded at any virtual location
in any virtual address space. The same shared object
could appear at different virtual addresses in different
virtual address spaces. Operating systems that had been
designed for page-mapped operation had support for this
as a matter of course ... including IBM's TSS/360.

CMS inherited a lot of its structure, compilers and other
features from os/360 ... which had a real-storage orientation.
OS/360 Relocatable address constants ... were relocated at
load time ... and while executing were tied to a specific
address. This nominally prevented having the same shared
object appearing simultaneously in multiple virtual address
spaces at different addresses.

The 370 issue was that with only 256 64kbyte segments
(in a 16mbyte virtual address space) ... there would be
great difficulty in finding unique locations for every
application that might be available at a large installation.
Any single user wouldn't necessarily require more than
16mbytes ... but might require an arbitrary combination
of the applications available at the installation. To support
shared fixed-address applications which might be used
in arbitrary combinations ... a unique location
had to be chosen for every application ... but the total
possible aggregate size of all available applications
exceeded 16mbytes. Lots of past posts mentioning the
difficulty of modifying code so it would be
location independent while executing (in addition
to having to modify it for executing in a R/O protected
shared segment)
http://www.garlic.com/~lynn/submain.html#adcon
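a hedged illustration of the adcon problem (my sketch, not OS/360 code) ...
an address constant relocated at load time stores an absolute address, so one
physical copy of code is only correct at a single virtual address; a
self-relative offset stays valid wherever the same copy is mapped:

    def adcon_at_load(load_addr, target_offset):
        # os/360 style: relocated once at load time, absolute address stored
        return load_addr + target_offset       # valid ONLY at load_addr

    def resolve_relative(current_addr, stored_offset):
        # location independent: resolved against wherever we are mapped now
        return current_addr + stored_offset

    adcon = adcon_at_load(0x200000, 0x48)      # shared copy built for 2mbyte
    # the same shared pages mapped at 3mbyte in another address space:
    assert adcon != 0x300000 + 0x48            # absolute adcon now points wrong
    assert resolve_relative(0x300000, 0x48) == 0x300048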


20 Years Ago Today: Birth of the Dot-Com Era

2009-06-08 Thread Anne Lynn Wheeler

20 Years Ago Today: Birth of the Dot-Com Era
http://www.pcworld.com/businesscenter/article/166302/20_years_ago_today_birth_of_the_dotcom_era.html

from above:

In those days, the Internet consisted of regional networks, who were mostly 
non-profit cooperatives, and the government funded 'NSFNet' backbone which linked them 
up, writes Templeton, a friend of many years' standing.

... snip 

misc. past posts mentioning NSFNet:
http://www.garlic.com/~lynn/subnetwork.html#nsfnet
and some old NSFNet related email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

for other drift ... SLAC (slac vm370 system) ran the first webserver outside
cern/europe
http://www.slac.stanford.edu/history/earlyweb/history.shtml

GML had been invented at the science center in 1969 and then standardized as 
SGML in the late 70s ... misc. past posts mentioning GML, SGML, etc
http://www.garlic.com/~lynn/submail.html#sgml

The CMS script command did document formatting using dot commands ... somewhat
from the similar/earlier CTSS RUNOFF command. After GML was invented, support for GML tag
processing was added to script. Waterloo had done a clone of the cms script command ... webpage
tracking the evolution from SGML into HTML at CERN:
http://infomesh.net/html/history/early/

above includes references to Waterloo SCRIPT GML User's Guide.


Re: radar interfering with computer gear

2009-05-30 Thread Anne Lynn Wheeler

Walter wrote:

When examined after each failure, the core (yes, real core) memory was
always wiped clean.  That computer (and its tech) was housed in a metal
box (IIRC, about 6'x10', 8' high) which was transportable on the back of a
2 1/2 ton (6-by) truck, or by helicopter. It was located about 15 feet
from another similar box with all the radar gear inside, and large radar
dish on the top.  After a few days of random core wipes, someone noticed
that the core wipe only happened when the door to the computer hut was
momentarily opened as the radar dish swept past.  While aimed much higher,
there was enough residual power from the dish to wipe the computer's core
memory clean.  Memory was reloaded (back on track now) from dependable
paper tape.



old thread about Mt. Umunhum interfering with (stanford) SAIL computer:
http://www.garlic.com/~lynn/aadsm25.htm#25

the referenced URL has gone 404
http://www-db.stanford.edu/pub/voy/museum/pictures/AIlab/SailFarewell.html
but is here:
http://infolab.stanford.edu/pub/voy/museum/pictures/AIlab/SailFarewell.html

from above:

I got proper air conditioning a short time later, but unfortunately
developed a bad case of hiccups that struck regularly at 12 second
intervals. My assistants spent a number of days trying to find the
cause of this mysterious malady without success. As luck would have
it, somebody brought a portable radio into my room one day and noticed
that it was emitting a Bzz at regular intervals -- in fact, at the
same moment that I hicced. Further investigation revealed that the
high-powered air defense radar atop Mt. Umunhum, about 20 miles away,
was causing some of my transistors to act as radio receivers. We
solved this problem by improving my grounding.

... snip ...

for other drift ... later posts in the previously mentioned ibm-main thread
http://www.garlic.com/~lynn/2009h.html#42 Book on Poughkeepsie
http://www.garlic.com/~lynn/2009h.html#44 Book on Poughkeepsie
http://www.garlic.com/~lynn/2009h.html#47 Book on Poughkeepsie
http://www.garlic.com/~lynn/2009h.html#48 Book on Poughkeepsie
http://www.garlic.com/~lynn/2009h.html#55 Book on Poughkeepsie
http://www.garlic.com/~lynn/2009h.html#56 Punched Card Combinations
http://www.garlic.com/~lynn/2009h.html#61 Punched Card Combinations
http://www.garlic.com/~lynn/2009h.html#62 Book on Poughkeepsie

and a recent, separate virtual machine thread in a.f.c:
http://www.garlic.com/~lynn/2009h.html#59 Operating Systems for Virtual Machines
http://www.garlic.com/~lynn/2009h.html#63 Operating Systems for Virtual Machines
http://www.garlic.com/~lynn/2009h.html#64 Operating Systems for Virtual Machines
http://www.garlic.com/~lynn/2009h.html#65 Operating Systems for Virtual Machines

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970


Re: IBM 1401

2009-05-28 Thread Anne Lynn Wheeler

rschuh wrote:

The smaller systems, the 360-20 and 360-30, had a 1401 emulator mode. It was
a h/w or mc based feature. I don't know whether larger machines had it. There
was also a 1410 emulator mode on the -40. I do not know of any 1401 support
that ran under DOS, but my DOS experience is miniscule.


360/30 had 1401 microcode emulation ... actually a 360/30 front panel switch
selected between 360 microcode emulation (since 360 was implemented as microcode on the
360/30) and 1401 microcode emulation.

recent stories in the ibm-main mailing list about the univ. getting a 360/30 to replace
its 1401 (in a staged process of replacing the 709/1401 combo with a 360/67 which was
supposed to run tss/360).
http://www.garlic.com/~lynn/2009h.html#12 IBM Mainframe: 50 Years of Big Iron 
Innovation
http://www.garlic.com/~lynn/2009h.html#41 Book on Poughkeepsie

the 709 ran ibsys, tape-to-tape, a lot of fortran student jobs. the 1401 was front-end
spooling, handling card reader->tape & tape->printer/punch for the 709 ... with tapes
being manually moved between the 1401 tape drives and the 709 tape drives.

Even tho the 1401 MPIO program ran perfectly fine on the 360/30 in 1401 emulation
mode (switch to emulation mode and boot MPIO from the 2540 reader, effectively the same as
running on a real 1401) ... I got a student job to re-implement it in 360 ...
I got to design my own monitor, interrupt handling, device drivers, storage management,
console interface, etc. Eventually it was a 2000 card program with an assembler
directive that would either generate a stand-alone program or a
version that ran under os/360. The stand-alone version took approx. 30
minutes to assemble ... the version that would run under os/360 took
nearly an hour to assemble since it took approx. five minutes
elapsed time per DCB macro.

The univ. eventually got a 360/67 ... but since tss/360 wasn't ready, it
spent nearly all its time running os/360 as a 360/65. The 360/65 (and 360/67)
had 709x microcode emulation support (as opposed to the 1401 emulation available
on lower-end 360s).

Last week of January 1968, three people from the science center ... some
past posts
http://www.garlic.com/~lynn/subtopic.html#545tech

came out to the univ. to install (virtual machine) cp67. at the time, cp67 
wasn't
really up to the univ. os/360 production workload ... but I got to play with
it quite a bit on weekends. some discussion detailed in these posts:
http://www.garlic.com/~lynn/2009h.html#47 Book on Poughkeepsie
http://www.garlic.com/~lynn/2009h.html#48 Book on Poughkeepsie

misc. other recent related posts in ibm-main mailing list thread
http://www.garlic.com/~lynn/2009h.html#14 IBM Mainframe: 50 Years of Big Iron 
Innovation
http://www.garlic.com/~lynn/2009h.html#42 Book on Poughkeepsie
http://www.garlic.com/~lynn/2009h.html#44 Book on Poughkeepsie


360/30 functional characteristics has a reference to the 1401/1440/1460 compatibility
feature (GA24-3255)
http://www.bitsavers.org/pdf/ibm/360/funcChar/GA24-3231-7_360-30_funcChar.pdf

1401 simulator for os/360 contributed program:
http://www.bitsavers.org/pdf/ibm/360/360D-11.1.019_1401simCorr_Sep69.pdf

it might not have been all that difficult to port the above to CMS???

1401/1440/1460 Emulator Programs (under dos/360)
http://www.bitsavers.org/pdf/ibm/360/GC27-6940-4_360_1401emul.pdf

360/65 functional characteristics
http://www.bitsavers.org/pdf/ibm/360/funcChar/A22-6884-3_360-65_funcChar.pdf
360/67 functional characteristics
http://www.bitsavers.org/pdf/ibm/360/funcChar/A27-2719-0_360-67_funcChar.pdf

lists optional feature: 709/7040/7044/7090/7094/7094II Compatibility

a single processor 360/67 was nearly identical to a single processor 360/65 except
for the addition of the virtual address translation hardware.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970


Re: remembering PGP

2009-03-04 Thread Anne Lynn Wheeler

On 03/04/2009 01:02 AM, IBMVM automatic digest system wrote:

Long ago, in a galaxy close to where I am this week, there was a PGP
MODULE.  It was built by a kind person at MIT (which is NOT close to
where I am this week) and worked exactly as one would expect it to work.
  Maybe some day we will have a PGP for CMS again.  Dunno.


Recently, PGP got interesting again (to me).  Actually, GPG is what I
use instead (for better or worse).  So it occurred to me that when there
is a face-to-face opportunity, such as that enjoyed by those fortunate
souls who made the trek to Central Texas, the subject of key signing
should be kept in mind.  Therefore, if you happen to be in Austin this
week and have a GPG key and want a signature, let me know.  At least,
let SOMEONE know.  (My signature may not be what you need, and I am okay
with that. But ... get yer keys signed!)


some old email discussing pgp-like implementation for the internal
network
http://www.garlic.com/~lynn/2006w.html#email810505
in this post
http://www.garlic.com/~lynn/2006w.html#12 more secure communication over the 
network
and
http://www.garlic.com/~lynn/2007d.html#email810506
in this post
http://www.garlic.com/~lynn/2007d.html#49 certificate distribution

misc. old public key /or crypto email
http://www.garlic.com/~lynn/lhwemail.html#crypto

misc. past posts mentioning the internal network ... which was larger than the 
arpanet/internet from just about the beginning, until sometime late '85 or 
possibly early '86
http://www.garlic.com/~lynn/subnetwork.html#internalnet

--
40+yrs virtualization experience (since Jan68), online at home since Mar70


Re: Seeking (former) Adventurers

2008-06-05 Thread Anne Lynn Wheeler
following are a couple of emails from '78 regarding getting a copy of 
adventure for vm370/cms

http://www.garlic.com/~lynn/2006y.html#email780405
http://www.garlic.com/~lynn/2006y.html#email780405b

in this post
http://www.garlic.com/~lynn/2006y.html#18 The History of Computer 
Role-Playing Games

additional followup in this post
http://www.garlic.com/~lynn/2006y.html#19 The History of Computer 
Role-Playing Games



another old adventure email reference
http://www.garlic.com/~lynn/2007o.html#email790912

in this post
http://www.garlic.com/~lynn/2007o.html#15 Atuan - Colossal Cave in APL?

In the above, there was some amount of trouble caused by my making adventure
(executable) available internally (via the internal network). I had an offer that
if anybody finished the game (getting the points), i would send them a copy
of the (fortran) source. At least one of the people at the STL lab converted
the fortran source to PLI and added a bunch of additional rooms/pts.


whitehouse email

2008-05-02 Thread Anne Lynn Wheeler

you might find this news article interesting

Whitehouse Emails Were Lost Due to Upgrade
http://news.slashdot.org/news/08/04/30/1359209.shtml

and

The case of the missing e-mail
http://arstechnica.com/articles/culture/bush-lost-e-mails.ars

and for some more whitehouse email from 25yrs or so ago
(that weren't lost)
http://www.cnn.com/SPECIALS/cold.war/episodes/18/archive/

from nearly the start in the 70s, I had been quite rabid about backups
and backups of backups and backups of backups of the backups.  There
has been speculation that orientation carried over to PROFS
deployments. In any case, that was supposedly a major factor in the
above reference.

somebody told me in the early 90s that similar email systems had been
deployed at numerous gov. agencies.

during the period I was getting to play disk engineer
http://www.garlic.com/~lynn/subtopic.html#disk
and working on system/r (original relational/sql implementation):
http://www.garlic.com/~lynn/subtopic.html#systemr
and doing an internal sjr/vm distribution ... a recent ref:
http://www.garlic.com/~lynn/2006u.html#26 Assembler question
with this old email:
http://www.garlic.com/~lynn/2006u.html#email800501

I had also implemented what I called CMSBACK ... some old email refs
http://www.garlic.com/~lynn/lhwemail.html#cmsback
and
http://www.garlic.com/~lynn/2006t.html#email791025
http://www.garlic.com/~lynn/2006w.html#email801211

which was deployed internally at several internal locations
... including the internal (vm370-based) HONE systems that provided
world-wide sales & marketing support
http://www.garlic.com/~lynn/subtopic.html#hone

misc. past posts mentioning backup and/or archive
http://www.garlic.com/~lynn/subtopic.html#backup

CMSBACK went thru several internal releases and then a morph for a
customer release under the product name workstation datasave
facility. The product name then morphed into ADSM ... and then the
name morphed again and is currently sold as TSM (tivoli storage
manager).

current Tivoli storage manager reference:
http://www-306.ibm.com/software/tivoli/products/storage-mgr/

reference to virtual machine use in the gov. even much earlier:
http://www.nsa.gov/selinux/list-archive/0409/8362.cfm

as an undergraduate in the 60s, i did a lot of system enhancements
that were picked up and shipped in the product. i even got requests
from ibm for some specific changes.

many years later ... having learned about some of the customers,
i interpreted some of the change requests as being of a security nature
and as possibly having originated from some such gov. agency.


Re: VTAM R.I.P. -- SNATAM anyone?

2008-04-25 Thread Anne Lynn Wheeler

Jim Bohnsack wrote:


 I remember the SNATAM name now.  There was an Englishman, Graham Pursey,
 who used to attend the  VNET Project Team meetings that were held once
 or twice a year.  It seems to me that he was involved in some kind of VM
 based VTAM project.  Was that it or was there something else?  It seems
 to me that there was something besides SNATAM.

 Getting old and memory is the second thing to go.  Don't remember what
 the first was.



from the 26-28feb80 VMITE schedule:

Graham Pursey - SNATAM. This system is being perfected in
Hursley to operate SNA devices from a CMS
based system. The current direction is to
make this into a product. 45 minutes to 1 hr

... snip ...

there were constant battles with the communication group ... I got into
all sorts of problems with the hsdt (high speed data transport) project ...
http://www.garlic.com/~lynn/subnetwork.html#hsdt

to place things in better perspective ... SNA wasn't networking
... it was dumb terminal communication.

an example of the gap between the communication group and the hsdt project
(hsdt also involved working with various parties associated with getting
NSFNET going
http://www.garlic.com/~lynn/subnetwork.html#nsfnet ):

recent retelling
http://www.garlic.com/~lynn/2008e.html#45

of an announcement (one friday) by the communication group for a new
internal conference. included in the announcement were these definitions
(to be used for the conference):


low-speed:        9.6kbits
medium-speed:     19.2kbits
high-speed:       56kbits
very high-speed:  1.6mbits

the next monday, on a business trip to the far east, the definitions on
the conference room wall were:

low-speed:        20mbits
medium-speed:     100mbits
high-speed:       200-300mbits
very high-speed:  600mbits

eventually we weren't allowed to bid on the nsfnet backbone ... even tho an
nsf audit of the high-speed backbone claimed that what we already had
running (internally) was at least five years ahead of all NSFNET bid
submissions. some related old email from the period
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

including some stuff forwarded to us about the communication group spreading
FUD that sna & vtam could be used for NSFNET.
http://www.garlic.com/~lynn/2006w.html#email870109

for other topic drift, the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

(which wasn't SNA until the late 80s) was technology from the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

and was larger than the arpanet/internet from just about the beginning
until sometime mid-85. this was about the time that serious efforts
were made to try and get the internal network converted over to sna
(and also contributed to the internet exceeding the internal network).

in this period there was a big explosion in internet nodes from workstations
and PCs. SNA was still treating the internal network as something that was
purely (mainframe) host-to-host ... and the exploding numbers of PCs were to
continue to be served by terminal emulation. some past posts
http://www.garlic.com/~lynn/subnetwork.html#emulation


cp67 announced 40 yrs ago at spring 68 share in houston

2008-02-25 Thread Anne Lynn Wheeler

CP67 was announced 40yrs ago at spring 68 share in houston.

I was invited to attend. I was an undergraduate at the univ where, the last
week of jan68, three people from the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

had come out to install cp67.


40 yrs of cp67 and cms

2008-01-22 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.vmesa-l,alt.folklore.computers as well.


40 yrs of cp67 and cms ... not quite for the announce, since that
happened at the spring '68 share meeting in houston. however, three
people had come out from the cambridge science center the last week of
jan68 to install cp67 at the univ.


vm folklore, new, 40+ yr old technology

2007-12-25 Thread Anne Lynn Wheeler

some recent posts in other venues on the new, 40+ yr old technology
https://financialcryptography.com/mt/archives/000988.htm
http://www.garlic.com/~lynn/aadsm27.htm#66 2007: year in review
http://www.garlic.com/~lynn/aadsm28.htm#0 2007: year in review

i've only been working on the technology slightly less than 40yrs; the last
week in jan68, three people from the science center had come out and
installed cp67 at the univ.

for other recent folklore thread ... this is a series of posts on the
precursor to the DCSS work; that was converted from cp67 to vm370 ... but
only a small subset was released (as part of DCSS) ... many of the features
being eliminated (from the DCSS product release), including location
independent code (shared segments).

some related old email
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102

http://www.garlic.com/~lynn/2007u.html#81 IBM mainframe history, was 
Floating-point myths
http://www.garlic.com/~lynn/2007v.html#49 IBM mainframe history, was 
Floating-point myths
http://www.garlic.com/~lynn/2007v.html#50 IBM mainframe history, was 
Floating-point myths

http://www.garlic.com/~lynn/2007v.html#51 Education ranking


Re: ongoing rush to the new, 40+ yr old virtual machine technology

2007-11-16 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to alt.folklore.computers as well.


re:
http://www.garlic.com/~lynn/2007s.html#40 ongoing rush to the new, 40+ yr old 
virtual machine technology

IBM Ships 10,000th Storage Virtualization Engine
http://www.marketwire.com/mw/release.do?id=793928

from above:

"IBM has been delivering virtualization capabilities for more than 40
years and today we unveil a milestone in the area of data storage
virtualization with the shipment of 10,000 storage virtualization
engines -- a fact no other storage company in the world can claim," said
Kelly Beavers, Director, Storage Software, IBM. "By working across
multiple platforms, IBM's storage virtualization helps to lower energy
costs and unlocks the proprietary hold that other storage vendors have
had on customers for years -- which IBM believes makes storage
virtualization the killer application in the storage industry over the
next decade."

... snip ...

in 1967, the science center:
http://www.garlic.com/~lynn/subtopic.html#545tech

had moved cp40 to 360/67 for cp67 and also had it installed out at
lincoln labs. it wasn't installed at univ. where i was undergraduate
(3rd installation) until last week in jan68 ... and it wasn't
officially announced until spring share in houston, 1st week
mar68. minor reference mentioning 35th anniv of cp67 announce
http://www.garlic.com/~lynn/2003d.html#72 cp/67 35th anniversary

other recent posts
http://www.garlic.com/~lynn/2007s.html#33 Age of IBM VM
http://www.garlic.com/~lynn/2007s.html#41 Age of IBM VM

other recent news articles:

Future Threats to Virtualization Security: Fact vs. Fiction
http://www.networkworld.com/news/2007/111407-future-threats-to-virtualization-security.html
Virtualization hot; ITIL, IPv6 not, survey says
http://www.arnnet.com.au/index.php/id;1522739452;fp;16;fpid;1
A Virtual Conversation
http://www.serverwatch.com/trends/article.php/3711431
Oracle's Ellison: Virtualization, Applications, We Got It All
http://www.informationweek.com/news/showArticle.jhtml?articleID=203100432
Sun Commits $2 Billion to Virtualization
http://news.yahoo.com/s/nf/20071115/bs_nf/56757
Sun Commits $2 Billion to Virtualization
http://www.sci-tech-today.com/news/Sun-Commits--2-Billion-to-Virtualization/story.xhtml?story_id=033003ONF579
Sun Introduces Its New Virtualization Offering in San Francisco
http://opensource.sys-con.com/read/462102.htm
Sun Launches Virtual Assault
http://www.byteandswitch.com/document.asp?doc_id=139308&WT.svl=news2_1
Sun enters a suddenly crowded virtualization market
http://www.betanews.com/article/Sun_enters_a_suddenly_crowded_virtualization_market/1195147886
Push Toward Virtualization Continues, with Two-thirds of Large
Organizations Considering It Business-Critical, According to New
Research from TheInfoPro
http://home.businesswire.com/portal/site/google/index.jsp?ndmViewId=news_view&newsId=20071115006066&newsLang=en
VMware refutes Oracle VM performance claims
http://searchservervirtualization.techtarget.com/originalContent/0,289142,sid94_gci1282150,00.html
Oracle VM Launched
http://www.ddj.com/linux-open-source/203100799
VMware Updates Server Software
http://www.ddj.com/hpc-high-performance-computing/202805757
Sun's New Virtualization Manager Supports Windows, Linux
http://www.eweek.com/article2/0,1895,2217917,00.asp
Desktop Virtualization - What's the Best Approach?
http://linux.sys-con.com/read/462343.htm
Red Hat strategy spans virtualized & appliance deployments
http://www.moneycontrol.com/india/news/pressnews/red-hat-strategy-spans-virtualizedappliance-deployments/13/25/313286
Sun Commits $2 Billion to Virtualization
http://www.sci-tech-today.com/news/Sun-Commits--2-Billion-to-Virtualization/story.xhtml?story_id=1B5YYZEC
Microsoft Windows Virtualization Explained - Video with Microsoft's
Distinguished Engineer Eric Traut
http://www.dabcc.com/article.aspx?id=6404
Sun Introduces Its New Virtualization Offering in San Francisco
http://xml.sys-con.com/read/462102.htm
Microsoft Wants a Bigger Slice of VMware's Pie
http://www.eweek.com/article2/0,1895,2218057,00.asp


Age of IBM VM

2007-11-14 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to alt.folklore.computers as well.


Marty Zimelis wrote:
 Bob,
Right name, but I believe the wrong derivation.  The 67 in CP-67 comes
 from the fact that it ran on the S/360 model 67, the only production model
 of the S/360 line that implemented Dynamic Address Translation (DAT) --
 virtual storage.
  
Some would argue that was the first version of VM.  Others would argue
 that the line starts with VM/370, the first generally available version of
 VM, which was first released in August of 1972.  (FWIW, SHARE has been
 celebrating VM's birthdays using the VM/370 release date as the origin.
 Hence the 35th birthday was celebrated at SHARE 109 in San Diego last
 Summer.)

CP40 predated CP67. Cambridge Science Center had cp67 up and running and
had also installed it out at Lincoln Labs. The last week in Jan68, three
people came out to install it at the university where I was an
undergraduate. I was then invited to attend the spring 68 SHARE meeting
in Houston where cp67 was officially announced. In that sense, the
univ. was an early beta test site for cp67. For other topic drift, the
univ was also a beta test site for the original CICS ... and I got
tasked to support/debug it also ... misc. past posts mentioning CICS
http://www.garlic.com/~lynn/subtopic.html#bdam

I had been doing various work on os360, including a lot of workload
throughput optimization. When CP67 was installed, I also started doing
some work on it ... and then made a presentation on some of the
work at the Aug68 SHARE meeting in Boston. Old post with part of
that presentation
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

part of this post I made earlier this yr, has been repeated
in this thread
http://www.garlic.com/~lynn/2007b.html#21 history question

some more recent posts mentioning cp40 (and early virtual machine
work)
http://www.garlic.com/~lynn/2007p.html#19 zH/OS (z/OS on Hercules for personal 
use only)
http://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage 
backing up
http://www.garlic.com/~lynn/2007q.html#3 Virtualization: Don't Ask, Don't Tell
http://www.garlic.com/~lynn/2007r.html#51 Translation of IBM Basic Assembler to 
C?
http://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
http://www.garlic.com/~lynn/2007s.html#29 Intel Ships Power-Efficient Penryn 
CPUs
http://www.garlic.com/~lynn/2007s.html#30 Intel Ships Power-Efficient Penryn 
CPUs

The cp67 group split off from the science center and took over the
(IBM) Boston Programming Group on the 3rd flr of 545 tech sq; science
center was on the 4th flr, science center machine room was on the 2nd
flr.  
http://www.garlic.com/~lynn/subtopic.html#545tech

For other trivia, multics was on the 5th flr ... a couple recent refs
http://www.garlic.com/~lynn/2007s.html#24 multics source is now open
http://www.garlic.com/~lynn/2007s.html#31 multics source is now open

In the morph from cp67 to vm370, the group continued to expand,
eventually outgrowing the 3rd flr and moved out to the old SBC bldg in
Burlington Mall. During this period the company (and some amount
of the vm group) got distracted by the Future System effort
http://www.garlic.com/~lynn/subtopic.html#futuresys

However, I continued to work on various 360  370 things (and also
made some less than flattering references about FS). Old email
referencing some of that work
http://www.garlic.com/~lynn/lhwemail.html#1973
http://www.garlic.com/~lynn/lhwemail.html#1975

When FS was finally killed, there was a mad scramble to get things
back into the 370 hardware and software product pipeline. Possibly
somewhat as a result, the development group picked up quite a bit of
stuff that I had been doing and shipped it in vm370 release 3. Then
there was also a decision to release other stuff that I had been doing
as the resource manager. Misc. posts
http://www.garlic.com/~lynn/subtopic.html#fairshare

It was also in this time-frame that the internal scramble was on to
get going on MVS/XA. POK finally convinced the company that it was
necessary to kill the vm370 product, shutdown the burlington mall
location and transfer all the people to POK as part of being able to
meet the MVS/XA delivery schedule. Eventually, Endicott was able to
salvage the vm370 product mission ... but effectively had to rebuild
an organization nearly from scratch.

Somebody from ibm forwarded me this photo from the vm370 b'day event
at SHARE 99
http://www.garlic.com/~lynn/LynnWheeler023.jpg

40th anniv. of when I first got acquainted with cp67 is coming up in
two months ... and the 40th anniv of cp67 announcement is later next
spring.

For other drift, 23jan69, the company announced unbundling ... somewhat
as the result of various litigation going on. However, the case was made
that unbundling and starting to charge separately for software only
applied to application software; kernel software still needed to be
bundled with the machine (and free).

A big part of the 

Re: Oracle Introduces Oracle VM As It Leaps Into Virtualization

2007-11-14 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to alt.folklore.computers as well.

[EMAIL PROTECTED] writes:
 Could you translate into layman's terms?  What exactly is server
 virtualization software?

 The concept of virtual storage, as I understand it, is making an
 application program think it had more core ('RAM') memory than it had
 by using disk storage for parts of a program not being used at that
moment.  Its power was limited, as trying to compress too much of a
 program slowed it down significantly.  To me, today the concept is
 almost obsolete since RAM memory is damn cheap, measured in many
 hundreds of megabytes.  Virtual was developed when memory was still
 in kilobytes or at best a few megabytes.

 Anyway, I don't see how the above definition applies today to
 something like Oracle.

 How does this new product make it easier and more efficient for data
 processing centers?

 Thanks.

re:
http://www.garlic.com/~lynn/2007s.html#26 Oracle Introduces Oracle VM As It 
Leaps Into Virtualization
http://www.garlic.com/~lynn/2007s.html#27 Oracle Introduces Oracle VM As It 
Leaps Into Virtualization
http://www.garlic.com/~lynn/2007s.html#35 Oracle Introduces Oracle VM As It 
Leaps Into Virtualization

for some topic drift and x-over with this thread
http://www.garlic.com/~lynn/2007s.html#33 Age of IBM VM

Endicott was working on the follow-on to the 135/145 ... virgil/tully ...
which had spare room for microcode. There was starting to be mid-range
clone processor competition (primarily outside the US), and Endicott was
looking for new added-value features ... in addition to simply better
price/performance. They had done a VS1 (kernel) microcode assist and
approached the VM group out in burlington about doing a VM370 microcode
assist. 

The VM group turned them down, saying that they were too busy doing
other stuff. As a result, they eventually showed up on my doorstep.  Old
post with some results of the initial investigation into selecting
portions of the vm kernel to drop into mcode:
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
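
the referenced post has the actual measurements; purely as illustration
(all numbers below are made up), the flavor of the analysis was: profile
where the kernel spends its time, then greedily pick the paths with the
highest time-per-byte payoff until the available microcode space is used
up. a minimal C sketch of that kind of greedy selection:

#include <stdio.h>
#include <stdlib.h>

struct path { const char *name; int bytes; double pct_time; };

/* sort by percent-of-kernel-time per byte of microcode space, descending */
static int by_density(const void *a, const void *b)
{
    const struct path *pa = a, *pb = b;
    double da = pa->pct_time / pa->bytes, db = pb->pct_time / pb->bytes;
    return (da < db) - (da > db);
}

int main(void)
{
    struct path paths[] = {             /* hypothetical profile data */
        { "dispatch",      900, 20.0 },
        { "free/fret",     700, 15.0 },
        { "ccw-translate", 1400, 12.0 },
        { "vpage-lookup",  600, 10.0 },
        { "priv-op-sim",  2200,  9.0 },
        { "console-sim",  1800,  2.0 },
    };
    int n = sizeof paths / sizeof paths[0];
    int budget = 6144, used = 0;        /* hypothetical microcode space */
    double covered = 0.0;

    qsort(paths, n, sizeof paths[0], by_density);
    for (int i = 0; i < n && used + paths[i].bytes <= budget; i++) {
        used += paths[i].bytes;
        covered += paths[i].pct_time;
        printf("pick %-14s %5d bytes %5.1f%% of kernel time\n",
               paths[i].name, paths[i].bytes, paths[i].pct_time);
    }
    printf("total: %d of %d bytes, %.1f%% of kernel time covered\n",
           used, budget, covered);
    return 0;
}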

In addition to the other stuff I was doing in the same timeframe, not
only did i get roped into working on the design for the ECPS VM microcode
assist ... Endicott then wanted me to run around the world with them to
explain what it all meant to the business and forecasting people in
different countries around the world. unlike domestic/US, world trade
countries would forecast sales for the upcoming year ...  which then turned
into build orders at manufacturing plants ... which were then
bought/delivered to the countries ... which in turn had to be sold to
customers (by contrast, domestic forecasts were not directly tied to
actual sales volume ... so manufacturing plant sites had to do a
significant amount of investigation regarding US forecasts, since the
plant sites would have to eat any inaccuracies).

It turns out that 138/148 (virgil/tully) was just on the leading edge of
shift from hardware costs dominating customer budgets to change-over to
people costs starting to dominate (and skill availability representing
bottleneck to customer installs). As a result, Endicott pushed hard for
having VM370 preinstalled and transparently integrated into every
138/148 shipped from the factory (slightly akin to LPARs in the current
generation of mainframes). The problem was that large portions of the
corporation viewed vm370 as competitive with other operating system
offerings and for one reason or another were out to kill the product.
Having vm370 preinstalled and transparently integrated into every
138/148 shipped ran counter to these other political forces (for instance
POK was in the process of making the case for killing off vm370 and
having all the people in the burlington mall group transferred to pok as
part of helping mvs/xa schedules).  In any case, the vm370 preinstall
and transparently integrated for every 138/148 was shot down.

As i've posted before, the mid-range product sales really accelerated
with the 138/148 follow-on ... the 43xx machines (as well as vax machines).
past posts mentioning the departmental server phenomenon for 43xx (and
vax) machines (43xx had some edge over vax with some large commercial
customers placing 43xx machine orders in multiples of hundreds). a few
recent posts
http://www.garlic.com/~lynn/2007b.html#51 Special characters in passwords was 
Re: RACF - Password rules
http://www.garlic.com/~lynn/2007j.html#7 Newbie question on table design
http://www.garlic.com/~lynn/2007m.html#72 The Development of the Vital IBM PC 
in Spite of the Corporate Culture of IBM
http://www.garlic.com/~lynn/2007n.html#18 The Development of the Vital IBM PC 
in Spite of the Corporate Culture of IBM
http://www.garlic.com/~lynn/2007n.html#20 The Development of the Vital IBM PC 
in Spite of the Corporate Culture of IBM
http://www.garlic.com/~lynn/2007n.html#21 The Development of the Vital IBM PC 
in Spite of the Corporate Culture of IBM

Re: Age of IBM VM

2007-11-14 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to alt.folklore.computers as well.


re:
http://www.garlic.com/~lynn/2007s.html#33 Age of IBM VM
http://www.garlic.com/~lynn/2007s.html#36 Oracle Introduces Oracle VM As It 
Leaps Into Virtualization

one could claim that the relationship of cp67 to vm370 is somewhat like
the relationship of HASP to JES2. misc past posts mentioning HASP,
JES2, and/or JES2/HASP networking support
http://www.garlic.com/~lynn/subtopic.html#hasp

other cp67 heritage ... in the transition from MVT to os/vs2 (i.e.
os/360 with virtual memory support) ... basically MVT was laid out in a
single virtual address space ... thus the reference to OS/VS2 SVS
(single virtual storage), to distinguish it from the later OS/VS2 release
MVS (multiple virtual storage).

One might claim that there was little difference between OS/VS2 SVS and
MVT with VM handshaking, laid out in a 16mbyte virtual address space. The
biggest difference was the need to have channel program translation built
into MVT. The initial prototype of OS/VS2 SVS was built with minimal
virtual address space support and a copy of CP67's CCWTRANS (and a couple
other CP67 routines associated with channel program translation) hacked
into the side.
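
a toy C model of the general channel-program-translation idea (this is
not CP67's actual CCWTRANS, which also had to handle data areas crossing
page boundaries via IDALs, TIC chains, etc.): copy the virtual channel
program, pin each referenced virtual page, and substitute real addresses
in the shadow copy that the real channel actually runs:

#include <stdint.h>
#include <stdio.h>

#define PAGE  4096u
#define NPAGE 16

struct ccw { uint8_t op; uint32_t addr; uint16_t count; uint8_t flags; };

/* fault the virtual page in (trivially modeled) and pin it; return the
 * real address corresponding to the virtual address */
static uint32_t pin_and_translate(uint32_t vaddr)
{
    static uint32_t page_table[NPAGE];  /* virtual page -> real page */
    static uint32_t next_real = 1;      /* trivial real-page allocator */
    uint32_t vpage = vaddr / PAGE;      /* assume vpage < NPAGE */
    if (page_table[vpage] == 0)
        page_table[vpage] = next_real++;
    return page_table[vpage] * PAGE + vaddr % PAGE;
}

/* build the shadow (translated) channel program */
static void translate(const struct ccw *virt, struct ccw *shadow, int n)
{
    for (int i = 0; i < n; i++) {
        shadow[i] = virt[i];
        shadow[i].addr = pin_and_translate(virt[i].addr);
    }
}

int main(void)
{
    struct ccw prog[2] = {
        { 0x07, 0x3000,    6, 0x40 },   /* seek, command chained */
        { 0x06, 0x5008, 4096, 0x00 },   /* read data */
    };
    struct ccw shadow[2];
    translate(prog, shadow, 2);
    for (int i = 0; i < 2; i++)
        printf("ccw %d: virtual %05x -> real %05x\n", i,
               (unsigned)prog[i].addr, (unsigned)shadow[i].addr);
    return 0;
}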

misc. recent posts discussing channel program translation and
specifically getting os/vs2 to support it
http://www.garlic.com/~lynn/2007e.html#19 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems 
history
http://www.garlic.com/~lynn/2007e.html#46 FBA rant
http://www.garlic.com/~lynn/2007f.html#0 FBA rant
http://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems 
history
http://www.garlic.com/~lynn/2007f.html#33 Historical curiosity question
http://www.garlic.com/~lynn/2007f.html#34 Historical curiosity question
http://www.garlic.com/~lynn/2007k.html#26 user level TCP implementation
http://www.garlic.com/~lynn/2007n.html#35 IBM obsoleting mainframe hardware
http://www.garlic.com/~lynn/2007o.html#37 Each CPU usage
http://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation
http://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage 
backing up
http://www.garlic.com/~lynn/2007p.html#70 GETMAIN/FREEMAIN and virtual storage 
backing up
http://www.garlic.com/~lynn/2007p.html#72 A question for the Wheelers - 
Diagnose instruction
http://www.garlic.com/~lynn/2007q.html#8 GETMAIN/FREEMAIN and virtual storage 
backing up
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
http://www.garlic.com/~lynn/2007s.html#2 Real storage usage - a quick question
http://www.garlic.com/~lynn/2007s.html#9 Poster of computer hardware events?

there was other technology from the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

used in a lot of the transition to operating in a virtual memory environment
for 370. the science center had a number of efforts going on in the area
of system and performance monitoring, modeling, and simulation (some of
it being the runup to capacity planning). one of the projects involved
tracing instruction and data storage references and then doing
semi-automated program reorganization to optimize for operation in a
virtual memory environment. This was used for several yrs internally
before being turned into a product and released to customers as
VS/REPACK (two months before my vm370 resource manager was first
released).
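
not the actual vs/repack algorithm, but a minimal C sketch of the
co-location idea: count how often control passes between routines in a
reference trace, then greedily pack the most strongly connected routines
into the same virtual page (page capacity crudely modeled here as two
routines per page, trace and names all made up):

#include <stdio.h>

#define NR 5

int main(void)
{
    const char *name[NR] = { "A", "B", "C", "D", "E" };
    int trace[] = { 0,1,0,1,2,0,1,0,3,4,3,4,0,1 };   /* made-up trace */
    int ntr = sizeof trace / sizeof trace[0];
    int aff[NR][NR] = { 0 }, page[NR], cnt[NR];

    for (int i = 1; i < ntr; i++) {          /* transition counts */
        int a = trace[i - 1], b = trace[i];
        if (a != b) { aff[a][b]++; aff[b][a]++; }
    }
    for (int i = 0; i < NR; i++) { page[i] = i; cnt[i] = 1; }

    for (;;) {                               /* merge strongest pair */
        int best = 0, bi = -1, bj = -1;
        for (int i = 0; i < NR; i++)
            for (int j = 0; j < NR; j++)
                if (page[i] != page[j] && aff[i][j] > best
                    && cnt[page[i]] + cnt[page[j]] <= 2)  /* capacity */
                    { best = aff[i][j]; bi = i; bj = j; }
        if (bi < 0)
            break;
        int from = page[bj], to = page[bi];
        cnt[to] += cnt[from];
        cnt[from] = 0;
        for (int k = 0; k < NR; k++)
            if (page[k] == from) page[k] = to;
    }
    for (int k = 0; k < NR; k++)
        printf("routine %s -> page %d\n", name[k], page[k]);
    return 0;
}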

An early version of the technology was used to help in the morph of
apl\360 to cms\apl (originally on cp67/cms) ... which required
completely redoing the apl storage allocation and garbage collection
implementation for operation in a virtual memory environment.

vs/repack was also used by a number of product groups ... not only for
helping with the transition from real storage to a virtual memory
environment ... but also for things like application hot spot identification
(i.e. where a program is spending a lot of its time). For instance it was
used in STL by the IMS development group for extensive studies of IMS
operation and performance.

misc. recent posts mentioning VS/REPACK
http://www.garlic.com/~lynn/2007g.html#31 Wylbur and Paging
http://www.garlic.com/~lynn/2007m.html#55 Capacity and Relational Database
http://www.garlic.com/~lynn/2007o.html#53 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#57 ACP/TPF

another tool was a system performance analytical model implemented in
APL, which was eventually made available as a sales and marketing support
tool on HONE (the internal cms-based timesharing service)
http://www.garlic.com/~lynn/subtopic.html#hone

... the performance predictor.  branch people could input customer
configuration and workload details and ask what-if questions about
what would happen if there were configuration and/or workload changes.
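
the real performance predictor was a far more detailed APL model; just to
give the flavor of the what-if idea, a toy C version using a crude
M/M/1-style estimate (all parameters hypothetical, and it ignores the
queueing feedback on offered load):

#include <stdio.h>

/* crude response-time estimate: utilization from offered load, then an
 * M/M/1-style service-time expansion; returns <0 when saturated */
static double response(double users, double think_s, double svc_s)
{
    double util = users * svc_s / think_s;
    if (util >= 1.0)
        return -1.0;
    return svc_s / (1.0 - util);
}

int main(void)
{
    double think = 10.0;    /* secs think time per interaction */
    double svc = 0.05;      /* secs CPU service per interaction */
    for (int users = 50; users <= 200; users += 50) {
        double r1 = response(users, think, svc);
        double r2 = response(users, think, svc / 2);  /* what-if: 2x CPU */
        if (r1 < 0)
            printf("%3d users: saturated; with 2x CPU %.2fs\n", users, r2);
        else
            printf("%3d users: resp %.2fs; with 2x CPU %.2fs\n",
                   users, r1, r2);
    }
    return 0;
}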

misc. past posts mentioning performance predictor
http://www.garlic.com/~lynn/2001i.html#46 Withdrawal Announcement 901-218 - No 
More 'small machines'

Oracle Introduces Oracle VM As It Leaps Into Virtualization

2007-11-12 Thread Anne Lynn Wheeler

for something a little different ... also posted here
http://www.garlic.com/~lynn/2007s.html#26

latest in the new, 40+ yr old technology

Oracle Introduces Oracle VM As It Leaps Into Virtualization 
http://www.informationweek.com/news/showArticle.jhtml?articleID=202805289


couple items from above:

Oracle jumped into the virtualization market Monday, announcing Oracle
VM, or server virtualization software to run Oracle databases and
applications.

...

Oracle will supply preconfigured images -- or virtualized files that
combine the Oracle database with a preconfigured version of Linux -- for
ease of installation and deployment. The move is Oracle's way of picking
up on the use of virtualized appliances, software preconfigured with an
operating system to run in a virtual machine.

... snip ...

possibly another take on commoditization (leaving more on the table for
the product vendor?) mentioned here
http://www.garlic.com/~lynn/2007s.html#20 Ellison Looks Back As Oracle Turns 30

... effectively another kind of virtual appliance or what we started
out calling server virtual machine ... misc. past posts
http://www.garlic.com/~lynn/2007i.html#36 John W. Backus, 82, Fortran 
developer, dies
http://www.garlic.com/~lynn/2007k.html#26 user level TCP implementation
http://www.garlic.com/~lynn/2007k.html#48 John W. Backus, 82, Fortran 
developer, dies
http://www.garlic.com/~lynn/2007m.html#67 Operating systems are old and busted
http://www.garlic.com/~lynn/2007m.html#70 Is Parallel Programming Just Too Hard?
http://www.garlic.com/~lynn/2007o.html#3 Hypervisors May Replace Operating 
Systems As King Of The Data Center
http://www.garlic.com/~lynn/2007q.html#25 VMware: New King Of The Data Center?
http://www.garlic.com/~lynn/2007s.html#4 Why do we think virtualization is new?


Re: Combining VM list threads

2007-08-05 Thread Anne Lynn Wheeler

Gabe Goldberg wrote:


There's

SET SHARE RELATIVE

and

VM's 35 Birthday Celebration and 2007 Knights of VM 


and

SHARE: Final chairbears needed (ribbon wearer time!!)

in which phsiii said

So bring your family...I am!

I guess he's SET SHARE RELATIVE. So should anyone using family vacation as 
excuse to not be there to chair sessions, hmmm?



post w/mention of 35th anniv. of cp67 announce (houston, spring '68 share 
meeting)
http://www.garlic.com/~lynn/2003d.html#72

some people from CSC had come out the last week of jan68 to install cp67 at
the univ. where i was undergraduate. i was then invited to attend the
cp67 announce at the spring '68 houston share meeting. next spring
will be 40th.

and photo from vm370 30th b'day party at share 99
http://www.garlic.com/~lynn/LynnWheeler023.jpg


Re: z/VM usability

2007-05-07 Thread Anne Lynn Wheeler

Alan Altmark wrote:

Well, it's been nigh on 40 years that CMS has been around.  Seems like a
committment to me.  CMS is here to stay.  If all the people with z/OS
get z/VM and [re]discover CMS, who knows what might happen?  Never say
die!


re:
http://www.garlic.com/~lynn/2007j.html#41 z/VM usability

well, cms (as in cambridge monitor system) started on cp40 (cambridge had
gotten a 360/40 and did the hardware modifications to implement virtual
memory ... pending getting a 360/67) ...  cambridge then got the 360/67 and
morphed cp40 into cp67 ... so it has been 40yrs (in part, CMS work
could even be done on the real 360/40 before cp40 was operational)

from Melinda's history
http://www.princeton.edu/~melinda/

By September of 1965, file system commands and macros already looked
much like those we are familiar with today: ``RDBUF'', ``WRBUF'',
``FINIS'', ``STATE'', etc

... snip ...

cambridge installed cp67 out at lincoln labs in 1967 and then the last week
in jan68 came out to install cp67 at the univ where i was an undergraduate.
Note that in jan68, the cp67 people were still apprehensive about the CMS
filesystem ... with cp67 source, assembly, and build still being done on
os/360 (keeping the cp67 kernel build TXT files in a card tray; modify and
assemble a routine, punch a new TXT file, replace that file in the card
tray, and rebuild the kernel by doing an IPL of the real cards).

in the morph of cp67 to vm370 ... they changed the cms name to
conversational monitor system.

a major change in cms from cp67 to vm370 was a little re-arranging of the
cms kernel in anticipation of 370 (r/o) segment protection. However, in
doing the virtual memory hardware retrofit to the 370/165 ... they ran into
a problem with the schedule slipping. In order to regain six months in the
schedule for 370/165 virtual memory, they dropped r/o segment protect
and some number of other features from the original 370 virtual memory
architecture (and to have compatibility across the 370 product line
... the same features had to also be removed from other 370 models that
already had implemented the full 370 virtual memory architecture).  With
370 hardware r/o segment protect dropped ... vm370 had to revert to the
page protect hack used by cp67 that involved fiddling the 360 storage
protect keys.

Then during the future system period ... much of the corporation was
distracted and a lot of 370 product activity fell by the wayside.
Misc. past posts about future system:
http://www.garlic.com/~lynn/subtopic.html#futuresys

I had made some unflattering comments about the practicality of the future
system stuff and continued to do both cp67 and cms enhancements ...  and
then ported them from cp67 to vm370 ... some old email
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

after FS was canceled, there was a rush to get stuff back into the 370
product pipeline. Part of this was the reason that a small subset of the
virtual memory management enhancements ...  a lot of shared segment stuff
http://www.garlic.com/~lynn/subtopic.html#adcon
that had been integrated with the paged mapped filesystem stuff
http://www.garlic.com/~lynn/subtopic.html#mmap

was released as DCSS in vm370 release 3.

Canceling FS contributed to enabling me to also release the resource
manager (that included a lot of changes that were in cp67 that i had
done ...  which were dropped in the morph from cp67 to vm370)
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock

It was also in the aftermath of killing FS that POK convinced the
corporation to kill the vm370 product, shutdown the vm370 product group
and move all the people to POK to help accelerate the mvs/xa development
schedule (again attempting to make up lost time in 370 product pipeline
resulting from the FS distraction). Eventually Endicott was able to
salvage the vm370 product mission.


Does anyone know of a documented case of VM being penetrated by hackers?

2007-04-27 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.vmesa-l,alt.folklore.computers as well.


George Haddad wrote:
 The last major hole I can recall was in the late 80s or early 90s IIRC, 
 where it was discovered that when sending an SMSG to an SVM (I think it 
was RSCS) which included a CP command, the SVM would blindly execute 
 multiple CP commands if they were chained with the CP Newline character. If 
 the SVM had any special privs ...well you get the idea. It was patched 
 very quickly.

re:
http://www.garlic.com/~lynn/2007i.html#20 Does anyone know of a documented case 
of VM being penetrated by hackers?
http://www.garlic.com/~lynn/2007i.html#29 Does anyone know of a documented case 
of VM being penetrated by hackers?

should have been mid-70s ... not late 80s ... at least on
the internal network 
http://www.garlic.com/~lynn/subnetwork.html#internalnet

from long ago and far away ...

To: wheeler
Date: 80/12/16 10:52:11
 
I never came back last night, but I did speak to XX about that
file.  He points out that RSCS probably translates a pound sign to a
question mark for display (to eliminate problems with embedded
commands in messages, which caused some difficulty a few years ago).
So the file is probably ok; perhaps you can check with X and find
out.
 
By the way, what I did last night was to tune my oil burner.  I
don't know how you people out there heat (on the 3 days a year
when the temperature gets below 68), but if you have an oil burner
(even just for hot water), you ought to look into doing a tune-up
on it.  The lab here has a kit we can borrow for the purpose, and
in less than an hour I got the efficiency from 60% to about 74%.

... snip ...

and now for some networking thing completely different from the late
80s

To: wheeler
From: somebody in corporate hdqtrs
Date: 7 May 1987, 19:35:58 EDT
Subject: Large Files...
 
Lynn:
 
  We are getting requests to handle 500MB files.. EC releases, Chip
Designs, software, and documentation...  (one group has a 3BB file
requirement!!)..  Altho' the 500MB req't is coming within the next few
months...
 
Have you done any thinking during your HSDT work.. on what the limits
might be on large files.. compared to our networking software and
communication link capacity...???  I'm thinking that there must be a
limit... or a knee on the curve which says.. use a courier service..
rather than the network...  Have you been able to develop any
attributes, characteristics, etc.. on 'large files'...that could be
helpful to planning and operating with these large files...??
 
... snip ...

i.e. HSDT (high-speed data transport) was working on handling large
files ... recent post referencing helping contribute to bringing in
the RIOS chipset a year early
http://www.garlic.com/~lynn/2007f.html#73 Is computer history taught now?
http://www.garlic.com/~lynn/2007h.html#61 Fast and Safe C Strings: User 
friendly C macros to Declare and use C Strings

misc. posts mentioning HSDT activities ... dating back to
1980 time-frame
http://www.garlic.com/~lynn/subnetwork.html#hsdt

as well as some activity working with various people at
univ. and NSF ... old email:
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

and misc. postings mentioning BITNET and/or EARN
http://www.garlic.com/~lynn/subnetwork.html#bitnet


Re: Does anyone know of a documented case of VM being penetrated by hackers?

2007-04-26 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.vmesa-l,alt.folklore.computers as well.


re:
http://www.garlic.com/~lynn/2007i.html#20 Does anyone know of a documented case 
of VM being penetrated by hackers?

for a little topic drift ... a little about virtual machine assurance
in recent post to ibm-main
http://www.garlic.com/~lynn/2007i.html#26 Latest Principles of Operation


Re: Historical curiosity question

2007-03-17 Thread Anne Lynn Wheeler

Rob van der Heij wrote

The mini disk is an imperfect (but cheap) implementation of a low
level abstraction. The main defect is the number of cylinders, but
for many purposes the virtualization is good enough because the guest
can live with that.

But at considerable additional cost, VM could have been designed to
span a mini disk over multiple volumes. With such support, you could
give out more perfect 3390's (i.e. not being restricted by the actual
models on the hardware). And VM could have played tricks not to
allocate real disk space for unused tracks. This is exactly what was
done in the RAMAC Virtual Array.


re:
http://www.garlic.com/~lynn/2007f.html#20 Historical curiosity question

some long winded topic drift ...

as an undergraduate i had done a lot of modifications to the cp67 kernel
... a lot oriented towards significantly improving pathlength for
os360 guest virtual machines, new resource management algorithms, and
dynamic adaptive resource control. 


I had also looked at doing some stuff for CMS ... one of the things I
realized was that CMS was simulating synchronous disk transfers with the
overhead of the sio/lpsw-wait/interrupt paradigm. so i defined a new CCW
opcode that would do synchronous disk transfers ... and the virtual
machine would get back CC=1, csw stored, on the SIO operation ...
indicating the operation had already completed. 


I got somewhat slammed for this by the people at the science center as
having violated the 360 machine architecture and principles of
operation. Eventually it was explained that it would be possible to
use the 360 DIAGNOSE instruction to implement such things
... since the principles of operation defined the DIAGNOSE instruction
implementation as model dependent: define an abstract 360 virtual
machine model and the way that the DIAGNOSE instruction works for that
model. However, the pathlength improvement for the change was pretty
significant ... so the synchronous disk transfer operation was
eventually re-implemented with the DIAGNOSE instruction.
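
a sketch (modern C pseudo-model, obviously not 360 code) of why the
synchronous call was so much cheaper: the SIO paradigm costs the
hypervisor an SIO intercept, a wait-state intercept, and a reflected I/O
interrupt per transfer, where the DIAGNOSE-style call returns with the
transfer already complete:

#include <stdio.h>
#include <string.h>

static char disk[16][512];              /* toy 16-block minidisk */

/* SIO paradigm: three hypervisor round trips per transfer:
 * 1: SIO intercepted, channel program translated, i/o started
 * 2: guest LPSW wait-state intercepted
 * 3: virtual i/o interrupt reflected, guest redispatched */
static void sio_read(int blk, char *buf)
{
    memcpy(buf, disk[blk], 512);
}

/* DIAGNOSE paradigm: one call; "CC=1, CSW stored" means already done */
static int diag_read(int blk, char *buf)
{
    memcpy(buf, disk[blk], 512);
    return 1;
}

int main(void)
{
    char buf[512];
    strcpy(disk[3], "hello from block 3");
    sio_read(3, buf);
    printf("sio:  %s\n", buf);
    if (diag_read(3, buf) == 1)         /* no wait, no interrupt */
        printf("diag: %s\n", buf);
    return 0;
}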

Later when I joined the science center ... one of the things that i
did for cp67/cms (that never shipped in the product) was the cp67 and
cms changes to support page-mapped filesystem operation. This
was combined in a general set of stuff that I referred to as virtual
memory management. Later it was part of the stuff that was ported
to vm370 ... old communication with reference
http://www.garlic.com/~lynn/2006v.html#email731212

a small subset of the virtual memory management stuff made it out in
vm370 as DCSS (but not the page mapped filesystem stuff or a lot of
other stuff) ... recent post
http://www.garlic.com/~lynn/2007f.html#14 more shared segment archeology

Note that even with the DIAGNOSE operation there was still a lot of
overhead related to emulated channel programs that require real
addresses for execution (shadows of the virtual channel program
created with real addresses of the virtual pages which have been
pinned in real storage). This is not only true for virtual machine
emulation ... but anything that has applications building channel
programs in a virtual address space. For reference, posts about the
original prototype of OS/360-to-VS2 virtual memory operation, done by
hacking a copy of cp67's CCWTRANS into the side of MVT (to create
translated, shadow channel programs with real addresses and pinned
virtual pages)
http://www.garlic.com/~lynn/2007e.html#19 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems 
history
http://www.garlic.com/~lynn/2007e.html#46 FBA rant
http://www.garlic.com/~lynn/2007f.html#0 FBA rant
http://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems 
history

so the page mapped filesystem support (changes to cp67 to provide the
DIAGNOSE instruction API, and changes to the low-level CMS filesystem
function to use that API, pretty much leaving the high-level CMS
filesystem interfaces as-is) eliminated most of the rest of this
channel program translation overhead gorp. It also moved the CMS disk
paradigm even beyond the simplifications possible with FBA
disks.
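
a loose modern analogy only (POSIX mmap; not the CMS implementation,
which worked through the cp67/vm370 paging machinery): the page-mapped
approach replaces a per-request channel program with a mapping, and the
paging system then moves the data when it is referenced:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("demo.dat", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    /* explicit-i/o paradigm: a transfer request per access */
    if (write(fd, "explicit i/o", 12) != 12) { perror("write"); return 1; }

    /* page-mapped paradigm: map once, then ordinary loads/stores; the
     * transfers happen underneath, synchronously or asynchronously as
     * the system chooses (cf. fiddling the page invalid bits) */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    printf("mapped: %.12s\n", p);       /* the page fault does the read */
    memcpy(p + 100, "stored via the mapping", 22);
    munmap(p, 4096);
    close(fd);
    return 0;
}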

Having drastically simplified the disk paradigm interface ... the
implementation of a lot of other features became significantly easier.
For instance, it was possible to dynamically adjust how requested page
transfers were actually performed, doing either synchronous or even
asynchronous transfers (transparent to the synchronous CMS virtual
machine paradigm, by fiddling the page invalid bits).

The simple, original implementation just mapped a set of contiguous
page records to contiguous cylinders, leveraging existing
minidisk definitions. However, it also became extremely
straight-forward to do other types of enhancements (having removed the
binding of the cms virtual machine to explicit ckd disk hardware
characteristics), like having multiple sequentially chained blocks of
pages ... at 

Re: Historical curiosity question

2007-03-16 Thread Anne Lynn Wheeler

McKown, John wrote:

This is not important, but I just have to ask this. Does anybody know
why the original designers of VM did not do something for minidisks
akin to a OS/360 VTOC? Actually, it would be more akin to a partition
table on a PC disk. It just seems that it would be easier to maintain
if there was something on the physical disk which contained
information about the minidisks on it. Perhaps with information such
as: start cylinder, end cylinder, owning guest, read password, etc. CP
owned volumes have an allocation map; this seems to me to be an
extension of that concept.


CP67 had a global directory ... that was indexed and paged ... so it
didn't need an individual volume index.

it also avoided the horrendous overhead of the multi-track search that
os/360 used to search the volume VTOC on every open. lots of past
posts mentioning that the multi-track paradigm for VTOC & PDS directory
was an io/memory trade-off ... the os/360 target in the mid-60s was to
burn enormous i/o capacity to save having an in-memory index.
http://www.garlic.com/~lynn/subtopic.html#dasd

that resource trade-off had changed by at least the mid-70s ...  and
it wasn't ever true for the machine configurations that cp67 ran on.
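
some rough arithmetic on why the multi-track search was so horrible: the
search keeps the device, the control unit, and the channel all busy for a
full revolution per track searched. a quick C back-of-the-envelope using
3330-class numbers (3600rpm, 19 tracks/cylinder):

#include <stdio.h>

int main(void)
{
    double rev_ms = 60000.0 / 3600.0;   /* ms per revolution at 3600rpm */
    int tracks = 19;                    /* tracks per cylinder on a 3330 */
    printf("one revolution: %.1f ms\n", rev_ms);
    printf("full-cylinder search: %.0f ms with device, control unit,\n"
           "and channel all busy\n", rev_ms * tracks);
    /* vs. an in-memory index lookup measured in microseconds ... the
     * mid-60s trade-off of burning i/o to save real storage had
     * long since inverted */
    return 0;
}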

the other characteristic was that both cp67 and cms treated disks as
fixed-block architecture ... even if they were CKD ... CKD disks would
be formatted into fixed blocks ... and then treated as fixed-block
devices ... avoiding the horrible i/o performance penalty of ever
doing multi-track searches for looking up location and/or other
information on disk.

recent thread in bit.listserv.ibm-main
http://www.garlic.com/~lynn/2007e.html#35 FBA rant
http://www.garlic.com/~lynn/2007e.html#38 FBA rant
http://www.garlic.com/~lynn/2007e.html#39 FBA rant
http://www.garlic.com/~lynn/2007e.html#40 FBA rant
http://www.garlic.com/~lynn/2007e.html#42 FBA rant
http://www.garlic.com/~lynn/2007e.html#43 FBA rant
http://www.garlic.com/~lynn/2007e.html#46 FBA rant
http://www.garlic.com/~lynn/2007e.html#51 FBA rant
http://www.garlic.com/~lynn/2007e.html#59 FBA rant
http://www.garlic.com/~lynn/2007e.html#60 FBA rant
http://www.garlic.com/~lynn/2007e.html#63 FBA rant
http://www.garlic.com/~lynn/2007e.html#64 FBA rant
http://www.garlic.com/~lynn/2007f.html#0 FBA rant
http://www.garlic.com/~lynn/2007f.html#2 FBA rant
http://www.garlic.com/~lynn/2007f.html#3 FBA rant
http://www.garlic.com/~lynn/2007f.html#5 FBA rant
http://www.garlic.com/~lynn/2007f.html#12 FBA rant

the one possible exception was the loosely-coupled single-system-image
support done for the HONE system. HONE mini-disk volumes had an in-use
bitmap directory on each volume ... that was used to manage LINK
consistency across all machines in the cluster. it basically used
a channel program with a search operation to implement an i/o logical 
equivalent of the atomic compare&swap instruction ... avoiding having 
to do reserve/release with intervening i/o operations. I have some 
recollection of talking to the JES2 people about them trying a similar 
strategy for multi-system JES2 spool allocation. post from above 
mentioning the HONE compare&swap channel program for multi-system 
cluster operation

http://www.garlic.com/~lynn/2007e.html#38 FBA rant
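
sketch of the semantics only (in C, not as a channel program): the trick
chained a search-equal CCW to a write, so the write happened only if the
on-disk record still matched what had been read ... i.e. the same
compare-and-swap pattern sketched below, here used to claim a free slot
in a shared allocation map without reserve/release:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define SLOTS 8

static char shared_map[SLOTS];  /* stands in for the on-volume record */

/* one "channel program": search-equal on expect, chained write of
 * update; the write only happens if the search matched */
static bool io_compare_and_swap(const char *expect, const char *update)
{
    if (memcmp(shared_map, expect, SLOTS) != 0)
        return false;
    memcpy(shared_map, update, SLOTS);
    return true;
}

static int claim_free_slot(void)
{
    for (;;) {                          /* retry if another system won */
        char cur[SLOTS], upd[SLOTS];
        memcpy(cur, shared_map, SLOTS); /* read the current record */
        int i = 0;
        while (i < SLOTS && cur[i])
            i++;                        /* find a free slot */
        if (i == SLOTS)
            return -1;                  /* none free */
        memcpy(upd, cur, SLOTS);
        upd[i] = 1;                     /* mark it in-use */
        if (io_compare_and_swap(cur, upd))
            return i;                   /* the swap stuck: slot is ours */
    }
}

int main(void)
{
    printf("claimed slot %d\n", claim_free_slot());
    printf("claimed slot %d\n", claim_free_slot());
    return 0;
}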

HONE was vm-based online interactive for world-wide sales, marketing,
and field people. It originally started in the early 70s with a clone
of the science center's cp67 system
http://www.garlic.com/~lynn/subtopic.html#545tech

and eventually propagated to several regional US datacenters ...  and
also started to propagate overseas. I provided highly modified cp67
and then later vm370 systems for HONE operation for something like 15
yrs. I also handled some of the overseas clones ... like when EMEA
hdqtrs moved from the states to just outside paris in the early 70s.
In the mid-70s, the US HONE datacenters were consolidated in northern
cal. ... and single-system-image software support quickly emerged
... running multiple attached processors in cluster operation.  HONE
applications were heavily APL ... so it was quite compute intensive.
With four-channel controllers and string-switch ... you could get
eight system paths to every disk. Going with attached processors
... effectively two processors made use of a single set of channels
... so you could get 16 processors in single-system-image ... with
load-balancing and failure-fallover-recovery.

Later in the early 80s, the northern cal. HONE datacenter was
replicated first in Dallas and then a third center in Boulder ... for
triple redundancy, load-balancing and fall-over (in part concern about
natural disasters like earthquakes). 


lots of past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone

At one point in SJR, after the 370/195 machine ... recent reference
http://www.garlic.com/~lynn/2007f.html#10 Beyond multicore
http://www.garlic.com/~lynn/2007f.html#11 Is computer history taught now?
http://www.garlic.com/~lynn/2007f.html#12 FBA rant

was replaced with an mvs/168 system ... and vm was running on 

more shared segment archeology

2007-03-12 Thread Anne Lynn Wheeler

One of the reasons the vm development group picked up so much from the
science center
http://www.garlic.com/~lynn/subtopic.html#545tech

to ship in the (vm370 product) release 3/4 time-frame was that a lot of
the resources had been diverted to FS ... similar to the reference in
this post
http://www.garlic.com/~lynn/2007f.html#10 Beyond multicore
in this old email
http://www.garlic.com/~lynn/2007f.html#email800117

and some followup to the above
http://www.garlic.com/~lynn/2007f.html#11 Is computer history taught now?
http://www.garlic.com/~lynn/2007f.html#12 FBA rant

and lots of past posts mentioning future system project
http://www.garlic.com/~lynn/subtopic.html#futuresys

and then there was a relatively short window between the demise of FS
(and return of the diverted resources) and when POK convinced
corporate to completely shutdown VM and move all the people to POK to
support MVS/XA development ... as part of trying to get MVS/XA out the
door (everybody was scrambling to make up for lost time because of
FS). The reference in this post to POK convincing corporate about VM
shutdown and all the people diverted to MVS/XA (and Endicott salvaging
some of the VM mission) ... glosses over the period with lots of
resources diverted to work on FS
http://www.garlic.com/~lynn/2007f.html#7 IBM S/360 series operating systems 
history

In addition to the development group having to play catch-up (by picking
up stuff from the science center for product ship, because of the period
when a lot of resources were diverted to FS), there was also some
amount of stuff that had been dropped in the morph from cp67 to
vm370. I continued to work on cp67 during this period and then when
the science center got a 370/155-II ... ported a bunch of stuff to
vm370. Old communication from 1973 about porting/enhancing a bunch of
science center stuff from cp67 to vm370
http://www.garlic.com/~lynn/2006v.html#email731212 
in this post 
http://www.garlic.com/~lynn/2006v.html#36 Why these original FORTRAN quirks?


and related email from 1975 ... creating an enhanced vm rel2 system
for shipment to internal accounts:
http://www.garlic.com/~lynn/2006w.html#email750102
in this post
http://www.garlic.com/~lynn/2006w.html#7 Why these original FORTRAN quirks?
and
http://www.garlic.com/~lynn/2006w.html#email750430
in this post
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?

One of the items was a whole restructuring for what I called virtual
memory management ... which included a bunch of bells and whistles
related to virtual memory segments as well as being integrated with
page mapped filesystem changes for CMS.

Much of the internal restructuring for handling virtual memory
segments was picked up as part of release 3 DCSS ... however only a
small subset of the function that utilized that restructuring was
actually shipped as part of release 3 DCSS.

I had also gotten roped into doing a lot of the ECPS work for Endicott,
old reference here
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

and at the same time various SMP work ... old VAMPS email from 1975
http://www.garlic.com/~lynn/2006w.html#email750827
in this post
http://www.garlic.com/~lynn/2006w.html#10 long ago and far away, vm370 from 
early/mid 70s

lots of other posts mentioning VAMPS (5-way SMP project)
http://www.garlic.com/~lynn/subtopic.html#bounce

and lots of posts making general mention of SMP and/or the compare&swap instruction
http://www.garlic.com/~lynn/subtopic.html#smp

Another shared segment feature was a line item called DWSS that was
part of the system/r technology transfer from SJR to Endicott for the
SQL/DS product ... which allowed for shared segments that were writable
(i.e.  not r/o protected). A big issue with DWSS was what to do about
ECPS microcode already in the field that supported r/o protection
games for all segments identified as shared.

This was another of the downstream fall-outs of letting the 165 hardware
engineers gain six months in the virtual memory hardware change
schedule for the 165-II ... by letting them drop various features from the
370 virtual memory architecture. recent references
http://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems 
history

One of the features dropped was the hardware segment table
protection bit ... which allowed specification of hardware r/o
protection for virtual memory segments on a virtual address space
basis (i.e.  some address spaces could be allowed r/w segment storage
operation while other address spaces were not allowed to change the
same shared segment). This also caused a big retrofit hit to vm370
since cp&cms had already been re-organized to take advantage of the
feature ... when it was dropped ... a real kludge was created ... also
referenced here
http://www.garlic.com/~lynn/2007d.html#32 Running OS/390 on z9 BC
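
a toy C model of the distinction (hypothetical structures, just to show
the point): with a protect bit in each address space's segment table
entry, the same shared real segment can be r/w in one virtual address
space and r/o in another; a 360 storage key is a property of the real
page itself, so it cannot make that per-address-space distinction ...
hence the key-fiddling kludge:

#include <stdbool.h>
#include <stdio.h>

/* per-address-space segment table entry: points at a (shared) real
 * segment and carries its own protect bit */
struct segent { int real_seg; bool protect; };

static const char *try_store(const struct segent *st, int seg)
{
    return st[seg].protect ? "protection exception" : "store ok";
}

int main(void)
{
    /* two virtual address spaces sharing real segment 7 */
    struct segent space_a[1] = { { 7, false } };   /* r/w view */
    struct segent space_b[1] = { { 7, true  } };   /* r/o view */
    printf("space A store into shared segment: %s\n",
           try_store(space_a, 0));
    printf("space B store into shared segment: %s\n",
           try_store(space_b, 0));
    return 0;
}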

effectively the original basis 

I/O in Emulated Mainframes

2007-03-02 Thread Anne Lynn Wheeler

oft repeated story about mainframe emulation and I/O with regard to 370/158 and 
integrated channels; most recent telling
http://www.garlic.com/~lynn/2007d.html#62 Cycles per ASM instruction

basically the 158 microcode engine had both microcode for emulating 370 and
microcode for emulating channels. in the transition to the 303x machines,
the (158) integrated channel microcode was split off into a dedicated box
called the channel director.

A 3031 was basically two 158 microcode engines: one engine dedicated to
running the integrated channel microcode and the other dedicated to running
the 370 emulation microcode (which could be considered a two-processor SMP
... which would make a 3031 SMP actually a four-processor system, since
each emulated 370 engine had its corresponding channel director).

A 3032 was a repackaged 370/168 with one or more (158 microcode engine
integrated channel) channel directors.

A 3033 was the 168 wiring diagram mapped to faster chip technology, along
with one or more (158 microcode engine integrated channel) channel
directors ... i.e. the 158 integrated channels supported six channels, so
to get a 16-channel configuration you needed three channel directors. A
two-processor 3033 SMP was thus typically an eight-processor system: two
3033 processors, each (typically) with three channel directors.


note that splitting off the integrated channel microcode into a dedicated
processor made the 3031 benchmarks better than the 370/158 (even tho the
microprocessor engines were the same) ... with the 3031 benchmarking almost
as fast as a 4341. the above URL reference also contains results of the
RAIN benchmark on the 158, 3031, and an early 4341 engineering machine
(which ran about 10-15 percent slower than the production machines shipped
to customers).

similarly, the 370 115/125 had a memory bus that provided nine slots for up
to nine processors. A 115 had a microcode engine running dedicated 370
emulation microcode ... and up to eight other (identical) processors running
other microcode loads (communication controller microcode load, disk
controller microcode load, etc). A 125 was identical to the 115 except that
the processor engine running the 370 emulation microcode was 50 percent
faster than the other processor engines. recent posts with discussion of
the 370 115/125
http://www.garlic.com/~lynn/2007d.html#71 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007d.html#72 IBM S/360 series operating systems 
history

lots of past posts referring to the 303x channel director being a dedicated
158 microcode engine with the integrated channel microcode from the 370/158
http://www.garlic.com/~lynn/97.html#20 Why Mainframes?
http://www.garlic.com/~lynn/98.html#23 Fear of Multiprocessing?
http://www.garlic.com/~lynn/99.html#7 IBM S/360
http://www.garlic.com/~lynn/99.html#176 S/360 history
http://www.garlic.com/~lynn/99.html#187 Merced Processor Support at it again
http://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
http://www.garlic.com/~lynn/2000c.html#69 Does the word mainframe still have 
a meaning?
http://www.garlic.com/~lynn/2000d.html#7 4341 was Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000d.html#11 4341 was Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000d.html#12 4341 was Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000d.html#21 S/360 development burnout?
http://www.garlic.com/~lynn/2000g.html#11 360/370 instruction cycle time
http://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off 
topic)
http://www.garlic.com/~lynn/2001c.html#3 Z/90, S/390, 370/ESA (slightly off 
topic)
http://www.garlic.com/~lynn/2001c.html#6 OS/360 (was LINUS for S/390)
http://www.garlic.com/~lynn/2001i.html#34 IBM OS Timeline?
http://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
http://www.garlic.com/~lynn/2001j.html#14 Parity - why even or odd (was Re: 
Load Locked (was: IA64 running out of steam))
http://www.garlic.com/~lynn/2001l.html#24 mainframe question
http://www.garlic.com/~lynn/2001l.html#32 mainframe question
http://www.garlic.com/~lynn/2002.html#36 a.f.c history checkup... (was What 
specifications will the standard year 2001 PC have?)
http://www.garlic.com/~lynn/2002.html#48 Microcode?
http://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
http://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
http://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine 
was it?
http://www.garlic.com/~lynn/2002i.html#21 CDC6600 - just how powerful a machine 
was it?
http://www.garlic.com/~lynn/2002i.html#23 CDC6600 - just how powerful a machine 
was it?
http://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
http://www.garlic.com/~lynn/2002p.html#59 AMP  vs  SMP
http://www.garlic.com/~lynn/2003.html#39 Flex Question
http://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, 
and other rambling folklore
http://www.garlic.com/~lynn/2003g.html#32 One Processor is bad?
http://www.garlic.com/~lynn/2003m.html#31 SR 

Re: The Future of CPUs: What's After Multi-Core?

2006-12-20 Thread Anne Lynn Wheeler

Brian Inglis [EMAIL PROTECTED] writes:

IME the IBM VM guys had very good ideas for interaction using the
corporate products and facilities, even though it has never been
funded adequately and often nearly terminated. They were much better
than the batch guys at letting the users fully use all the machines'
capabilities, providing nearly 100% capacity and keeping terminal
response averages close to 0.1s; the batch guys were better at using
up all the machines' capabilities, and the users considered themselves
lucky to have 70% capacity available and get 1s terminal response
times. The really cool thing about VM systems is that you can do
anything with the software under timesharing: develop a new OS, test a
changed OS, trace the execution of an OS. Once found a bug crashing a
DB product only after tracing about a million instructions, a few
times over to get it exactly right, with very selective output,
sufficient to pinpoint the faulty code: try doing that on a real front
panel or console!


for some total drift ... a different reference to tracing in support of
semi-automated program reorganization to optimize execution for a virtual
memory environment

http://www.garlic.com/~lynn/2006x.html#1 IBM sues maker of Intel-based
Mainframe clones

as an undergraduate in the 60s, i had done dynamic adaptive resource
management ... it was sometimes referred to as fair share scheduling
since the default resource management policy was fair share. this
was shipped as part of cp67 for the 360/67.

in the morph from cp67 to vm370 ... much of it was dropped. charlie's cp67
multiprocessor support also didn't make it into vm370.

i had done a lot of pathlength optimization and fastpath stuff for cp67
which was also dropped in the morph to vm370 ... i helped put a small
amount of that back into vm370 release 1 plc9 ... a couple past posts
mentioning some of the cp67 pathlength stuff
http://www.garlic.com/~lynn/93.html#1 360/67, was Re: IBM's Project F/S ?
http://www.garlic.com/~lynn/94.html#18 CP/67  OS MFT14
http://www.garlic.com/~lynn/94.html#20 CP/67  OS MFT14

i then got to work on porting a bunch of stuff that i had done for cp67 to
vm370 ... some recent posts (includes old email from the early and mid 70s)
http://www.garlic.com/~lynn/2006v.html#36 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#7 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#10 long ago and far away, vm370 from 
early/mid 70s

and of course mentioned in the above referenced email ... a small amount
of the virtual memory management stuff showed up in vm370 release 3 as
DCSS.

there was eventually a decision to release some amount of the features
as the vm370 resource manager.  some collected posts on scheduling
http://www.garlic.com/~lynn/subtopic.html#fairshare
and other posts on page management
http://www.garlic.com/~lynn/subtopic.html#wsclock
and for something really different, old communication (from 1982)
about work i had done as an undergraduate in the 60s (also in this
thread):
http://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's after 
Multi-Core?

in any case, some resource manager issues/features

* by continually doing real time dynamic monitoring and adjusting
operations, I was able to operate at much higher resource utilization
and still provide a decent level of service. prior to the resource manager
ship, somebody from corporate stated that the current state of the art
for resource managers was a large number of static tuning parameters
and that the resource manager couldn't be considered really advanced
unless it had some number of static tuning parameters (an installation
system tuning expert would look at daily, weekly and monthly activity
... and would select some set of static tuning values that seemed to
be suited to that installation). it did absolutely no good explaining
that real-time dynamic monitoring and adapting was much more advanced
than static tuning parameters. so, in order to get final corporate
release approval ... i had to implement some number of static tuning
parameters. I fully documented the implementation and formulas and the
source code was readily available. Nobody seemed to realize that it
was a joke ... somewhat from operations research ... it had to do
with degrees of freedom ... aka the static tuning parameters had
far fewer degrees of freedom than the dynamic adaptive features.
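
a minimal sketch (python here, purely for illustration ... the actual
resource manager was 370 assembler) of the basic feedback idea: the
dispatch priority is recomputed from measured consumption against a
dynamically computed fair-share target, rather than read from static
knobs:

  # recompute a task's dispatch priority from observed behavior;
  # nothing here is a static tuning parameter -- the target moves
  # as the load moves.
  def dispatch_priority(task_cpu_used, total_cpu, active_tasks):
      """lower is better; a task over its fair share gets deferred
      in proportion to how far over it is."""
      fair_share = total_cpu / max(active_tasks, 1)  # dynamic target
      return task_cpu_used / fair_share              # >1.0 means over target

  # example: 4 active tasks sharing 1.0 sec of cpu in the last interval
  print(dispatch_priority(0.5, 1.0, 4))   # 2.0 -- used twice its share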

i had always thot that real-time dynamic adaptive control was preferable
to static parameters ... but it took another couple decades for a lot of
the rest of the operating systems to catch up. it is now fairly evident ... 
even showing up in all sorts of embedded processors for real-time control

and optimization. for some slight boyd dynamic adaptive drift
http://www.garlic.com/~lynn/94.html#8 scheduling  dynamic adaptive
and collected posts mentioning boyd

intersection between autolog command and cmsback (more history)

2006-12-09 Thread Anne Lynn Wheeler

Another CMSBACK status, a year later than the email referenced here
http://www.garlic.com/~lynn/2006t.html#24 CMSBACK

i.e. i had done the original CMSBACK implementation and deployment in
the late 70s on sjr systems and hone ... the following refers to
the additional systems where it was installed:

To: distribution 
Date: 12/11/80  15:21:31


CMSBACK is now installed in: Los Gatos (2 CPUs), GPD SNJ (3), GPD STL
(1), Austin (1). Note: Austin has (re)installed CMSBACK in other
locations.  It is also being installed in Yorktown.

... snip ...

other recent CMSBACK references:
http://www.garlic.com/~lynn/2006t.html#20 Why these original FORTRAN quirks?; 
Now : Programming practices
http://www.garlic.com/~lynn/2006t.html#33 threads versus task
http://www.garlic.com/~lynn/2006u.html#19 Why so little parallelism?
http://www.garlic.com/~lynn/2006u.html#26 Assembler question
http://www.garlic.com/~lynn/2006u.html#30 Why so little parallelism?
http://www.garlic.com/~lynn/2006v.html#24 Z/Os Storage Mgmt products

all of this preceding the work mentioned in Melinda's history paper
by a couple years (even preceding the hiring of the people that
Melinda's history paper mentions as having done the original work).

now as to automated operator &/or automated operations ...
the *autolog* command ... mentioned here
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?


helped significantly with the automation of service virtual machines
... some recent references here:
http://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?
http://www.garlic.com/~lynn/2006t.html#45 To RISC or not to RISC
http://www.garlic.com/~lynn/2006t.html#46 To RISC or not to RISC
http://www.garlic.com/~lynn/2006v.html#22 vmshare

CMSBACK would be one example of service virtual machine ... another
would be VNET ... another would be the facility developed for
automated benchmarking.

I had originally created the *autolog* command (and the automated
flavor that was done at kernel boot) in support of automated
benchmarking
http://www.garlic.com/~lynn/subtopic.html#bench

aka, i needed to automatically (and unattended) stop a current benchmark,
generate a new kernel with various modifications, reboot to the new kernel,
and start the next set of benchmarks. this could be repeated a couple
of times an hour for extended periods (say 6-10 straight 8hr shifts).

now some rudimentary stuff could be done for automated operations using a
combination of service virtual machines (the *autolog* command being one
of the enablers) and the special message facility ... which allowed an
application to capture all text (messages, cp command responses, etc)
that normally would be written to the terminal/screen. a couple recent
posts mentioning spm
http://www.garlic.com/~lynn/2006k.html#51 other cp/cms history
http://www.garlic.com/~lynn/2006t.html#47 To RISC or not to RISC
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?

another service virtual machine would be CJNTEL using special
message ... mentioned here
http://www.garlic.com/~lynn/2006w.html#12 more secure communication over the 
network

some drift ... the above post also includes a mention from 1981 of a public
key infrastructure kind of operation.


however, simple text/message processing lacked sophisticated parsing ...
like that found in parasite/story ... misc. references
http://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
http://www.garlic.com/~lynn/2003i.html#73 Computer resources, past, present, 
and future
http://www.garlic.com/~lynn/2003j.html#24 Red Phosphor Terminal?
http://www.garlic.com/~lynn/2004e.html#14 were dumb terminals actually so 
dumb???
http://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel 
x86 design
http://www.garlic.com/~lynn/2006.html#3 PVM protocol documentation found
http://www.garlic.com/~lynn/2006c.html#14 Program execution speed
http://www.garlic.com/~lynn/2006f.html#37 Over my head in a JES exit
http://www.garlic.com/~lynn/2006m.html#35 Draft Command Script Processing Manual
http://www.garlic.com/~lynn/2006n.html#23 sorting was: The System/360 Model 20 
Wasn't As Bad As All That
http://www.garlic.com/~lynn/2006p.html#31 25th Anniversary of the Personal 
Computer

or later hllapi-like implementations. also missing were sophisticated
rule infrastructures (allowing specification of what should be done
in different kinds of circumstances).

for other references, part of an old SPMS application document 

SPMS: CMS/SPM Interface Program  1

1.0  INTRODUCTION

SPMS is a transient CMS command that can be called by an
EXEC or by a program running in a virtual machine. It
enables the user to use the CP Special Message Facility to
intercept messages from CP or from other users and process
them within a CMS program or EXEC. A CP command may be
passed to SPMS to cause the response to that command to be
returned to the caller, or SPMS can be used

Re: long ago and far away, vm370 from early/mid 70s

2006-12-08 Thread Anne Lynn Wheeler

ref:
http://www.garlic.com/~lynn/2006w.html#10 long ago and far away, vm370 from 
early/mid 70s

while only a small subset of the virtual memory management stuff was
released as DCSS in vm370 ... and none of the cms paged mapped filesystem
stuff ... a little more of the virtual memory management stuff was used for
the original relational/sql implementation in system/r ... which provided
r/w (unprotected) shared segments between semi-privileged processes
(different system/r database tasks running in different virtual address
spaces).

misc. past posts mentioning system/r work
http://www.garlic.com/~lynn/subtopic.html#systemr

It was also part of the technology transfer of system/r to endicott for
what was to become sql/ds. This additional feature took on the name DWSS
(dynamic writeable shared segments) ... and there were lots of issues on
whether or not it impacted ECPS and microcode on existing customer
machines.

recent posts mentioning system/r and DWSS
http://www.garlic.com/~lynn/2006t.html#16 Is the teaching of non-reentrant 
HLASM coding practices ever defensible?
http://www.garlic.com/~lynn/2006t.html#39 Why these original FORTRAN quirks?

recent posts on early virtual memory management stuff (a small subset which was 
going to be released as DCSS in vm370 release 3)
http://www.garlic.com/~lynn/2006v.html#36 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#7 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#9 dcss and page mapped file system

i also worked on a semantic network related activity about the same time as the 
system/r stuff
which used more of the virtual memory management and page mapped technology ... 
recent reference
http://www.garlic.com/~lynn/2006v.html#48 Why so little parallelism?


more secure communication over the network

2006-12-08 Thread Anne Lynn Wheeler

In the following, CJNTEL was an online corporate directory that I had
initially put up on SJRLVM1 that could be queried by anybody on the
internal network.

The following has a suggestion for registering a person's public key
with CJNTEL and making (effectively publishing) it available for
retrieval by anybody with access to the internal network.

the following is from some 15 years or so before being brought in to do
some consulting with a small client/server startup that wanted to do
payment transactions on their server

http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

To: wheeler
Date: 05/15/81 13:41:12
re: more secure communication over the network

One of the obvious concerns that will surely surface from the CJN work
will be the problem of confidential information being exchanged over
the network.

I have a package from ** called CRYPT that may be a solution. The
package implements a public key encryption system proposed by Diffie
and Hellman (see recent vm newsletter).  The problem we have with
using CIPHER is that we must know an agreed upon key and we have to
exchange the key in a secure manner prior to communication.

The public key system works as follows: I publish a key which anyone
can look up. They use that key to CRYPT the file. That key can only
lock the safe. In order to DECRYPT the file (unlock the safe) I
have a private key which no-one knows. Only the private key can unlock
the safe.

As an implementation I suggest we update our CJNTEL entry to include a
public key for each of us.  The package includes a procedure for
generating keys. In this way I can look up your key in CJNTEL and send
you ENCRYPTED confidential data.

Cheap and simple.

... snip ...
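
the workflow in the 1981 note maps directly onto any modern public key
library; a minimal sketch (python, using the pypi "cryptography"
package ... purely illustrative, not the CRYPT package from the email):

  # generate a keypair; the public half is what would go in the
  # CJNTEL-style directory for anybody to look up.
  from cryptography.hazmat.primitives.asymmetric import rsa, padding
  from cryptography.hazmat.primitives import hashes

  oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)

  private_key = rsa.generate_private_key(public_exponent=65537,
                                         key_size=2048)
  public_key = private_key.public_key()    # published for everyone

  # anybody with the published key can "lock the safe" ...
  ciphertext = public_key.encrypt(b"confidential data", oaep)
  # ... but only the holder of the private key can unlock it
  assert private_key.decrypt(ciphertext, oaep) == b"confidential data"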

effectively certificateless public key operation ... misc. past posts
mentioning certificateless public key operation
http://www.garlic.com/~lynn/subpubkey.html#certless

and somewhat similar to the discussion about publishing public keys in
the domain name infrastructure ... a few posts this year discussing
the subject:
http://www.garlic.com/~lynn/2006b.html#37 X.509 and ssh
http://www.garlic.com/~lynn/2006c.html#10 X.509 and ssh
http://www.garlic.com/~lynn/2006c.html#34 X.509 and ssh
http://www.garlic.com/~lynn/2006c.html#35 X.509 and ssh
http://www.garlic.com/~lynn/2006c.html#38 X.509 and ssh
http://www.garlic.com/~lynn/2006d.html#29 Caller ID spoofing
http://www.garlic.com/~lynn/2006f.html#16 trusted repositories and trusted 
transactions
http://www.garlic.com/~lynn/2006f.html#32 X.509 and ssh
http://www.garlic.com/~lynn/2006f.html#33 X.509 and ssh
http://www.garlic.com/~lynn/2006f.html#34 X.509 and ssh
http://www.garlic.com/~lynn/2006h.html#27 confidence in CA
http://www.garlic.com/~lynn/2006p.html#7 SSL, Apache 2 and RSA key sizes
http://www.garlic.com/~lynn/2006t.html#8 Root CA CRLs
http://www.garlic.com/~lynn/2006v.html#49 Patent buster for a method that 
increases password security

as well as the whole account authority digital signature stuff
http://www.garlic.com/~lynn/x959.html#aads

... CJNTEL is different than the internal online telephone book
recently mentioned here
http://www.garlic.com/~lynn/2006v.html#32 Effi[ci]ency of branch table vs 
individual compare  branch

the internal online telephone book operation captured the original source
from various corporate locations (the files used to generate the printed
copy), converted the source to the desired format and made the results
available for distribution. these normally were loaded on local systems
that allowed users to do online lookups on their local machine.

CJNTEL could be accessed remotely over the internal network using
special message ... misc. past posts about the internal network
... which was larger than the arpanet/internet from just about the
beginning until sometime mid-85.
http://www.garlic.com/~lynn/subnetwork.html#internalnet

note that the above reference is in addition to the requirement that
all links leaving corporate facilities required link encryptors ...
at one point there was the claim that the internal network had
more than half of all link encryptors in the world.

misc. recent posts mentioning special message
http://www.garlic.com/~lynn/2006k.html#51 other cp/cms history
http://www.garlic.com/~lynn/2006t.html#47 To RISC or not to RISC
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?

We did do a modification to CJNTEL so that in addition to doing
various operations on its corporate directory, it was also possible
(for a remote network user) to request CJNTEL to execute the telephone
directory command on SJRLVM1 ... returning the results over the
network.

for some topic drift ... some references to the hsdt (high speed data transport)
project
http://www.garlic.com/~lynn/subnetwork.html#hsdt

and some recent posts about various interactions with NSF:
http://www.garlic.com/~lynn/2006s.html#50 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006t.html#6 Ranking of non-IBM mainframe 

long ago and far away, vm370 from early/mid 70s

2006-12-07 Thread Anne Lynn Wheeler

Concurrent with the benchmarking, shared segment, page mapped
filesystem, resource manager, and other misc. stuff referred
to in these posts
http://www.garlic.com/~lynn/2006v.html#36 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#7 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#9 dcss and page mapped filesystem

I was also at the same time doing a lot of the work on VIRGIL/TULLY
(138/148) and ECPS
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

and at the same time doing system and microcode design for VAMPS,
a five-way multiprocessor effort ... collected past posts mentioning VAMPS
http://www.garlic.com/~lynn/subtopic.html#bounce

For some reason, the product people responsible for both VIRGIL/TULLY
and VAMPS somewhat viewed the other as competing products; even though
i was doing much of the design for both products. There were some
corporate product escalation meetings where I was expected to sit on
both sides of the table simultaneously and argue it out with myself.
VIRGIL/TULLY shipped, but VAMPS was canceled.


Memo to: Endicott
cc: wheeler
Date: August 27, 1975
Subject: Simplifying Technical Considerations for VAMPS
References: 1) Technical Review of VAMPS Proposal
  by KK and W dated 8/5/75
   2) Update of above dated 8/21/75

As you pointed out last week during the Boeblingen Technical Review
sessions, the VAMPS schedule is a critical PSE factor.

During these sessions it became evident that the technical
considerations of the original VAMPS proposal had not been fully
adjusted to account for a marketing strategy similar to that for the
VIRGIL/TULLY VM proposal.

As described in the Attachment, this revised marketing strategy for
VAMPS provides a number of technical simplifications.  These
simplifications are being proposed for resource and schedule reasons
and not for reasons of technical feasibility.

While the possible tradeoffs that these simplifications could allow in
the major redesign MSC and dispatcher areas are not clear, the
significant reduction in the total number of changes should not only
alleviate the resources required, but also possibly the schedule due
to the reduction in dependencies and testing.

If the VAMPS forecast look promising early in September, a more detail
technical design review of the MSC and dispatcher areas should be
immediately arranged in Boeblingen after which more accurate resource
and schedule estimates can be derived.

... snip ...


CMSBACK

2006-10-28 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to alt.folklore.computers,bit.listserv.vmesa-l as well.


Anne & Lynn Wheeler [EMAIL PROTECTED] writes:
 search engine even turns up one of my old posts mentioning VOL1 and
 HDR1:
 http://www.garlic.com/~lynn/2004q.html#20 Systems software versus 
 applications software definitions

 which discusses an old backup/archive system that i had written for
 internal use ... which then went thru several iterations and
 eventually released as workstation datasave facility, morphed into
 adsm and is now known as tsm
 http://www.garlic.com/~lynn/subtopic.html#backup

re:
http://www.garlic.com/~lynn/2006t.html#20 Why these original FORTRAN quirks?; 
Now : Programming practices

old email from the other person working on CMSBACK version 2. this is
on enhancing the pattern matching capability in the user interface
file retrieval process (from the backup/archive repository).

as noted in the above reference, Melinda's history starts with what
would be CMSBACK version 3 (or possibly 4 ... depending on how you
classify all the work done between the initial CMSBACK deployment and
the assignment of the people mentioned in Melinda's history).

To: wheeler
Date: 10/25/79  02:11:58

I had to make one change to OSPATM to make it work. The macro SREGS is
a bit too much for me tonite so I replaced the one to set up
addressability to the work area using R5 with the L R5,8(,R1) and
USING PATSPC,R5. AND..  it works like a super star Try issuing
the CMSBACK exec, ask for a report, and enter whatever patterns you
like.. Disk load the returned RPT file and see what you get.. I've
been playing with it now for a while and it really works great. (I
will make the matching available on date and time tommorrow..  why
stop with just the filename and filetype).

... snip ...

for the original low-level CMSBACK interface to tape, I used a highly
modified version of VMFPLC (mentioned in the old email below) that I
had renamed VMXPLC. Not mentioned in the below ... but it was also
enhanced to provide optimal processing for the paged mapped filesystem
implementation that I had done
http://www.garlic.com/~lynn/subtopic.html#mmap

the start of buffers were page aligned ... and 15 800-byte data blocks
fit in three 4096-byte pages. For small files, the minimum size
data block on tape could be 800 bytes. With a separate FST record and
data block, a tape with a lot of small files could have half (or more)
of its length taken up by interrecord gaps. Merging the FST
record and the first data record into the same physical tape record
cut the tape devoted to interrecord gaps in half (when lots of
small files were involved).
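
back-of-envelope (python) on the gap arithmetic, assuming 1600bpi tape
and a nominal 0.6in interrecord gap (both figures are illustrative
assumptions, not from the original email):

  bpi, gap = 1600, 0.6
  def tape_inches(record_sizes):            # sizes in bytes
      return sum(size / bpi + gap for size in record_sizes)

  # small file as a separate 59-byte FST record plus one 800-byte data record:
  print(tape_inches([59, 800]))   # ~1.74in total, 1.2in (~69%) of it gaps
  # same file with the FST merged into the first physical data record:
  print(tape_inches([859]))       # ~1.14in total, 0.6in (~53%) of it gaps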

From: wheeler
Date: 03/21/80  13:41:39
To: somebody in endicott

The original VMFPLC was an update to release 2 DMSTPE. The code took
the FST record which is placed in a trailing 800 byte record following
the file dumped and placed it in a 59 byte record in front of the file
dumped. It also blocked up to 5 800 byte data blocks per physical tape
block. I captured the update early and maintained it (I heard that the
development lost it and didn't have a copy) against the current
release & plc level. I also enhanced the update to block up to 15 800
byte data blocks, to merge the FST record into the 1st physical data
block dumped and to avoid rewriting the MFD after each file loaded.

VMFPLC2 appears to have several new features which would indicate that
it is not a modification of the original VMFPLC (since as far as I
know, I'm the only one with the source) but it still maintains the
original tape format.  I must confess that I've not looked at the new
release 6/bsepp tape source (although I've heard that it has grown
substantially and has been split into several files).

... snip ...

and for an even earlier CMSBACK related email. standard CMS could
share various filesystem areas across multiple virtual
machines. However, the filesystem status information would be
duplicated in every virtual address space. A hack was done to place a
copy of some shared CMS filesystem information in shared (r/o
protected) memory ... so there only needed to be a single physical copy
of the filesystem metadata (shared across everybody). The problem
for DMSDSK, DMSTPE, VMFPLC and VMXPLC was that there was a hack where
they temporarily modified the file metadata to force the logical
record size to match the physical disk record ... and then restored it
when they were done.  This hack would fail if the file metadata
information was located in shared (r/o protected) memory.

From: wheeler
Date: 04/02/79  19:22:35
 
Yorktown has done an update for DMSDSK (DISK DUMP) to handle the
problem of dumping from a disk with FST in shared memory
(DMSDSK YK187DMS). Can you merge with what you have been doing to
DMSTPE for TAPE, VMFPLC, & VMXPLC??

... snip ...


Re: real core

2006-10-13 Thread Anne Lynn Wheeler
Tom Duerbush wrote:
So I guess the question I'm wondering...

How many others have shipped dumps, online, back before high speed
Internet connections?

re:
http://www.garlic.com/~lynn/2006s.html#16 memory, 360 lcs, 3090 extended
store, etc
http://www.garlic.com/~lynn/2006s.html#17 bandwidth of a swallow (was:
real core)

we had done hsdt (high speed data transport) project
http://www.garlic.com/~lynn/subnetwork.html#hsdt

in the 80s ... with high-speed backbone connected to the internal
network.
http://www.garlic.com/~lynn/subnetwork.html#internalnet

in the late 80s, chip designs were shipped over it to a high-speed hardware
logic simulator/checker. this was claimed to have contributed to helping
bring in the rios/power chip a year early.

we were also interested in participating in the nsfnet-1 backbone
(which could be considered the operational precursor to the modern
internet). we weren't allowed to bid ... but did get a technical
review; one of the conclusions was that what we had running and
operational was at least five years ahead of all the nsfnet-1 bids (RFP
responses) to build something new. slightly related
http://www.garlic.com/~lynn/internet.htm#0

and
http://www.garlic.com/~lynn/2002k.html#12 nsfnet program announcement
http://www.garlic.com/~lynn/2000e.html#10 nsfnet award announcement

for other drift, in the early days of rex (rexx), i wanted to
demonstrate that rex wasn't just another batch command processor
(exec, exec2) but could be used to implement very complex
applications. I chose the vm problem/dump analyzer ... which was a
fairly large application written in assembler. i wanted to demonstrate
that in 3 months working half-time, i could implement in rex something
that had ten times the function and ten times the performance of the
existing assembler implementation. the result was dumprx
http://www.garlic.com/~lynn/subtopic.html#dumprx

which could be used to analyze a dump interactively ... even over the
internal network w/pvm (terminal emulation) ... w/o having to actually
ship the dump. part of dumprx was a library of automated analysis
scripts ... the results could be saved and restored; aka you could run
the automated analysis scripts ... that batched the most common
sequences of manual analysis processes.

the library of batched analysis routines effectively automated most of
the most common (manual) analysis procedures (examining storage for
a broad range of failure signatures).
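
a rough sketch (python here, although DUMPRX itself was REXX; the
signatures below are made-up examples, not actual failure signatures)
of the library-of-analysis-scripts idea ... each routine knows one
class of failure signature and all of them are run against the dump in
turn:

  # each entry is one "analysis script": a name plus a check that
  # scans the dump image for that failure's signature.
  SIGNATURES = {
      "storage overlay":  lambda dump: b"\xde\xad\xbe\xef" in dump,
      "zeroed save area": lambda dump: b"\x00" * 72 in dump,
  }

  def analyze(dump: bytes) -> list[str]:
      """run the whole library and report which signatures matched."""
      return [name for name, check in SIGNATURES.items() if check(dump)]

  print(analyze(b"\x00" * 72 + b"\xde\xad\xbe\xef"))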


Re: bandwidth of a swallow (was: Real core)

2006-10-12 Thread Anne Lynn Wheeler
Paul B. Nieman wrote:
 In the early 1990's we consolidated a data center from Sydney into
 Philadelphia.  We used SYBACK to do a full dump of specific (most)
 minidisks to tape and shipped the tapes.  We then performed daily
 incrementals to disk, and sent the incrementals via RSCS, via a 9600
 baud line at most.  I think we had a 9600 baud line that was shared
 for RSCS and VTAM traffic, but the telecom part wasn't mine to worry
 over.  Each minidisk intended to move was a separate file and sent via
 SENDFILE.  There were service machines written to send and receive
 them.  I think the first incrementals arrived before the tapes.  In
 any case, we kept track of different day's incrementals for a whole
 week and applied them as they finished arriving.  The line was kept
 very busy and watched closely, but it was easy to restart if it
 dropped.

 Our actual cutover the following weekend went fairly quickly and met
 whatever target we had, which I certainly think wasn't enough to allow
 for backing up, shipping, and applying the tapes.

in the later part of the mid-70s, one of the vm370 based commercial
time-sharing services had a datacenter on the east coast and put in a
datacenter on the west coast connected via a 56kbit link.

they had enhanced vm370 to support process migration between
loosely-coupled machines in the same datacenter cluster ... for one
thing, as they moved to 7x24 worldwide service ... there was no window
left for doing preventive maintenance. process migration allowed them to
move everything off a complex (that needed to be taken down for maint).
the claim was that they could even do process migration over the 56kbit
link ... modulo most of the file stuff having been replicated (so that
there wasn't a lot needing movement in real time).

misc. past posts mentioning vm time-sharing service
http://www.garlic.com/~lynn/subtopic.html#timeshare

they had also implemented a page mapped filesystem capability with lots of
advanced bells and whistles ... similar to the cms page mapped
filesystem stuff that i had originally done in the early 70s for cp67.
http://www.garlic.com/~lynn/subtopic.html#mmap

which also included a superset of the memory segment stuff ... a small
subset was later released as DCSS
http://www.garlic.com/~lynn/subtopic.html#adcon

for other drift ... as mentioned before ... the internal network was
larger than the arpanet/internet from just about the beginning until
around sometime mid-85. misc. posts mentioning internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

one of the issues was what to do about JES2 nodes on the internal
network; among other things, relatively trivial changes in JES2
headers between releases would precipitate JES2 (& MVS) system crashes.
for that reason (and quite a few others), JES2 nodes were pretty well
limited to a few boundary nodes. A library of vnet/rscs line drivers
grew up for JES2 that supported a canonical JES2 header format ... and
the nearest VNET/RSCS node would have the specific line-driver started
that would make sure that all JES2 headers sent to the JES2 system
met the requirements of that specific JES2 version/release.
Sporadically, there were still some (infamous) cases where JES2 systems
on one side of the world would cause JES2 systems on the other
side of the world to crash (one particular well known case was JES2
systems in san jose causing JES2/MVS systems in hursley to crash). misc.
past posts mentioning hasp &/or jes2
http://www.garlic.com/~lynn/subtopic.html#hasp
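
a sketch (python; every field name and release tag here is hypothetical,
just to show the shape of the boundary-node idea) ... the network
carries one canonical header format and the VNET/RSCS node adjacent to
a JES2 system rewrites it into exactly what that JES2 version/release
expects:

  CANONICAL_FIELDS = ("origin", "destination", "filename", "priority")

  def to_jes2(header: dict, release: str) -> dict:
      """rewrite a canonical header for a specific JES2 release."""
      out = {k: header[k] for k in CANONICAL_FIELDS}
      if release == "old":                # hypothetical old layout:
          out["priority"] = min(out["priority"], 9)   # one-digit field
      elif release == "new":              # hypothetical newer layout:
          out["node_qualifier"] = ""      # field the new release requires
      return out

the point being that a header change only had to be absorbed in the
line driver at the boundary ... rather than crashing every JES2 (& MVS)
system that received it.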

Another scenario: there was some work to do load-balancing offload
between STL/bld90 and Hursley around 1980 (since they were offset by a
full shift). the test was between two jes2 systems (carefully checked to
be at a compatible release/version) ... over a double-hop 56kbit satellite
link (i.e. up from the west coast to a satellite over the us, down to the
east coast, up to a satellite over the atlantic, down to the UK). JES2
couldn't make a connection ... but all error indicators were clean. So
finally it was suggested to try the link between two vnet systems. The
link came up and ran with no problem.

The person overseeing the operations was extremely sna/vtam
indoctrinated. So the first reaction was that whatever caused the problem
had gone away. So it was time to switch it back between JES2 ... it didn't
work. Several more switches were made ... it always ran between VNET, never
ran between JES2. The person overseeing the operation finally declared
that the link actually had severe error problems, but the primitive VNET
drivers weren't seeing them ... and only the advanced VTAM error
analysis was realizing there were lots of errors.

it turned out the problem was with the double-hop satellite roundtrip
(four hops, 44k miles per hop, 22k-up, 22k-down) propagation delay ...
which VNET tolerated and vtam/jes2 didn't (not only was the majority of
the internal network not jes2 ... it was also not sna/vtam)
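
quick arithmetic (python) on that delay, assuming ~22,300 miles to
geosynchronous orbit and a signal speed of ~186,000 miles/sec:

  leg = 22300 / 186000      # one up-or-down leg, ~0.12 sec
  hop = 2 * leg             # up + down (~44k miles), ~0.24 sec
  one_way = 2 * hop         # double hop, ~0.48 sec
  round_trip = 2 * one_way  # ~0.96 sec before any response can arrive
  print(round(round_trip, 2))

a protocol with tight fixed timeouts (or a small window) dies on a
round trip like that; one that tolerates the latency just runs slower.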


memory, 360 lcs, 3090 extended store, etc

2006-10-11 Thread Anne Lynn Wheeler
Schuh, Richard wrote:
 Yeah, but 3090 memory was not ferrite core, was it? IIRC, it was much
 cheaper and more reliable. I wasn't privy to the bean-counting
 specifics, but the rumored cost of the LCS storage on our 360 class
 machines was in the neighborhood of $2.5-3M per 2MB unit. And they were
 real core - you could look through the glass panels and see the
 individual planes of wires and doughnuts. The stuff was not reliable, so
 we had 3 boxes in order to always have 1 available for the production
 ACP system. There was usually 1 in use, 1 being repaired, and 1 just out
 of repair that was on standby. The care and feeding of those animals was
 a career.

small spill-over from bit.listserv.ibm-main mentioning extended store,
3090 memory, 360 memory, 360 LCS and a couple other memory related
topics
http://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#39 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#40 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#42 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#43 REAL memory column in SDSF


Re: The Fate of VM - was: Re: Baby MVS???

2006-08-15 Thread Anne Lynn Wheeler

re:
http://www.garlic.com/~lynn/2006o.html#49 The Fate of VM - was: Re: Baby
MVS???
http://www.garlic.com/~lynn/2006o.html#51 The Fate of VM - was: Re: Baby
MVS???
http://www.garlic.com/~lynn/2006o.html#52 The Fate of VM - was: Re: Baby
MVS???

email from long ago and far away

X-To: wheeler
Subject: GENDMOD module

Could I ask you to get an assembly listing of the GENMOD module so I can
study it to determine what changes I have to make to VS APL.  If we make
this change, it could have a tremendous impact on the HONE system, above
what their SEQUOIA system is giving them.  There are several very large
APL applications on there  particularly the configurators.  If
users are sharing that code as well, we could get some big savings ...
not to mention the time savings in just loading the workspaces.

... snip ...

shared variables was eventually created as an alternative paradigm (to
that originally implemented in cms\apl) for accessing system api/functions.

apl\cms and apl\sv eventually evolved into vs apl.

the (cms) apl interpreter had been structured into code that could be
shared across all the virtual address spaces. however, HONE apl
operations had another couple hundred kbytes of commonly used apl
applications that appeared in nearly all workspaces
http://www.garlic.com/~lynn/subtopic.html#hone

i had originally done the paged mapped filesystem for cms in the early 70s
along with a mechanism that allowed loading of objects from the cms paged
mapped filesystem as memory objects shared across multiple virtual
address spaces.
http://www.garlic.com/~lynn/subtopic.html#mmap

one of the internal datacenters this was installed at was HONE. HONE
used it to create a shared executable image of the APL interpreter
(under cms). I had modified the standard CMS executable creation command
(GENMOD) and executable load command (LOADMOD) to add support for the
shared executable object option.

A small subset of the cp & cms memory mapped feature/function was
released in vm/370 release 3 as DCSS.

However, the full function was in use at a number of internal
datacenters, like HONE.

The issue in this particular email ... was that there were several large
APL applications (like the HONE mega-application mentioned in the
previous post, SEQUOIA). These programs were mostly static data that
was interpreted by the APL interpreter (not executable code in the 370
sense). The idea here was to modify the APL workspace loader to play
some of the same games that I had done in CMS GENMOD/LOADMOD ... so that
static APL workspace applications could be loaded as shared memory
objects (allowing a single common image to be used across all virtual
address spaces). This could significantly cut both the aggregate HONE
application real-storage requirements ... and the I/Os involved in
retrieving unique copies off disk for every HONE user.
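
a modern analogy (python mmap on a unix-ish system; the file name is
hypothetical and this is not the CMS implementation) of loading a
read-only image as a shared memory object ... every process mapping the
same file shares one physical copy of the pages:

  import mmap

  with open("apl_workspace.img", "rb") as f:       # hypothetical image
      ws = mmap.mmap(f.fileno(), 0,
                     prot=mmap.PROT_READ,          # r/o protected
                     flags=mmap.MAP_SHARED)        # one copy for everyone
      # all processes doing the same map share the page-cache copy;
      # one image in real storage regardless of the number of users.
      print(ws[:16])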


hardware virtualization slower than software?

2006-08-13 Thread Anne Lynn Wheeler
Hardware virtualization slower than software?
http://developers.slashdot.org/developers/06/08/12/2028223.shtml

... from above:

One example given is compilation of a Linux kernel under a virtualized
Linux OS. Native wall-clock time: 265 seconds. Software-assisted
virtualization: 393 seconds. Hardware-assisted virtualization: 484
seconds. Ouch. It sounds to me like a hybrid approach may be the best
answer to the virtualization problem. 

... snip ...

similar, but different posting made here not too long ago
http://www.garlic.com/~lynn/2006j.html#27 virtual memory

the above has a discussion about the hardware/software virtualization
trade-off in the 3081 vis-a-vis the 3090. note that this was pre-PR/SM
(which has since evolved into LPARs) ... where the microcode support
can create virtual machines ... w/o requiring a separate hypervisor
monitor running (i.e. dropping everything into hardware was no
longer a performance trade-off issue).

for other drift, the performance characterization in the article is
reminiscent of a presentation i made at the boston share meeting in aug68.
three people had come out from the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

the last week in jan68 to install cp67 at the university. during the
spring and summer of 68, i rewrote significant portions of the
kernel, in some cases decreasing pathlengths by a factor of 10 to 100.
past posting of parts of that presentation
http://www.garlic.com/~lynn/94.html#18 CP/67 and OS MFT14

which has a bare-machine (native wall clock) time of 322 sec., an original
virtualization elapsed time of 856 sec., and a virtualization elapsed
time (after the rewrites of the spring and summer) of 435 sec.
(virtualization processing was reduced from 534 cpu secs. to 113 cpu
secs. ... i.e. 856-322=534 and 435-322=113).


Re: oops

2006-08-06 Thread Anne Lynn Wheeler
Phil Smith III wrote:
 Gabe reminds me that the 360 didn't run VM; I did use it, but it was
 the 370/158 with 2MB that I used to use VM on.

360/67 was the only (standard) 360 with virtual memory support. it had
both 24-bit and 32-bit virtual addressing options (you didn't see more
than 24-bit again until 370-xa with the 3081). the 360/67 multiprocessor
also had a channel director ... which supported all processors accessing
all channels (standard 360 & 370 multiprocessors only provided for common
memory addressing ... the rest of the infrastructure, including
channels, was partitioned, specific to each processor).

cp67 was developed by the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

... supporting virtual machines and virtual memory. cp67 was released to
customers. there had been an earlier cp40 developed on a custom modified
360/40 with virtual memory ... pending availability of a 360/67.

there was a joint project between cambridge and endicott to add a lot of
370 stuff to the cp67 kernel ... this was discussed recently in the series
of posts on sequence numbers and cms multi-level source maintenance
... which mostly evolved out of the cp67 cambridge/endicott 370 effort
(*CMS* originally stood for the cambridge monitor system, but morphed into
conversational monitor system for vm370)

a modified version of cp67 ran extensively internally on 370s ... pending
availability of vm370. also CCWTRANS (supporting virtual memory ccws
translated to shadow real CCWs) was used in the initial prototype of os/vs2
(i.e. mvt hacked to directly support 370 virtual memory).

gobs of posts just this year mentioning cp/67
http://www.garlic.com/~lynn/2006.html#5 Page fault question (zero-filling)
http://www.garlic.com/~lynn/2006.html#7 EREP , sense ... manual
http://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped
files on z/VM V5.1
http://www.garlic.com/~lynn/2006.html#13 VM maclib reference
http://www.garlic.com/~lynn/2006.html#17 {SPAM?} DCSS as SWAP disk for
z/Linux
http://www.garlic.com/~lynn/2006.html#19 DCSS as SWAP disk for z/Linux
http://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP disk for z/Linux
http://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
http://www.garlic.com/~lynn/2006.html#40 All Good Things
http://www.garlic.com/~lynn/2006b.html#7 Mount a tape
http://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
http://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006b.html#16 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006b.html#23 Seeking Info on XDS Sigma 7 APL
http://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
http://www.garlic.com/~lynn/2006b.html#32 Multiple address spaces
http://www.garlic.com/~lynn/2006b.html#39 another blast from the past
http://www.garlic.com/~lynn/2006b.html#40 another blast from the past
... VAMPS
http://www.garlic.com/~lynn/2006c.html#2 Multiple address spaces
http://www.garlic.com/~lynn/2006c.html#18 Change in computers as a hobbiest
http://www.garlic.com/~lynn/2006c.html#21 Military Time?
http://www.garlic.com/~lynn/2006c.html#22 Military Time?
http://www.garlic.com/~lynn/2006c.html#28 Mount DASD as read-only
http://www.garlic.com/~lynn/2006c.html#45 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006d.html#18 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006d.html#21 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006d.html#35 Fw: Tax chooses dead language
- Austalia
http://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#28 MCTS
http://www.garlic.com/~lynn/2006e.html#40 transputers again was: The
demise of Commodore
http://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
http://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
http://www.garlic.com/~lynn/2006f.html#1 using 3390 mod-9s
http://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion -
DISAPPOINTMENT
http://www.garlic.com/~lynn/2006f.html#21 Over my head in a JES exit
http://www.garlic.com/~lynn/2006g.html#1 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#3 The Pankian Metaphor
http://www.garlic.com/~lynn/2006g.html#18 TOD Clock the same as the BIOS
clock in PCs?
http://www.garlic.com/~lynn/2006g.html#58 REP cards
http://www.garlic.com/~lynn/2006h.html#7 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#20 Binder REP Cards (Was: What's
the linkage editor really wants?)
http://www.garlic.com/~lynn/2006h.html#22 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#30 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#55 History of first use of
all-computerized typesetting?
http://www.garlic.com/~lynn/2006h.html#57 PDS Directory Question
http://www.garlic.com/~lynn/2006i.html#4 Mainframe vs. xSeries
http://www.garlic.com/~lynn/2006i.html#9 Hadware Support for Protection
Bits: what does it really mean?

Re: the more things change, the more things stay the same

2006-08-04 Thread Anne Lynn Wheeler
re:
http://www.garlic.com/~lynn/2006n.html#52 the more things change, the more
things stay the same
http://www.garlic.com/~lynn/2006n.html#53 the more things change, the more
things stay the same

the following article:

How Secure Is That Device?  As device software joins the larger world,
security becomes ever more vital
http://dso.com/news/showArticle.jhtml?articleID=191501076

Has statements that are almost exact quotes of some statements about
virtualization made in the late 60s, nearly 40 years ago.

the article is also related to the thread raised in this crypto topic drift
http://www.garlic.com/~lynn/2006n.html#56 The very first text editor

started with this article
http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=190900759

the most recent in that thread
http://www.garlic.com/~lynn/aadsm24.htm#52 Crypt to defend chip IP: snake oil
or good idea?

and even more thread drift related to the subject
http://www.garlic.com/~lynn/aadsm24.htm#53


Re: SEQUENCE NUMBERS

2006-08-04 Thread Anne Lynn Wheeler

R.S. wrote:

Even some mainframe programs interpret it as data, with funny effects
sometimes. For example SYSIN DD * for FTP program cannot contain the
numbers. AFAIK some TCPIP config files as well.

AFAIK the sequence numbers are completely useless nowadays. It was used
for punched card sorter. Is there any other application ?


they were also used for a long time as part of the cp67/cms multilevel
source update infrastructure (later vm370/cms). since it was the
pervasive internal platform for a long time ... even some number of mvs
components would start life with cms multilevel source update and
then have to morph to smp for external release (there were some
folktales of mvs components having periodic difficulty converting their
cms source development and maintenance environment to smp as part of
customer ship).

in vm/cms ... before the oco-wars, not only did source ship as standard,
but maint. was done by shipping the source changes.

recent thread that started out discussing card sorting but drifted into
description of cms multi-level source update:
http://www.garlic.com/~lynn/2006n.html#45 sorting


Re: Source maintenance was Re: SEQUENCE NUMBERS

2006-08-04 Thread Anne Lynn Wheeler

Shmuel Metz , Seymour J. wrote:

I'm not sure when it came along, but by VM/SE there was a somewhat
more sophisticated UPDATE facility[1] with aux files, control files
and update files. I'd love to see a similar facility integrated with
ISPF.

[1] Not only could the XEDIT editor process them, but it could
generate update files to reproduce the effects of an edit
session. That's one of the CMS facilities I miss the most.



re:
http://www.garlic.com/~lynn/2006o.html#19 Source maintenance was Re:
SEQUENCE NUMBERS

the aux files, control files, update files scheme was what was created
for the effort of building 370 virtual machine support into a cp67
kernel (running on a 360/67). however, it was implemented all in EXEC,
with the EXEC processing figuring out the control & aux files and making
iterative calls to the UPDATE command.

this was picked up as part of the original vm370 release, and the update
command was enhanced to directly process control, aux, and update files
(in one pass), spitting out the (temporary, working) source file for
compile/assemble.

a little later, editors were enhanced to directly support the control,
aux, and update files as part of loading a source file for editing ...
with an option that all changes made in the edit session resulted in an
update file (as opposed to a new complete source).

this recent posting, in a different thread, has more detailed
description of some of the operations ... as well as URLs to current CMS
documentation (including an example update from an internal editor
that predated xedit).
http://www.garlic.com/~lynn/2006n.html#45 sorting
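
a simplified sketch (python) of applying a single update file of
./ control cards to a sequenced source file ... the real UPDATE command
also handles control files, aux files, multi-level application,
resequencing, and more options on the cards:

  def apply_update(source, update):
      """source: list of (seqno, text) pairs; update: list of lines.
      supports ./ I <seq> (insert after) and ./ D <lo> [<hi>] (delete)."""
      out, i = list(source), 0
      while i < len(update):
          card = update[i]; i += 1
          if card.startswith("./ I"):
              seq = int(card.split()[2])
              pos = next(n for n, (s, _) in enumerate(out) if s == seq) + 1
              while i < len(update) and not update[i].startswith("./"):
                  out.insert(pos, (None, update[i])); pos += 1; i += 1
          elif card.startswith("./ D"):
              parts = card.split()
              lo = int(parts[2]); hi = int(parts[3]) if len(parts) > 3 else lo
              out = [(s, t) for (s, t) in out
                     if s is None or not (lo <= s <= hi)]
      return out

  src = [(10, "LINE ONE"), (20, "LINE TWO"), (30, "LINE THREE")]
  upd = ["./ D 20", "./ I 10", "LINE ONE POINT FIVE"]
  print(apply_update(src, upd))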

a few other recent postings that happen to also mention xedit:
http://www.garlic.com/~lynn/2006n.html#34 Not Your Dad's Mainframe:
Little Iron
http://www.garlic.com/~lynn/2006n.html#43 MTS, Emacs, and... WYLBUR?
http://www.garlic.com/~lynn/2006n.html#55 The very first text editor


Re: the more things change, the more things stay the same

2006-07-23 Thread Anne Lynn Wheeler
[EMAIL PROTECTED] (John Savard) writes:
 Not surprising: the NSA's Security-Enhanced Linux includes, as its
 unique security feature, something called mandatory security. A file can
 be flagged as having restrictions on it, and then any program which
 accesses this file, and writes on other files, causes the restrictions
 to propagate.

 This, of course, was one of the features of Multics, which indeed dates
 from about 40 years ago.

re:
http://www.garlic.com/~lynn/2006n.html#52 the more things change, the more
things stay the same

they were on the 5th floor of 545 tech sq ... while we were on the
4th floor
http://www.garlic.com/~lynn/subtopic.html#545tech


Re: Any resources on VLIW?

2006-07-20 Thread Anne Lynn Wheeler
Brian Inglis [EMAIL PROTECTED] writes:
 Might be relevant if Lynn Wheeler could expand on the unreleased VAMPS
 microcode to speed up 370 SMP, and also provided logical processors
 with similarities to those on current zSeries LPARs, although that may
 just have dropped parts of 370 sequential code down into microcode.

so presumably this recent post vis-a-vis vamps and the later i432
http://www.garlic.com/~lynn/2006n.html#42 Why is zSeries so CPU poor?

misc. collected  past vamps postings
http://www.garlic.com/~lynn/subtopic.html#bounce

an early microcode effort was VMA, originally for the 370/158, which helped
virtual machine performance. for a subset of supervisor state
instructions, microcode was added to execute the instruction using
virtual machine rules (to avoid interrupting into the virtual
machine hypervisor, where the instruction would otherwise be simulated).

concurrent with the VAMPS effort was ECPS for the 370 138/148. ECPS did
some more stuff like VMA on the 158 (direct supervisor state instruction
execution) ... but it also identified parts of the hypervisor kernel
and moved that kernel code into microcode. the issue on the 138/148
machines was that there was an avg. of 10 microcode instructions
executed for every 370 instruction. Much of the kernel code moved to
microcode on a straight 1:1 basis, resulting in a ten times performance
speedup. old posting identifying the specific kernel code segments for
migrating into microcode.
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
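
the 10:1 arithmetic, spelled out (python, with a made-up pathlength
just for illustration):

  path = 1000                 # a kernel path of 1000 370 instructions
  interpreted = path * 10     # ~10 microcode instructions per 370 instruction
  in_microcode = path * 1     # same logic moved 1:1 into microcode
  print(interpreted / in_microcode)   # => 10.0, the ~10x speedup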

the VMA-related efforts eventually evolved into SIE ... where nearly
all supervisor state instructions had a microcode enhancement for
directly executing according to virtual machine rules (avoiding a
lot of interruptions into the virtual machine hypervisor to simulate
supervisor state instructions). SIE was a state-change instruction
that gathered up all the fields needed by the various supervisor state
instructions to execute according to virtual machine rules. post of an
old SIE discussion about implementation issue differences between the
3081 and trout (3090)
http://www.garlic.com/~lynn/2006j.html#27 virtual memory

there were still things like page faults for the virtual machine that
resulted in interruptions into the hypervisor kernel for handling.  a
special case was defined involving things like dedicated real storage
for a virtual machine ... eliminating the need to interrupt into the
hypervisor kernel. This resulted in being able to operate a virtual
machine subset directly supported by hardware ... w/o the need for a
virtual machine kernel. This was called PR/SM ... and PR/SM
capability eventually evolved into the current LPARs (logical
partitions). a reference discussing some current LPAR and PR/SM
http://researchweb.watson.ibm.com/journal/rd/483/siegel.html

current machines can have a configurable, limited number of LPARs ...
and it is possible to run a virtual machine hypervisor in an LPAR,
which in turn supports a much larger number of virtual machines.  There
has been an evolution of the SIE support. Initially, SIE was not
virtualized, but LPARs make use of SIE for support. That meant that a
virtual machine hypervisor running in an LPAR wouldn't have the
performance assist of SIE for running its virtual machines (all
virtual machine supervisor instructions would interrupt into the
hypervisor for simulation). Enhancements were required to virtualize
SIE for at least one level (so it could be used both by the LPAR function
and also by a hypervisor running in an LPAR).

Since I was doing both VAMPS and ECPS ... I borrowed a lot of stuff
done for ECPS for doing VAMPS. However, for VAMPS, I wanted it
extended in a much more architected way ... rather than simply doing a
1-for-1 movement of existing kernel 370 code into microcode. VAMPS was
to have up to five processors ... and I defined a microcode hardware
queued work interface where the hypervisor put units of work on the
queued work interface (and the microcode took the queued work and
executed it on whatever available processors there were). The hardware
microcode also placed queued work for the hypervisor to handle ...
like things that were i/o interrupts in traditional 370, or page fault
interrupts (from executing virtual machines), etc.

After VAMPS was killed, I adapted the multiprocessing microcode queued
processing to a software implementation. A lot of the SMP kernel
implementations used a single, global kernel SPIN lock to serialize
all kernel execution. This drastically minimized the amount of code
change needed to adapt a single-processor operating system to support
multiprocessor operation.

In adapting the VAMPS multiprocessing microcode support to software, I
took the equivalent kernel software functions (that had been moved to
microcode in VAMPS) and made

Re: Any resources on VLIW?

2006-07-20 Thread Anne Lynn Wheeler
re:
http://www.garlic.com/~lynn/2006n.html#44 Any resources on VLIW?
http://www.garlic.com/~lynn/2006n.html#47 Any resources on VLIW?

as an aside ... some number of the relatively recent 370 emulators
written for intel platforms have quoted avg. instruction ratio numbers
around 10:1 also (have to play some real tricks to get it much below
10:1).


Re: Not Your Dad's Mainframe: Little Iron

2006-07-18 Thread Anne Lynn Wheeler
Jim wrote:
 DUMPRX was one of the slickest tools available for VM sysprogs in the
 '80s.  With the overall level of code quality at that time, it was really
 needed.  I have always thought that the only reason it wasn't included in
 the product was the NIH mentality that was common in IBM at that time in
 Endicott and Kingston.

re:
http://www.garlic.com/~lynn/2006n.html#11 Not Your Dad's Mainframe: Little Iron

i may have alienated endicott ...  which had a whole group supporting
dumpscan (which was a large program written all in assembler). I had
made a comment that i would implement dumprx in less than 3 months elapsed
time, only working half time on it. It would have ten times the
function of dumpscan as well as ten times the performance. also, since
this was the leading edge of the OCO-wars ... i pointed out that it
would be implemented all in REXX (except for possibly a hundred
assembler instructions) so that the source would have to be shipped.

I got all the basic stuff done early ... and since i had been building
up a knowledge base of failure scenarios ... i started a library of
dumprx/rexx routines that searched for particular classes of failure
signatures/characteristics.

it could be run either in cms terminal line-mode ... or as an xedit
rexx macro ... and then have full xedit capability for the dumprx
session

since they wouldn't ship it, i eventually got approval to give a
detailed implementation dumprx talk at SHARE ... in case anybody else
wanted to implement their own.

re:
http://www.garlic.com/~lynn/subtopic.html#dumprx


Re: Old IBM protocol IBM 1006

2006-07-11 Thread Anne Lynn Wheeler

Charles Mills wrote:

Here's a good start:
http://listserv.uark.edu/scripts/wa.exe?A2=ind0509&L=ibmvm&P=8109


note in the above referenced archive ... it mentions transmission as 
reverse inverted


 • ALC is transmitted reverse inverted. For example, capital A is
 0x31, but it's transmitted as 0x73. This makes it a major PITA to
 read off the wire.


my oft-repeated story of doing our own mainframe terminal controller
at the univ.
http://www.garlic.com/~lynn/subtopic.html#360pcm

and somebody writing an article blaming four of us for the mainframe
pcm controller business.


the ibm mainframe channel interface had been reverse engineered and a 
channel interface controller card built for an interdata/3


the second or third bug we encountered was in passing ascii from the
interdata/3 (programmed to emulate a 2702 over the channel interface
to the mainframe) ... the data was coming into the mainframe as all
garbage. it took some time to realize that the linescanner on the
interdata/3 was taking the leading bit off the line and storing it in
the high order bit position of the byte ... while the linescanner in
the 2702 was taking the leading bit off the line and storing it in the
low order bit position of the byte.


as a result ascii arriving in the mainframe memory from a 2702 
(linescanner) was bit-reversed ... and the ibm translate tables were 
taking that into account.
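
for anybody curious, a little illustrative C of what the linescanner
difference amounts to ... reversing the bit order within each byte
(the ascii 'A'/0x41 value is just for illustration):

#include <stdio.h>

/* reverse the bit order within a byte: the 2702 linescanner stored
 * the leading bit off the line into the LOW-order bit position, the
 * interdata/3 linescanner into the HIGH-order position ... so one
 * box's bytes were the other box's bytes bit-reversed */
static unsigned char reverse_bits(unsigned char b)
{
    unsigned char r = 0;
    for (int i = 0; i < 8; i++)
        r = (unsigned char)((r << 1) | ((b >> i) & 1));
    return r;
}

int main(void)
{
    /* ascii 'A' is 0x41; bit-reversed it arrives as 0x82 ... which is
     * why the ibm translate tables had the reversal folded in */
    printf("0x41 -> 0x%02X\n", reverse_bits(0x41));
    return 0;
}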


Re: Not Your Dad's Mainframe: Little Iron

2006-07-10 Thread Anne Lynn Wheeler

Rostyslaw J. Lewyckyj [EMAIL PROTECTED] writes:
 Well I had exposure, access, to an IBM MVT product called TESTTRAN
 which was sort of available on our system with assembler F and , if
 I remember correctly, Fortran G, both in batch and TSO.  One had to
 compile/assemble with the TEST option, which produced and kept a
 variables symbol table and a statement location table.  One could
 then set breakpoints, examine program values etc. etc.  But it was
 very awkward to use, badly documented, and buggy.  When I asked I
 was told that it wasn't developed because it wasn't used. Which I
 considered to be a very much a chicken and egg kind of argument.

 I had a little exposure to TEST under our CMS systems. But as we
 didn't really support VM/CMS too well at TUCCUNC I can't comment
 much except that I didn't find out much about what it was capable
 of.

a huge part of OS/360 TESTRAN was outputting all the (12-2-9) SYM
cards as part of assemble/compile ... so that it could effectively
support symbolic debugging. I don't know of anybody that actually used
it ... I remember having a TESTRAN manual at one point and running
with the option to generate SYM cards (just to see what they looked
like) ... but never actually using it.


old post that has mention of SYM cards (as well as formats of most of 
the other 12-2-9 cards):

http://www.garlic.com/~lynn/2001.html#14 IBM Model Numbers

PER and TRACE commands were initially machine-level stuff ... hex
addresses, hex locations ... etc ... not real symbolic stuff.

I wrote a debug tool in REX(X) called DUMPRX that eventually was
used extensively internally and at one point by nearly all the (vm/cms) 
field support people

http://www.garlic.com/~lynn/subtopic.html#dumprx

that had some amount of symbolic capability

this has comment about locate command and symbolics (and testran) ... 
from 2001:

http://listserv.uark.edu/scripts/wa.exe?A2=ind0103&L=vmesa-l&D=0&P=39641

I had originally done the pageable CP kernel function for cp67 3.1 the
summer I did a stint with BCS in Seattle (i was still an
undergraduate) ... which didn't get released in a product until vm370.
Part of what I did for the pageable CP kernel function was to capture
the loader table symbolics (all the external entries) and write them
out as part of the kernel image. This feature, including the complete
loader table symbolics as part of the (pageable) kernel image, wasn't
included in the pageable kernel that went out in vm370 release 1. As a
result, there were periodic games played with a compiled module
frequently called DMKSYM ... that had symbolics for some amount of the
kernel external entries.


however, when i got around to doing dumprx, i recreated the capture of
the complete loader table symbolics as part of saving the kernel
image. Then the locate command was changed to use the complete loader
table entries saved out in the pageable kernel (instead of DMKSYM) ...
and other things were enhanced to utilize the actual loader table
entries.


vm/cms postmortem kernel dump processing had a facility for importing
an image of the loader printed output. I modified the kernel dump
process to initialize the dump image with a copy of the complete
loader table image (from saving the original kernel image) as part of
system startup.
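
the idea, schematically (made-up names and addresses ... just the
shape of mapping an address back thru saved loader symbols):

#include <stdio.h>

/* schematic only: keep the loader's external-entry symbols with the
 * saved kernel image so a debugger (or a locate command) can map an
 * address back to module+offset */
struct loader_sym {
    char          name[9];    /* external entry name, e.g. "DMKDSPCH" */
    unsigned long addr;       /* entry address in the kernel image */
};

/* tiny made-up table; the real one was captured at kernel save time */
static const struct loader_sym symtab[] = {
    { "DMKDSPCH", 0x012000 },
    { "DMKPTRAN", 0x024800 },
};

/* locate: find the nearest entry at or below the given address */
static void locate(unsigned long addr)
{
    const struct loader_sym *best = NULL;
    for (size_t i = 0; i < sizeof symtab / sizeof symtab[0]; i++)
        if (symtab[i].addr <= addr && (!best || symtab[i].addr > best->addr))
            best = &symtab[i];
    if (best)
        printf("%s + %lX\n", best->name, addr - best->addr);
}

int main(void)
{
    locate(0x024810);    /* prints: DMKPTRAN + 10 */
    return 0;
}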




this has title of A Survey of current debugging concepts
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19690026235_1969026235.pdf

it looks somewhat like a cms/script document printed on a 1403 with TN
train (some of the rendering looks wavy ... which might be from
scanning the original; if not, then it isn't likely to have been 1403
output). It is dated August, 1969 ... and it has section 3-34,
TESTRAN, for IBM System/360.

I know some number of gov. installations had cp67 (and cms, and
script, etc) in 69 ... but I don't know if Goddard had one(?).


The above does list TESTRAN reference as:

IBM Corporation. IBM System/360 Operating System TESTRAN.
   Systems Reference Library. Form C28-6648-0 File
   Number S360-37. February 1967


Re: DCSS

2006-07-07 Thread Anne Lynn Wheeler

Jim Bohnsack wrote:
 I had an XT/370 PC sometime in the mid-late 80's.  It used PAM as it's
 native or default CMS file system if I remember correctly.
 Jim

re:
http://www.garlic.com/~lynn/2006m.html#53 DCSS
http://www.garlic.com/~lynn/2006m.html#54 DCSS

part of the issue was that even tho the xt/370 hard disk was single
user, it was the xt hard disk, with 110ms access and a block at a time
... (plus the latency of communicating back and forth between the 370
processor and cp/88 running on the pc processor). The combination of
the significantly slower disk (vis-a-vis mainframe) and CMS being
quite a bit more disk intensive (vis-a-vis applications developed for
the PC environment) meant there was a noticeable perception of poor
performance.


PAM offers a much better match between filesystem operations and the
virtual memory paradigm ... resulting in significant efficiencies
(most ibm mainframe operating systems inherited filesystems that had a
significant real-memory and real-i/o paradigm orientation).

http://www.garlic.com/~lynn/subtopic.html#mmap
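
as a rough modern analogy (not PAM itself), posix mmap gives the
flavor of the paged-mapped approach ... the file becomes part of the
address space and pages come in on demand thru the paging machinery,
rather than being copied thru explicit read-style i/o; the file name
below is made up:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("somefile", O_RDONLY);      /* hypothetical file */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0)
        return 1;

    /* map the whole file; no data moves until pages are referenced */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    long n = 0;
    for (off_t i = 0; i < st.st_size; i++)    /* faults pull data in */
        if (p[i] == '\n')
            n++;
    printf("%ld lines\n", n);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}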

CMS filesystem operations had diagnose I/O for pathlength reduction,
but continued to retain a channel program (real memory) orientation. I
had done the precursor to CMS diagnose I/O as an undergraduate ...
demonstrating significant pathlength reduction for CMS I/O intensive
operation (which was sort of a continuation of a lot of the cp67
kernel pathlength work I was already doing).


The other place that PAM helped was minimizing real storage
requirements for filesystem operations ... especially evident in
environments with limited real storage configurations. Actually, the
CP kernel PAM infrastructure could dynamically adapt filesystem
operations to the level of real storage contention and/or the amount
of real storage available. In one high-end (real mainframe) filesystem
intensive benchmark, PAM demonstrated something like a 300 percent
increase in filesystem operation efficiency (compared to the highly
optimized standard CMS EDF filesystem).


XT/370 was significantly real memory constrained for lots of cms
operations. I had done various performance analyses on pre-announce
XT/370 washington boxes and found significant page thrashing. When the
information was distributed, I got blamed for delaying the XT/370
announce and ship while they retrofitted the hardware with larger real
storage.


However, XT/370, being a single user system, saw no benefit from the 
extensive segment/page sharing capability available via PAM (especially 
compared to the limited capability offered thru the namesys/dcss method).

http://www.garlic.com/~lynn/subtopic.html#adcon

misc. past washington, xt/at/370 postings:
http://www.garlic.com/~lynn/94.html#42 bloat
http://www.garlic.com/~lynn/96.html#23 Old IBM's
http://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: 
Computer of the century)

http://www.garlic.com/~lynn/2000.html#29 Operating systems, guest and actual
http://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries 
workstation?
http://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries 
workstation?

http://www.garlic.com/~lynn/2001c.html#89 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001f.html#28 IBM's VM for the PC c.1984??
http://www.garlic.com/~lynn/2001i.html#19 Very CISC Instuctions (Was: 
why the machine word size ...)
http://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: 
why the machine word size ...)
http://www.garlic.com/~lynn/2001k.html#24 HP Compaq merger, here we go 
again.
http://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP 
Unix Box?]
http://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP 
Unix Box?]

http://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
http://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and 
System/390 - do we need it?

http://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?
http://www.garlic.com/~lynn/2003h.html#40 IBM system 370
http://www.garlic.com/~lynn/2004h.html#29 BLKSIZE question
http://www.garlic.com/~lynn/2004m.html#7 Whatever happened to IBM's VM 
PC software?
http://www.garlic.com/~lynn/2004m.html#10 Whatever happened to IBM's VM 
PC software?
http://www.garlic.com/~lynn/2004m.html#11 Whatever happened to IBM's VM 
PC software?
http://www.garlic.com/~lynn/2004m.html#13 Whatever happened to IBM's VM 
PC software?
http://www.garlic.com/~lynn/2005f.html#6 Where should the type 
information be: in tags and descriptors
http://www.garlic.com/~lynn/2005f.html#10 Where should the type 
information be: in tags and descriptors
http://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped 
files on z/VM V5.1

http://www.garlic.com/~lynn/2006f.html#2 using 3390 mod-9s
http://www.garlic.com/~lynn/2006j.html#36 The Pankian Metaphor


amadeus

2006-07-03 Thread Anne Lynn Wheeler
jim bohnsack wrote:
 Colin--I was thinking about Amadeus last week anyway.  I am still shaking 
 my head in disbelief that last week, 4 years ago, I was sitting there with 
 job offers from Amadeus, MIT, and Cornell.  I have always said that in 20 
 years I hope that I'm not sitting in a rocking chair in my nursing home 
 saying that I should have taken the Amadeus job.

Anne had done a stint as chief architect on Amadeus long ago (I still
have a hard copy of the specification, 24apr87) ... however she backed
going with x.25 ... contrary to a lot of SNA forces ... which got her
replaced. However, Amadeus went with x.25 anyway.

misc. past posts mentioning amadeus
http://www.garlic.com/~lynn/2001g.html#49 Did ATT offer Unix to Digital
Equipment in the 70s?
http://www.garlic.com/~lynn/2001g.html#50 Did ATT offer Unix to Digital
Equipment in the 70s?
http://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
http://www.garlic.com/~lynn/2003d.html#67 unix
http://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a
mainframe?
http://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good
architecture for interactive workloads
http://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good
architecture for interactive workloads
http://www.garlic.com/~lynn/2004m.html#27 Shipwrecks
http://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies
(w/o RM/SQL)
http://www.garlic.com/~lynn/2004o.html#29 Integer types for 128-bit
addressing
http://www.garlic.com/~lynn/2005f.html#22 System/360; Hardwired vs.
Microcoded
http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back


Re: virtual memory

2006-05-23 Thread Anne Lynn Wheeler
[EMAIL PROTECTED] [EMAIL PROTECTED] writes:
 Was there an acronym/initialism for this COMMON area?  My memory of
 doing junior-level systems work on MVS systems is telling me that
 there was one, but not what.   ?

common segment ... having started out as a 1mbyte shared segment
in every address space.

the hardware table look-aside buffers (TLBs) were STO (virtual address
space) associative. the result was that there were unique TLB entries
for the same common segment pages from different virtual address
spaces.
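
a toy sketch (schematic only, not the real hardware layout) of why a
STO-associative TLB duplicates entries for shared areas:

#include <stdbool.h>
#include <stdint.h>

struct tlb_entry {
    bool     valid;
    uint32_t sto;      /* segment table origin = address space tag */
    uint32_t vpage;    /* virtual page number */
    uint32_t rframe;   /* real frame number */
};

#define TLB_SIZE 64
static struct tlb_entry tlb[TLB_SIZE];

/* lookup must match on BOTH the virtual page and the STO tag ... so
 * the same shared page referenced from N address spaces (N different
 * STOs) occupies N separate entries */
bool tlb_lookup(uint32_t sto, uint32_t vpage, uint32_t *rframe)
{
    struct tlb_entry *e = &tlb[vpage % TLB_SIZE];
    if (e->valid && e->sto == sto && e->vpage == vpage) {
        *rframe = e->rframe;
        return true;
    }
    return false;   /* miss: hardware walks the tables and refills */
}

the special common-segment treatment mentioned below presumably
amounted to skipping the sto comparison for addresses falling in the
common area ... so one entry could serve all address spaces.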

in the early 80s, some of the high-end hardware started adding special
TLB treatment for the MVS common segment ... so that there would only
be one set of TLB entries for MVS common segment areas across all MVS
address spaces. However, this surfaced a bug when MVS was running as a
virtual guest ... the email below describes a temporary fix ...
pending availability of the MVS APAR.

Date: 18 February 1983, 12:30:13 EST
From: 

Hi - the 'official' fix to the STBVR 0Cx abend problem is MVS APAR/
PTF OZ67587.  I don't know if it's available yet.  In the meantime,
the zap works just fine.

I'll send the zap following this note.

//x JOB (6007,X003),,MSGLEVEL=1,MSGCLASS=O,CLASS=B,
// REGION=1024K,NOTIFY=PREISM
//ZAP EXEC PGM=AMASPZAP
//SYSPRINT DD SYSOUT=*
//SYSLIB DD DSN=SYS1.NUCLEUS,DISP=SHR,UNIT=3330,VOL=SER=D00126
**
** FOR VM MVS GUEST, STBVR, TURN OFF USE OF COMMON SEGMENTS
**
 NAME IEAVNPX1 IEAVNPX1
 VER 0DA2 96026003        OI  SGTCB=1
 REP 0DA2 4700            NOP
 VER 0DCE 96026003        OI  SGTCB=1
 REP 0DCE 4700            NOP
//

--
Anne  Lynn Wheeler | http://www.garlic.com/~lynn/


Re: Code density and performance?

2006-05-17 Thread Anne Lynn Wheeler
Peter Flass [EMAIL PROTECTED] writes:
 Don't know about HP, bit it's also IBM:
   http://www.research.ibm.com/journal/rd/483/heller.html

re:
http://www.garlic.com/~lynn/2006j.html#32 Code density and performance?

the above journal article reference mentions 1997 for millicode. it
mentions millicode being needed for (at least) sie and pr/sm. pr/sm
was done about a decade earlier on the 3090 (and was significantly
more difficult in native microcode).

this is a recent posting including email from jun81 mentioning
sie effort for 3090
http://www.garlic.com/~lynn/2006j.html#27 virtual memory

this mention of macrocode
http://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode

has a piece of email from mar81 that mentions the (then) announced
amdahl 5880 with macrocode (used for the hypervisor implementation).

i had first started running into references regarding amdahl's work on
macrocode the previous year (mar80).

--
Anne  Lynn Wheeler | http://www.garlic.com/~lynn/


Re: virtual memory

2006-05-14 Thread Anne Lynn Wheeler
 memory to 370/165 said that shared segment protect and a
couple other features would cause an extra six month delay). a few
past posts mentioning virtual memory retrofit to 370/165:
http://www.garlic.com/~lynn/2006i.html#4 Mainframe vs. xSeries
http://www.garlic.com/~lynn/2006i.html#9 Hadware Support for Protection Bits:
what does it really mean?
http://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in
S/370

as a result, the shared page protection had to be redone as a hack
utilizing the storage protect keys that had been carried over from
360. this required behind-the-scenes fiddling of the virtual machine
architecture ... which prevented running cms with the virtual machine
assist microcode activated (hardware directly implementing virtual
machine execution of privileged instructions). later, in order to run
cms virtual machines with the VMA microcode assist, protection was
turned off. instead, a scan of all shared pages was substituted,
occurring on every task switch. an application running in a virtual
address space could modify shared pages ... but the effect would be
caught and discarded before the task switch occurred (so any
modification wouldn't be apparent in other address spaces). this sort
of worked running single processor configurations ... but got much
worse in multiprocessor configurations. now you had to have a unique
set of shared pages specific to each real processor. past posts
mentioning the changed protection hack for cms
http://www.garlic.com/~lynn/2006i.html#9 Hadware Support for Protection Bits:
what does it really mean?
http://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in
S/370
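
a schematic sketch of that task-switch scan ... the changed flag
stands in for the 370 storage-key change bit, and discard_frame is a
made-up stub:

#include <stdbool.h>
#include <stddef.h>

struct shared_page {
    void *frame;        /* real storage backing the shared page */
    bool  changed;      /* stand-in for the storage-key change bit */
};

/* stub standing in for the real kernel frame-release service */
static void discard_frame(void *frame) { (void)frame; }

/* before dispatching another address space: throw away any shared
 * page that was modified, so the modification is never visible
 * elsewhere ... a pristine copy is paged back in on next reference */
static void task_switch_scan(struct shared_page *pages, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (pages[i].changed) {
            discard_frame(pages[i].frame);
            pages[i].frame   = NULL;
            pages[i].changed = false;
        }
    }
}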

past posts mention pageable kernel work:
http://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re:
Itanium benchmarks ...]
http://www.garlic.com/~lynn/2001l.html#32 mainframe question
http://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
http://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
http://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
http://www.garlic.com/~lynn/2002p.html#64 cost of crossing kernel/user boundary
http://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#20 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#23 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#26 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?
http://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring,
wandering story
http://www.garlic.com/~lynn/2004b.html#26 determining memory size
http://www.garlic.com/~lynn/2004g.html#45 command line switches [Re: [REALLY
OT!] Overuse of symbolic constants]
http://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2005f.html#10 Where should the type information be:
in tags and descriptors
http://www.garlic.com/~lynn/2005f.html#16 Where should the type information be:
in tags and descriptors
http://www.garlic.com/~lynn/2006.html#35 Charging Time
http://www.garlic.com/~lynn/2006.html#40 All Good Things

--
Anne  Lynn Wheeler | http://www.garlic.com/~lynn/


Re: virtual memory

2006-05-12 Thread Anne Lynn Wheeler
 now were in separate address space) to directly
access the calling application's parameters in the application's
virtual address space.

The initial solution was something called the COMMON segment, (again,
initially) a 1mbyte area of every virtual address space where
applications could stuff parameter values that needed to be accessed
by called subsystems resident in other address spaces. Over time, as
customer installations added a large variety of subsystems, it wasn't
unusual to find the COMMON segment taking up five megabytes. These
were MVS systems, with a unique 16mbyte virtual address space for
every application; the kernel image was taking 8mbytes out of every
virtual address space, so with a five megabyte COMMON area, that would
leave a maximum of 3mbytes for application use (out of a 16mbyte
virtual address space).

Dual-address space mode was introduced in the late 70s with the 3033
processor (to start to alleviate the problem caused by the extensive
use of the pointer-passing paradigm). This provided virtual address
space modes ... a subsystem (in its own virtual address space) could
be called with a pointer to parameters in the application address
space. The subsystem had facilities that allowed it to reach into
other virtual address spaces. A call to one of these subsystems still
required passing through the kernel to swap virtual address space
pointers ... and some other gorp.

recent mention of some connection between dual-address space and
itanium
http://www.garlic.com/~lynn/2006.html#39 What happens if CR's are directly
changed?
http://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces

Along the way, there was a desire to move more of the operating system
library stuff (that resided as part of the application code) into
separate address spaces. So dual-address space was generalized to
multiple address spaces, and a new hardware facility was created
called program call. It attempted to achieve the efficiency of a
branch-and-link instruction calling library code, combined with the
structured protection mechanisms required to switch virtual address
spaces by passing through privileged kernel code. the privileged
program call hardware table had a bunch of permission specification
controls ... including which collection of virtual address space
pointers could be moved into the various access registers. 31-bit
virtual addressing was also introduced.
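
a toy sketch of the program call idea ... a privileged entry table
maps a PC number to an entry point plus permissions, so the caller
supplies only an index (never an address) and the transfer is cheap
but still mediated; the structure here is schematic, not the real
370-xa entry table layout:

#include <stdint.h>
#include <stdio.h>

struct pc_entry {
    void    (*entry)(void);   /* where the call lands */
    uint32_t target_asid;     /* address space the callee runs in */
    uint32_t auth_mask;       /* which callers are authorized */
};

static void subsystem_service(void) { puts("in subsystem"); }

static const struct pc_entry entry_table[] = {
    { subsystem_service, /*asid*/ 7, /*auth*/ 0x0001 },
};

int program_call(uint32_t pc_number, uint32_t caller_auth)
{
    if (pc_number >= sizeof entry_table / sizeof entry_table[0])
        return -1;                     /* no such entry */
    const struct pc_entry *e = &entry_table[pc_number];
    if ((e->auth_mask & caller_auth) == 0)
        return -1;                     /* caller lacks authority */
    /* real hardware also swaps address-space pointers here */
    e->entry();
    return 0;
}

int main(void)
{
    return program_call(0, 0x0001);    /* authorized: lands in subsystem */
}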

today there are all sorts of combinations of 24-bit virtual
addressing, 31-bit virtual addressing, and 64-bit virtual addressing
... as well as possibly several such virtual address spaces being
accessible concurrently.

3.8 Address spaces ... some overview including discussion about
multiple virtual address spaces:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/3.8?SHELF=EZ2HW125&DT=19970613131822

2.3.5 Access Registers ... discussion of how access registers 1-15
can designate any address space
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/2.3.5?SHELF=EZ2HW125&DT=19970613131822

10.26 Program Call
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.26?SHELF=EZ2HW125&DT=19970613131822

10.27 Program Return
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.27?SHELF=EZ2HW125&DT=19970613131822

10.28 Program Transfer
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.28?SHELF=EZ2HW125&DT=19970613131822

--
Anne  Lynn Wheeler | http://www.garlic.com/~lynn/


11may76, 30 years, (re-)release of resource manager

2006-05-11 Thread Anne Lynn Wheeler
30 years since the (re-)release of the resource manager; some amount
of it was stuff i had done as an undergraduate in the 60s for cp67 ...
but was dropped in the morph from cp67 to vm370.

collected past posts mentioning the scheduler
http://www.garlic.com/~lynn/subtopic.html#fairshare
and page replacement
http://www.garlic.com/~lynn/subtopic.html#wsclock

misc. past posts mentioning the date for resource manager:
http://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
http://www.garlic.com/~lynn/2001f.html#56 any 70's era supercomputers that ran
as slow as today's supercomputers?
http://www.garlic.com/~lynn/2006d.html#27 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006g.html#1 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#7 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#22 The Pankian Metaphor
http://www.garlic.com/~lynn/2006h.html#25 The Pankian Metaphor
http://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in
S/370

--
Anne  Lynn Wheeler | http://www.garlic.com/~lynn/


Re: virtual memory

2006-05-11 Thread Anne Lynn Wheeler
 it unmanned
http://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
http://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
http://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
http://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning
http://www.garlic.com/~lynn/2005n.html#29 Data communications over telegraph
circuits

--
Anne  Lynn Wheeler | http://www.garlic.com/~lynn/


Virtual memory implementation in S/370 (a.f.c x-post)

2006-05-10 Thread Anne Lynn Wheeler
http://www.garlic.com/~lynn/2004p.html#8 vm/370 smp support and shared segment
protection hack
http://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
http://www.garlic.com/~lynn/2005b.html#8 Relocating application architecture
and compiler support
http://www.garlic.com/~lynn/2005e.html#53 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the
line
http://www.garlic.com/~lynn/2005g.html#30 Moving assembler programs above the
line
http://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
http://www.garlic.com/~lynn/2005t.html#39 FULIST
http://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped files on
z/VM V5.1
http://www.garlic.com/~lynn/2006.html#13 VM maclib reference
http://www.garlic.com/~lynn/2006.html#17 {SPAM?} DCSS as SWAP disk for z/Linux
http://www.garlic.com/~lynn/2006.html#18 DCSS as SWAP disk for z/Linux
http://www.garlic.com/~lynn/2006.html#19 DCSS as SWAP disk for z/Linux
http://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP disk for z/Linux
http://www.garlic.com/~lynn/2006.html#28 DCSS as SWAP disk for z/Linux
http://www.garlic.com/~lynn/2006.html#31 Is VIO mandatory?
http://www.garlic.com/~lynn/2006.html#35 Charging Time
http://www.garlic.com/~lynn/2006b.html#4 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006b.html#7 Mount a tape
http://www.garlic.com/~lynn/2006f.html#2 using 3390 mod-9s

--
Anne  Lynn Wheeler | http://www.garlic.com/~lynn/


Virtual memory implementation in S/370 (a.f.c x-post)

2006-05-10 Thread Anne Lynn Wheeler
Marten Kemp [EMAIL PROTECTED] writes:
 The recent thread about virtual memory sparked a (kind of)
 idle question: why did the implementation in the S/370
 have a two-level scheme (segment and page)? My original
 thought was that it facilitated definition of discontiguous
 parts of an address space.

re:
http://www.garlic.com/~lynn/2006i.html#22 virtual memory
http://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in
S/370

i had also done page migration as well as table migration ... which
were released in my resource manager product ... the blue letter
product announcement gives the product release date as 11may76 ... 30
years ago tomorrow. partial reproduction of the resource manager blue
letter:

page migration would make judgments about the different speed paging
devices ... and if it found the high speed paging devices filling up,
it would start looking for idle virtual pages (resident on the high
speed devices) that it could migrate to the slower speed devices.

when real storage started getting constrained ... it would also look
for idle portions of virtual address spaces. each 4kbyte virtual page
consumed ten bytes of real storage, 2 bytes for the page table entry
and 8 bytes of administrative stuff (shadow storage protect keys, and
the location of the virtual page on the paging device). for idle
segments, it would turn on the invalid bit in the segment table entry,
write the administrative stuff to special disk locations, and then
discard the real memory for the page table and administrative stuff
... typically picking up 160 bytes of real storage per 64kbytes of
idle virtual address space, or 2560 bytes of real storage per 1mbyte
of idle virtual address space ... a little over 4kbytes of real
storage per 2mbytes of idle virtual address space.
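
schematically (stub helpers and made-up names, just to show the shape
of the bookkeeping):

#include <stdbool.h>
#include <stddef.h>

struct segment {
    bool  invalid;          /* segment-table-entry invalid bit */
    void *page_table;       /* 2 bytes per page in the real thing */
    void *swaptable;        /* 8 bytes per page of admin data */
};

/* stubs standing in for the real kernel services */
static void write_swaptable_to_disk(void *t) { (void)t; }
static void free_real_storage(void *p)       { (void)p; }

/* reclaim the tables of an idle 64kbyte segment: 16 pages x 10 bytes
 * = 160 bytes of real storage back; everything is rebuilt on the next
 * fault into that segment */
static void migrate_idle_segment(struct segment *seg)
{
    seg->invalid = true;                     /* faults re-drive rebuild */
    write_swaptable_to_disk(seg->swaptable); /* save admin/backing info */
    free_real_storage(seg->page_table);      /* 16 x 2 = 32 bytes */
    free_real_storage(seg->swaptable);       /* 16 x 8 = 128 bytes */
    seg->page_table = NULL;
    seg->swaptable  = NULL;
}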

the defined virtual address space might or might not be contiguous ...
but the segment table could have large gaps in the pointers to page
tables ... potentially because the space wasn't defined in the
particular virtual address space ... or because the virtual address
space area was deemed to be idle at the moment and the associated
tables had been removed from real storage.

the administrative table containing the disk backing store address for
virtual pages (and the shadow storage protect keys) was called a
SWAPTABLE ... so the feature allowing the SWAPTABLE to be removed from
real storage was called SWAPTABLE migration or paging SWAPTABLEs.

a few other posts mentioning SWAPTABLE migration:
http://www.garlic.com/~lynn/2006.html#19 DCSS as SWAP disk for z/Linux
http://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP disk for z/Linux
http://www.garlic.com/~lynn/2006.html#35 Charging Time
http://www.garlic.com/~lynn/2006.html#36 Charging Time

...

and a few posts mentioning shadowing process:
http://www.garlic.com/~lynn/2005h.html#17 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2006i.html#10 Hadware Support for Protection Bits:
what does it really mean?

--
Anne  Lynn Wheeler | http://www.garlic.com/~lynn/