The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

t...@harminc.net (Tony Harminc) writes:
> I'm still not convinced they are related. Hardware-level TLB
> management would still be there for the shadow tables. In the early
> days where the only TLB invalidating instruction was PTLB, which
> clobbered the whole thing, the trick would presumably lie in avoiding
> that instruction like the plague.

recent threads mentioning shadow tables
http://www.garlic.com/~lynn/2010e.html#1 LPARs: More or Less?
http://www.garlic.com/~lynn/2010e.html#2 LPARs: More or Less?
http://www.garlic.com/~lynn/2010e.html#28 What was old is new again (water chilled)

The "TLB" rules followed by shadow table operation also did implicit
invalidation everytime address space pointer changed.

The shadow table operation followed the same rules as the TLB. PTLB,
ISTO, ISTE, & IPTE were all instructions in the original 370 virtual
memory architecture. When the 370/165 hardware group ran into problems
with the virtual memory hardware retrofit ... and wanted to drop several
features in order to buy back six months in the schedule ... ISTO, ISTE,
and IPTE were among the things dropped from the base architecture
(leaving only PTLB ... i.e. every time any invalidation occurred,
everything got invalidated).

Also, the original cp67 and vm370 "shadow table" only had a single "STO
stack" ... this is analogous to the 360/67 and 370/145 TLB ... where
every time the control register address space pointer changed (CR0 on
the 360/67 and CR1 on the 370/145) ... there was an implicit TLB purge
(aka all TLB entries implicitly belonged to the same/single address
space). The corresponding vm370 implementation was that all shadow table
entries were purged ... any time there was a CR0/CR1 change.
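
To make the single-STO-stack rule concrete, here is a minimal C sketch
(purely illustrative ... the structure, entry count, and names are made
up, not actual 360/67/370/145 hardware or CP67/VM370 code): any change
of the address space pointer implicitly throws away everything.

/* Illustrative sketch only -- models the single-STO-stack rule:
 * all entries implicitly belong to one address space, so any change
 * of the address space pointer (CR0/CR1) purges everything.
 * Entry count and names are made up for the example. */

#include <stdio.h>

#define ENTRIES 64

struct single_sto_tlb {
    unsigned long current_sto;    /* current address space pointer */
    int valid[ENTRIES];           /* 370/145-style single valid bit per entry */
};

/* load a new address space pointer; a change means an implicit full purge */
static void load_asp(struct single_sto_tlb *t, unsigned long new_sto)
{
    if (new_sto != t->current_sto) {
        for (int i = 0; i < ENTRIES; i++)
            t->valid[i] = 0;      /* every entry invalidated */
        t->current_sto = new_sto;
    }
}

int main(void)
{
    struct single_sto_tlb t = { 0 };
    t.valid[3] = 1;               /* pretend a translation was loaded */
    load_asp(&t, 0x1000);         /* address space switch ... */
    printf("entry 3 valid after switch: %d\n", t.valid[3]);   /* prints 0 */
    return 0;
}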

The 370/168 had a seven-entry STO-stack ... aka every TLB entry had a
3bit identifier (eight states: invalid, or belonging to one of seven
address spaces; 370/145 TLB entries had a single bit, either valid or
invalid). Loading a new CR1 value on the 370/168 didn't automatically
purge the whole TLB ... it would check if the new value was one of the
already loaded saved values ... and if there was a match ... it would
continue. If the new address space value loaded into CR1 didn't match a
saved value ... it would select one of the seven saved entries to be
replaced ... and invalidate/reset all TLB entries that had the matching
3bit ID.
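
A corresponding sketch of the seven-entry STO-stack behavior just
described (again purely illustrative C, not the actual 370/168 logic ...
the TLB size and the round-robin choice of victim slot are assumptions):

/* Illustrative sketch only -- 370/168-style seven-entry STO-stack:
 * each TLB entry carries a 3bit ID (0 = invalid, 1-7 = one of seven
 * saved address spaces).  Loading CR1 only purges on a miss, and then
 * only the entries carrying the victim's ID.  Replacement policy here
 * is a simple round-robin assumption. */

#include <stdio.h>

#define TLB_ENTRIES 128
#define STO_SLOTS   7

struct sto_tlb {
    unsigned long sto_stack[STO_SLOTS + 1]; /* slots 1..7 in use; 0 unused */
    int tlb_id[TLB_ENTRIES];                /* 3bit ID per TLB entry, 0 = invalid */
    int next_victim;                        /* round-robin replacement pointer */
};

/* returns nonzero if the CR1 load caused a purge of one ID's entries */
static int load_cr1(struct sto_tlb *t, unsigned long new_sto)
{
    for (int id = 1; id <= STO_SLOTS; id++)
        if (t->sto_stack[id] == new_sto)
            return 0;                       /* hit: nothing invalidated */

    /* miss: pick a victim slot, reset every TLB entry carrying its ID */
    int victim = t->next_victim + 1;        /* IDs run 1..7 */
    t->next_victim = (t->next_victim + 1) % STO_SLOTS;
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (t->tlb_id[i] == victim)
            t->tlb_id[i] = 0;               /* invalidate just that address space */
    t->sto_stack[victim] = new_sto;
    return 1;
}

int main(void)
{
    struct sto_tlb t = { 0 };
    printf("first load purges? %d\n", load_cr1(&t, 0x1000));  /* 1: miss */
    printf("reload purges?     %d\n", load_cr1(&t, 0x1000));  /* 0: hit  */
    return 0;
}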

The VM370 product didn't support multiple shadow tables until the priced
kernel add-on to VM370 release 5. MVS changed the CR1 value extremely
frequently ... even w/o doing an explicit PTLB ... and every time ... VM
had to do a full invalidation of all shadow table entries ...
corresponding to the similar implicit operation that went on with the
370/145 (not having a multiple-entry STO-stack, at least up until the
priced kernel add-on to vm370 release 5).
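
In software terms the difference is roughly the following ... a single
shadow table forces a full invalidation on every guest CR1 change, while
a small set of shadow tables keyed by the guest's STO lets a switch back
to a recently used guest address space reuse what was already built.
This C sketch is purely illustrative (the names, the table count, and
the rebuild-on-demand detail are assumptions, not VM370 release 5
internals):

/* Illustrative sketch only -- caching several shadow tables keyed by
 * the guest CR1 (STO) value, instead of a single shadow table that
 * must be fully invalidated on every guest address space switch.
 * Sizes and names are made up for the example. */

#include <stdio.h>
#include <string.h>

#define SHADOW_TABLES 6      /* assumed count, for illustration only */
#define SHADOW_PTES   256

struct shadow {
    unsigned long guest_sto;             /* guest CR1 this table shadows */
    unsigned long pte[SHADOW_PTES];      /* shadow page table entries */
    int in_use;
};

struct shadow_cache {
    struct shadow tables[SHADOW_TABLES];
    int next_victim;                     /* round-robin replacement */
};

/* guest reloads CR1: return the shadow table to run on, rebuilding only
 * on a miss (single-table behavior would rebuild on every change) */
static struct shadow *guest_cr1_switch(struct shadow_cache *c,
                                       unsigned long guest_sto,
                                       int *rebuilt)
{
    for (int i = 0; i < SHADOW_TABLES; i++)
        if (c->tables[i].in_use && c->tables[i].guest_sto == guest_sto) {
            *rebuilt = 0;                /* hit: keep existing shadow entries */
            return &c->tables[i];
        }

    struct shadow *s = &c->tables[c->next_victim];
    c->next_victim = (c->next_victim + 1) % SHADOW_TABLES;
    memset(s->pte, 0, sizeof s->pte);    /* full invalidation, refill on demand */
    s->guest_sto = guest_sto;
    s->in_use = 1;
    *rebuilt = 1;
    return s;
}

int main(void)
{
    struct shadow_cache c = { 0 };
    int rebuilt;
    guest_cr1_switch(&c, 0x2000, &rebuilt);
    printf("first switch rebuilt? %d\n", rebuilt);   /* 1: built from scratch */
    guest_cr1_switch(&c, 0x3000, &rebuilt);
    guest_cr1_switch(&c, 0x2000, &rebuilt);
    printf("switch back rebuilt?  %d\n", rebuilt);   /* 0: reused */
    return 0;
}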

There was a somewhat analogous issue on real 3033 hardware with the
introduction of dual-address space mode. The 3033 was effectively the
same logic design as the 370/168 ... remapped to slightly faster chips
... and as such ... the TLB had the same seven-entry STO-stack. When
using dual-address space mode ... the increase in the number of
different address space pointers was overrunning the 3033 (seven-entry)
STO-stack and the frequency of (implicit) TLB entry invalidations went
way up ... to the point that dual-address space was running slower than
the common segment implementation.

Dual-address space mode was somewhat a subset retrofit of the 370-xa
multiple address spaces. The common segment problem on 370 was that the
MVS kernel was taking half the 16mbyte address space and the common
segment started out taking only a single mbyte segment. The common
segment was to address the pointer-passing paradigm from MVT&SVS days
for subsystems ... which had resided in the same address space as the
application. With the move to MVS, the subsystems were now in a
different address space (from the calling applications), which broke the
pointer-passing API paradigm. The solution was to have a common segment
that was the same in applications and subsystems. The problem was that
the common segment grew with the subsystems installed and the
applications using subsystems ... and larger installations had the
common segment area pushing over five mbytes (threatening to leave only
2mbytes for application use).

The Burlington lab was a large MVS shop with large chip-design fortran
applications and a very carefully crafted MVS that held the common
segment area to one mbyte ... so the applications still had seven
mbytes. However, increases in chip complexity were forcing the fortran
applications over seven mbytes ... threatening to convert the whole
place to vm/cms ... since the fortran applications under CMS could get
very nearly the whole 16mbytes.

-- 
42yrs virtualization experience (since Jan68), online at home since Mar1970
