Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes:
> For relocate shared segment support, a shared segment may appear
> anywhere within a virtual machine's address space (it does not need to
> be at the position specified in the VMABLOK). The way I handled it in
> DMKVMA was to use the PTO pointers in the VMABLOK to check the shared
> segments, rather than using the segment index number to displace into
> the segment table and pick-up the STE.
>
> One of the co-op students that helped me write the original shared
> segment support for release 2 VM (included the sub-set that is now in
> the product DCSS) is now with Interactive Data Corporation (IDC). They
> have taken the idea and put a whole group on expanding the idea. They
> now call it Floating segments (instead of relocating segments). They
> have a modified assembler for generating adcon free code and are
> working on the compilers. All this work they have done has greater
> significance than they realize. It would greatly simplify conversion
> to an increased address space size.

re:
http://www.garlic.com/~lynn/2006y.html#20 moving on
and specifically the old email
http://www.garlic.com/~lynn/2006y.html#email790326
and other posts in this thread
http://www.garlic.com/~lynn/2006y.html#21 moving on
http://www.garlic.com/~lynn/2006y.html#23 moving on

one of the big problems was that the original 370 virtual memory
architecture included a "segment protect" feature (i.e. turn on a bit
in the segment table entry for a specific virtual address space ... and
everything in that segment became read-only for that address space).
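a toy sketch of how that feature worked (python, my own illustration
... the class and field names are made up, not real 370 control block
layouts): two address spaces point their segment table entries at the
same shared page table, but the protect bit lives in each space's own
segment table entry ... so the same pages can be r/w in one space and
r/o in another:

```python
# toy model of per-address-space segment protection (not real 370
# data structures): the page table is shared, the protect bit is not.

class SegmentTableEntry:
    def __init__(self, page_table, protect=False):
        self.page_table = page_table   # shared page-table object
        self.protect = protect         # per-address-space read-only bit

class AddressSpace:
    def __init__(self):
        self.segments = {}             # seg_no -> SegmentTableEntry

    def map_shared(self, seg_no, page_table, protect):
        self.segments[seg_no] = SegmentTableEntry(page_table, protect)

    def store(self, seg_no, value):
        ste = self.segments[seg_no]
        if ste.protect:
            raise PermissionError("protection exception: segment is r/o here")
        ste.page_table.append(value)   # stand-in for a store into the segment

shared_pt = []                         # one shared "page table"
writer, reader = AddressSpace(), AddressSpace()
writer.map_shared(5, shared_pt, protect=False)
reader.map_shared(5, shared_pt, protect=True)

writer.store(5, "ok")                  # allowed: protect bit off in this space
try:
    reader.store(5, "nope")            # blocked: same pages, protect bit on
except PermissionError as e:
    print(e)
```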

the cp67 to vm370 morph of cms was somewhat structured around having
that feature. i've posted before about how the retrofit of virtual
memory hardware to the 370/165 ran into schedule problems and they
dropped a number of features from the 370 virtual memory architecture
to buy back six months in the schedule (and then made all the other
processors drop them also so that there was compatibility across the
370 line).

this forced vm370 to revert to the cp67 convention of protecting
shared pages ... which played games with storage protect keys. this
resulted in additional vm370 overhead for cms ... but a little later
it also meant that the VMA (virtual machine assist) microcode
performance enhancements couldn't be used with shared-system CMS (the
VMA implementation of storage key operations didn't know the rules for
protecting shared segment pages).
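roughly how the storage key game worked (toy python sketch with
hypothetical key values, not actual CP code): a store goes through
only if the PSW key is zero or matches the frame's key, so CP
protected shared pages by giving their frames a key the guest's PSW
key never matched ... and the last couple of lines show the exposure
when VMA applies a guest set-storage-key without knowing the
shared-page rules:

```python
# toy sketch of 370 key-based store protection: every real page frame
# carries a storage key; a store is allowed only if the PSW key is 0
# (master key) or matches the frame key.

def store_allowed(psw_key, frame_key):
    return psw_key == 0 or psw_key == frame_key

GUEST_KEY, SHARED_FRAME_KEY = 14, 15   # hypothetical key values
assert store_allowed(GUEST_KEY, GUEST_KEY)             # guest's own pages
assert not store_allowed(GUEST_KEY, SHARED_FRAME_KEY)  # shared pages r/o

# the VMA exposure: microcode executing the guest's set-storage-key
# directly (without knowing the shared-page rules) lets the guest
# change a frame key and then store into a "protected" shared page.
frame_key = GUEST_KEY                        # guest SSK applied blindly
assert store_allowed(GUEST_KEY, frame_key)   # shared-page protection lost
```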

preparing for vm370 release 3 ... somebody came up with a hack that
would allow VMA to be used with CMS ... the storage key game was
eliminated and instead the currently running CMS was allowed to
modify shared pages (as well as being able to run with VMA turned
on). however, before switching to a different process, the dispatcher
would scan all shared pages, searching for ones that had been
changed. if any were found, they were discarded (a new process
requiring a discarded shared page would have an unmodified copy
refreshed from disk). the reduced overhead from being able to use VMA
tended to offset the increased overhead of the dispatcher scanning 16
shared pages on (nearly) every process switch.

then it was decided to pick up a very small subset of my virtual
memory management support (both cp and cms changes) as "DCSS" for
release 3
http://www.garlic.com/~lynn/subtopic.html#mmap
http://www.garlic.com/~lynn/subtopic.html#adcon

this initially resulted in at least doubling the typical number of
shared pages from 16 to 32. however, the overhead of the dispatcher
scanning 32 shared pages (on every process switch) changed the
trade-off vis-a-vis the overhead reduction from being able to use
VMA. somebody then decided they had to go with the changed-page
scanning hack anyway ... since CMS-intensive customers had already
been told that they would get a performance benefit in vm370 release 3
from using VMA.

This was further aggravated when developing the multiprocessor support
that would be shipped in vm370 release 4. The release 3 change to
scan for changed shared pages was predicated on a single process at a
time having exclusive access to the shared pages. With multiprocessor
support, there could be multiple, concurrently running processes. To
preserve the "exclusive access" assumption, it was then necessary to
have processor-specific copies of shared pages. Now, not only did the
dispatcher have to scan an (increasing number of) shared pages for
changes (on nearly every task switch) ... but it also had to make sure
that the "switched-to" process had its (virtual address space) segment
table entries pointing to the processor-specific shared segments.
Things were getting entirely out of control.
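a toy sketch of the processor-specific copy bookkeeping (python,
invented names, not actual CP dispatcher logic): on every dispatch the
shared segment table entry has to be repointed at the copy belonging
to the processor the process is being dispatched on:

```python
# sketch of why SMP broke the scan-for-changes hack: the hack assumed
# one process at a time had the shared pages; with two CPUs running
# concurrently, CP kept a per-processor copy of each shared segment,
# and the dispatcher had to repoint the switched-to process's segment
# table at its CPU's copy.

per_cpu_copies = {0: "shared-seg-copy-cpu0", 1: "shared-seg-copy-cpu1"}

def dispatch(process_segtable, cpu, shared_seg_no):
    # repoint the shared segment entry at this CPU's private copy
    process_segtable[shared_seg_no] = per_cpu_copies[cpu]
    return process_segtable

segtable = {}
dispatch(segtable, 0, 5)
assert segtable[5] == "shared-seg-copy-cpu0"
dispatch(segtable, 1, 5)    # same process later dispatched on CPU 1
assert segtable[5] == "shared-seg-copy-cpu1"
```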

Another issue that had come up was that the original relational/sql,
system/r, had been developed on vm370 and was taking advantage of more
sophisticated virtual memory support. One of the features allowed some
process address spaces to have r/w access to a shared segment while
other processes had only r/o access
http://www.garlic.com/~lynn/subtopic.html#systemr
it was referred to as "DWSS" in the technology transfer effort from
SJR to Endicott for what became SQL/DS. a few recent posts mentioning
DWSS
http://www.garlic.com/~lynn/2006t.html#16 Is the teaching of non-reentrant HLASM coding practices ever defensible?
http://www.garlic.com/~lynn/2006t.html#39 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#11 long ago and far away, vm370 from early/mid 70s

Furthermore, the processing load on the US HONE system was increasing
and they were upgrading their 370/168s to multiprocessors ... HONE was
a vm370-based infrastructure that provided world-wide support to
marketing, sales, and field people.
http://www.garlic.com/~lynn/subtopic.html#hone

HONE was doing this 6-9 months before release 4 (and official
multiprocessor support) was going to be available.

As i had been involved in a lot of the multiprocessor support
http://www.garlic.com/~lynn/subtopic.html#smp
http://www.garlic.com/~lynn/subtopic.html#bounce

I undertook to build a version of multiprocessor support on their
production vm370 release 3 (which I had already heavily modified).
While I was doing that, I went ahead and put in the code to revert to
the protection games with storage keys (eliminating having to have
unique shared-page copies for every processor) that had existed in
cp67 and in vm370 prior to release 3.

for a little drift, pieces of recent thread in comp.arch on virtual
address space mappings
http://www.garlic.com/~lynn/2006w.html#23 Multiple mappings
http://www.garlic.com/~lynn/2006x.html#23 Multiple mappings
http://www.garlic.com/~lynn/2006x.html#26 Multiple mappings
http://www.garlic.com/~lynn/2006y.html#11 Multiple mappings

past posts mentioning problem/issues retrofitting virtual memory
support to 370/165
http://www.garlic.com/~lynn/2000d.html#82 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
http://www.garlic.com/~lynn/2000f.html#35 Why IBM use 31 bit addressing not 32 bit?
http://www.garlic.com/~lynn/2000f.html#55 X86 ultimate CISC? No. (was: Re: "all-out" vs less aggressive designs)
http://www.garlic.com/~lynn/2000f.html#63 TSS ancient history, was X86 ultimate CISC? designs)
http://www.garlic.com/~lynn/2000g.html#10 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#21 360/370 instruction cycle time
http://www.garlic.com/~lynn/2001.html#63 Are the L1 and L2 caches flushed on a page fault ?
http://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
http://www.garlic.com/~lynn/2001c.html#7 LINUS for S/390
http://www.garlic.com/~lynn/2001k.html#8 Minimalist design (was Re: Parity - why even or odd)
http://www.garlic.com/~lynn/2002.html#48 Microcode?
http://www.garlic.com/~lynn/2002.html#50 Microcode?
http://www.garlic.com/~lynn/2002.html#52 Microcode?
http://www.garlic.com/~lynn/2002g.html#47 Why are Mainframe Computers really still in use at all?
http://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes?
http://www.garlic.com/~lynn/2002m.html#2 Handling variable page sizes?
http://www.garlic.com/~lynn/2002m.html#68 Tweaking old computers?
http://www.garlic.com/~lynn/2002n.html#10 Coherent TLBs
http://www.garlic.com/~lynn/2002n.html#15 Tweaking old computers?
http://www.garlic.com/~lynn/2002n.html#23 Tweaking old computers?
http://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?
http://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
http://www.garlic.com/~lynn/2002p.html#44 Linux paging
http://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text
http://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
http://www.garlic.com/~lynn/2003g.html#19 Multiple layers of virtual address translation
http://www.garlic.com/~lynn/2003g.html#20 price ov IBM virtual address box??
http://www.garlic.com/~lynn/2003h.html#37 Does PowerPC 970 has Tagged TLBs (Address Space Identifiers)
http://www.garlic.com/~lynn/2003m.html#34 SR 15,15 was: IEFBR14 Problems
http://www.garlic.com/~lynn/2003m.html#37 S/360 undocumented instructions?
http://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
http://www.garlic.com/~lynn/2004p.html#8 vm/370 smp support and shared segment protection hack
http://www.garlic.com/~lynn/2005b.html#53 The mid-seventies SHARE survey
http://www.garlic.com/~lynn/2005b.html#62 The mid-seventies SHARE survey
http://www.garlic.com/~lynn/2005e.html#53 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005e.html#57 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005e.html#59 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005f.html#1 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
http://www.garlic.com/~lynn/2005g.html#17 DOS/360: Forty years
http://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2005j.html#39 A second look at memory access alignment
http://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
http://www.garlic.com/~lynn/2005r.html#51 winscape?
http://www.garlic.com/~lynn/2005s.html#23 winscape?
http://www.garlic.com/~lynn/2006.html#13 VM maclib reference
http://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
http://www.garlic.com/~lynn/2006e.html#0 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#5 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#12 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
http://www.garlic.com/~lynn/2006i.html#4 Mainframe vs. xSeries
http://www.garlic.com/~lynn/2006i.html#9 Hadware Support for Protection Bits: what does it really mean?
http://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
http://www.garlic.com/~lynn/2006j.html#5 virtual memory
http://www.garlic.com/~lynn/2006j.html#31 virtual memory
http://www.garlic.com/~lynn/2006j.html#41 virtual memory
http://www.garlic.com/~lynn/2006k.html#57 virtual memory
http://www.garlic.com/~lynn/2006l.html#22 Virtual Virtualizers
http://www.garlic.com/~lynn/2006m.html#26 Mainframe Limericks
http://www.garlic.com/~lynn/2006n.html#16 On the 370/165 and the 360/85
http://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006s.html#30 Why magnetic drums was/are worse than disks ?
http://www.garlic.com/~lynn/2006s.html#61 Is the teaching of non-reentrant HLASM coding practices ever defensible?
http://www.garlic.com/~lynn/2006t.html#1 Is the teaching of non-reentrant HLASM coding practices ever
http://www.garlic.com/~lynn/2006u.html#60 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
http://www.garlic.com/~lynn/99.html#7 IBM S/360
http://www.garlic.com/~lynn/99.html#204 Core (word usage) was anti-equipment etc
http://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
