Johnny Luo wrote:
> MVS (Multiple Virtual Storage) is the basic concept for
> z/os. However, after entering the mainframe world for eight months I
> still cannot understand it thoroughly, especially for 'multiple
> address spaces'.

in the initial transition from the "real-storage" MVT operating system
to VS2-SVS ... single virtual storage ... a single 16mbyte virtual
address space was created, some paging code was hacked onto the
side of MVT, and the ccw translation routine from CP67 (CCWTRANS) was
glued into MVT. In effect, for most of MVT, it was as if it was
running on a 16mbyte real machine (and there was little awareness that
it was running in a virtual machine environment). The MVT kernel
continued to occupy the same address space as all applications.

The real machine might have 4mbytes of real storage, but there was a
total of 16mbytes of virtual storage defined. The virtual memory paging
infrastructure would define to the hardware which virtual pages
were currently resident in real storage (and at what location). The rest
of the virtual pages (not currently resident in real storage) would be
located out on (disk) secondary storage. If there was an access to a virtual
page that wasn't currently in real storage, there would be an interrupt
into the (kernel) paging code, which would fetch the required page
into real storage (from disk). This mechanism of virtual memory
being larger than real storage and pages moving between real storage and
disk is similar in all operating systems with virtual memory support.
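
as a rough illustration of the mechanism (purely hypothetical structures
and helper names ... nothing to do with actual mvs code), a toy
page-fault handler in C might look something like:

/* toy sketch of demand paging -- hypothetical, not MVS code.  each
 * virtual page is either resident (backed by a real-storage frame) or
 * out on disk in some slot. */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

struct pte {                  /* one entry per virtual page */
    int   resident;           /* 1 = currently in real storage */
    void *frame;              /* real-storage frame, if resident */
    long  disk_slot;          /* where the page lives on disk otherwise */
};

/* stand-ins for the real machinery, just so the sketch runs */
static void *steal_or_allocate_frame(void) { return malloc(PAGE_SIZE); }
static void  read_slot(long slot, void *frame)
{
    (void)frame;
    printf("page-in from disk slot %ld\n", slot);   /* pretend disk read */
}

/* invoked on the page-fault interrupt for virtual page 'p' */
static void page_fault(struct pte *p)
{
    void *frame = steal_or_allocate_frame();  /* may force another page out */
    read_slot(p->disk_slot, frame);           /* fetch the page from disk */
    p->frame = frame;
    p->resident = 1;          /* hardware tables would now map the page */
}

int main(void)
{
    struct pte page = { .resident = 0, .frame = NULL, .disk_slot = 7 };
    page_fault(&page);        /* an access to a non-resident page */
    free(page.frame);
    return 0;
}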

For the transition from VS2-SVS to VS2-MVS ... the MVT kernel and
address space were re-organized. A separate virtual address space was
created for every application ... with an image of the kernel code
occupying 8mbytes of every defined address space. Compared to some
systems that grew up in a virtual memory environment and used message
passing between address spaces ... the real-storage heritage of MVT
(with everything in the same, real, address space) made heavy use of a
pointer-passing paradigm. As a result, there are all sorts of implicit
infrastructures that require the application, kernel, and services to all
occupy the same address space when executing.

An issue in the transition from SVS to MVS was a number of sub-system
services ... that weren't directly part of the kernel (and therefore not
present in the 8mbyte kernel image that shows up in every address
space) ... but did provide essential services for applications and
were dependent on the pointer-passing paradigm. In the transition from
SVS to MVS, where everything in the system no longer occupied the
same, single address space ... these subsystem services got their own
address spaces ... different from each application address space. This
created a complication when an application would pass a pointer to
some set of parameters that a subsystem service in a different virtual
address space needed to access.

To address the pointer-passing paradigm between application address
spaces and subsystem services address spaces ... the "common segment" was
defined. In much the same way that the same 8mbyte kernel image occupied
every virtual address space, the "common segment" also occupied every
address space. Applications could stick parameters in the common segment
and make a call to some subsystem service (which popped into the kernel;
the kernel figured out which address space was being called and
transferred control to that address space ... using the pointers to
parameters located in the common segment, which was the same in all
address spaces).
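
a very loose unix analogy (not how mvs implemented it) ... a shared
region mapped before fork() shows up at the same virtual address in both
processes, so a raw pointer to parameters placed there means the same
thing on both sides ... roughly the role the common segment played
between application and subsystem:

/* loose analogy only: a region at the same virtual address in two
 * processes, so pointers into it can be passed back and forth.
 * the field names are made up for the sketch. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct parms { char request[64]; char reply[64]; };

int main(void)
{
    /* "common segment": mapped before fork, so both processes see it
     * at the same virtual address */
    struct parms *common = mmap(NULL, sizeof *common,
                                PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (common == MAP_FAILED) return 1;

    strcpy(common->request, "open dataset XYZ");   /* caller sticks parms in */

    if (fork() == 0) {          /* the "subsystem service" address space */
        snprintf(common->reply, sizeof common->reply,
                 "handled: %s", common->request);  /* same pointer is valid */
        _exit(0);
    }
    wait(NULL);
    printf("%s\n", common->reply);                 /* caller sees the reply */
    return 0;
}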

This was back in the days when only 24bit/16mbyte addressing was
available. For large installations, with lots of subsystems and
applications ... it wasn't unusual to find common segments being defined
as 4mbytes-5mbytes. This was causing problems for some applications ...
given you started with a 16mbyte virtual address space for an
application; 8mbytes of that was taken for the kernel image (in every
address space) and potentially 5mbytes was taken for the common segment
image (in every address space). As a result, some installations were
left with a maximum of only 3mbytes (out of the 16mbytes) for application
use (instructions and data).

Introduced with the 3033 was something called dual-address space. This
was a special provision that could be set up so that instructions in one
address space could access data in a different address space. This
somewhat alleviated the pressure on the "common segment" size
(potentially growing without bound for large installations with lots of
applications and services). An application could call a subsystem
service (in a different address space), passing a pointer to some
parameters. Rather than the parameters having to be squirreled away in
the common segment ... the parameters could continue to reside in the
private application address space area ... and the subsystem service
(in its own address space) could use the dual-address space support to
"reach" into the application address space to retrieve (or set) parameters.

The 3081 and 370-xa introduced 31-bit (virtual) addressing and also
generalized the dual-address space support with "access registers" and
"program call". These were a special set of kernel-managed hardware
tables through which an application could make a "program call" to a
subsystem in a different address space. Rather than the whole process
having to run thru kernel code to switch address spaces ... the whole
process was implemented in hardware "program call" support (in theory you
could have all sorts of library code that, instead of residing in the
application address space ... could now reside in separate address spaces).

access-register introduction ... from esa/390 principles of operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/5.7?SHELF=EZ2HW125&DT=19970613131822

program call instruction description ... from esa/390 principles of
operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.26?SHELF=EZ2HW125&DT=19970613131822

with regard to the question about maximum virtual memory for an
application and exactly how many virtual pages might exist at any moment
... there was a recent discussion of "zero" pages in some comp.arch
thread. most virtual memory systems (mvs, vm370, windows, unix, linux,
apple, etc) usually don't actually create a virtual page until it has
been accessed for the first time. on first access, the system allocates a
page in real storage and initializes it to all zeros. system utilities
typically also provide a mechanism that allows individual pages to be
"discarded" when no longer needed. If an application attempts to access
a virtual page that has been previously discarded, the system will
dynamically create a new "zeros" page.
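
the behavior is easy to demonstrate on linux ... pages of an anonymous
mmap aren't materialized until first touch, and madvise(MADV_DONTNEED)
plays the role of the "discard" ... the next touch gets a fresh zeros
page:

/* zero on first touch, discard, zero again on next touch */
#include <assert.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    assert(p[0] == 0);               /* first touch: page created as zeros */
    p[0] = 42;                       /* dirty it */

    madvise(p, len, MADV_DONTNEED);  /* "discard" the page */
    assert(p[0] == 0);               /* next touch: a brand new zeros page */

    puts("zero / discard / zero again");
    return 0;
}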

i mention the early zero page implementation in cp67 which actually had
a special page on disk that was all zeros. virtual memory pages were
initialized to point to the (same) zeros page on disk. this would be
read into storage on first access ... and then a new, unique location
allocated after first access. i modified it to instead recognize that
the virtual page didn't yet exist ... and create one dynamically on the
fly by storing zeros in a newly allocated real storage page location.
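
in terms of the earlier toy handler (again, hypothetical structures ...
not actual cp67 code), the change amounts to zero-filling a freshly
allocated real-storage frame when the page has never existed (or was
discarded), instead of reading a shared zeros page in from disk:

#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

struct pte { int exists; int resident; void *frame; long disk_slot; };

/* stand-ins so the sketch runs */
static void *steal_or_allocate_frame(void) { return malloc(PAGE_SIZE); }
static void  read_slot(long slot, void *frame) { (void)slot; (void)frame; }

static void page_fault(struct pte *p)
{
    void *frame = steal_or_allocate_frame();

    if (!p->exists) {                    /* never touched, or discarded */
        memset(frame, 0, PAGE_SIZE);     /* create the zeros page on the fly */
        p->exists = 1;
    } else {
        read_slot(p->disk_slot, frame);  /* normal page-in from disk */
    }
    p->frame = frame;
    p->resident = 1;
}

int main(void)
{
    struct pte p = { 0 };     /* a page that doesn't exist yet */
    page_fault(&p);           /* first access: zeros page created dynamically */
    free(p.frame);
    return 0;
}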

a couple past posts mentioning zeros page:
http://www.garlic.com/~lynn/2004h.html#19 fast check for binary zeroes in memory
http://www.garlic.com/~lynn/2005c.html#16 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005c.html#24 [Lit.] Buffer overruns

a couple comp.arch threads from google groups that mention zero page
http://groups.google.com/group/comp.arch/browse_thread/thread/db9e349754c2c0bd/ed3c64f4160ecf41?lnk=st&q=zero+page+group%3Acomp.arch&rnum=4&hl=en#ed3c64f4160ecf41
http://groups.google.com/group/comp.arch/browse_thread/thread/ae7e455f75d9ccc5/c6e5905ac0ffdb4f?lnk=st&q=zero+page+group%3Acomp.arch&rnum=5&hl=en#c6e5905ac0ffdb4f
