You should also be able to lock pages for the guest, as in:

Lock MVSGuest 0 0 map

This will lock page 0 in memory.
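And to release them later, UNLOCK should undo it (assuming here that
UNLOCK mirrors the LOCK operands; check the CP Commands and Utilities
Reference for your z/VM level):

Unlock MVSGuest 0 0 map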

From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
Behalf Of George Henke/NYLIC
Sent: Friday, September 24, 2010 8:04 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Central vs. expanded storage

Anne & Lynn Wheeler wrote:

<There is a
<pathological scenario if the virtual operation doesn't have all its
<own dedicated storage (like in LPARs); VM will be managing virtual
<pages using an LRU methodology (least-recently-used pages are the ones
<selected for replacement) ... at the same time the virtual guest/DBMS
<is also managing (what it thinks is real storage) with an LRU
<methodology. If both are operating simultaneously ... it is possible
<for VM to "replace" what it thinks is the least-recently-used page
<(the virtual page least likely to be used) ... at the same time the
<virtual guest/DBMS has decided that same page is exactly the next page
<it wants to use.>


Wasn't this the problem in the early days of MVS under VM?  There was
also a stopgap workaround performance option (a bandaid) called PMA,
Preferred Machine Assist, to circumvent this, so that MVS actually
controlled page 0 and VM effectively became the guest.

VSE did not have this problem in those days, because the VM/VSE
Feature provided a paging handshake between a VSE guest and VM.

But MVS had no such feature, so the entire MVS virtual machine got
swapped out any time an address space in MVS took a page fault.

But these issues have all been addressed, and MVS now performs
efficiently under VM.  Also, paging now seems to be all but
non-existent in the MVS world; expanded storage has even been
eliminated there.

So the question arises: why is this problem rearing its head again?

Is "history repeating itself".  Is there no handshaking between LINUX
guests and VM just as there was not with MVS years ago?

Why is Expanded Storage now needed in the VM world to accommodate
LINUX guests, when Expanded Storage in the MVS world is a thing of the
past?


Anne & Lynn Wheeler <l...@garlic.com>
Sent by: The IBM z/VM Operating System <IBMVM@LISTSERV.UARK.EDU>
09/24/2010 08:02 AM
Please respond to: The IBM z/VM Operating System <IBMVM@LISTSERV.UARK.EDU>
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Central vs. expanded storage

On Thu, Sep 23, 2010 at 2:14 AM, O'Brien, Dennis L
<dennis.l.o'br...@bankofamerica.com> wrote:
>I heard from a couple of performance people at SHARE that we should
>have 20% to 25% of the total storage in an LPAR configured as expanded
>storage.  Naturally, that's a guideline and the proper amount varies by
>workload.  What should I look at to determine if we have enough
>expanded storage?  We use Velocity's ESALPS suite.  The systems that
>I'm most concerned about have a Linux guest workload.  One of them is
>all WAS, and the other is a mix of WAS, Oracle, and some other things.
>
>I've heard that WAS isn't the best choice for System z, but that's not
>the focus of my concern.  We have the workload that we have, and I just
>want to make it run as well as it can.

expanded store was originally done for 3090 because of physical
packaging problems ... it was not possible to locate all the memory
they needed for configuration within the latency of the standard
memory bus ... so they created the expanded store bus that was wider &
longer ... and used software control to move 4k pages back&forth
between regular storage and expanded store. a synchronous instruction
was provided for moving the data back&forth.

the expanded store bus was also used to attach HIPPI (100mbyte/sec)
channel/devices ... since the standard 3090 i/o interface couldn't
handle the data-rate. However, since the bus didn't support channel
programs ... there was a peculiar (PC-like) peek/poke convention used
(i.e. i/o control was done by moving 4k blocks to/from special
reserved addresses on the bus).

moving forward (after physical packaging was no longer an issue)
... the expanded store paradigm has been preserved because of software
storage management &/or storage addressing deficiencies.

effectively, the expanded store paradigm is used to partition real
storage into different classes ....  however, going back at least
40yrs ... there is a large body of data showing that a single large
store is more efficient than partitioning the same amount of storage
(assuming that there aren't other storage management
issues/shortcomings).

the simple scenario is 10000 regular storage pages and 10000 expanded
storage pages ... all occupied; when there is a requirement for a page
that is in expanded storage, it is swapped with a page in regular
storage (incurring some software overhead). The alternative is one
large block of 20000 pages ... all directly executable ... which
doesn't require swapping any pages between expanded store and regular
store.
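
To make the arithmetic concrete, here is a toy model of that scenario
in Python (my own sketch, not measured VM behavior; the sizes and the
uniform reference stream are assumptions). It only counts the
synchronous 4k swaps that the split configuration forces; the single
flat 20000-page store would incur none:

    import random
    from collections import OrderedDict

    REGULAR, TOTAL = 10_000, 20_000

    # regular storage kept in LRU order (oldest first); the rest of the
    # resident pages live in expanded store
    regular = OrderedDict((p, True) for p in range(REGULAR))
    expanded = set(range(REGULAR, TOTAL))
    swaps = 0

    random.seed(42)
    for _ in range(100_000):
        page = random.randrange(TOTAL)     # uniform references over all pages
        if page in regular:
            regular.move_to_end(page)      # directly executable, no overhead
        else:
            swaps += 1                     # synchronous software-managed swap
            victim, _ = regular.popitem(last=False)  # demote LRU regular page
            expanded.remove(page)
            expanded.add(victim)
            regular[page] = True

    print(swaps)   # ~50000: roughly half of all references pay the swap cost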

One of the efficiencies is dealing with application and/or operating
systems that perform their own caching/paging algorithm using some
sort of LRU mechanism (i.e. replacing their own pages/records using
some approximation to least-recently-used). This is characteristic of
large DBMS infrastructures that manage records in their own cache as
well as operating systems that support virtual memory. There is a
pathological scenario if the virtual operation doesn't have all its
own dedicated storage (like in LPARs); VM will be managing virtual
pages using an LRU methodology (least-recently-used pages are the ones
selected for replacement) ... at the same time the virtual guest/DBMS
is also managing (what it thinks is real storage) with an LRU
methodology. If both are operating simultaneously ... it is possible
for VM to "replace" what it thinks is the least-recently-used page
(the virtual page least likely to be used) ... at the same time the
virtual guest/DBMS has decided that same page is exactly the next page
it wants to use.

Executing LRU replacement algorithms in a virtual guest/DBMS ... where
its storage is also being managed via an LRU replacement algorithm
... can invalidate the assumption underlying LRU replacement
algorithms ... that the least-recently-used page is the least likely
to be used (a virtual guest/DBMS doing its own LRU algorithm is likely
to select the least-recently-used page as the next page most likely to
be used).
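
A minimal sketch of that pathology (again my own toy model, not VM's
actual replacement code; the function name and sizes are invented for
illustration): a guest running its own LRU replacement touches its
pages in least-recently-used order, i.e. cyclically ... which is the
worst case for a host-level LRU, since the host evicts exactly the
page the guest will touch next:

    from collections import OrderedDict

    def faults_under_lru(trace, frames):
        """Count page faults with `frames` real frames managed strictly LRU."""
        resident = OrderedDict()            # keys in LRU order, oldest first
        faults = 0
        for page in trace:
            if page in resident:
                resident.move_to_end(page)  # touched: now most recently used
            else:
                faults += 1
                if len(resident) >= frames:
                    resident.popitem(last=False)  # evict least recently used
                resident[page] = True
        return faults

    # guest owns 5 virtual pages but the host backs it with only 4 frames;
    # the guest's own LRU pass produces a cyclic reference pattern, so the
    # page the host just evicted is always the very next one asked for
    trace = [p for _ in range(100) for p in range(5)]
    print(faults_under_lru(trace, frames=4))   # 500: every reference faults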

misc. past posts mentioning expanded store
http://www.garlic.com/~lynn/2000c.html#61 TF-1
http://www.garlic.com/~lynn/2001k.html#73 Expanded Storage?
http://www.garlic.com/~lynn/2001k.html#74 Expanded Storage?
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2004e.html#2 Expanded Storage
http://www.garlic.com/~lynn/2004e.html#3 Expanded Storage
http://www.garlic.com/~lynn/2004e.html#4 Expanded Storage
http://www.garlic.com/~lynn/2006.html#13 VM maclib reference
http://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
http://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
http://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006b.html#16 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006b.html#17 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006b.html#18 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006b.html#34 Multiple address spaces
http://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
http://www.garlic.com/~lynn/2006k.html#57 virtual memory
http://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#42 REAL memory column in SDSF
http://www.garlic.com/~lynn/2007o.html#26 Tom's Hdw review of SSDs
http://www.garlic.com/~lynn/2007o.html#48 Virtual Storage implementation
http://www.garlic.com/~lynn/2007p.html#11 what does xp do when system is copying
http://www.garlic.com/~lynn/2008.html#49 IBM LCS
http://www.garlic.com/~lynn/2008b.html#15 Flash memory arrays
http://www.garlic.com/~lynn/2009d.html#29 Thanks for the SEL32 Reminder, Al!
http://www.garlic.com/~lynn/2009e.html#54 Mainframe Hall of Fame: 17 New Members Added
http://www.garlic.com/~lynn/2010i.html#18 How to analyze a volume's access by dataset

