Re: Paging subsystems in the era of bigass memory

2017-04-18 Thread Jim Mulder
z/OS MVS Initialization and Tuning Guide:

Because SCM does not support persistence of data across IPLs, VIO data
can only be paged out to DASD. Therefore, even when SCM is installed you
must still maintain a minimum amount of storage that supports paging for 
all of your VIO data, and a minimum amount of local paging data sets.
 All other data types can be paged out to SCM. 

Jim Mulder z/OS Diagnosis, Design, Development, Test  IBM Corp. 
Poughkeepsie NY

> Since we upgraded to the z13 we increased real drastically on all 
> lpars. Recently,  we also started using SCM (flash express). The 
> only thing we were baffled by was the fact that we still needed 
> *one* local page data set to satisfy the system. Other than that, 
> we're very happy with SCM (where we already have it active). 



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-18 Thread Anne & Lynn Wheeler
d10j...@us.ibm.com (Jim Mulder) writes:
> it might be faster to read it from the DB2 data set, because DB2
> (via Media Manager) uses zHPF, but z/OS has not been enhanced to 
> use zHPF for page data sets. 

in 1980, I got con'ed into doing channel-extender for STL, which was
moving 300 people from the IMS group to an offsite bldg with service back
to the STL datacenter. There was a channel emulator box at the offsite
bldg with local channel-attached 3270 controllers. A full-duplex
streaming protocol with downloaded channel programs significantly
mitigated the latency of the heavy-duty half-duplex channel protocol
chatter.
http://www.garlic.com/~lynn/submisc.html#channel.extendeer

The vendor tried to get IBM to release my support ... but there was a
group in POK playing with some serial stuff that blocked it because they
were afraid it would make it more difficult to get their own stuff
released.

In 1988, I'm asked to help LLNL get some stuff they are playing with
standardized ... which quickly becomes the fibre channel standard ... it
includes full-duplex streaming (initially 1gbit/sec in both directions)
and downloading of I/O programs (a countermeasure to protocol chatter
latency).
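The latency arithmetic behind that countermeasure is easy to sketch. The figures below (distance, handshake count, transfer time) are illustrative assumptions, not measurements of either protocol:

```python
# Illustrative model: time for one I/O over a distance-extended channel.
# A chatty half-duplex protocol pays a round trip per handshake; a
# downloaded channel program pays roughly one round trip total.

def io_time_ms(distance_km, round_trips, transfer_ms=1.0):
    """One-way light-in-fiber propagation is roughly 5 microseconds/km."""
    one_way_ms = distance_km * 0.005
    return 2 * one_way_ms * round_trips + transfer_ms

distance = 20  # km between datacenter and offsite bldg (assumed)

chatty = io_time_ms(distance, round_trips=8)    # assumed 8 handshakes/I/O
streamed = io_time_ms(distance, round_trips=1)  # downloaded channel program

print(f"chatty:   {chatty:.2f} ms per I/O")
print(f"streamed: {streamed:.2f} ms per I/O")
```

The per-I/O gap grows linearly with distance, which is why chatter that is invisible on a local channel dominates once the controller moves offsite.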

In 1990, the POK serial stuff is finally released as ESCON, when it is
already obsolete.

Later, some POK people become involved in the fibre channel standard and
define a heavyweight, high-latency protocol with significant chatter that
cuts native throughput; it is eventually released as FICON
http://www.garlic.com/~lynn/submisc.html#ficon

The most recent published "peak I/O" benchmark I've seen is for a z196
that got 2M IOPS with 104 FICON channels (running over 104 standard fibre
channel links). About the same time, a fibre channel adapter was announced
for an e5-2600 blade claiming over a million IOPS (for a single FCS; two
would have higher native throughput than 104 FICON).

zHPF w/TCW describes something a little like what I originally did back
in 1980 ... but it is said to provide only a 30% improvement over
standard FICON (possibly only 70 FICON to get 2M IOPS, compared to only
two fibre channel links getting over 2M IOPS native).

A recent post mentions a 1987 (IBM) disk "wide-head" proposal capable of
something like 48-67mbytes/sec transfer ... by 1990, FCS would have over
100mbytes/sec (1gbit/sec) full-duplex. At the time, standard IBM channels
were 3mbytes/sec moving to 4.5mbytes/sec ... and ESCON in 1990 would only
be 17mbytes/sec ... so as an IBM product, "wide-head" wasn't feasible.
http://www.garlic.com/~lynn/2017d.html#54 GREAT presentation on the history of the mainframe
http://www.garlic.com/~lynn/2017d.html#60 Optimizing the Hard Disk Directly

recent paging subsystem posts here
http://www.garlic.com/~lynn/2017d.html#58 Paging subsystems in the era of bigass memory
http://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
http://www.garlic.com/~lynn/2017d.html#63 Paging subsystems in the era of bigass memory
http://www.garlic.com/~lynn/2017d.html#65 Paging subsystems in the era of bigass memory
http://www.garlic.com/~lynn/2017d.html#66 Paging subsystems in the era of bigass memory

-- 
virtualization experience starting Jan1968, online at home since Mar1970



Re: Paging subsystems in the era of bigass memory

2017-04-18 Thread Jim Mulder
  And if the buffer page is paged out to SCM, it should be faster to read
it from SCM than from the DB2 data set.  But if it is paged out to a page
data set, it might be faster to read it from the DB2 data set, because DB2
(via Media Manager) uses zHPF, but z/OS has not been enhanced to use zHPF
for page data sets.

Jim Mulder z/OS Diagnosis, Design, Development, Test  IBM Corp. 
Poughkeepsie NY

> 30 years ago this type of logic, to reread from DASD rather than demand
> page-in a buffer page, *might* have made sense, when the I/O bandwidth
> to the paging subsystem was limited based on the number of page datasets
> and a demand page rate of 30-100 pages a second was common to see.  But
> the world has changed and it would not make sense to do it today.





Re: Paging subsystems in the era of bigass memory

2017-04-18 Thread Mark Zelden
On Tue, 18 Apr 2017 00:47:55 -0500, Barbara Nitz  wrote:

>>I don't want to turn this into a debugging session, but this is what I see in 
>>RMF3 on the non-DB2 utility system. 
>>All of the TPX 'regions' are below this list sorted by Aux Slots. No task 
>>shows any page-ins.
>
>Not sure that you haven't already done this: Use IPCS active on that system, 
>and use either ip ilrslotc or panel 2.6i. It'll show you exactly which address 
>space paged out anything to aux (to whatever medium).
>I recall that in a prior life I could see exactly which asid had a lot of 
>frames on aux (MQ, I think).
>

I've been using this REXX for years instead of getting into a monitor or
IPCS.  Very quick to execute and see the results.  It was posted to the
IBM-MAIN newsgroup only, so it's not in the archives.  To browse the
results, I execute it using one of my execs to capture TSO output instead
of "TSO %execname".

https://groups.google.com/d/msg/bit.listserv.ibm-main/bVaW_PGzbaI/Wut56jByETMJ

Best regards / Mit freundlichen Grüßen,

Mark
--
Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS
ITIL v3 Foundation Certified
mailto:m...@mzelden.com
Mark's MVS Utilities: http://www.mzelden.com/mvsutil.html
Systems Programming expert at http://search390.techtarget.com/ateExperts/


Re: Paging subsystems in the era of bigass memory

2017-04-18 Thread Vernooij, Kees (ITOPT1) - KLM


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Greg Dyck
> Sent: 18 April, 2017 15:58
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Paging subsystems in the era of bigass memory
> 
> On 4/18/2017 2:25 AM, Vernooij, Kees - KLM , ITOPT1 wrote:
> > As I said, I remember reading this a long time ago, I don't know the
> details anymore, whether the source was reliable and whether it is still
> working this way. Only a DB2 internal expert should be able to tell.
> ...
> >> On 12 April 2017 at 10:16, Tom Marchant <
> >> 000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:
> >> [DB2]
> >>
> >>> So, if it thinks it would be faster to read the record from DASD
> >>> than for MVS to page in the buffer page(s) containing the record,
> >>> it will read the record into different pages that have not been
> >>> paged out?
> 
> While the DB2 buffer manager code does collect statistics on the
> frequency of buffer pages being latched that are currently paged out
> (using TPROT), I don't remember seeing any logic in the code to reread
> a buffer page vs. waiting for a demand page-in to occur.  And given
> that the current copy of a page may be the one in the buffer pool
> rather than the copy on DASD, and that multiple threads can have shared
> ownership of a buffer pool page, I do not believe this processing
> exists (at least for the last 15 years) in DB2.
> 
> 30 years ago this type of logic, to reread from DASD rather than demand
> page-in a buffer page, *might* have made sense, when the I/O bandwidth
> to the paging subsystem was limited based on the number of page datasets
> and a demand page rate of 30-100 pages a second was common to see.  But
> the world has changed and it would not make sense to do it today.
> 
> Regards,
> Greg
> 

Thanks Greg.

For information, services and offers, please visit our web site: 
http://www.klm.com. This e-mail and any attachment may contain confidential and 
privileged material intended for the addressee only. If you are not the 
addressee, you are notified that no part of the e-mail or any attachment may be 
disclosed, copied or distributed, and that any other action related to this 
e-mail or attachment is strictly prohibited, and may be unlawful. If you have 
received this e-mail by error, please notify the sender immediately by return 
e-mail, and delete this message. 

Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its 
employees shall not be liable for the incorrect or incomplete transmission of 
this e-mail or any attachments, nor responsible for any delay in receipt. 
Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch 
Airlines) is registered in Amstelveen, The Netherlands, with registered number 
33014286





Re: Paging subsystems in the era of bigass memory

2017-04-18 Thread Greg Dyck

On 4/18/2017 2:25 AM, Vernooij, Kees - KLM , ITOPT1 wrote:

> As I said, I remember reading this a long time ago, I don't know the
> details anymore, whether the source was reliable and whether it is still
> working this way. Only a DB2 internal expert should be able to tell.
>
> ...
>
>> On 12 April 2017 at 10:16, Tom Marchant <
>> 000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:
>> [DB2]
>>
>>> So, if it thinks it would be faster to read the record from DASD than
>>> for MVS to page in the buffer page(s) containing the record, it will
>>> read the record into different pages that have not been paged out?


While the DB2 buffer manager code does collect statistics on the
frequency of buffer pages being latched that are currently paged out
(using TPROT), I don't remember seeing any logic in the code to reread a
buffer page vs. waiting for a demand page-in to occur.  And given that
the current copy of a page may be the one in the buffer pool rather than
the copy on DASD, and that multiple threads can have shared ownership of
a buffer pool page, I do not believe this processing exists (at least
for the last 15 years) in DB2.


30 years ago this type of logic, to reread from DASD rather than demand 
page-in a buffer page, *might* have made sense, when the I/O bandwidth 
to the paging subsystem was limited based on the number of page datasets 
and a demand page rate of 30-100 pages a second was common to see.  But 
the world has changed and it would not make sense to do it today.
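The bandwidth argument in that last paragraph can be put into rough numbers. This is a toy latency model with assumed service times, not actual DB2 or RSM logic:

```python
# Toy comparison of two ways to recover a paged-out buffer page, using
# assumed service times. A saturated circa-1987 paging subsystem doing
# 30-100 page-ins/sec could queue a request for ~100 ms, so a ~25 ms
# DASD reread might win; a modern SCM/flash page-in wins easily.

def better_source(page_in_ms, dasd_read_ms):
    """Pick the cheaper way to get the page back (latency only)."""
    return "page-in" if page_in_ms <= dasd_read_ms else "reread from DASD"

# Assumed: queued 1980s demand page-in vs. a DASD reread.
print(better_source(page_in_ms=100.0, dasd_read_ms=25.0))  # reread from DASD
# Assumed: SCM/flash page-in vs. the same DASD reread.
print(better_source(page_in_ms=0.1, dasd_read_ms=25.0))    # page-in
```

The crossover depends entirely on the assumed figures, which is exactly why a heuristic tuned for 1980s paging subsystems would make no sense today.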


Regards,
Greg



Re: Paging subsystems in the era of bigass memory

2017-04-18 Thread Vernooij, Kees (ITOPT1) - KLM
As I said, I remember reading this a long time ago, I don't know the details 
anymore, whether the source was reliable and whether it is still working this 
way. Only a DB2 internal expert should be able to tell. 

Greg, could you shed some light on this?

Kees.

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Tony Harminc
> Sent: 12 April, 2017 18:45
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Paging subsystems in the era of bigass memory
> 
> On 12 April 2017 at 10:16, Tom Marchant <
> 000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:
> [DB2]
> 
> > So, if it thinks it would be faster to read the record from DASD than
> > for MVS to page in the buffer page(s) containing the record, it will
> > read the record into different pages that have not been paged out?
> >
> > It still makes no sense to me. It certainly can't read the record into
> > the same page, because that would require that the page be paged in
> > first.
> >
> 
> I've no idea what it *does* do, but it *could* Page Release the old page
> before reading from DASD into it.
> 
> 
> > And what does it do with the old buffer page? Stop using it? Freemain
> > and getmain again so that the page slot becomes superfluous?
> >
> 
> That would be taken care of by the Page Release.
> 
> Tony H.
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN






Re: Paging subsystems in the era of bigass memory

2017-04-17 Thread Barbara Nitz
>I don't want to turn this into a debugging session, but this is what I see in 
>RMF3 on the non-DB2 utility system. 
>All of the TPX 'regions' are below this list sorted by Aux Slots. No task 
>shows any page-ins.

Not sure that you haven't already done this: Use IPCS active on that system, 
and use either ip ilrslotc or panel 2.6i. It'll show you exactly which address 
space paged out anything to aux (to whatever medium).
I recall that in a prior life I could see exactly which asid had a lot of 
frames on aux (MQ, I think).

Since we upgraded to the z13 we increased real drastically on all lpars. 
Recently,  we also started using SCM (flash express). The only thing we were 
baffled by was the fact that we still needed *one* local page data set to 
satisfy the system. Other than that, we're very happy with SCM (where we 
already have it active). 

Now, to tune sort so that it doesn't need 20 Mod54s as sortwrk anymore when SAS 
wants to sort 1.600.000.000 records.

Barbara



Re: Paging subsystems in the era of bigass memory

2017-04-13 Thread Jesse 1 Robinson
I don't want to turn this into a debugging session, but this is what I see in 
RMF3 on the non-DB2 utility system. All of the TPX 'regions' are below this 
list sorted by Aux Slots. No task shows any page-ins.



-Service-- Frame Occup.-- - Active Frames - AUX   PGIN
- Jobname  C Class   Cr TOTAL  ACTV  IDLE  WSET FIXED   DIV SLOTS RATE
-
- *MASTER* S SYSTEM  271K  271K 0  271K  8513 0  526K0
- IOSASS SYSTEM 11702 11702 0 11702   313 0 584360
- ZFS  S SYSSTC 10217 10217 0 10217   575 0 351280
- OMCMSS SYSSTC 17310 17310 0 17310  1615 0 314990
- AUTONETV S STCLO  10093 10093 0 10093   452 0 154130
- AUTOAMGR S SYSSTC  2373  2373 0  2373   144 0 125400
- OMMVSS SYSSTC  3968  3968 0  3968   168 0 119290
- TN3270Z  S SYSSTC  9993  9993 0  9993   155 0 108010
- RASP S SYSTEM   281   281 0   281   276 0 100680
- OMVS S SYSSTC  4870  4870 0  4870   306 0  89940
- JES2 S SYSSTC  3552  3552 0  3552  3079 0  88810
- RMF  S SYSSTC  2884  2884 0  2884   113 0  68270
- NETVIEW  S SYSSTC  7620  7620 0  7620   276 0  68150
- TCPIPS SYSSTC  1641  1641 0  1641   141 0  67780
- VLF  S SYSSTC  1919  1919 0  1919   104 0  56310
- SMSPDSE  S SYSTEM  2852  2852 0  2852   181 0  51800
- RESOLVER S SYSSTC  3209  3209 0  3209   109 0  51360





.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com





-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf
Of Graham Harris
Sent: Thursday, April 13, 2017 1:41 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: Paging subsystems in the era of bigass memory

Have you got RMF3?
Tried looking at STORF to see what address space(s) are consuming all your AUX
slots?

Are you running Netview?  Got a CANZLOG dataspace?  That can chew through a lot
of memory if not configured quite right in recent z/OS releases.

On 13 April 2017 at 20:26, Jesse 1 Robinson <jesse1.robin...@sce.com> wrote:



> This thread did seem to morph into a focus on DB2, but the paging
> problem for us is not confined to DB2. We have one utility system that
> was set up years ago to be a 'CMC'. It's still dedicated to 'network
> stuff', which for some time has narrowed down to CA-TPX, the SNA
> session manager. Very little else runs there. Certainly no DB2 or CICS.
> Absolutely no end-user apps.
> We've sort of ignored this system recently as we turned attention
> elsewhere. It was last IPLed in January 2016, well over a year ago! It
> runs great except for this burr under the saddle. The local volumes
> are all Mod-3. Whatever we decide to do about DB2 will not help here.
>
> -  IEE200I 11.29.28 DISPLAY ASM
> -  TYPE FULL STAT
> -  PLPA 100%FULL
> -  COMMON 36% OK
> -  LOCAL53% OK
> -  LOCAL49% OK
> -  LOCAL43% OK
>
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 323-715-0595 Mobile
> 626-543-6132 Office ⇐=== NEW
> robin...@sce.com
>
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Mike Schwab
> Sent: Wednesday, April 12, 2017 8:18 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: (External):Re: Paging subsystems in the era of bigass memory
>
> Here is an IBM presentation on how to tune z/OS and DB2 memory,
> including some parameters to set.
> http://www.mdug.org/Presentations/Large%20Memory%20DB2%20Perf%20MDUG.pdf
>
> On Wed, Apr 12, 2017 at 2:55 PM, Art Gutowski <arthur.gutow...@gm.com>
> wrote:
> > Did someone on this thread say DB2??
> >
> > We have been experiencing similar AUX storage creep on our DB2
> > systems, particularly during LARGE reorgs (more of a gallop than a
> > creep).  Our DB2 guys did some research, opened an ETR with IBM, and
> > found this relic:
> >
> > Q:
> > "[Why was] set realstorage_management to OFF when that zPARM was
> > introduced in DB2 version 10?
> >
> > "Details
> > "IBM z/OS implemented a Storage Management Design change after DB2
> > v10 was released.
> > "•  Before the d

Re: Paging subsystems in the era of bigass memory

2017-04-13 Thread Art Gutowski
On Wed, 12 Apr 2017 22:18:22 -0500, Mike Schwab  wrote:

>Here is an IBM presentation on how to tune z/OS and DB2 memory,
>including some parameters to set.
>http://www.mdug.org/Presentations/Large%20Memory%20DB2%20Perf%20MDUG.pdf

Thanks for sharing! 

Regards,
Art Gutowski
General Motors, LLC



Re: Paging subsystems in the era of bigass memory

2017-04-13 Thread Graham Harris
Have you got RMF3?
Tried looking at STORF to see what address space(s) are consuming all your
AUX slots?

Are you running Netview?  Got a CANZLOG dataspace?  That can chew through a
lot of memory if not configured quite right in recent z/OS releases.


On 13 April 2017 at 20:26, Jesse 1 Robinson  wrote:

> This thread did seem to morph into a focus on DB2, but the paging problem
> for us is not confined to DB2. We have one utility system that was set up
> years ago to be a 'CMC'. It's still dedicated to 'network stuff', which for
> some time has narrowed down to CA-TPX, the SNA session manager. Very little
> else runs there. Certainly no DB2 or CICS. Absolutely no end-user apps.
> We've sort of ignored this system recently as we turned attention
> elsewhere. It was last IPLed in January 2016, well over a year ago! It runs
> great except for this burr under the saddle. The local volumes are all
> Mod-3. Whatever we decide to do about DB2 will not help here.
>
> -  IEE200I 11.29.28 DISPLAY ASM
> -  TYPE FULL STAT
> -  PLPA 100%FULL
> -  COMMON 36% OK
> -  LOCAL53% OK
> -  LOCAL49% OK
> -  LOCAL43% OK
>
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 323-715-0595 Mobile
> 626-543-6132 Office ⇐=== NEW
> robin...@sce.com
>
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Mike Schwab
> Sent: Wednesday, April 12, 2017 8:18 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: (External):Re: Paging subsystems in the era of bigass memory
>
> Here is an IBM presentation on how to tune z/OS and DB2 memory, including
> some parameters to set.
> http://www.mdug.org/Presentations/Large%20Memory%20DB2%20Perf%20MDUG.pdf
>
> On Wed, Apr 12, 2017 at 2:55 PM, Art Gutowski 
> wrote:
> > Did someone on this thread say DB2??
> >
> > We have been experiencing similar AUX storage creep on our DB2 systems,
> particularly during LARGE reorgs (more of a gallop than a creep).  Our DB2
> guys did some research, opened an ETR with IBM, and found this relic:
> >
> > Q:
> > "[Why was] set realstorage_management to OFF when that zPARM was
> introduced in DB2 version 10?
> >
> > "Details
> > "IBM z/OS implemented a Storage Management Design change after DB2 v10
> was released.
> > "•  Before the design change, DB2 used KEEPREAL(NO), virtual storage
> pages were really (physically) freed, high CPU cost if YES
> > DISCARDDATA KEEPREAL(NO), RSM has to get LPAR level serialization to
> > manage those pages that are being freed immediately. That added to CPU
> > usage and also caused some CPU spin at the LPAR level to get that
> > serialization  -- excerpt from PTF
> >
> > "To get around/minimize the impact of the original design shortcomings
> that was introduced by IBM RSM z/OS,  setting zPARM realstorage_management
> to OFF, would probably have been prudent on most LPARs.  HP/EDS tried to
> address this new issue IBM created.
> >
> > "IBM create two PTFs and changed the way DB2 and RSM manages the page
> frames.
> >
> > "•  After a design change (now) DB2 uses KEEPREAL(YES), storage is
> only virtually freed
> > "If DB2 doesn't tell RSM that it doesn't need a frame, then the frame
> > will remain backed in real storage in some form. That causes the
> > growth of real storage and paging and everything that goes with using
> > up REAL storage. KEEPREAL(YES) allows DB2 to tell RSM that z/OS can
> > steal the page if it needs it, but DB2 retains ownership of the page,
> > and it remains backed with real storage. If z/OS needs the page, it
> > can steal it -- excerpt from PTF
> >
> > "V10 APAR PM88804 APAR PM86862 and PM99575"
> >
> > So...perhaps check your DSNZPARM and make sure it's coded appropriately
> for more modern times.  FYI, we are z/OS 2.2 and DB2 11.1, NFM.  We are in
> the process of rolling out REALSTORAGE_MANAGEMENT=AUTO (the current IBM
> recommended setting) across our enterprise.
> >
> > HTH,
> > Art Gutowski (with assist from Doug Drain, Steve Laufer and IBM)
> > General Motors, LLC
> >
> > --
> > For IBM-MAIN subscribe / signoff / archive access instructions, send
> > email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
>
>
> --
> Mike A Schwab, Springfield IL USA
> Where do Forest Rangers go to get away from it all?
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>



Re: Paging subsystems in the era of bigass memory

2017-04-13 Thread Jesse 1 Robinson
This thread did seem to morph into a focus on DB2, but the paging problem for 
us is not confined to DB2. We have one utility system that was set up years ago 
to be a 'CMC'. It's still dedicated to 'network stuff', which for some time has 
narrowed down to CA-TPX, the SNA session manager. Very little else runs there. 
Certainly no DB2 or CICS. Absolutely no end-user apps. We've sort of ignored 
this system recently as we turned attention elsewhere. It was last IPLed in 
January 2016, well over a year ago! It runs great except for this burr under 
the saddle. The local volumes are all Mod-3. Whatever we decide to do about DB2 
will not help here. 

-  IEE200I 11.29.28 DISPLAY ASM   
-  TYPE FULL STAT
-  PLPA 100%FULL  
-  COMMON 36% OK   
-  LOCAL53% OK 
-  LOCAL49% OK
-  LOCAL43% OK 

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Mike Schwab
Sent: Wednesday, April 12, 2017 8:18 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: Paging subsystems in the era of bigass memory

Here is an IBM presentation on how to tune z/OS and DB2 memory, including some 
parameters to set.
http://www.mdug.org/Presentations/Large%20Memory%20DB2%20Perf%20MDUG.pdf

On Wed, Apr 12, 2017 at 2:55 PM, Art Gutowski  wrote:
> Did someone on this thread say DB2??
>
> We have been experiencing similar AUX storage creep on our DB2 systems, 
> particularly during LARGE reorgs (more of a gallop than a creep).  Our DB2 
> guys did some research, opened an ETR with IBM, and found this relic:
>
> Q:
> "[Why was] set realstorage_management to OFF when that zPARM was introduced 
> in DB2 version 10?
>
> "Details
> "IBM z/OS implemented a Storage Management Design change after DB2 v10 was 
> released.
> "•  Before the design change, DB2 used KEEPREAL(NO), virtual storage 
> pages were really (physically) freed, high CPU cost if YES
> DISCARDDATA KEEPREAL(NO), RSM has to get LPAR level serialization to 
> manage those pages that are being freed immediately. That added to CPU 
> usage and also caused some CPU spin at the LPAR level to get that 
> serialization  -- excerpt from PTF
>
> "To get around/minimize the impact of the original design shortcomings that 
> was introduced by IBM RSM z/OS,  setting zPARM realstorage_management to OFF, 
> would probably have been prudent on most LPARs.  HP/EDS tried to address this 
> new issue IBM created.
>
> "IBM create two PTFs and changed the way DB2 and RSM manages the page frames.
>
> "•  After a design change (now) DB2 uses KEEPREAL(YES), storage is only 
> virtually freed
> "If DB2 doesn't tell RSM that it doesn't need a frame, then the frame 
> will remain backed in real storage in some form. That causes the 
> growth of real storage and paging and everything that goes with using 
> up REAL storage. KEEPREAL(YES) allows DB2 to tell RSM that z/OS can 
> steal the page if it needs it, but DB2 retains ownership of the page, 
> and it remains backed with real storage. If z/OS needs the page, it 
> can steal it -- excerpt from PTF
>
> "V10 APAR PM88804 APAR PM86862 and PM99575"
>
> So...perhaps check your DSNZPARM and make sure it's coded appropriately for 
> more modern times.  FYI, we are z/OS 2.2 and DB2 11.1, NFM.  We are in the 
> process of rolling out REALSTORAGE_MANAGEMENT=AUTO (the current IBM 
> recommended setting) across our enterprise.
>
> HTH,
> Art Gutowski (with assist from Doug Drain, Steve Laufer and IBM) 
> General Motors, LLC
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?




Re: Paging subsystems in the era of bigass memory

2017-04-13 Thread Carmen Vitullo
Just checked with our DB2 SYSPROG; same here.


Carmen 


- Original Message -

From: "Mike Schwab"  
To: IBM-MAIN@LISTSERV.UA.EDU 
Sent: Wednesday, April 12, 2017 10:18:22 PM 
Subject: Re: Paging subsystems in the era of bigass memory 

Here is an IBM presentation on how to tune z/OS and DB2 memory, 
including some parameters to set. 
http://www.mdug.org/Presentations/Large%20Memory%20DB2%20Perf%20MDUG.pdf 

On Wed, Apr 12, 2017 at 2:55 PM, Art Gutowski  wrote: 
> Did someone on this thread say DB2?? 
> 


Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Mike Schwab
Here is an IBM presentation on how to tune z/OS and DB2 memory,
including some parameters to set.
http://www.mdug.org/Presentations/Large%20Memory%20DB2%20Perf%20MDUG.pdf

On Wed, Apr 12, 2017 at 2:55 PM, Art Gutowski  wrote:
> Did someone on this thread say DB2??
>
> We have been experiencing similar AUX storage creep on our DB2 systems, 
> particularly during LARGE reorgs (more of a gallop than a creep).  Our DB2 
> guys did some research, opened an ETR with IBM, and found this relic:
>
> Q:
> "[Why was] set realstorage_management to OFF when that zPARM was introduced 
> in DB2 version 10?
>
> "Details
> "IBM z/OS implemented a Storage Management Design change after DB2 v10 was 
> released.
> "•  Before the design change, DB2 used KEEPREAL(NO), virtual storage 
> pages were really (physically) freed, high CPU cost if YES
> DISCARDDATA KEEPREAL(NO), RSM has to get LPAR level serialization to manage 
> those pages that are being freed immediately. That added to CPU usage and 
> also caused some CPU spin at the LPAR level to get that serialization  -- 
> excerpt from PTF
>
> "To get around/minimize the impact of the original design shortcomings that 
> was introduced by IBM RSM z/OS,  setting zPARM realstorage_management to OFF, 
> would probably have been prudent on most LPARs.  HP/EDS tried to address this 
> new issue IBM created.
>
> "IBM create two PTFs and changed the way DB2 and RSM manages the page frames.
>
> "•  After a design change (now) DB2 uses KEEPREAL(YES), storage is only 
> virtually freed
> "If DB2 doesn't tell RSM that it doesn't need a frame, then the frame will 
> remain backed in real storage in some form. That causes the growth of real 
> storage and paging and everything that goes with using up REAL storage. 
> KEEPREAL(YES) allows DB2 to tell RSM that z/OS can steal the page if it needs 
> it, but DB2 retains ownership of the page, and it remains backed with real 
> storage. If z/OS needs the page, it can steal it -- excerpt from PTF
>
> "V10 APAR PM88804 APAR PM86862 and PM99575"
>
> So...perhaps check your DSNZPARM and make sure it's coded appropriately for 
> more modern times.  FYI, we are z/OS 2.2 and DB2 11.1, NFM.  We are in the 
> process of rolling out REALSTORAGE_MANAGEMENT=AUTO (the current IBM 
> recommended setting) across our enterprise.
>
> HTH,
> Art Gutowski (with assist from Doug Drain, Steve Laufer and IBM)
> General Motors, LLC
>



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: EXTERNAL: Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Jerry Whitteridge
Ditto ! Thanks Art

Jerry Whitteridge
Manager Mainframe Systems & Storage
Albertsons - Safeway Inc.
623 869 5523
Corporate Tieline - 85523

If you feel in control
you just aren't going fast enough.





Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Carmen Vitullo
Yes, and Thank you Art for that - I've passed this info on to our DB2 SYSPROG 


Carmen 



Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Art Gutowski
Did someone on this thread say DB2??

We have been experiencing similar AUX storage creep on our DB2 systems, 
particularly during LARGE reorgs (more of a gallop than a creep).  Our DB2 guys 
did some research, opened an ETR with IBM, and found this relic:

Q:  
"[Why was] set realstorage_management to OFF when that zPARM was introduced in 
DB2 version 10?

"Details
"IBM z/OS implemented a Storage Management Design change after DB2 v10 was 
released.  
"•  Before the design change, DB2 used KEEPREAL(NO), virtual storage pages 
were really (physically) freed, high CPU cost if YES
DISCARDDATA KEEPREAL(NO), RSM has to get LPAR level serialization to manage 
those pages that are being freed immediately. That added to CPU usage and also 
caused some CPU spin at the LPAR level to get that serialization  -- excerpt 
from PTF

"To get around/minimize the impact of the original design shortcomings that were 
introduced by IBM RSM z/OS, setting zPARM realstorage_management to OFF, would 
probably have been prudent on most LPARs.  HP/EDS tried to address this new 
issue IBM created.

"IBM created two PTFs and changed the way DB2 and RSM manage the page frames.

"•  After a design change (now) DB2 uses KEEPREAL(YES), storage is only 
virtually freed
"If DB2 doesn't tell RSM that it doesn't need a frame, then the frame will 
remain backed in real storage in some form. That causes the growth of real 
storage and paging and everything that goes with using up REAL storage. 
KEEPREAL(YES) allows DB2 to tell RSM that z/OS can steal the page if it needs 
it, but DB2 retains ownership of the page, and it remains backed with real 
storage. If z/OS needs the page, it can steal it -- excerpt from PTF

"V10 APAR PM88804 APAR PM86862 and PM99575"

So...perhaps check your DSNZPARM and make sure it's coded appropriately for 
more modern times.  FYI, we are z/OS 2.2 and DB2 11.1, NFM.  We are in the 
process of rolling out REALSTORAGE_MANAGEMENT=AUTO (the current IBM recommended 
setting) across our enterprise.

HTH,
Art Gutowski (with assist from Doug Drain, Steve Laufer and IBM)
General Motors, LLC
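The KEEPREAL behavior described in the PTF excerpt can be modeled with a toy
simulation (illustrative only — the real interface is the z/OS IARV64
DISCARDDATA service; the classes, counters, and "costs" below are invented).
With KEEPREAL(NO), every discard frees the frame immediately under LPAR-level
serialization; with KEEPREAL(YES), the frame merely becomes stealable, so the
expensive reclaim path runs only when z/OS actually needs real storage:

```python
class Frame:
    def __init__(self):
        self.backed = True      # has real storage behind it
        self.stealable = False  # z/OS may reclaim it on demand

def discard(frame, keepreal: bool, stats):
    """Toy model of DISCARDDATA with KEEPREAL(YES/NO)."""
    if keepreal:
        frame.stealable = True          # cheap: just mark it; stays backed
    else:
        stats["serialized_frees"] += 1  # costly: LPAR-level serialization
        frame.backed = False

def steal_if_needed(frames, needed, stats):
    """z/OS reclaims stealable frames only under real-storage pressure."""
    for f in frames:
        if needed <= 0:
            break
        if f.backed and f.stealable:
            f.backed = False
            stats["steals"] += 1
            needed -= 1

stats = {"serialized_frees": 0, "steals": 0}
frames = [Frame() for _ in range(1000)]

for f in frames:                        # KEEPREAL(YES): no serialized frees
    discard(f, keepreal=True, stats=stats)
steal_if_needed(frames, needed=10, stats=stats)

print(stats)   # {'serialized_frees': 0, 'steals': 10}
```

Across 1,000 discards, the KEEPREAL(YES) path performs zero serialized frees
and only as many steals as real-storage pressure demands — which is the CPU
saving the APARs describe, at the cost of frames staying backed (the AUX
growth discussed in this thread).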

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Avram Friedman
The topic seems to have morphed
from
"Do I need to achieve 30% page data set utilization on a virtual-storage-stable 
LPAR with a single large DB2?"
to
"Running many DB2s in a single LPAR, I seem to have a memory leak that requires 
periodic IPLs to avoid an aux storage problem."

Other than some common words like LPAR and DB2, the issues are not the same.

Avram Friedman



Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Carmen Vitullo
For you, I hope that never happens. I have some AUX pages that never free, and 
they creep 1% or so every 2 days. I've been following the discussion and trying 
to understand why DB2 suddenly seems to be the culprit, when I've seen DB2 
workloads have no adverse effect on AUX storage. My last IPL of this system was 
Feb 26th, and we have a Large Frame area defined at 7740M. 

my locals are on Mod9's 



LOCAL 20% OK 4371 PAGE.SYSA.VSYSPG1.LOCAL 
LOCAL 20% OK 4471 PAGE.SYSA.VSYSPG2.LOCAL 
LOCAL 20% OK 4571 PAGE.SYSA.VSYSPG3.LOCAL 
LOCAL 19% OK 4671 PAGE.SYSA.VSYSPG4.LOCAL 
LOCAL 19% OK 4771 PAGE.SYSA.VSYSPG5.LOCAL 
LOCAL 19% OK 4072 PAGE.SYSA.VSYSPG6.LOCAL 
LOCAL 20% OK 4172 PAGE.SYSA.VSYSPG7.LOCAL 
LOCAL 20% OK 4272 PAGE.SYSA.VSYSPG8.LOCAL 
LOCAL 20% OK 4070 PAGE.SYSA.VSYSPGH.LOCAL 
LOCAL 20% OK 4170 PAGE.SYSA.VSYSPGI.LOCAL 
LOCAL 20% OK 4270 PAGE.SYSA.VSYSPGJ.LOCAL 
LOCAL 20% OK 4370 PAGE.SYSA.VSYSPGK.LOCAL 
LOCAL 20% OK 4470 PAGE.SYSA.VSYSPGL.LOCAL 
LOCAL 20% OK 4570 PAGE.SYSA.VSYSPGM.LOCAL 


we have 8 DB2MSTR regions, 43 misc regions total (DISTR, IMS, ...) 

If I decide not to IPL until June or July, I run the risk of AUX storage 
shortages. 


what to do? add some more locals? NO! 




Carmen 



Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Jesse 1 Robinson
I'd like to pull this discussion back up to the jet stream level. We face the 
prospect of having to IPL z/OS just to relieve an ASM shortage. Weenie-ware 
servers seem to run forever without this encumbrance. We put our highest-value 
mission-critical applications on a platform that cannot keep its own shorts 
tidy. Enough already. Even if IBM gave us memory for free, we would still have 
to endure interruptions to get it installed. And how much is sufficient? How many 
interruptions? Throwing more hardware at a software deficiency is not a 
solution. 

I'm afraid that some MBA PFK will propose running the whole shebang on *nix. 
Then where will we (all) end up?

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com




Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Anne & Lynn Wheeler
000a2a8c2020-dmarc-requ...@listserv.ua.edu (Tom Marchant) writes:
> Are you suggesting that before DB2 references a page containing a 
> buffer, it checks to see if it is paged out? And that if it is paged out, 
> it doesn't use the record in the buffer, but instead reads it into a 
> different page?  That makes no sense to me.

there is a separate problem that I've repeatedly pontificated about since
the 70s ... originally involving running MVS in a VM370 virtual
machine. If VM370 is running an "LRU" (least recently used) replacement
algorithm and MVS (in the virtual machine) is also running an "LRU"
replacement algorithm, then MVS will tend to select for replacement (and
immediate reuse) a virtual machine page that hasn't been used in a long
time ... which is also the page that VM370 will have chosen to replace
and page out.

An LRU algorithm assumes that a page that hasn't been used for a long
time is the least likely to be used in the near future. However, if an
LRU algorithm is running in a memory space that is itself managed by an
LRU algorithm, then the 2nd-level LRU is likely to choose as its next
page to use exactly the page that the lower-level LRU has removed from
storage ... a virtualized LRU algorithm starts to look like an MRU
algorithm, aka the least recently used page becomes the page that is
most likely to be used next (as opposed to the least likely).
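The inversion can be seen in a toy simulation (hypothetical page and frame
counts — not any real VM370/MVS code). The guest reuses its own
least-recently-used page whenever it needs a free one; viewed from outside,
that is a cyclic reference pattern, so when the host gives it fewer real
frames than it has pages, the page the guest picks next is always the one the
host's LRU evicted longest ago — every replacement faults:

```python
from collections import OrderedDict

class LRUMemory:
    """Toy LRU-managed real memory: tracks which guest pages are resident."""
    def __init__(self, frames):
        self.frames = frames
        self.resident = OrderedDict()  # page -> None, oldest first
        self.faults = 0

    def touch(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)        # recently used
        else:
            self.faults += 1                       # host page-in
            if len(self.resident) >= self.frames:
                self.resident.popitem(last=False)  # evict host-LRU page
            self.resident[page] = None

# Guest address space of 8 pages; the host backs it with only 4 real frames.
host = LRUMemory(frames=4)
guest_lru = list(range(8))          # guest's own LRU order, oldest first

for _ in range(100):
    victim = guest_lru.pop(0)       # guest reuses ITS least-recently-used page
    host.touch(victim)              # ...which the host evicted long ago: fault
    guest_lru.append(victim)        # it is now the guest's most-recent page

print(host.faults)   # 100 -- every single replacement faulted
```

Under the same frame budget, a random or MRU-ish reuse order by the guest
would fault far less often, which is exactly why the inner LRU defeats the
outer one.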

past posts mentioning page replacement algorithm
http://www.garlic.com/~lynn/subtopic.html#clock

I was also involved in the original sql/relational implementation,
System/R ... some past posts
http://www.garlic.com/~lynn/submain.html#systemr

which was done on 370/145 running under VM370. The issue was that the
RDBMS (system/r) running in virtual address space manages its own cache
of records with algorithms similar to LRU ... which is running in
virtual memory managed by VM370 with an LRU algorithm. Then the next
least recently used cache area that RDBMS is likely to use ... is also
the least recently used area that VM370 has likely to have been replaced
& removed from real memory (a LRU managed area running in a virtual
memory LRU managed area, will violate assumption that the least recently
used area is the least likely to be used in the future and can be
replaced).

trivia, managed to do System/R technology transfer to Endicott for
SQL/DS "under the radar", while the corporation was pre-occupied doing
the next official follow-on DBMS "EAGLE". Then when "EAGLE" effort
imploded there was request for how fast could SystemR/SQLDS be ported to
MVS ... which is then later announced as DB2 (originally for decision
support *ONLY*).

DB2 will have its own area for caching DBMS records ... managed with a
LRU-like algorithm ... running in an MVS virtual memory managed with an
LRU-like algorithm ... assuming the least recently used DB2 cache area
is then the least likely to be used in the future and can be removed
from processor memory.

When Jim Gray leaves for Tandem ... he palms off some number of things
on to me ... including DBMS consulting with the IMS group ... old email
http://www.garlic.com/~lynn/2007.html#email801016

and recent long-winded post (facebook IBM discussion) about Jim Gray
(after he left for Tandem) and page replacement/management algorithms
http://www.garlic.com/~lynn/2017d.html#52 Some IBM Research RJ reports

At DEC81 ACM SIGOPS, Jim asks if I could help a tandem co-worker get his
stanford PHD that involves quite a bit of "global LRU" ... and some
"local LRU" forces were pressuring his thesis advisor to not approve the
work.

I had done a lot of work on page replacement as undergraduate in the 60s
... including "global LRU" about the same time there were academic
papers on "local LRU" were being published in ACM Communications ...
and I had direct apples-to-apples comparisons between "global" and
"local" ... showing "global LRU" was significantly better than "local
LRU". Unfortunately IBM research management prevented me from responding
for nearly a year (even tho the work had been done before joining IBM)
... possibly thinking they were punishing me (which would be better than
them doing it because they were taking sides in the academic
dispute). I was finally able to send a response the following Oct.
http://www.garlic.com/~lynn/2006w.html#email821019

trivia: I was being blamed for online computer conferencing (precursor
to discussion mailing lists like ibm-main as well as social media) on
the IBM internal network (larger than arpanet/internet from just about
the beginning until sometime mid-80s) in the late 70s and early 80s
... folklore is that when IBM corporate executive committee was told
about online computer conferencing (and the internal network), 5of6
wanted to fire me. i.e. IBM management may have considered that blocking
my response for nearly a year was part of punishment for doing online
computer conferencing. some past posts
http://www.garlic.com/~lynn/subnetwork.html#cmc

not getting fired for

Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Tom Marchant
On Wed, 12 Apr 2017 12:44:32 -0400, Tony Harminc wrote:

>On 12 April 2017 at 10:16, Tom Marchant <
>000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:
>[DB2]
>
>> It still makes no sense to me. It certainly can't read the record into the
>> same page, because that would require that the page be paged in first.
>
>I've no idea what it *does* do, but it *could* Page Release the old page
>before reading from DASD into it.

Thanks, Tony.

And BTW, Kees, I should have said "I don't understand" rather than "It 
makes no sense to me." Subtle difference, perhaps.

-- 
Tom Marchant



Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Tony Harminc
On 12 April 2017 at 10:16, Tom Marchant <
000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:
[DB2]

> So, if it thinks it would be faster to read the record from DASD than for
> MVS to page in the buffer page(s) containing the record, it will read the
> record into different pages that have not been paged out?
>
> It still makes no sense to me. It certainly can't read the record into the
> same page, because that would require that the page be paged in first.
>

I've no idea what it *does* do, but it *could* Page Release the old page
before reading from DASD into it.


> And what does it do with the old buffer page? Stop using it? Freemain
> and getmain again so that the page slot becomes superfluous?
>

That would be taken care of by the Page Release.

Tony H.
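Tony's suggestion can be sketched as a tiny decision function (purely
illustrative — not DB2's actual buffer-manager logic; the function name and
the millisecond costs are invented). The point is that a page release both
discards the stale AUX slot and avoids the page-in, leaving a fresh first-use
page for the DASD read:

```python
def choose_refresh(paged_out: bool, dasd_read_ms: float, page_in_ms: float) -> str:
    """Decide how to refresh a buffer whose backing page may be on AUX.

    Returns the action a buffer manager *could* take (illustrative only):
    - if the page is resident, just use the buffer;
    - if it is paged out and the DASD read is cheaper, page-release the
      stale virtual page (freeing its AUX slot) and read the record from
      the database data set into the now first-use page;
    - otherwise let normal demand paging bring the buffer back in.
    """
    if not paged_out:
        return "use resident buffer"
    if dasd_read_ms < page_in_ms:
        return "page-release, then read from DASD"
    return "page-in from AUX"

print(choose_refresh(True, dasd_read_ms=0.3, page_in_ms=2.0))
# -> page-release, then read from DASD
```

The asymmetry in the thread — DB2's Media Manager reads use zHPF while page
data set I/O does not — is what can make the DASD branch the cheaper one.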



Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Joel C. Ewing
On 04/12/2017 10:16 AM, Paul Gilmartin wrote:
> On Wed, 12 Apr 2017 09:50:44 -0500, Joel C. Ewing wrote:
>> GETMAIN returns a success indication based purely on whether virtual
>> storage is available to the address space within the REGION size
>> constrains.
>>
> Does it not check for available paging space?  Does this imply that your
> job and my job can each do a GETMAIN for storage which in the aggregate
> exceeds the available paging space and both will succeed?
>
>> It is the SysProg's non-trivial job to configure the LPAR real memory
>> and configure z/OS to adequately constrain max REGION sizes, the number
>> of address spaces and size the paging subsystem so that page thrashing
>> doesn't occur and critical page slot thresholds in the paging subsystem
>> aren't exceeded.  There are tools, configuration parameters, and exits
>> that make that possible.
>>
> Then if your job touches all its pages and then my job attempts to touch all
> its pages, may I experience a failure because there are no slots to page
> something out?
>
> -- gil
To reach a scenario where two jobs could potentially interfere with each
other in that way would  seemingly indicate a serious failure in
configuration if two ill-behaved jobs with such large REGION constraints
were allowed to run concurrently without an adequately sized page data
set pool to support them.  And the implication that each would, by
itself, impose a serious impact on real storage further suggests
that such jobs should be constrained to a job class that disallows two
running concurrently. 

Any job or subsystem deliberately designed to use large amounts of
virtual memory above the bar or a significant percentage of the real
memory assigned to an LPAR should be under the control of people smart
enough to know they need to coordinate their plans to the SysProgs that
support that LPAR so that things can be sized appropriately and new page
datasets can be created and added.  If one is in an environment where
you can't trust that communication to take place, then the SysProgs need
to add REGION size enforcement so that people are forced to communicate
before their jobs can get authority to successfully allocate enough
virtual storage to become a problem.

I don't think your job would fail immediately, just wait.  I believe
there would be all sorts of nasty console messages which start at a
fairly low level of page slots in use (30%?) and maybe even include
indications if one job consumes an unreasonable percentage of aux paging
pool space, so there is potentially some advance warning.  If the
operator were able to dynamically add a sufficient number of new page
data sets, or cancel one of the jobs causing the problem, things should
be able to proceed.  If the page auxiliary storage becomes 100% full and
there is no way to add more space in time or determine the offending
address spaces and cancel them, I suspect the whole system, not just
your job, is toast.

In the environment I worked, the only really big users of virtual
storage at the time were CICS and DB2, and the people supporting those
subsystems understood quite well the need to communicate to z/OS support
any significant plans to change their virtual/real storage usage.  If
other application areas are planning to use 64-bit virtual storage, they
also need to be properly educated to communicate their plans and
understand the consequences of failing to do so.
Joel C. Ewing


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 



Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Anne & Lynn Wheeler
ibmsysp...@ibm-sys-prog.com (Avram Friedman) writes:
> While they do not grow in in perfect lock step The presence of Big ass
> memory comes with big ass dasd volumes (these are the technical terms
> of course) Do you know 3350's and 2314s were once used as paging
devices?  For that matter, do you know the number 2314 was chosen as the
> # for a model of disk drive because it was 4 times larger than its
> asexual parent the 2311.

original CP67 paged on 2301 fixed-head/track drum ... but it was only
4mbytes ... and "overflow" to 2314. The original code did single page
transfer per sio and pure fifo. as undergraduate in the 60s ... i redid
2301 support for "chained requests", so it ordered queued request for
maximum transfers per revolution and multiple transfer per sio (both
2301 & 2314 for same arm position) and added ordered seek queueing for
2314. 2301 peak transfers went from about 80/sec to nearly media
transfer ... around 270/sec. 2314 thruput increased 2-3 times and
degradation (service time increase as load increased) was much more
graceful.
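Ordered seek queueing of the kind described above can be sketched as an
elevator sweep (the cylinder numbers are invented for illustration; real
channel-program ordering also grouped requests per arm position). Instead of
serving requests in arrival order, pending requests are served in cylinder
order in the current direction of arm travel, cutting total seek distance:

```python
def fifo_seek_distance(start, requests):
    """Total arm travel serving requests strictly in arrival order."""
    dist, pos = 0, start
    for cyl in requests:
        dist += abs(cyl - pos)
        pos = cyl
    return dist

def elevator_order(start, requests):
    """One upward sweep from the current cylinder, then one downward sweep."""
    up = sorted(c for c in requests if c >= start)
    down = sorted((c for c in requests if c < start), reverse=True)
    return up + down

def ordered_seek_distance(start, requests):
    return fifo_seek_distance(start, elevator_order(start, requests))

queue = [183, 37, 122, 14, 124, 65, 67]   # pending requests, arrival order
print(fifo_seek_distance(53, queue))      # 640 cylinders of arm travel
print(ordered_seek_distance(53, queue))   # 299: sweep up to 183, then down to 14
```

The same idea — maximizing work done per arm position and per revolution —
is what turned the 2301 from ~80 transfers/sec FIFO into near media-rate
chained transfers.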

trivia: later, 3350 had small fixed-head/track option/feature that could
be used for paging. however, it didn't have "multiple exposure" like
2305 (fixed-head/track disk, basically multiple subchannel addresses for
same device, that could do things like transfers while other requests
were still waiting for rotation). I had proposal to do 3350 multiple
exposures that would allow doing data transfers for fixed-head area
overlapped with arm seek motion.
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3350.html

however, there was group in POK that was planning on doing "vulcan"
... an "electronic disk" ... on the lines of "1655" mentioned upthread
http://www.garlic.com/~lynn/2017d.html#63

and blocked the 3350 multiple-exposure feature. They then got killed
when they were told that customers were ordering all the memory that IBM
could produce for processor memory (at higher markup) ... but I wasn't
allowed to resurrect the multiple-exposure. As it happened, the "1655"
vendor was using memory chips that failed processor memory tests, but
could still be used for electronic disk.

longer discussion
http://www.garlic.com/~lynn/2006s.html#45

more 2301 mention in this recent comp.arch post (2301 was sort of 2303
but read/wrote four tracks in parallel, 4 times the data transfer rate,
1/4th the number of tracks, each track four times larger)
http://www.garlic.com/~lynn/2017d.html#60 Optimizing the Hard Disk Directly

later, 2305-1 did something similar ... it put every other head on the
same track but offset 180 degrees ... resulting in half the number of
tracks (and half the total capacity) ... with alternating bytes on
opposite sides of the track. Avg. rotational delay was then only a
quarter of a revolution (rather than half) ... and data transfer was
doubled (3 Mbytes/sec rather than 1.5 Mbytes/sec) ... doing reads/writes
from both heads in parallel (on opposite sides of the track, odd bytes
on one side, even on the other).
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_2305.html
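The latency arithmetic above can be checked directly: with k evenly spaced
heads reading the same track, the expected wait before data comes under some
head is 1/(2k) of a revolution (the rpm below is illustrative only, not the
2305's actual spindle speed):

```python
def avg_rotational_delay_ms(rpm, heads_per_track=1):
    """Expected rotational delay: with k evenly spaced heads on one track,
    the average wait is 1/(2k) of a revolution."""
    rev_ms = 60_000 / rpm
    return rev_ms / (2 * heads_per_track)

rpm = 6000                                # illustrative spindle speed
print(avg_rotational_delay_ms(rpm, 1))    # 5.0 ms: half a revolution on avg
print(avg_rotational_delay_ms(rpm, 2))    # 2.5 ms: a quarter revolution
print(2 * 1.5)                            # 3.0 MB/s: two heads at 1.5 MB/s each
```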

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Paul Gilmartin
On Wed, 12 Apr 2017 09:50:44 -0500, Joel C. Ewing wrote:
>
>GETMAIN returns a success indication based purely on whether virtual
>storage is available to the address space within the REGION size
>constraints.
> 
Does it not check for available paging space?  Does this imply that your
job and my job can each do a GETMAIN for storage which in the aggregate
exceeds the available paging space and both will succeed?

>It is the SysProg's non-trivial job to configure the LPAR real memory
>and configure z/OS to adequately constrain max REGION sizes, the number
>of address spaces and size the paging subsystem so that page thrashing
>doesn't occur and critical page slot thresholds in the paging subsystem
>aren't exceeded.  There are tools, configuration parameters, and exits
>that make that possible.
>
Then if your job touches all its pages and my job then attempts to touch all
its pages, may I experience a failure because there are no slots to page
something out?

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Joel C. Ewing
On 04/11/2017 05:41 PM, Paul Gilmartin wrote:
> On Tue, 11 Apr 2017 16:45:10 -0500, Greg Dyck wrote:
>
>> On 4/11/2017 3:26 PM, Paul Gilmartin wrote:
>>> My understanding, ancient, probably outdated, and certainly naive is that
>>> there is little communication between GETMAIN/FREEMAIN and the paging
>>> subsystem.  If a program touches a page that was never GETMAINed no error
>>> occurs; simply a page slot is allocated up to the limit of the REGION
>>> parameter.  Conversely, on FREEMAIN the page slots are not released.
>> *Totally* wrong.  z/OS (or any predecessors or relatives like VS1) has
>> never worked like you describe.
>>
>> If a virtual page is not 'getmain assigned' by VSM it will never be
>> backed by a real frame by RSM and an 0C4-11 will occur.  When all
>> virtual storage on a page is freed VSM will return the allocated AUX
>> slot and real frame, if either have been allocated, and reset the
>> 'getmain assigned' indicator.
>>
> What about the other side?  Will GETMAIN return indication of success
> only if the requested page slots can be committed?
>
> A Google search for "lazy malloc" shows the preponderance of complaints
> concern Linux.
>
> My information came from a UNIX/C programmer before the ascendancy
> of Linux.
>
> And from an MVS wizard who insisted that for GETMAIN to fail if requested
> page slots could not be committed would cause enormous breakage of
> existing art.
>
> -- gil
>
>
How would GETMAIN have any way of predicting either the future
availability of page frames when a virtual storage page is later
referenced or the future availability of paging system page slots if and
when z/OS decides to page out a backing page frame?  I seem to recall
there are some inhibitions on starting new address spaces in a "storage
constrained" environment, but that doesn't apply to individual GETMAINs
from an already running address space.

GETMAIN returns a success indication based purely on whether virtual
storage is available to the address space within the REGION size
constraints. 

It is the SysProg's non-trivial job to configure the LPAR real memory
and configure z/OS to adequately constrain max REGION sizes, the number
of address spaces and size the paging subsystem so that page thrashing
doesn't occur and critical page slot thresholds in the paging subsystem
aren't exceeded.  There are tools, configuration parameters, and exits
that make that possible.
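The decoupling Joel describes, where GETMAIN succeeds on the REGION
constraint alone and any paging-capacity pressure surfaces only when pages
are referenced, can be modeled in a few lines (all numbers and names here
are hypothetical, not z/OS internals):

```python
class Lpar:
    def __init__(self, region_pages, aux_slots):
        self.region_pages = region_pages  # virtual-storage limit per space
        self.aux_slots = aux_slots        # paging-subsystem capacity
        self.allocated = 0                # pages GETMAINed (virtual only)
        self.backed = 0                   # pages actually touched

    def getmain(self, pages):
        # Success depends only on the REGION constraint, not aux capacity.
        if self.allocated + pages > self.region_pages:
            return False
        self.allocated += pages
        return True

    def touch(self, pages):
        # Backing is lazy; a slot shortage shows up here, not at GETMAIN.
        if self.backed + pages > self.aux_slots:
            raise MemoryError("auxiliary storage shortage")
        self.backed += pages

lpar = Lpar(region_pages=1000, aux_slots=600)
print(lpar.getmain(500), lpar.getmain(400))  # True True: both GETMAINs succeed
lpar.touch(500)                              # first job touches its pages: fine
# lpar.touch(400) would now raise MemoryError: the aggregate exceeded aux
```

Which is exactly gil's scenario upthread: two GETMAINs that individually fit
REGION can jointly exceed paging space, and the failure arrives at reference
time.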
  Joel C. Ewing

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Tom Marchant
On Wed, 12 Apr 2017 13:08:32 +, Vernooij, Kees wrote:

>

>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
>> Behalf Of Tom Marchant
>> Sent: 12 April, 2017 15:03
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: Re: Paging subsystems in the era of bigass memory
>> 
>> On Wed, 12 Apr 2017 06:28:05 +, Vernooij, Kees wrote:
>> 
>> >From here, there the story still goes on IIRC: if DB2 again needs
>> >the data, it would be paged in in any normal task. However, DB2
>> >is more intelligent: it keeps track of how long it takes to page-in
>> >the page or read it again from disk, in an I/O that is already running.
>> >If the latter is faster this is used and the aux slot is really
>> useless.
>> 
>> Are you suggesting that before DB2 references a page containing a
>> buffer, it checks to see if it is paged out? 
>
>Yes.
>
>> And that if it is paged
>> out,
>> it doesn't use the record in the buffer, but instead reads it into a
>> different page?  
>
>No, it checks its(?) statistics about page-in time and reading it directly 
>from dasd and then decides which will be faster. I heard this years ago, 
>when heavy paging systems might produce slow page-ins and adding a 
>page to an in-progress pre-fetch could be faster.

So, if it thinks it would be faster to read the record from DASD than for 
MVS to page in the buffer page(s) containing the record, it will read the 
record into different pages that have not been paged out?

It still makes no sense to me. It certainly can't read the record into the 
same page, because that would require that the page be paged in first. 
And what does it do with the old buffer page? Stop using it? Freemain 
and getmain again so that the page slot becomes superfluous?
>
>> That makes no sense to me.
>
>This does?

What?

--
Tom Marchant

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Avram Friedman
While they do not grow in perfect lock step,
The presence of Big ass memory comes with big ass dasd volumes
(these are the technical terms of course)
Do you know 3350's and 2314s were once used as paging devices?
For that matter, do you know the number 2314 was chosen as the model number for a 
disk drive because it was 4 times larger than its asexual parent, the 2311?

You may be aware that the reason for new addressing architectures like 32- and 
64-bit, with corresponding changes to memory mapping, came as a result of older 
storage constraints.
Typically it was CICS, IMS and DB2 that were constrained, and the newer 
architectures are designed to hold, you guessed it, CICS, IMS, and DB2.

The size of spinning page datasets is related to the amount of virtual memory 
in use.
Have you noticed that the use of the LPA dataset in any shop does not change 
after IPL?
That is because LPA is built at IPL time and can even be skipped by not doing a 
CLPA.
LPA does not require extra space for minute to minute operations because it 
does not change.
It may need space if a software update is done.

You mention that you have a near zero paging rate.
With that near zero rate for locals, those datasets can enjoy some of the same 
allocation freedoms that LPA has.  If usage is not going to grow, one does not 
have to add much for minute to minute growth.

Most old rules of thumb are based on a TSO environment with lots of swappable 
address spaces.
This is not the case for large DBMS and TP monitors.

Avram Friedman 

On Tue, 11 Apr 2017 10:46:40 -0400, Pinnacle  wrote:

>Gone are the halcyon days when we could run an LPAR with three mod-3's
>as the local paging subsystem.  With today's large memory sizes, I'm
>faced with having to completely rethink my paging subsystems.  I've
>currently got a 133GB LPAR with 18 mod-9 locals at 44%.  I'm going to
>add 22 more mod-9's, which will get me just under the 30% threshold.
>That's 40 page datasets, which is about 30 more than the most I've ever
>managed. I'm thinking about going to 10 mod-54's as my final solution
>for this LPAR (roughly 4x the real memory).  I wondered what the rest of
>you are doing with your paging subsystems in the era of bigass memory sizes.
>
>Regards,
>Tom Conley
>
>--
>
>
>
>--
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Vernooij, Kees (ITOPT1) - KLM


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Tom Marchant
> Sent: 12 April, 2017 15:03
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Paging subsystems in the era of bigass memory
> 
> On Wed, 12 Apr 2017 06:28:05 +, Vernooij, Kees wrote:
> 
> >From here, there the story still goes on IIRC: if DB2 again needs
> >the data, it would be paged in in any normal task. However, DB2
> >is more intelligent: it keeps track of how long it takes to page-in
> >the page or read it again from disk, in an I/O that is already running.
> >If the latter is faster this is used and the aux slot is really
> useless.
> 
> Are you suggesting that before DB2 references a page containing a
> buffer, it checks to see if it is paged out? 

Yes.

> And that if it is paged
> out,
> it doesn't use the record in the buffer, but instead reads it into a
> different page?  

No, it checks its(?) statistics about page-in time and reading it directly from 
dasd and then decides which will be faster. I heard this years ago, when heavy 
paging systems might produce slow page-ins and adding a page to an in-progress 
pre-fetch could be faster.

> That makes no sense to me.

This does?

Kees.
> 
> --
> Tom Marchant



For information, services and offers, please visit our web site: 
http://www.klm.com. This e-mail and any attachment may contain confidential and 
privileged material intended for the addressee only. If you are not the 
addressee, you are notified that no part of the e-mail or any attachment may be 
disclosed, copied or distributed, and that any other action related to this 
e-mail or attachment is strictly prohibited, and may be unlawful. If you have 
received this e-mail by error, please notify the sender immediately by return 
e-mail, and delete this message. 

Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its 
employees shall not be liable for the incorrect or incomplete transmission of 
this e-mail or any attachments, nor responsible for any delay in receipt. 
Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch 
Airlines) is registered in Amstelveen, The Netherlands, with registered number 
33014286



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Tom Marchant
On Wed, 12 Apr 2017 06:28:05 +, Vernooij, Kees wrote:

>From here, there the story still goes on IIRC: if DB2 again needs 
>the data, it would be paged in in any normal task. However, DB2 
>is more intelligent: it keeps track of how long it takes to page-in 
>the page or read it again from disk, in an I/O that is already running. 
>If the latter is faster this is used and the aux slot is really useless.

Are you suggesting that before DB2 references a page containing a 
buffer, it checks to see if it is paged out? And that if it is paged out, 
it doesn't use the record in the buffer, but instead reads it into a 
different page?  That makes no sense to me.

-- 
Tom Marchant

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-12 Thread Tom Marchant
On Wed, 12 Apr 2017 06:18:29 +, Vernooij, Kees wrote:

>The advantage of not freeing the aux slot was that a page could 
>be paged out without I/O if it had not been changed. Somewhat 
>the opposite of page reclaim. Freeing it after it has been changed 
>is of course a 100% useful reclaim of aux slots.

Yes, but that would require that someone periodically run through 
page frames looking for the change bit set, then looking to see if 
it is backed on AUX, and freeing it if it is. z/OS supports 4 TB of 
real storage. That's a billion pages to look at. How many CPU 
cycles is such a reclaim worth?
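The scale of that sweep is easy to quantify (the per-frame inspection cost
below is a made-up figure, just to show the order of magnitude):

```python
real_storage = 4 * 2**40            # 4 TB of supported real storage
page_size = 4 * 2**10               # 4 KB pages
frames = real_storage // page_size
print(frames)                       # 1073741824: about a billion frames

ns_per_frame = 5                    # hypothetical cost to inspect one entry
print(frames * ns_per_frame / 1e9)  # ~5.4 seconds of CPU per full sweep
```

Even at a few nanoseconds per frame-table entry, a full pass over a maximally
configured system burns seconds of CPU, which is the point of the question.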

-- 
Tom Marchant

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Vernooij, Kees (ITOPT1) - KLM


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Blaicher, Christopher Y.
> Sent: 11 April, 2017 21:25
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Paging subsystems in the era of bigass memory
> 
> It has been a while since I worked on DB2, but it is sounding like your
> buffer pools are too big.
> 
> Consider this:
> DB2 will read a required page into 'new' buffer pool page before it will
> invalidate a page it already has in storage. Now we have a physical page
> in use.
> 
> The system periodically comes around and looks at the UIC for a page and
> if it is high enough, it will page it out.  Now we have a page on AUX
> storage.
> 
> If DB2 doesn't need the data on that page, or doesn't need to use that
> page for a different data page, then that page just hangs out on AUX
> storage.
> 

From here, the story still goes on IIRC: if DB2 again needs the data, it 
would be paged in as in any normal task. However, DB2 is more intelligent: it 
keeps track of how long it takes to page-in the page or read it again from 
disk, in an I/O that is already running. If the latter is faster, it is used 
and the aux slot is really useless.
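The heuristic Kees describes, if it works as remembered, amounts to comparing
two expected latencies. A minimal sketch (the statistics, names, and return
values here are invented for illustration, not DB2 internals):

```python
def choose_source(avg_page_in_ms, avg_direct_read_ms, page_is_on_aux):
    """Pick the cheaper way to get a buffer page back.

    Hypothetical model of the remembered DB2 behavior: if the page was
    stolen and paged out, compare the observed page-in time against the
    cost of re-reading the record from DASD in an already-running I/O.
    """
    if not page_is_on_aux:
        return "use buffer"          # page is still resident, nothing to do
    if avg_direct_read_ms < avg_page_in_ms:
        return "re-read from DASD"   # tack the page onto an in-progress read
    return "page-in"

print(choose_source(12.0, 4.0, page_is_on_aux=True))   # re-read from DASD
print(choose_source(3.0, 4.0, page_is_on_aux=True))    # page-in
print(choose_source(3.0, 4.0, page_is_on_aux=False))   # use buffer
```

On a heavily paging system the page-in estimate balloons, so the re-read
path wins and the aux slot holding the stale copy is indeed wasted.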

Kees.




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Vernooij, Kees (ITOPT1) - KLM


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Greg Dyck
> Sent: 11 April, 2017 23:39
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Paging subsystems in the era of bigass memory
> 
> On 4/11/2017 1:46 PM, Jesse 1 Robinson wrote:
> > Part of the problem, I learned some time back at SHARE, is that there
> is no mechanism to 'reclaim' page slots that no longer need to remain on
> disk. Once storage gets paged out, it sits there like a sandbag until
> the owning task is stopped. Contrast that with JES2 spool track reclaim,
> which constantly munches through spool like Pacman and frees up unneeded
> space.
> 
> Many moons ago code was implemented in ASM and RSM to slowly reclaim ASM
> page slots for virtual pages that were changed in real storage.  I have
> vague memories of this functionality later being disabled for some
> technical concern, but can't remember what it was.
> 
> Regards,
> Greg
> 

Hi Greg,
You still can't resist jumping in on your old hobby of many moons ago?

Kees.





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Vernooij, Kees (ITOPT1) - KLM
> 
>   But like any design tradeoff, there are drawbacks as well, and with real
> storage less expensive and more plentiful, you can have a lot more virtual
> pages consuming both an aux slot and a real frame.   There has been some
> thought given to changing things so that we always free the aux slot after
> we page-in from it.
> 
> Jim Mulder z/OS Diagnosis, Design, Development, Test  IBM Corp.
> Poughkeepsie NY
> 
> 

The advantage of not freeing the aux slot was that a page could be paged out 
without I/O if it had not been changed. Somewhat the opposite of page reclaim. 
Freeing it after it has been changed is of course a 100% useful reclaim of aux 
slots.

Kees.




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Anne & Lynn Wheeler
l...@garlic.com (Anne & Lynn Wheeler) writes:
> The "dup" issue was if aggregate 3880-11 wasn't much larger than
> processor memory, then nearly every page in 3880-11 would also be in
> processor memory.  Conversely, if a page was needed that was not in processor
> memory, then it would unlikely be in 3880-11 cache memory. Moving to
> "no-dup" means that a page in processor memory would almost never be
> in 3880-11 cache, so there is room for pages (not in processor memory)
> that might be needed in processor memory.

re:
http://www.garlic.com/~lynn/2017d.html#58 Paging subsystems in the era of 
bigass memory
http://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of 
bigass memory

dynamically switching between "dup" and "no-dup" ... "no-dup" was when
total processor memory was comparable in size to disk cache for paging
(or later, processor memory was comparable in size to space available
for disk paging) ... then "no-dup" was sort of like treating/optimizing
dynamic disk caches analogous to how the later 3090 extended store was
directly treated ... a page was either in processor memory or in
extended store ... but wouldn't maintain page in both places.

recent posts mentioning 3090 extended store (32M-256M processor storage,
64M-2048M extended storage)
http://www.garlic.com/~lynn/2017b.html#69 The ICL 2900
http://www.garlic.com/~lynn/2017d.html#4 GREAT presentation on the history of 
the mainframe

long after all memory could be configured as straight processor memory,
LPARs continued to offer option of configuring some as extended store
... apparently because system software had been so structured into
supporting processor/extended split ... it took quite some time to adapt
to efficiently just using large processor storage.

I could also use it for (electronic disk) "1655" ... large number were
bought from a vendor for paging use in internal IBM datacenters. It
could run as a simulated 2305 (fixed-head disk) or as a "native" fixed-block
device.

Larger memory systems ... possibly much larger than total 1655 space, I
could dynamically select "no-dup" ... however in the mid-70s I also did
"page migration" (which was also included in my resource manager shipped
to customers later in the 70s) ... periodically sweaping low-use pages
from "fast" paging devices to slower paging devices (but had to be
dragged threw memory to move between devices). For 3090 extended store,
I wanted to be able to do direct I/O from 3090 extended store (when
cleaning pages w/o having to drag through processor memory).

(other) recent posts mentioning "1655":
http://www.garlic.com/~lynn/2017b.html#68 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#26 Multitasking, together with OS 
operations

posts mentioning page management & replacement algorithms
http://www.garlic.com/~lynn/subtopic.html#clock

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Jim Mulder
> > If a virtual page is not 'getmain assigned' by VSM it will never be
> > backed by a real frame by RSM and an 0C4-11 will occur.  When all
> > virtual storage on a page is freed VSM will return the allocated AUX
> > slot and real frame, if either have been allocated, and reset the
> > 'getmain assigned' indicator.
> 
> 
> Though maybe not *quite* so dogmatically on a modern processor (a z13, say)
> and a modern z/OS...

  Yes, you can read about that here:

http://www-01.ibm.com/support/docview.wss?uid=isg1OA46291

Jim Mulder z/OS Diagnosis, Design, Development, Test  IBM Corp. 
Poughkeepsie NY


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Jim Mulder
> What about the other side?  Will GETMAIN return indication of success
> only if the requested page slots can be committed?

  There is no consideration of aux slot availability when virtual
storage is being allocated.  There used to be some capability to
reserve aux slots when an address space was created:


http://bitsavers.trailing-edge.com/pdf/ibm/370/MVS_XA/GC28-1143-2_MVS-XA_Conversion_Notebook_May84.pdf

Using the ASM Backing Slot Function

In Release 1.2, the constants that control the number of slots ASM reserves
as back up for each new address space or VIO data set are changed to prevent
ASM from reserving any. The changes were made because most installations
provide adequate paging space and prefer not to use the backing slot function
to limit address space and VIO data set creation.
If your installation wants ASM to reserve backing slots, you need to change
the constants. In Release 1.2, the constants are in the ASMSLOTC and ASMSLOTV
fields in the ASMVT. ASM uses the ASMSLOTC value when calculating the number
of slots to reserve for address spaces. It uses the ASMSLOTV value when
calculating the number for VIO data sets. Earlier releases keep the same
values in the nucleus CSECTs ILRSLOTC and ILRSLOTV. Initialization and Tuning
describes how ASM uses ASMSLOTC and ASMSLOTV and how to change them.

Jim Mulder z/OS Diagnosis, Design, Development, Test  IBM Corp. 
Poughkeepsie NY



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Jim Mulder
>Many moons ago code was implemented in ASM and RSM to slowly reclaim ASM 
>page slots for virtual pages that were changed in real storage.  I have 
>vague memories of this functionality later being disabled for some 
>technical concern, but can't remember what it was.

  Slot Scavenger was introduced in OS/390 2.10.  It did its thing during
the UIC Update function.  UIC Update was removed when Global Steal was
implemented (z/OS 1.8, I think), so at that point there was no more Slot
Scavenger.

  Back when real storage was expensive and scarce, the ratio of aux slots
to real frames tended to be higher than it is now, and there weren't that
many pages that were backed in both real and aux because there wasn't much
real.
There are some performance advantages to having an unchanged page backed
in real and aux. 
1. You can steal the frame synchronously, without waiting for page-out I/O.
2. When capturing it during SDUMP, you can give one of the copies to SDUMP
without needing to copy it.

  But like any design tradeoff, there are drawbacks as well, and with real
storage less expensive and more plentiful, you can have a lot more virtual
pages consuming both an aux slot and a real frame.   There has been some
thought given to changing things so that we always free the aux slot after
we page-in from it.

Jim Mulder z/OS Diagnosis, Design, Development, Test  IBM Corp. 
Poughkeepsie NY



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Anne & Lynn Wheeler
t...@harminc.net (Tony Harminc) writes:
> Not quite sure what you're saying. The old, constrained-memory
> technique was usually to issue a variable (Vx) GETMAIN, specifying the
> minimum required size as the low bound, and the maximum useful as the
> high. Then the system returns the actual amount obtained, or a return
> code or abend if it can't deliver even the minimum. In pre-MVS
> systems, the REGION= was a hard control on the max; in MVS, REGION=
> turned into a per-use limit on Vx GETMAINs, with the hard limit being
> the available private area.

pre-MVS, ... the application storage management was so bad it required
regions typically four times larger than actually used ... typical
1mbyte 370/165 would only run four regions ...  and systems were
becoming increasingly I/O bound (i.e. CPUs were getting faster much more
quickly than disks, and keeping high-end CPUs busy
required lots more concurrent multitasking).

justification for moving all 370 to virtual memory was that it would be
possible to increase the number of regions on 1mbyte 370/165 by a factor
of four times with little or no paging. old post with excerpts with
person involved in decision
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

trivia: I had big argument with the POK people doing the page
replacement algorithm. They eventually said it didn't matter anyway
... because there would be almost *NO* paging. It was several years into
MVS releases that somebody "discovered" the side-effect of the
implementation, MVS was replacing high-used shared LINKPACK pages before
low-used application data pages. past posts mentioning paging algorithm
implementations
http://www.garlic.com/~lynn/subtopic.html#clock

MVS ran into a different kind of problem ... os/360 api paradigm was
extensively pointer passing. As a result, an 8mbyte image of the MVS
system was included in every application 16mbyte virtual address space
(pointer passed in system call and since system code was part of every
address space, the old os/360 use of directly using pointer address
continued to work).

Problem was that subsystem APIs were also pointer based ... and now they
were all in their own address space. MVS subsystem calls then invented
the common segment area (CSA) that appeared in every address space and
was used for allocating space for subsystem API parameters (reducing
applications space to 7mbytes, out of 16mbytes). However, CSA size
requirements were somewhat proportional to the number of subsystems and
the number of concurrent applications  and CSA frequently became
multiple segments and morphed into common system area. By 3033, CSA
space requirements were frequently 5-6mbytes for many customers (leaving
only 2mbytes for applications) and threatening to become 8mbytes
(leaving zero bytes for applications)

Eventually part of 370-xa access registers were retrofitted to 370 as
dual-address space mode ... subsystems could be enhanced to directly
access application space w/o requiring CSA (person responsible for
dual-access retrofit leaves not long after for HP to work on their risc
processors).

other trivia: in the early 80s I was saying that disk performance was
lagging so badly that, since the 60s, relative disk system throughput had
declined by a factor of ten (i.e. disks got 3-5 times faster,
processors got 50 times faster). disk division took exception and
assigned division performance organization to refute my claims. after a
few weeks, they came back and basically said that I had understated the
"problem". They respin the analysis into SHARE presentation (B874 at
SHARE 63) recommending disk configurations to improve throughput.
old post with part of the early 80 comparison
http://www.garlic.com/~lynn/93.html#31
old posts with pieces of B874
http://www.garlic.com/~lynn/2001l.html#56
http://www.garlic.com/~lynn/2006f.html#3
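The "factor of ten" figure follows directly from the two speedup numbers,
using the 3-5 times disk range against the 50 times CPU figure:

```python
cpu_speedup = 50                       # CPU speedup since the 60s (as cited)
for disk_speedup in (3, 4, 5):         # disk speedup range (as cited)
    relative_decline = cpu_speedup / disk_speedup
    print(disk_speedup, round(relative_decline, 1))
# 3 16.7
# 4 12.5
# 5 10.0  -> disks fell behind by roughly an order of magnitude
```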

for 3081 & 3880-11 paging caches ... processor memory was becoming
comparable in size ... or even larger than paging area ... so I did a
variation that dynamically switched between what I called "dup"
(duplicate) and "no-dup". "dup", when page was read into processor
memory, the original was left allocated (duplicate in memory and on
disk), if that page was later replaced and had not been changed, then it
could just be invalidated (and didn't have to be written out) since an
exact copy was already on disk. For large processor memory, it could
dynamically switch to "no-dup" and read into processor would always
deallocate the copy on disk (and would use 3880-11 no-cache read, if a
copy was in cache, it would be read and removed from cache, if not in
cache, it would read from disk bypassing cache).
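The dup/no-dup switch and its payoff can be sketched as a pair of toy
policies (the threshold and the I/O accounting are simplifications of the
behavior described above, not the actual implementation):

```python
def cache_mode(processor_pages, paging_cache_pages):
    """Switch to 'no-dup' once processor memory is comparable to or larger
    than the paging cache, since duplicate copies then dominate the cache."""
    return "no-dup" if processor_pages >= paging_cache_pages else "dup"

def steal_cost(mode, page_changed):
    """I/O needed to steal a frame. Under 'dup', an unchanged page still
    has an exact copy on disk, so it can be dropped without a page-out."""
    if page_changed:
        return "page-out required"
    return "no I/O, just invalidate" if mode == "dup" else "page-out required"

print(cache_mode(processor_pages=64_000, paging_cache_pages=16_000))  # no-dup
print(steal_cost("dup", page_changed=False))     # no I/O, just invalidate
print(steal_cost("no-dup", page_changed=False))  # page-out required
```

So "dup" buys cheap steals of unchanged pages, while "no-dup" buys a cache
that holds only pages not already in processor memory; which wins depends on
the memory-to-cache ratio, hence the dynamic switch.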

The "dup" issue was if aggregate 3880-11 wasn't much larger than
processor memory, then nearly every page in 3880-11 would also in
processor memory.  The converse if a page was needed not in processor
memory, then it would unlikely be in 3880-11 cache memory. Moving to
"no-dup" means that a page in processor me

Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Paul Gilmartin
On Tue, 11 Apr 2017 16:45:10 -0500, Greg Dyck wrote:

>On 4/11/2017 3:26 PM, Paul Gilmartin wrote:
>> My understanding, ancient, probably outdated, and certainly naive is that
>> there is little communication between GETMAIN/FREEMAIN and the paging
>> subsystem.  If a program touches a page that was never GETMAINed no error
>> occurs; simply a page slot is allocated up to the limit of the REGION
>> parameter.  Conversely, on FREEMAIN the page slots are not released.
>
>*Totally* wrong.  z/OS (or any predecessors or relatives like VS1) has
>never worked like you describe.
>
>If a virtual page is not 'getmain assigned' by VSM it will never be
>backed by a real frame by RSM and an 0C4-11 will occur.  When all
>virtual storage on a page is freed VSM will return the allocated AUX
>slot and real frame, if either have been allocated, and reset the
>'getmain assigned' indicator.
>
What about the other side?  Will GETMAIN return indication of success
only if the requested page slots can be committed?

A Google search for "lazy malloc" shows the preponderance of complaints
concern Linux.

My information came from a UNIX/C programmer before the ascendancy
of Linux.

And from an MVS wizard who insisted that making GETMAIN fail when the
requested page slots could not be committed would cause enormous breakage
of existing art.

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Tony Harminc
On 11 April 2017 at 17:45, Greg Dyck  wrote:

> If a virtual page is not 'getmain assigned' by VSM it will never be backed
> by a real frame by RSM and an 0C4-11 will occur.  When all virtual storage
> on a page is freed VSM will return the allocated AUX slot and real frame,
> if either have been allocated, and reset the 'getmain assigned' indicator.


Though maybe not *quite* so dogmatically on a modern processor (a z13, say)
and a modern z/OS...

Tony H.



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Greg Dyck

On 4/11/2017 3:26 PM, Paul Gilmartin wrote:

My understanding, ancient, probably outdated, and certainly naive is that
there is little communication between GETMAIN/FREEMAIN and the paging
subsystem.  If a program touches a page that was never GETMAINed no error
occurs; simply a page slot is allocated up to the limit of the REGION
parameter.  Conversely, on FREEMAIN the page slots are not released.


*Totally* wrong.  z/OS (or any predecessors or relatives like VS1) has 
never worked like you describe.


If a virtual page is not 'getmain assigned' by VSM it will never be 
backed by a real frame by RSM and an 0C4-11 will occur.  When all 
virtual storage on a page is freed VSM will return the allocated AUX 
slot and real frame, if either have been allocated, and reset the 
'getmain assigned' indicator.


Regards,
Greg



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Greg Dyck

On 4/11/2017 1:46 PM, Jesse 1 Robinson wrote:

Part of the problem, I learned some time back at SHARE, is that there is no 
mechanism to 'reclaim' page slots that no longer need to remain on disk. Once 
storage gets paged out, it sits there like a sandbag until the owning task is 
stopped. Contrast that with JES2 spool track reclaim, which constantly munches 
through spool like Pacman and frees up unneeded space.


Many moons ago code was implemented in ASM and RSM to slowly reclaim ASM 
page slots for virtual pages that were changed in real storage.  I have 
vague memories of this functionality later being disabled for some 
technical concern, but can't remember what it was.


Regards,
Greg



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Greg Dyck

On 4/11/2017 2:24 PM, Blaicher, Christopher Y. wrote:

It has been a while since I worked on DB2, but it is sounding like your buffer 
pools are too big.


With large memory systems everyone should have all of the production 
(and, ideally, test systems too) buffer pools defined with PGFIX(YES) 
specified.  Even if they are over defined in size.  This can 
*measurably* cut DB2 CPU overhead by eliminating the need to fix and 
unfix pages for I/O and buffer pool pages will not unexpectedly creep 
out to AUX if they are not changed for a long time.
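For illustration only (the subsystem recognition character and pool name are
hypothetical; check the syntax against your DB2 level), page-fixing is set per
pool from the console, e.g.:

```
-DB2A ALTER BUFFERPOOL(BP1) PGFIX(YES)
-DB2A DISPLAY BUFFERPOOL(BP1) DETAIL
```

The PGFIX change takes effect the next time the pool is allocated, so a pool
reallocation (or DB2 recycle) may be needed before the CPU savings show up.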


Regards,
Greg



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Tony Harminc
On 11 April 2017 at 16:26, Paul Gilmartin <
000433f07816-dmarc-requ...@listserv.ua.edu> wrote:

> If a program touches a page that was never GETMAINed no error
> occurs; simply a page slot is allocated up to the limit of the REGION
> parameter.  Conversely, on FREEMAIN the page slots are not released.
>

I really don't think so... It's easy to experiment.


>
> Programmers with a strong UNIX/C background loathe this.
>

Maybe because they believe in all kinds of mainframe myths. The machines
are run by nerdy looking men in white lab coats, with ties mandatory, I've
heard. And everything requires punch cards to get any work done.

> They are accustomed to do needed malloc()s up front and handle errors at
> that point; their code is not designed to handle out-of-storage conditions
> that occur unexpectedly, later when a page is touched.
>

Sounds like, uh, z/OS.

>
> From the other point of view, traditional OS programmers are accustomed
> to code enormous fixed GETMAINs in case they need the storage years in
> the future -- just adjust the REGION and the program works again.
>

Not quite sure what you're saying. The old, constrained-memory technique
was usually to issue a variable (Vx) GETMAIN, specifying the minimum
required size as the low bound, and the maximum useful as the high. Then
the system returns the actual amount obtained, or a return code or abend if
it can't deliver even the minimum. In pre-MVS systems, the REGION= was a
hard control on the max; in MVS, REGION= turned into a per-use limit on Vx
GETMAINs, with the hard limit being the available private area.
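A toy sketch of the variable-length GETMAIN semantics described above (the
abend code in the comment is illustrative of a storage-shortage failure; the
function and its arguments are invented for the example, not a real API):

```python
def getmain_vx(region_free, min_len, max_len):
    """Model of a variable (Vx) GETMAIN: the caller supplies a
    (minimum, maximum) length range.  The system returns as much as it
    can, up to the maximum, or fails if even the minimum cannot be
    satisfied from the available region."""
    if region_free < min_len:
        # real GETMAIN would give a nonzero return code or an abend
        # (e.g. 80A-style storage shortage) here
        raise MemoryError("minimum request cannot be satisfied")
    return min(region_free, max_len)
```

So a caller asking for 50-200 units against a 100-unit region gets 100; with
300 free it gets the full 200; with only 40 free the request fails outright.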

In more recent systems there are exits that can limit most of these values,
and of course there is split BTL and ATL storage, not to mention ATB.

>
> Culture clash.
>

Perhaps, but it's unwise to ascribe to the "other" culture incorrect facts.

Tony H.



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Paul Gilmartin
On 2017-04-11, at 12:47, Jesse 1 Robinson wrote:
> 
> Part of the problem, I learned some time back at SHARE, is that there is no 
> mechanism to 'reclaim' page slots that no longer need to remain on disk. Once 
> storage gets paged out, it sits there like a sandbag until the owning task is 
> stopped. ...
>  
My understanding, ancient, probably outdated, and certainly naive is that
there is little communication between GETMAIN/FREEMAIN and the paging
subsystem.  If a program touches a page that was never GETMAINed no error
occurs; simply a page slot is allocated up to the limit of the REGION
parameter.  Conversely, on FREEMAIN the page slots are not released.

Programmers with a strong UNIX/C background loathe this.  They are
accustomed to do needed malloc()s up front and handle errors at that
point; their code is not designed to handle out-of-storage conditions
that occur unexpectedly, later when a page is touched.

From the other point of view, traditional OS programmers are accustomed
to code enormous fixed GETMAINs in case they need the storage years in
the future -- just adjust the REGION and the program works again.

Culture clash.

Repairing this to make paging substem aware of FREEMAIN and release the
slots would be an enormous and risky design change.

-- gil



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Carmen Vitullo
Chris, I think that's entirely possible. I am no DB2 person, but I have 
installed and configured tools for DB2 change data capture, DB2 to UDB, and DB2 
to SQL, and I think the DBAs (at least my DBAs) define the buffer pools larger 
for that purpose. 
I also have a large frame area of 7740M defined. 


Carmen 






From: "Christopher Y. Blaicher"  
To: IBM-MAIN@LISTSERV.UA.EDU 
Sent: Tuesday, April 11, 2017 2:24:54 PM 
Subject: Re: Paging subsystems in the era of bigass memory 

It has been a while since I worked on DB2, but it is sounding like your buffer 
pools are too big. 

Consider this: 
DB2 will read a required page into 'new' buffer pool page before it will 
invalidate a page it already has in storage. Now we have a physical page in 
use. 

The system periodically comes around and looks at the UIC for a page and if it 
is high enough, it will page it out. Now we have a page on AUX storage. 

If DB2 doesn't need the data on that page, or doesn't need to use that page for 
a different data page, then that page just hangs out on AUX storage. 

I don't know what happens if you close a tablespace. DB2 probably just frees 
its logical use, but doesn't FREEMAIN the storage. 

Bottom line, you basically need enough AUX storage to hold all your buffer 
pools. 

If you have a lot of pages just hanging out in AUX and you don't have any 
demand paging, maybe you have buffer pools a little larger than you need. 

Of course, the DB2 guys want a minimum of reads because a read is much slower 
than a page-in, but that is something for individual shops to work out. 

Joel Goldstein can probably wax poetic on this topic much more than I can. 

Chris Blaicher 
Technical Architect 
Mainframe Development 
Syncsort Incorporated 
2 Blue Hill Plaza #1563, Pearl River, NY 10965 

P: 201-930-8234 | M: 512-627-3803 
E: cblaic...@syncsort.com 

www.syncsort.com 

CONNECTING BIG IRON TO BIG DATA 


-Original Message- 
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Jesse 1 Robinson 
Sent: Tuesday, April 11, 2017 2:47 PM 
To: IBM-MAIN@LISTSERV.UA.EDU 
Subject: Re: Paging subsystems in the era of bigass memory 

The problem we face is 'paging creep'. Right after IPL, systems show 0% ASM 
usage for some period of time. Then usage starts to creep up until we get 
warnings, then eventually hit the no-more-SVC-dumps condition. Adding memory to 
an LPAR slows the creep but cannot seem to stop it altogether. The problem is 
most pronounced on systems with large DB2 apps. 

Part of the problem, I learned some time back at SHARE, is that there is no 
mechanism to 'reclaim' page slots that no longer need to remain on disk. Once 
storage gets paged out, it sits there like a sandbag until the owning task is 
stopped. Contrast that with JES2 spool track reclaim, which constantly munches 
through spool like Pacman and frees up unneeded space. 

. 
. 
J.O.Skip Robinson 
Southern California Edison Company 
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager 
323-715-0595 Mobile 
626-543-6132 Office ⇐=== NEW 
robin...@sce.com 


-Original Message- 
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tom Conley 
Sent: Tuesday, April 11, 2017 11:13 AM 
To: IBM-MAIN@LISTSERV.UA.EDU 
Subject: (External):Re: Paging subsystems in the era of bigass memory 

On 4/11/2017 1:16 PM, van der Grijn, Bart , B wrote: 
> Largest LPARs we have are about 200GB with 6 MOD27 per LPAR. They all run DB2 
> for distributed workloads plus some application specific subsystems. 
> The two busiest of those LPARs each run one DB2 member of the same DB2 data 
> sharing group with a frame occupancy of about 39M. 
> Next to no paging. 
> 
> Bart 
> 

Bart, 

This is what has me puzzled. My two biggest users of AUX, according to TMONMVS, 
are our two DB2 production regions. They're like 90% of what's in the page 
datasets. I have the DB2 sysprog looking at DB2's virtual storage knobs to see 
if we have something misconfigured. 

Thanks, 
Tom Conley 


-- 
For IBM-MAIN subscribe / signoff / archive access instructions, 
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN 

 



ATTENTION: - 

The information contained in this message (including any files transmitted with 
this message) may contain proprietary, trade secret or other confidential 
and/or legally privileged information. Any pricing information contained in 
this message or in any files transmitted with this message is always 
confidential and cannot be shared with any third parties without prior written 
approval from Syncsort. This message is intended to be read only by the 
individual or entity to whom it is addressed or by their designee. If the 
reader of this message is not the in

Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Blaicher, Christopher Y.
It has been a while since I worked on DB2, but it is sounding like your buffer 
pools are too big.

Consider this:
DB2 will read a required page into a 'new' buffer pool page before it will 
invalidate a page it already has in storage. Now we have a physical page in use.

The system periodically comes around and looks at the UIC for a page and if it 
is high enough, it will page it out.  Now we have a page on AUX storage.

If DB2 doesn't need the data on that page, or doesn't need to use that page for 
a different data page, then that page just hangs out on AUX storage.

I don't know what happens if you close a tablespace.  DB2 probably just frees 
its logical use, but doesn't FREEMAIN the storage.

Bottom line, you basically need enough AUX storage to hold all your buffer 
pools.

If you have a lot of pages just hanging out in AUX and you don't have any 
demand paging, maybe you have buffer pools a little larger than you need.

Of course, the DB2 guys want a minimum of reads because a read is much slower 
than a page-in, but that is something for individual shops to work out.

Joel Goldstein can probably wax poetic on this topic much more than I can.

Chris Blaicher
Technical Architect
Mainframe Development
Syncsort Incorporated
2 Blue Hill Plaza #1563, Pearl River, NY 10965

P: 201-930-8234  |  M: 512-627-3803
E: cblaic...@syncsort.com

www.syncsort.com

CONNECTING BIG IRON TO BIG DATA


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Jesse 1 Robinson
Sent: Tuesday, April 11, 2017 2:47 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Paging subsystems in the era of bigass memory

The problem we face is 'paging creep'. Right after IPL, systems show 0% ASM 
usage for some period of time. Then usage starts to creep up until we get 
warnings, then eventually hit the no-more-SVC-dumps condition. Adding memory to 
an LPAR slows the creep but cannot seem to stop it altogether. The problem is 
most pronounced on systems with large DB2 apps.

Part of the problem, I learned some time back at SHARE, is that there is no 
mechanism to 'reclaim' page slots that no longer need to remain on disk. Once 
storage gets paged out, it sits there like a sandbag until the owning task is 
stopped. Contrast that with JES2 spool track reclaim, which constantly munches 
through spool like Pacman and frees up unneeded space.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tom Conley
Sent: Tuesday, April 11, 2017 11:13 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: Paging subsystems in the era of bigass memory

On 4/11/2017 1:16 PM, van der Grijn, Bart , B wrote:
> Largest LPARs we have are about 200GB with 6 MOD27 per LPAR. They all run DB2 
> for distributed workloads plus some application specific subsystems.
> The two busiest of those LPARs each run one DB2 member of the same DB2 data 
> sharing group with a frame occupancy of about 39M.
> Next to no paging.
>
> Bart
>

Bart,

This is what has me puzzled.  My two biggest users of AUX, according to 
TMONMVS, are our two DB2 production regions.  They're like 90% of what's in the 
page datasets.  I have the DB2 sysprog looking at DB2's virtual storage knobs 
to see if we have something misconfigured.

Thanks,
Tom Conley










Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Ed Jaffe

On 4/11/2017 8:54 AM, Doug Henry wrote:


Hi Tom,
We have a pair of SCM cards (in case one fails). Each card has 1.4 terabytes 
of memory.
For an LPAR that has 524GB online, the SCM online is 384G.  As for the cost, 
I don't really know.


ISTR a pair of flash cards is somewhere around $150K.

--
Edward E Jaffe
Phoenix Software International, Inc
831 Parkview Drive North
El Segundo, CA 90245
http://www.phoenixsoftware.com/



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Carmen Vitullo
Right there with you - last IPL of the prod system was Feb 26th, all LOCAL page 
datasets are sitting @ 20%, I was told this was NORMAL for DB2 


- Original Message -

From: "Tom Conley"  
To: IBM-MAIN@LISTSERV.UA.EDU 
Sent: Tuesday, April 11, 2017 1:52:06 PM 
Subject: Re: Paging subsystems in the era of bigass memory 

On 4/11/2017 2:46 PM, Jesse 1 Robinson wrote: 
> The problem we face is 'paging creep'. Right after IPL, systems show 0% ASM 
> usage for some period of time. Then usage starts to creep up until we get 
> warnings, then eventually hit the no-more-SVC-dumps condition. Adding memory 
> to an LPAR slows the creep but cannot seem to stop it altogether. The problem 
> is most pronounced on systems with large DB2 apps. 
> 
> Part of the problem, I learned some time back at SHARE, is that there is no 
> mechanism to 'reclaim' page slots that no longer need to remain on disk. Once 
> storage gets paged out, it sits there like a sandbag until the owning task is 
> stopped. Contrast that with JES2 spool track reclaim, which constantly 
> munches through spool like Pacman and frees up unneeded space. 
> 
> . 
> . 
> J.O.Skip Robinson 
> Southern California Edison Company 
> Electric Dragon Team Paddler 
> SHARE MVS Program Co-Manager 
> 323-715-0595 Mobile 
> 626-543-6132 Office ⇐=== NEW 
> robin...@sce.com 
> 

Preach it Brother Skip!! After IPL, I'm at 0%. A week in, 15%, a month 
in, 44%. If this is indeed a "feature", then we need to create a 
requirement to fix this. 

Tom 





Re: EXTERNAL: Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Jerry Whitteridge
Certainly sounds like requirement time. We see the same issue particularly on 
one of our Dev systems with 14 DB2 members on it.

Jerry Whitteridge
Manager Mainframe Systems & Storage
Albertsons - Safeway Inc.
623 869 5523
Corporate Tieline - 85523

If you feel in control
you just aren't going fast enough.



-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tom Conley
Sent: Tuesday, April 11, 2017 11:52 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: EXTERNAL: Re: Paging subsystems in the era of bigass memory

On 4/11/2017 2:46 PM, Jesse 1 Robinson wrote:
> The problem we face is 'paging creep'. Right after IPL, systems show 0% ASM 
> usage for some period of time. Then usage starts to creep up until we get 
> warnings, then eventually hit the no-more-SVC-dumps condition. Adding memory 
> to an LPAR slows the creep but cannot seem to stop it altogether. The problem 
> is most pronounced on systems with large DB2 apps.
>
> Part of the problem, I learned some time back at SHARE, is that there is no 
> mechanism to 'reclaim' page slots that no longer need to remain on disk. Once 
> storage gets paged out, it sits there like a sandbag until the owning task is 
> stopped. Contrast that with JES2 spool track reclaim, which constantly 
> munches through spool like Pacman and frees up unneeded space.
>
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 323-715-0595 Mobile
> 626-543-6132 Office ⇐=== NEW
> robin...@sce.com
>

Preach it Brother Skip!!  After IPL, I'm at 0%.  A week in, 15%, a month in, 
44%.  If this is indeed a "feature", then we need to create a requirement to 
fix this.

Tom






Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Tom Conley

On 4/11/2017 2:46 PM, Jesse 1 Robinson wrote:

The problem we face is 'paging creep'. Right after IPL, systems show 0% ASM 
usage for some period of time. Then usage starts to creep up until we get 
warnings, then eventually hit the no-more-SVC-dumps condition. Adding memory to 
an LPAR slows the creep but cannot seem to stop it altogether. The problem is 
most pronounced on systems with large DB2 apps.

Part of the problem, I learned some time back at SHARE, is that there is no 
mechanism to 'reclaim' page slots that no longer need to remain on disk. Once 
storage gets paged out, it sits there like a sandbag until the owning task is 
stopped. Contrast that with JES2 spool track reclaim, which constantly munches 
through spool like Pacman and frees up unneeded space.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com



Preach it Brother Skip!!  After IPL, I'm at 0%.  A week in, 15%, a month 
in, 44%.  If this is indeed a "feature", then we need to create a 
requirement to fix this.


Tom



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Jesse 1 Robinson
The problem we face is 'paging creep'. Right after IPL, systems show 0% ASM 
usage for some period of time. Then usage starts to creep up until we get 
warnings, then eventually hit the no-more-SVC-dumps condition. Adding memory to 
an LPAR slows the creep but cannot seem to stop it altogether. The problem is 
most pronounced on systems with large DB2 apps. 

Part of the problem, I learned some time back at SHARE, is that there is no 
mechanism to 'reclaim' page slots that no longer need to remain on disk. Once 
storage gets paged out, it sits there like a sandbag until the owning task is 
stopped. Contrast that with JES2 spool track reclaim, which constantly munches 
through spool like Pacman and frees up unneeded space.
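The 'sandbag' behavior can be sketched with a toy simulation (all rates and
sizes are made up for illustration; this models the contrast with a
spool-style reclaim, not any actual ASM algorithm):

```python
import random

def simulate_creep(days, pageouts_per_day, reclaim_fraction=0.0,
                   total_slots=100_000):
    """Toy model of paging-slot 'creep': slots, once written, stay
    allocated unless a reclaim mechanism (like JES2 spool track
    reclaim) frees some fraction of them each day.  Returns the
    percentage of slots in use at the end."""
    rng = random.Random(0)           # seeded for repeatability
    used = set()
    for _ in range(days):
        # some pages get paged out, each occupying a slot
        for _ in range(pageouts_per_day):
            used.add(rng.randrange(total_slots))
        if reclaim_fraction:
            # a reclaim pass gives back a fraction of the slots
            victims = rng.sample(sorted(used),
                                 int(len(used) * reclaim_fraction))
            used.difference_update(victims)
    return 100.0 * len(used) / total_slots
```

Without reclaim, occupancy only ratchets upward as the days pass (the creep
the posts above describe); with even a modest daily reclaim pass it settles
at a low steady state instead.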

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tom Conley
Sent: Tuesday, April 11, 2017 11:13 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: Paging subsystems in the era of bigass memory

On 4/11/2017 1:16 PM, van der Grijn, Bart , B wrote:
> Largest LPARs we have are about 200GB with 6 MOD27 per LPAR. They all run DB2 
> for distributed workloads plus some application specific subsystems.
> The two busiest of those LPARs each run one DB2 member of the same DB2 data 
> sharing group with a frame occupancy of about 39M.
> Next to no paging.
>
> Bart
>

Bart,

This is what has me puzzled.  My two biggest users of AUX, according to 
TMONMVS, are our two DB2 production regions.  They're like 90% of what's in the 
page datasets.  I have the DB2 sysprog looking at DB2's virtual storage knobs 
to see if we have something misconfigured.

Thanks,
Tom Conley




Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Tom Conley

On 4/11/2017 1:16 PM, van der Grijn, Bart , B wrote:

Largest LPARs we have are about 200GB with 6 MOD27 per LPAR. They all run DB2 
for distributed workloads plus some application specific subsystems.
The two busiest of those LPARs each run one DB2 member of the same DB2 data 
sharing group with a frame occupancy of about 39M.
Next to no paging.

Bart



Bart,

This is what has me puzzled.  My two biggest users of AUX, according to 
TMONMVS, are our two DB2 production regions.  They're like 90% of what's 
in the page datasets.  I have the DB2 sysprog looking at DB2's virtual 
storage knobs to see if we have something misconfigured.


Thanks,
Tom Conley



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread van der Grijn, Bart (B)
Largest LPARs we have are about 200GB with 6 MOD27 per LPAR. They all run DB2 
for distributed workloads plus some application specific subsystems. 
The two busiest of those LPARs each run one DB2 member of the same DB2 data 
sharing group with a frame occupancy of about 39M.
Next to no paging.

Bart

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Pinnacle
Sent: Tuesday, April 11, 2017 10:47 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Paging subsystems in the era of bigass memory

Gone are the halcyon days when we could run an LPAR with three mod-3's 
as the local paging subsystem.  With today's large memory sizes, I'm 
faced with having to completely rethink my paging subsystems.  I've 
currently got a 133GB LPAR with 18 mod-9 locals at 44%.  I'm going to 
add 22 more mod-9's, which will get me just under the 30% threshold.  
That's 40 page datasets, which is about 30 more than the most I've ever 
managed. I'm thinking about going to 10 mod-54's as my final solution 
for this LPAR (roughly 4x the real memory).  I wondered what the rest of 
you are doing with your paging subsystems in the era of bigass memory sizes.

Regards,
Tom Conley

-- 
  



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Allan Staller
Don't forget about PAGTOTL  in SYS1.PARMLIB(IEASYS00).
Default is 40. Extra slots reserve ESQA, so don't be overly generous.

HTH,


I had the total number of pages allocated and available in the current paging 
subsystem, so I calculated the available pages with 22 more mod-9's and then 
divided the allocated pages into the new total pages available.  I plan to do 
PAGEADDs for the new page datasets, and then PAGEDEL and PAGEADD the existing 
datasets to redistribute the pages.






Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Mark Zelden
On Tue, 11 Apr 2017 12:36:10 -0400, Tom Conley  
wrote:

>I had the total number of pages allocated and available in the current
>paging subsystem, so I calculated the available pages with 22 more
>mod-9's and then divided the allocated pages into the new total pages
>available.  I plan to do PAGEADDs for the new page datasets, and then
>PAGEDEL and PAGEADD the existing datasets to redistribute the pages.
>
>Regards,
>Tom Conley
>

Better to use PAGEDEL REPLACE for the ones that will be replaced. Then
maybe wait for an IPL if you can to eliminate the others after changing
IEASYSxx.  See the warning in the MVS commands manual.  Yes, I ran
into that problem many years ago when going from mod-3 to mod-9
local page data sets. 

Regards,

Mark
--
Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS
ITIL v3 Foundation Certified
mailto:m...@mzelden.com
Mark's MVS Utilities: http://www.mzelden.com/mvsutil.html
Systems Programming expert at http://search390.techtarget.com/ateExperts/
--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Tom Conley

On 4/11/2017 12:03 PM, Lizette Koehler wrote:

Tom

How did you determine you needed 22 more mod-9s?  Is there a formula you used?

Lizette




Hi Lizette,

I had the total number of pages allocated and available in the current 
paging subsystem, so I calculated the available pages with 22 more 
mod-9's and then divided the allocated pages into the new total pages 
available.  I plan to do PAGEADDs for the new page datasets, and then 
PAGEDEL and PAGEADD the existing datasets to redistribute the pages.
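
As a back-of-the-envelope sketch of that arithmetic (the slot counts assume
the standard 3390 geometry of 12 4K slots per track and 15 tracks per
cylinder; the 44% figure is the utilization quoted earlier in the thread,
and the result is illustrative rather than the installation's actual
numbers, which would use the real allocated/available page counts):

```python
# Rough paging-capacity math for 3390-9 local page data sets.
# Geometry assumption: 15 tracks/cylinder, 12 4K page slots/track.
SLOTS_PER_CYL = 15 * 12            # 180 slots per cylinder
MOD9_CYLS = 10_017                 # cylinders on a 3390-9

mod9_slots = MOD9_CYLS * SLOTS_PER_CYL          # ~1.8M 4K slots per mod-9

current = 18                                    # existing mod-9 locals
in_use = round(current * mod9_slots * 0.44)     # 44% utilized today

new_total = (current + 22) * mod9_slots         # after adding 22 more
print(f"utilization after PAGEADD: {in_use / new_total:.1%}")
```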


Regards,
Tom Conley



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Mark Zelden
On Tue, 11 Apr 2017 10:46:40 -0400, Pinnacle  wrote:

>Gone are the halcyon days when we could run an LPAR with three mod-3's
>as the local paging subsystem.  With today's large memory sizes, I'm
>faced with having to completely rethink my paging subsystems.  I've
>currently got a 133GB LPAR with 18 mod-9 locals at 44%.  I'm going to
>add 22 more mod-9's, which will get me just under the 30% threshold.
>That's 40 page datasets, which is about 30 more than the most I've ever
>managed. I'm thinking about going to 10 mod-54's as my final solution
>for this LPAR (roughly 4x the real memory).  I wondered what the rest of
>you are doing with your paging subsystems in the era of bigass memory sizes.
>
>Regards,
>Tom Conley
>

The 3 largest LPARs in terms of memory in one of the sysplexes I support are
a 286G LPAR running dev WebSphere and 2 prod WAS LPARs, each with 230G.  I know
there are not enough locals to cover those 64-bit workloads if they ever used
their full potential - but then again, I don't have that much DASD in the
entire DASD farm.   We try not to page on those prod LPARs, so there is
enough real storage to support that and the local page DS usage is
near 0%.  There are 9 3390-27 locals on those 2 prod LPARs.  OTOH, the
dev LPAR with 286G has a larger virtual storage requirement due to 
so many dev WAS regions.  There are 16 mod-27s for locals and right
now the usage is 6%.   Other LPARs in the sysplex running CICS, DB2,
MQ and most of the batch have an average of about 80G of real storage
and 7-10 locals, all 3390 mod-27s.  

There is no way you could possibly plan for worst-case scenarios with
64-bit.  I think you have to look at what your actual workloads are and what
happens during "bad" events (like multiple SVC dumps), and have extra
storage available for that so paging doesn't go through the roof. (I wish
we had flash in our z13s for SVC dumps, but we don't.)  

We do keep page DS usage at 30% or lower per the old ROT, and I still
see that ROT cited.   But honestly, I don't get it... with mod-27s and
mod-54s for locals, how could ASM not find enough contiguous slots
to be efficient even if the page DS is 50% utilized?  
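
For scale, the 4K slot capacity behind that question works out roughly as
follows (assuming the standard 3390 geometry of 12 slots per track and 15
tracks per cylinder, and the usual model cylinder counts):

```python
# Approximate 4K page-slot capacity of common 3390 models.
SLOTS_PER_TRACK = 12               # 4K records per 3390 track
TRACKS_PER_CYL = 15
CYLS = {"mod-3": 3_339, "mod-9": 10_017, "mod-27": 32_760, "mod-54": 65_520}

for model, cyls in CYLS.items():
    slots = cyls * TRACKS_PER_CYL * SLOTS_PER_TRACK
    gib = slots * 4096 / 2**30
    print(f"{model}: {slots:,} slots (~{gib:.1f} GiB)")
```

Even at 50% utilized, a single mod-54 would still have millions of free
slots, which is the point being made about the old 30% rule of thumb.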


Best Regards,

Mark
--
Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS
ITIL v3 Foundation Certified
mailto:m...@mzelden.com
Mark's MVS Utilities: http://www.mzelden.com/mvsutil.html
Systems Programming expert at http://search390.techtarget.com/ateExperts/


Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Lizette Koehler
Tom

How did you determine you needed 22 more mod-9s?  Is there a formula you used?

Lizette


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Pinnacle
> Sent: Tuesday, April 11, 2017 7:47 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Paging subsystems in the era of bigass memory
> 
> Gone are the halcyon days when we could run an LPAR with three mod-3's as the
> local paging subsystem.  With today's large memory sizes, I'm faced with
> having to completely rethink my paging subsystems.  I've currently got a 133GB
> LPAR with 18 mod-9 locals at 44%.  I'm going to add 22 more mod-9's, which
> will get me just under the 30% threshold.
> That's 40 page datasets, which is about 30 more than the most I've ever
> managed. I'm thinking about going to 10 mod-54's as my final solution for this
> LPAR (roughly 4x the real memory).  I wondered what the rest of you are doing
> with your paging subsystems in the era of bigass memory sizes.
> 
> Regards,
> Tom Conley
> 
> --
> 
> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send email to
> lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Doug Henry
On Tue, 11 Apr 2017 11:31:32 -0400, Tom Conley  
wrote:

>On 4/11/2017 11:20 AM, Doug Henry wrote:
>> On Tue, 11 Apr 2017 10:46:40 -0400, Pinnacle  
>> wrote:
>>
>>  I wondered what the rest of
>>> you are doing with your paging subsystems in the era of bigass memory sizes.
>>>
>>> Regards,
>>> Tom Conley
>> Hi Tom,
>> We use Storage Class Memory to solve the problem.
>>
>> Doug Henry
>> USBANK
>>
>
>Hey Doug,
>
>Oh, a wiseguy (nyuk nyuk).  How much real memory and how much SCM?  And
>is the price point for SCM competitive with a similar amount of DASD?
>It might be if you go with the old 2-3X real (e.g. 4-6TB of DASD just to
>back 2TB real).
>
>Regards,
>Tom Conley
>
Hi Tom,
We have a pair of SCM cards (in case one fails). Each card has 1.4 terabytes 
of memory.
One LPAR has 524GB of real storage online and 384G of SCM online.  As for the cost, 
I don't really know.
Doug



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Tom Conley

On 4/11/2017 11:20 AM, Doug Henry wrote:

On Tue, 11 Apr 2017 10:46:40 -0400, Pinnacle  wrote:

 I wondered what the rest of

you are doing with your paging subsystems in the era of bigass memory sizes.

Regards,
Tom Conley

Hi Tom,
We use Storage Class Memory to solve the problem.

Doug Henry
USBANK



Hey Doug,

Oh, a wiseguy (nyuk nyuk).  How much real memory and how much SCM?  And 
is the price point for SCM competitive with a similar amount of DASD? 
It might be if you go with the old 2-3X real (e.g. 4-6TB of DASD just to 
back 2TB real).


Regards,
Tom Conley



Re: Paging subsystems in the era of bigass memory

2017-04-11 Thread Doug Henry
On Tue, 11 Apr 2017 10:46:40 -0400, Pinnacle  wrote:

 I wondered what the rest of
>you are doing with your paging subsystems in the era of bigass memory sizes.
>
>Regards,
>Tom Conley
Hi Tom,
We use Storage Class Memory to solve the problem.

Doug Henry
USBANK

