Re: Very Large Page Datasets (was ASM and HiperPAV)

2012-02-18 Thread Walter Medenbach
Two years back we had an issue where the throughput of a database utility
dropped considerably (the job had to be cancelled because it was too slow)
because the utility decided not to use more memory for its processing. This
was found to be because the page datasets were more than 50% utilized. The
notes below are from my analysis of the problem at the time.

Some software will believe that the system is storage constrained if the
page datasets are at or near 50% full. The affected software uses the
SYSEVENT STGTST macro to determine how much storage it can use without
affecting system paging performance. The returned values are used to set
maximum sizes for such things as in-memory work areas. If these are small,
the programs will be inefficient, use significantly more CPU and do more
I/Os. This should be considered when determining page dataset size.

SYSEVENT STGTST behaviour, from the Authorized Assembler Services Reference
manual:

After SRM returns, each word contains a storage amount that represents a
specific number of frames. Before you choose a number to use as the basis
for a decision, be aware of how your decision affects the performance of
the system. The meaning of the returned values is:
- Use of the first number will affect system performance very little, if
  at all.
- Use of the second number might affect system performance to some degree.
- Use of the third number might substantially affect system performance.

If you base decisions on the value in the second or third word, SRM may
have to take processor storage away from other programs and replace it with
auxiliary storage.
If you are running your system in workload management goal mode, the value
returned in the third word will always be the same as the value returned in
the second word.

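To make the sizing decision concrete, here is a rough Perl sketch of the
kind of logic such software applies to the returned words (illustration
only; the real interface is the authorized SYSEVENT STGTST assembler macro,
which returns the three frame counts in a parameter area):

  # Illustration only, not the utility's actual code.  Turn a STGTST
  # frame count into a work-area ceiling; z/OS frames are 4 KB each.
  sub work_area_cap_bytes {
      my ($word1, $word2) = @_;                       # frame counts
      my $frames = $word2 > $word1 ? $word2 : $word1; # never below word 1
      return $frames * 4096;                          # frames -> bytes
  }
  # A cautious program sizes to word 1 alone; a utility willing to affect
  # paging "to some degree" sizes to word 2.  When word 2 is no larger
  # than word 1 (see the APAR note below), the ceiling stays small and
  # the program compensates with more CPU time and I/O.
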
The note in APAR OA20116 throws some light on how the second word returned
from SYSEVENT STGTST is calculated:

Problem conclusion
The logic in IRAEVREQ was changed in a way that the maximum of
return value 1 and return value 2 is used as return value 2
and return value 3.

Additionally, the logic in IRAEVREQ ensures that the values
returned in words 2 and 3 do not drive the system into an
auxiliary storage shortage.  The values returned in words 2 and 3
will only fill the AUX subsystem up to 50%.

Example:
If the AUX subsystem is filled to 25% and return value 1 contains 1000
frames, then return values 2 and 3 are set to 1000 frames + 25% of the
AUX slots.

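Purely as an illustration of that arithmetic, a small Perl sketch of how
the APAR text reads to me (my interpretation, not IBM's actual IRAEVREQ
logic):

  # Words 2 and 3 may only fill the AUX subsystem to 50%, and are never
  # smaller than word 1 (the max() change described in the APAR).
  sub stgtst_words_2_and_3 {
      my ($word1_frames, $aux_total_slots, $aux_used_slots) = @_;
      my $headroom = int(0.5 * $aux_total_slots) - $aux_used_slots;
      $headroom = 0 if $headroom < 0;   # AUX already 50% full or more
      return $word1_frames + $headroom;
  }
  # APAR example: AUX 25% full, word 1 = 1000 frames
  #   => words 2 and 3 = 1000 + 25% of the AUX slots
  # Once AUX passes 50% full the headroom is zero, words 2 and 3 collapse
  # to word 1, and you get exactly the behaviour the database utility hit.
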
Walter Medenbach

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: LPAR with Dedicated CPs - LP Dispatch Time does not equal LP Online Time

2011-08-19 Thread Walter Medenbach
Interesting observation.

The documentation for the type 70 subtype 1 SMF record describes the
SMF70INB (PR/SM indicator bits) value 3 as: "An additional partition, that
is not included in the count of configured partitions, is presented with a
name of “PHYSICAL”. This partition includes all of the uncaptured time that
was used by the LPAR management time support feature but could not be
attributed to a specific logical partition."

The difference between CPU_DISPATCH_SEC and CPU_EFF_DISP_SEC is the PR/SM
overhead directly attributed to that LPAR. I suspect that the difference
between LP_ONLINE_SEC and CPU_DISPATCH_SEC is part of the uncaptured time
seen in the PHYSICAL partition. I recommend displaying the values for the
PHYSICAL LPAR to confirm.

For those unfamiliar with TDS, the manual has the following descriptions:

LP_ONLINE_SEC  Total logical processor online seconds for this LPAR.
Calculated as the sum of SMF70ONT/100.

CPU_DISPATCH_SEC Logical processor dispatch time, in seconds. Calculated
as the sum of SMF70PDT/1 000 000.

CPU_EFF_DISP_SEC Logical processor effective dispatch time (excluding LPAR
management time), in seconds. Calculated as the sum of
SMF70EDT/1 000 000.

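A quick Perl sketch of that arithmetic, using the figures from the example
below (the CPU_EFF_DISP_SEC value is made up for illustration):

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Per the TDS definitions above: LP_ONLINE_SEC = sum(SMF70ONT)/100,
  # CPU_DISPATCH_SEC = sum(SMF70PDT)/1e6, CPU_EFF_DISP_SEC = sum(SMF70EDT)/1e6
  my $lp_online_sec    = 14399.994;   # from the example below
  my $cpu_dispatch_sec = 14399.695;   # from the example below
  my $cpu_eff_disp_sec = 14399.500;   # hypothetical, for illustration

  printf "PR/SM time charged to this LPAR : %.3f sec\n",
         $cpu_dispatch_sec - $cpu_eff_disp_sec;
  printf "Online but never dispatched     : %.3f sec\n",
         $lp_online_sec - $cpu_dispatch_sec;   # the 0.299 sec in question
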
Walter Medenbach


On Thu, Aug 18, 2011 at 2:50 AM, Rick Mansfeldt rmans...@bellsouth.net wrote:

 I'm looking at Tivoli Decision Support data for an LPAR defined with 8
 dedicated CPs.  Specifically, I'm using data from the MVSPM_LPAR_H table and
 comparing columns:

 LP_ONLINE_SEC (Total logical processor online seconds for this LPAR)
  and

 CPU_DISPATCH_SEC (Logical processor dispatch time, in seconds)

 I would expect the values to be equal for an LPAR with dedicated engines;
 however, I'm seeing a small delta between the two values.

 Example:

 LP_ONLINE_SEC = 14399.994 secs and CPU_DISPATCH_SEC = 14399.695
 secs  (delta = 0.299 secs)

 Any idea what the 0.299 secs represents?



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Omon/Epilog question

2011-05-13 Thread Walter Medenbach
The same commands can be used in batch mode.

Walter Medenbach
Performance Analyst
IBM Australia

On Thu, May 12, 2011 at 2:44 PM, Cobe Xu cob...@gmail.com wrote:

 Hi list,

 How do I use archived EDSs (as opposed to the active ones in the EDSLIST)
 when invoking EPILOG in batch mode?
 In panel mode, I know the DAT ADD and DAT USE commands can help.
 Many thanks!

 --
 Cobe Xu

 Best Regards
 ---
 z/OS Performance & Capacity Analyst
 z/OS System Programmer
 Email: cob...@gmail.com
 ---
 *Impart fishing is much better than just donate fishes*



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Dummy LPAR to store excess MIPS

2011-03-11 Thread Walter Medenbach
Defined capacity capping uses a rolling 4-hour average. Usage before the
cap kicks in can therefore exceed some license agreements. We have
successfully used a dummy coupling facility soaker LPAR. The ICF must have
dynamic dispatch set to OFF to ensure that it goes into a CPU loop. The
amount of soak is controlled by capping the ICF LPAR and adjusting the
weight as required.

Walter Medenbach

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: replacing SAS for SMF reports?

2008-06-19 Thread Walter Medenbach
Perl provides both data manipulation and statistical functions and, off the
mainframe, is often used to manipulate data before processing with R. It is
part of the ported tools for USS
(www-03.ibm.com/servers/eserver/zseries/zos/unix/perl/index.html). It is
nowhere near as fast as SAS, but it is free and has a wide user base.

Walter Medenbach

On Tue, Jun 17, 2008 at 12:51 AM, McKown, John 
[EMAIL PROTECTED] wrote:

 Well, it looks like SAS is pricing itself out of our range. Or
 management just doesn't think that we are getting our money's worth or
 ...

 Anyway, other than using HLASM or maybe (shudder) COBOL, anybody have
 any suggestions on how to easily do some ad hoc SMF reporting? What
 would be really nice would be some sort of SMF-to-XML output program. I
 really like the IRRADU00 output (RACF SMF data translated to XML). I
 download that to my PC and run Java against it. If necessary, I could
 even develop and test the Java code on my PC and run the application on
 the mainframe once it is working. (or use Co:Z to ship the XML to my
 Linux system and run the code there with the response going back to the
 mainframe).

 --
 John McKown
 Senior Systems Programmer
 HealthMarkets
 Keeping the Promise of Affordable Coverage
 Administrative Services Group
 Information Technology





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: replacing SAS for SMF reports?

2008-06-19 Thread Walter Medenbach
Agree entirely. The biggest stumbling block is having up-to-date templates
for all the SMF records. I have often thought that that would make a great
open source project.

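Getting at the raw records is not the hard part. A minimal Perl sketch,
assuming the SMF data has been dumped with IFASMFDP and transferred in
binary with the RDWs intact, might look like this:

  #!/usr/bin/perl
  # Count SMF records by type.  Sketch only: assumes variable-length
  # records, each preceded by a 4-byte RDW whose length includes the RDW.
  use strict;
  use warnings;

  my %count;
  open my $fh, '<:raw', $ARGV[0] or die "open $ARGV[0]: $!";
  my ($rdw, $rec);
  while (read($fh, $rdw, 4) == 4) {
      my ($len) = unpack 'n', $rdw;                # record length incl. RDW
      last if $len <= 4 || read($fh, $rec, $len - 4) != $len - 4;
      my $type = unpack 'C', substr($rec, 1, 1);   # SMF record type byte
      $count{$type}++;
  }
  printf "type %3d : %d\n", $_, $count{$_} for sort { $a <=> $b } keys %count;

Mapping each record type's sections (the offset/length/count triplets) is
where the templates come in, and that is the piece an open source project
would need to maintain.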

Walter Medenbach.

On Fri, Jun 20, 2008 at 10:19 AM, Ted MacNEIL [EMAIL PROTECTED] wrote:

 Perl provides both data manipulation and statistical functions and, off
 the mainframe, is used to manipulate data before processing with R.

 Not to belittle your response, but is there a body of code to read SMF
 data?
 The issue is not the statistical/reporting capability, rather the ability
 to read the raw data.
 There are many packages better than SAS (and I say that as a SAS bigot)
 for reporting and analysis, but how many can read the raw data?

 -
 Too busy driving to stop for gas!




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: How to limit SAS in a PLEX

2008-03-13 Thread Walter Medenbach
To direct SAS usage to a licensed machine in the sysplex we created a SAS
WLM scheduling environment. To reduce JCL changes, JES exit 6 added this
scheduling environment if a step explicitly invoked SAS.

Walter Medenbach

On Wed, Mar 12, 2008 at 4:59 AM, Lizette Koehler [EMAIL PROTECTED]
wrote:

 I need some suggestions on how to limit where a program (in this instance
 SAS) runs.  I know I can write a program that would test for the LPAR name
 and then either link to SAS or fail the request.  But what are my other
 options?  We are at z/OS V1.7.

 REXX?
 WLM?
 OTHER?

 We do not have ThruPut Manager.  So I need to keep it simple and spend no
 money.

 Lizette



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Soft Capping

2008-02-13 Thread Walter Medenbach
Nothing wrong with restricting capacity to save on licensing costs. We use a
coupling facility LPAR with dynamic dispatch disabled to control the amount
of CPU available to the other LPARs on the machine. The z/OS LPARs are all
uncapped to allow each LPAR to use any unused capacity.

Walter Medenbach

On Feb 9, 2008 5:23 AM, Kelman, Tom [EMAIL PROTECTED] wrote:

 This has been cross posted to the MXG list.



 At the end of last year I was asked to determine what would be practical
 soft caps for our LPARs.  We are currently at z/OS v1.7 so we can't cap
 at the CEC level yet.  Of course, the reason for the cap was to keep
 software costs down, especially for the OEM software.  We have an
 agreement with most of our vendors that we won't use more than 110 MSUs
 out of our 138 MSU box.  My direction was that we didn't want the total
 MSU 4HRA to go above 110.  So I split 110 between the three LPARs we
 have as equitably as I could based on a year's worth of analysis.  This
 split came to 90 MSUs for production, 19 MSUs for development, and 1 MSU
 for the sysprog sandbox.  We have just hit the cap on development and
 they are screaming bloody murder because there are some deadlines to meet
 for a conversion.  So they want to increase the cap on development.  My
 suggestions were to take some from production and give it to development,
 or to just increase development since the high 4HRAs for the individual
 LPARs never occur at the same time.



 Now, after that explanation, my question.  Has anybody had to cap their
 machine by LPAR like this?  If so, did you ensure that the individual
 caps would add up to some sort of specified limit, or did you set them a
 little higher, realizing that they probably wouldn't hit the max all in
 the same day?  Of course, taking the second route will leave open the
 possibility of going over the CEC 4HRA that you want (in our case 110
 MSUs).  I'd appreciate any ideas anyone has on determining these caps.



 Please don't say that you're not in favor of capping to keep software
 costs down.  Neither am I, so you'd be preaching to the choir.  However,
 as we all know, after all the recommendations you do what you're told to
 do.



 Tom Kelman

 Commerce Bank of Kansas City

 (816) 760-7632









--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Sub-CEC Reports

2008-01-10 Thread Walter Medenbach
Use email if that is acceptable. http://planetmvs.com/mvsmail/ shows how. I
suspect that a digital signature could also be attached.

regards...Walter

On Jan 11, 2008 6:22 AM, Dean Montevago [EMAIL PROTECTED] wrote:

 We write the output to a file; can we just FTP it? Will they accept it
 like that?

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
 Behalf Of Gary Green
 Sent: Thursday, January 10, 2008 2:02 PM
 To: IBM-MAIN@BAMA.UA.EDU
 Subject: Re: Sub-CEC Reports


 Automatically sending it is the easy part.  We need to have the report
 signed by a manager to authenticate it so we wind up faxing it.

 Unless, of course, you mean extracting that single page from the mass of
 output and just sending that.

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
 Behalf Of Dean Montevago
 Sent: Thursday, January 10, 2008 10:49 AM
 To: IBM-MAIN@BAMA.UA.EDU
 Subject: Sub-CEC Reports

 Hi,

 Has anybody automated sending the report to IBM? If so, how'd you do it?

 TIA
 Dean

 Dean Montevago
 Sr. Systems Specialist
 Visiting Nurse Service of New York
 (212) 609 - 9608
 [EMAIL PROTECTED]












--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Inaccurate CPU% reported by RMF and TMON

2008-01-07 Thread Walter Medenbach
Sounds like uncaptured CPU, and therefore not an error in RMF or TMON.
Uncaptured CPU is CPU time that has not been attributed to a particular
address space. Your capture ratio appears low. Find out when the problem
started and whether it occurs 24x7. Look for such things as SLIP traps or
high paging.

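To put a number on it, a quick Perl sketch of the capture ratio arithmetic,
using the figures from the question quoted below:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Capture ratio = CPU accounted to address spaces / total CPU busy.
  my $mvs_busy_pct  = 72;   # total CPU busy reported by RMF/TMON
  my $accounted_pct = 40;   # sum of job CPU% (or APPL% / number of CPs)
  my $capture_ratio = $accounted_pct / $mvs_busy_pct;
  printf "capture ratio %.2f, uncaptured %.0f%% of busy time\n",
         $capture_ratio, 100 * (1 - $capture_ratio);
  # => about 0.56 here, which is why it looks low
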
Regards...Walter

On Jan 8, 2008 3:14 PM, Jason To [EMAIL PROTECTED] wrote:

 We encountered a weird problem last week and discovered that the total
 MVS CPU busy percentage reported by both RMF and TMON was inaccurate.
 The reported MVS CPU percentage does not match the total CPU% used by
 the jobs running in the system, at least in one LPAR; the other LPAR
 seems to be fine. For example, the reported total CPU% was 72% for an
 interval, but only 40% when we add up the CPU% of all the jobs, a
 disparity of about 30%. From the WLM activity report, taking the total
 APPL% and dividing it by the number of assigned CPs also produced a
 result of 40+%. Hence, the MVS CPU percentage should have been 40+%.
 Has anyone out there encountered this problem before? Any reported fix
 to resolve it? By the way, we are still at z/OS V1.4, running in a
 sysplex.

 Regards,
 Jason



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html