Re: REDBOOKS Question

2010-11-07 Thread Terry Draper
Redbooks, once published, are not updated. There are no dash numbers or TNLs.
A new Redbook on the same or similar subject gets a completely new number. 
These are only produced when a major change to the subject requires it and the lab 
that develops the product wants a new Redbook.
Errors should be trapped at the draft stage, before publication. If not, they 
will remain for the life of the Redbook.
I have been involved in several Redbooks. I always say the product manual is 
the definitive source of information on the subject.

Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +66 811431287

--- On Sun, 7/11/10, Ed Gould ps2...@yahoo.com wrote:


From: Ed Gould ps2...@yahoo.com
Subject: REDBOOKS Question
To: IBM-MAIN@bama.ua.edu
Date: Sunday, 7 November, 2010, 4:05


I forgot the answer to this question long ago and I am not coming up 
with any answer to my own question, so I will pose it to the group and see if 
anyone here knows the answer. I am trying to remember how IBM handles it if 
there is an error (poor writing, or whatever kind of error) in a REDBOOK.
Are they quietly updated, and if so, is there a new dash number or is a 
completely new number assigned, or how does IBM handle updates to any REDBOOKS? 
I know there are errors; a long time ago (30+ years) I used to see them fixed with 
TNLs (we all remember those, right?), but I do not remember how IBM handles them 
currently. Anyone?




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: CPU capping is not working for one Lpar only on CEC?

2010-11-04 Thread Terry Draper
I am not sure what type of capping you are trying to use. 
 
If you want to use hard capping, then this will use the weights and it will give 
you 160/(160+10) of the shared capacity. This will not be exact. If you look at the PR/SM 
manual, it is usually within 1% but can be more than 3% out.
 
You talk about MSUs, which is soft capping, and this uses the rolling 4-hour 
average. So until your 4-hour average goes over the defined capacity you can use more 
than this number of MSUs.
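As a rough sketch of that rolling-average behaviour (the 5-minute interval length and the MSU numbers here are invented for illustration; this is not the exact WLM algorithm):

# Toy model of soft capping: WLM compares the rolling 4-hour average of MSU
# consumption against the defined capacity, so a short burst above the
# definition is allowed as long as the 4-hour average stays below it.
from collections import deque

def rolling_4h_average(msu_per_interval, intervals_per_4h=48):
    """Yield the rolling 4-hour average after each 5-minute interval."""
    window = deque(maxlen=intervals_per_4h)
    for msu in msu_per_interval:
        window.append(msu)
        yield sum(window) / len(window)

defined_capacity = 24                     # MSU
usage = [20] * 45 + [30, 30, 30]          # quiet for ~3h45m, then a 15-minute burst
avg = list(rolling_4h_average(usage))[-1]
print(f"burst intervals run at 30 MSU, 4-hour average is {avg:.1f} MSU "
      f"-> {'capped' if avg > defined_capacity else 'not capped yet'}")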
 
 You cannot use both hard and soft capping.
 
Looking at the report, it shows zero for your defined capacity MSUs, so it looks 
like you are not using soft capping.
 
You have hardware capping set to YES.
 
What are you trying to do? If you are managing software costs, then the MSU 
4-hour rolling average is appropriate. If you want a maximum capacity for a 
partition at all times, then you need hard capping. But remember there is 
a small percentage error on this.
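For what it is worth, the weight arithmetic is easy to check. A small sketch using the figures from your report (the 26 MSU CEC capacity is the number you quoted, not something I have verified):

# Weight-based (hard) capping share: each capped LPAR is limited to roughly
# weight / (sum of weights of the active shared partitions) of the shared CPs.
cec_msu = 26
weights = {"SYS2": 160, "SYS6": 10}
total   = sum(weights.values())

for lpar, wgt in weights.items():
    share = wgt / total
    print(f"{lpar}: {share:6.1%} of the CEC, roughly {cec_msu * share:4.1f} MSU")

# SYS2 works out to about 94.1% of the CEC, i.e. about 24.5 MSU.  Given that
# PR/SM only enforces the cap to within a few percent, an RMF interval showing
# 25 MSU for SYS2 is consistent with this weight-based cap.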
 
Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +66 811431287

--- On Wed, 3/11/10, Cobe Xu cob...@gmail.com wrote:


From: Cobe Xu cob...@gmail.com
Subject: CPU capping is not working for one Lpar only on CEC?
To: IBM-MAIN@bama.ua.edu
Date: Wednesday, 3 November, 2010, 9:59


Hi list,
We aim to cap the only active LPAR on the CEC (26 MSU) to 24 MSU.
But I'm a bit confused when I check the RMF CPU Activity report below,
which shows that within the interval, SYS2 was able to use up to 25 MSU
(highlighted).
So my questions are:
1. Is this because CPU capping does not work when there is only one active LPAR on
the CEC? If that's the case, any reference?
2. Or is this related to the WEIGHT value we used to cap? In our case, we
reckon our CEC capacity at 26 MSU (about 171 MIPS), and the target is to cap at 24 MSU
(160 MIPS). But, for the client's sake, we used the MIPS value as the WEIGHT, i.e. 160 (as
highlighted in the report). Did this mislead the LPAR scheduler into thinking the weight is
over 100% of the CEC, so that SYS2 can use as much as it needs?
3. Or is there any other possibility?
Please shed some light, thanks a lot!



                                                                                              PAGE 2
           z/OS V1R8                SYSTEM ID SYS2             START 08/17/2010-03.00.00   INTERVAL 000.59.59
                                    RPT VERSION V1R8 RMF       END   08/17/2010-04.00.00   CYCLE 0.100 SECONDS


MVS PARTITION NAME                   SYS2     NUMBER OF PHYSICAL PROCESSORS     4     GROUP NAME    N/A
IMAGE CAPACITY                         24       CP                              2     LIMIT         N/A
NUMBER OF CONFIGURED PARTITIONS         5       ICF                             2
WAIT COMPLETION                        NO
DISPATCH INTERVAL                 DYNAMIC

- PARTITION DATA -                 -- LOGICAL PARTITION PROCESSOR DATA --        -- AVERAGE PROCESSOR UTILIZATION PERCENTAGES --
                MSU   -CAPPING--  PROCESSOR-  ---- DISPATCH TIME DATA ----   LOGICAL PROCESSORS   --- PHYSICAL PROCESSORS ---
NAME     S  WGT DEF ACT DEF WLM%   NUM TYPE    EFFECTIVE      TOTAL           EFFECTIVE   TOTAL   LPAR MGMT  EFFECTIVE  TOTAL
SYS2     A  160   0  25 YES  0.0     2  CP    01.55.49.238   01.55.53.278         96.52   96.57        0.06      96.52  96.57
SYS6     A   10   0   0 YES  0.0     2  CP    00.00.00.000   00.00.00.000          0.00    0.00        0.00       0.00   0.00
*PHYSICAL*                                                   00.00.01.367                              0.02              0.02
                                              ------------   ------------                         ---------  ---------  -----
  TOTAL                                       01.55.49.238   01.55.54.645                              0.08      96.52  96.59

CFP01AH2 A  DED                      1  ICF   00.59.59.655   00.59.59.725         99.99   99.99        0.00      50.00  50.00
CFP02AH2 A  DED                      1  ICF   00.59.59.666   00.59.59.708         99.99   99.99        0.00      50.00  50.00
*PHYSICAL*                                                   00.00.00.422                              0.01              0.01
                                              ------------   ------------                         ---------  ---------  -----
  TOTAL                                       01.59.59.321   01.59.59.855                              0.01      99.99  100.0


SYS8
-- 
Cobe Xu

Best Regards
---
zOS Performance  Capacity Analyst
E2E Performance Analyst
Email: cob...@gmail.com
---

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: New(ish) redbook on z196

2010-11-01 Thread Terry Draper
The July copy was a draft version.
The final version was published on 15th October.
I suggest you download the final version.   


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +66 811431287

--- On Mon, 1/11/10, Scott Rowe scott.r...@joann.com wrote:


From: Scott Rowe scott.r...@joann.com
Subject: Re: New(ish) redbook on z196
To: IBM-MAIN@bama.ua.edu
Date: Monday, 1 November, 2010, 20:30


I don't know if it's been posted before, but I downloaded the pdf on 7/23.

On Mon, Nov 1, 2010 at 2:55 PM, McKown, John
john.mck...@healthmarkets.comwrote:

 Sorry if I'm reposting something that was already posted.


 http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/sg247833.html?Open

 quote
 The zEnterprise System consists of the IBM zEnterprise 196 central
 processor complex, IBM zEnterprise Unified Resource Manager, and IBM
 zEnterprise BladeCenter(r) Extension. The z196 is designed with improved
 scalability, performance, security, resiliency, availability, and
 virtualization. The z196 Model M80 provides up to 1.6 times the total system
 capacity of the z10 EC Model E64, and all z196 models provide up to twice
 the available memory of the z10 EC. The zBX infrastructure works with the
 z196 to enhance System z virtualization and management through an integrated
 hardware platform that spans mainframe and POWER7 technologies. Through the
 Unified Resource Manager, the zEnterprise System is managed as a single pool
 of resources, integrating system and workload management across the
 environment.

 This book provides an overview of the zEnterprise System and its functions,
 features, and associated software support. Greater detail is offered in
 areas relevant to technical planning. This book is intended for systems
 engineers, consultants, planners, and anyone wanting to understand the
 zEnterprise System functions and plan for their usage. It is not intended as
 an introduction to mainframes. Readers are expected to be generally familiar
 with existing IBM System z technology and terminology.
 /quote

 John McKown
 Systems Engineer IV
 IT

 Administrative Services Group

 HealthMarkets(r)

 9151 Boulevard 26 * N. Richland Hills * TX 76010
 (817) 255-3225 phone * (817)-691-6183 cell
 john.mck...@healthmarkets.com * www.HealthMarkets.com

 Confidentiality Notice: This e-mail message may contain confidential or
 proprietary information. If you are not the intended recipient, please
 contact the sender by reply e-mail and destroy all copies of the original
 message. HealthMarkets(r) is the brand name for products underwritten and
 issued by the insurance subsidiaries of HealthMarkets, Inc. -The Chesapeake
 Life Insurance Company(r), Mid-West National Life Insurance Company of
 TennesseeSM and The MEGA Life and Health Insurance Company.SM


 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html


CONFIDENTIALITY/EMAIL NOTICE: The material in this transmission contains
confidential and privileged information intended only for the addressee.
If you are not the intended recipient, please be advised that you have
received this material in error and that any forwarding, copying, printing,
distribution, use or disclosure of the material is strictly prohibited.
If you have received this material in error, please (i) do not read it,
(ii) reply to the sender that you received the message in error, and
(iii) erase or destroy the material. Emails are not secure and can be
intercepted, amended, lost or destroyed, or contain viruses. You are deemed
to have accepted these risks if you communicate with us by email. Thank you.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Storage usage in a job

2010-10-20 Thread Terry Draper
Phil,
    You do not say what kind of batch job it is. Is it plain QSAM/VSAM or is 
it DB2?
 
    The CPU percentage (CPU time / elapsed time) is over 70%. This is very high 
for a non-DB2 job. Even for DB2 it would have to find most of the data already in 
the buffer pools.
 
    The failure is insufficient storage to load a module. The module name is 
a sample module for LE (I searched Google for it), so it is probably not big. 
   
   Also, you have paging at 288K according to your joblog. I would assume your 
working set is large.
 
   I would not expect it to be a real storage problem. It is almost certainly a 
virtual storage problem in the job's region. 
 
   Try increasing your region size.
 
   Also, are you sure you are not looping on the same record - hence the high 
CPU percentage?
 
   Maybe you build an in-memory table and the larger volumes are blowing the 
region.
 
   I would recommend fixing both the virtual storage AND the high CPU percentage. 
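
As a quick check of that percentage, using the TCB and elapsed times from the joblog you posted (just back-of-the-envelope arithmetic):

# CPU-busy percentage for the step, from the joblog quoted below.
tcb_minutes     = 345.99
elapsed_minutes = 482.8
print(f"CPU busy = {tcb_minutes / elapsed_minutes:.1%}")   # about 71.7%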


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +66 811431287

--- On Wed, 20/10/10, Mike Schwab mike.a.sch...@gmail.com wrote:


From: Mike Schwab mike.a.sch...@gmail.com
Subject: Re: Storage usage in a job
To: IBM-MAIN@bama.ua.edu
Date: Wednesday, 20 October, 2010, 4:42


120T = 120,000 4K pages, or over 480,000,000 bytes

On Tue, Oct 19, 2010 at 9:10 PM, Phil Smith p...@voltage.com wrote:
 We're doing some load testing, and running out of storage, but we can't 
 figure out how to tell WHAT storage. Shortly before it blew off, the REAL 
 column in SDSF showed 120T. We're pretty sure that wasn't 120 terabytes; it 
 seems to be T for Thousand. But that isn't a lot of memory?!

 Here's the job output when it failed - does this offer any clues? Any hints, 
 or pointers to doc, are gratefully accepted!

 IEW4000I FETCH FOR MODULE EDCU FROM DDNAME -LNKLST- FAILED BECAUSE 
 INSUFFICIENT STORAGE WAS AVAILABLE.
 CSV031I LIBRARY ACCESS FAILED FOR MODULE EDCU, RETURN CODE 24, REASON CODE 
 26080021, DDNAME *LNKLST*
                                              --TIMINGS (MINS.)--            
 -PAGING COUNTS
 STEPNAME PROCSTEP    RC   EXCP   CONN    TCB    SRB  CLOCK   SERV  WORKLOAD  
 PAGE  SWAP   VIO SWAPS
 RUN                3632 474037     51 345.99    .00  482.8 558935  BATCH     
 288K     0     0     0
 HUDIUNIR ENDED.  NAME-                     TOTAL TCB CPU TIME= 345.99 TOTAL 
 ELAPSED TIME= 482.8
 HASP395 HUDIUNIR ENDED
 --
 ...phsiii

 Phil Smith III
 p...@voltage.com
 Voltage Security, Inc.
 www.voltage.com

-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Batch loop in SYSplex

2010-10-11 Thread Terry Draper
Mohd,
   Just some basic questions.
 
   Is this a one-off event, or has it happened several times?
 
   Is it always the same job?
 
   What sort of VSAM dataset is it? How big is it?
 
   Do you have RACF erase-on-scratch turned on for it? If so, and the dataset 
is big, the delete will take a long time writing binary zeros over it.
 
  

Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +66 811431287

--- On Mon, 11/10/10, Mohd Shahrifuddin fud...@bayss.com wrote:


From: Mohd Shahrifuddin fud...@bayss.com
Subject: Batch loop in SYSplex
To: IBM-MAIN@bama.ua.edu
Date: Monday, 11 October, 2010, 2:57


Dear All,

Environment:
We are on z/OS 1.9 with CICS, IMS and VSAM, running a 4-LPAR Parallel Sysplex. 
Lately we applied the new PTF level RSU1006 to all three (z/OS, IMS and CICS).

Problem encountered:
We use our own in-house job scheduler, which runs in LPAR A. Our batch jobs 
are routed to run in LPAR B. IMS is shared across all LPARs. Before the PTFs we did 
not encounter any problem, but after completing a rolling IPL of all LPARs we hit a 
problem: a batch job running in LPAR B cannot complete, and all it has to do is 
delete/define a VSAM file. Worse, the job uses a lot of CPU and looks like it is 
looping. If we run the batch job in the same LPAR (LPAR A) as the job 
scheduler, it does not encounter any problem.

Help:
If any listers have faced the same problem, please advise us what the problem is, and 
whether the PTFs could have caused this situation.

Thank You in advance,

Mohd Shahrifuddin.
Shanghai, China. 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SORT question

2010-09-30 Thread Terry Draper
Frank,
   I did not plan for SORT to read the records; SORTIN is a dummy. The exit reads 
the records directly and inserts them into the sort. I have written many 
such E15 exits to process SMF data. The E15 does the read from its own input 
file. It reads records only until the higher date is reached; it then 
stops reading any more records and signals to SORT that the input is finished.
   This way only the required records are read. 
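
To make the control flow concrete, here is the same idea sketched in Python rather than as a real E15 exit (a real exit would be assembler or COBOL; the file name and record layout below are hypothetical):

# Sketch of the early-stop logic: the input is already sorted ascending on a
# date key, so once we see the first record past the cutoff we can stop
# reading altogether instead of scanning the rest of the file.
def select_until(path, cutoff, key=lambda rec: rec[:8]):
    """Yield records whose leading YYYYMMDD key is <= cutoff, then stop."""
    with open(path) as f:
        for rec in f:
            if key(rec) > cutoff:
                break      # an E15 would signal "end of input" to SORT here
            yield rec      # an E15 would insert/pass this record to SORT here

# hypothetical usage: everything dated on or before 30 September 2010
# for rec in select_until("smf.extract.txt", "20100930"):
#     handle(rec)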


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +66 811431287

--- On Wed, 29/9/10, Frank Yaeger yae...@us.ibm.com wrote:


From: Frank Yaeger yae...@us.ibm.com
Subject: Re: SORT question
To: IBM-MAIN@bama.ua.edu
Date: Wednesday, 29 September, 2010, 20:30


Terry Draper wrote on IBM Mainframe Discussion List IBM-MAIN@bama.ua.edu
on 09/29/2010 02:43:19 AM:
 Maybe try using an E15 exit to read the input file. Stop passing the
 records when you reach the date higher than required..
 I am not sure how I would pass the date to the E15. Could it be a
 parm. Or it could be read from a separate single record input file
 on first pass through the E15.

 Any comments on this?

This would probably have the opposite effect of what was requested (better
performance due to NOT reading records once the date is found).

Although you can stop passing the records to the exit by having the exit
pass back RC=8 early, ALL of the records will still be read; some will
just not be passed to the exit.

But since an exit is not really needed in this case, and using an exit
involves additional overhead, this wouldn't accomplish anything.

Frank Yaeger - DFSORT Development Team (IBM) - yae...@us.ibm.com
Specialties: JOINKEYS, FINDREP, WHEN=GROUP, ICETOOL, Symbols, Migration

= DFSORT/MVS is on the Web at http://www.ibm.com/storage/dfsort

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SORT question

2010-09-29 Thread Terry Draper
Maybe try using an E15 exit to read the input file. Stop passing the records 
when you reach a date higher than required. 
I am not sure how I would pass the date to the E15. Could it be a PARM? Or it 
could be read from a separate single-record input file on the first pass through 
the E15.
 
Any comments on this?


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +66 811431287

--- On Tue, 28/9/10, Frank Yaeger yae...@us.ibm.com wrote:


From: Frank Yaeger yae...@us.ibm.com
Subject: Re: SORT question
To: IBM-MAIN@bama.ua.edu
Date: Tuesday, 28 September, 2010, 18:13


David Bettern on IBM Mainframe Discussion List IBM-MAIN@bama.ua.edu wrote
on 09/28/2010 08:14:26 AM:
 I think the real question was how to tell sort to stop reading anymore
 input records when it reaches the first record that's greater than the
 selection criteria instead of reading through the rest of the file but
not
 selecting any of those records.  I'm not sure we have a way to do that
but
 I'm sure Frank will weigh in shortly.

The only way to tell DFSORT to stop processing records is with STOPAFT=n.
But in this case, we can't know what n is ahead of time.  So I don't
see a way to do it in one pass.

Frank Yaeger - DFSORT Development Team (IBM) - yae...@us.ibm.com
Specialties: JOINKEYS, FINDREP, WHEN=GROUP, ICETOOL, Symbols, Migration

= DFSORT/MVS is on the Web at http://www.ibm.com/storage/dfsort

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SORT question

2010-09-29 Thread Terry Draper
The question stated I have a large file that is sorted in ascending order by a 
date field, MMDD. So I must assume it is already in sequence of date. 


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +66 811431287

--- On Wed, 29/9/10, Staller, Allan allan.stal...@kbm1.com wrote:


From: Staller, Allan allan.stal...@kbm1.com
Subject: Re: SORT question
To: IBM-MAIN@bama.ua.edu
Date: Wednesday, 29 September, 2010, 15:05


At E15 time, the file has not yet been sorted.

snip
Maybe try using an E15 exit to read the input file. Stop passing the records 
when you reach the date higher than required.. 
I am not sure how I would pass the date to the E15. Could it be a parm. Or it 
could be read from a separate single record input file on first pass through 
the E15.
 
Any comments on this?
/snip

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SQA/ESQA HAS EXPANDED INTO CSA/ECSA

2010-09-21 Thread Terry Draper
Matan,
   The display you gave is for real storage use. You need the virtual storage 
view for the common areas. RMF Monitor III has this display.
 
   By itself, an overflow from SQA/ESQA into CSA/ECSA is not a problem, so 
long as you have ample room in CSA/ECSA to accommodate it.
 
   If the overflow is from SQA to CSA, you are talking about areas of limited 
size below the line. Just increasing the size of either may reduce your private 
area below the line by 1 megabyte.
So in this case I would seriously investigate why the usage increased. Most use 
of common storage has been moved above the line.
 
   If the overflow is from ESQA to ECSA, then your ECSA is probably big enough 
to handle it. If you do not already have a lot of free space in ECSA, then 
increase it at the next IPL. If you have an address space that requires a large 
extended private area (such as DB2 with SAP) then be careful not to reduce extended 
private too much.
 
   If you continue to have an overflow, then you may also want to increase ESQA 
(although this is not essential). Note that some overflow pages may hang around 
until the next IPL even though the total ESQA requirement has gone down, 
because the requester of the overflow pages is still using them.
 
  So the message in itself is not a problem. It should trigger you to 
investigate further, as detailed above.
 


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +66 811431287

--- On Tue, 21/9/10, Matan Cohen matancohen...@gmail.com wrote:


From: Matan Cohen matancohen...@gmail.com
Subject: SQA/ESQA HAS EXPANDED INTO CSA/ECSA
To: IBM-MAIN@bama.ua.edu
Date: Tuesday, 21 September, 2010, 9:34


Hi,
When I started WebSphere on the z9, I noticed this message:
IRA103I SQA/ESQA HAS EXPANDED INTO CSA/ECSA BY 887 PAGES


Omegamon for MVS shows this:

Major Area         Real   Minor Area        Not/Fix     Fixed    Total
===
High Private      1,117M                      1,109M        8M   1,117M
===
Extended Private  4,091M                      4,013M   79,664K   4,091M
                           (ELSQA)               92K   76,808K  76,900K
---
Extended Common 104,884K  CSA                42,448K   16,860K  59,308K
                          FLPA                             12K      12K
                          PLPA               20,560K      288K  20,848K
                          SQA                 1,996K   12,552K  14,548K
                          Read/Write Nuc                  488K     488K
                          Read-only Nuc                 9,680K   9,680K
===
Common            2,132K  Read-only Nuc                   112K     112K
                          Read/Write Nuc                   52K      52K
                          SQA                             332K     332K
                          PLPA                1,184K             1,184K
                          CSA                   352K      100K     452K
---
Private          55,580K  V=V                48,936K    6,628K  55,564K
                           (LSQA)                       5,836K   5,836K
                          System Area            16K                16K
---
Abs Zero Frame       24K                                   24K      24K
===
Available       187,724K
DataOnly Spaces 446,216K
DataOnly Sp Mgmt  5,876K
Shared Fixed        664K
Shared Pageable 144,948K
Page Table        5,656K
Local Quad        3,312K
BDF                  24K
TDF                  24K
SQA Reserved         20K
DAT Off Nucleus      16K
---
Total Storage         6G



We have a performance problem with WAS, but it is a high-CPU problem.
I wonder if I should care about this message and increase the SQA, or consider
other steps. Any advice on this issue will be appreciated.



-- 
best regards,
matan cohen
MF System Administrator.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: CPU Spikes completely hangs sessions;No logons: need inputs

2010-09-06 Thread Terry Draper
Amit,
   Firstly, you should have RMF Monitor III set up so that it can see the address 
spaces in all the LPARs in the sysplex. Make sure the RMF address spaces run at 
high priority. You can then see who is getting the CPU and how much the LPAR is 
using.
 
  Have the TSO userids of selected performance people (trusted not to abuse the 
priority) assigned to a single-period, high-priority service class. 
Alternatively, someone can be reset to a high priority at problem times.

  It is also useful to have some TSO users stay logged on nearly all the time. I am 
not sure of the best way to achieve this.

Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +66 811431287

--- On Mon, 6/9/10, amit amitpdu...@gmail.com wrote:


From: amit amitpdu...@gmail.com
Subject: CPU Spikes completely hangs sessions;No logons: need inputs
To: IBM-MAIN@bama.ua.edu
Date: Monday, 6 September, 2010, 11:59


Hi,

I have come across similar times when my services/STCs demand a lot of
CPU, resulting in soft capping.
Sometimes, even when MSUs are being increased, and Ops can see via system tools
that my STCs are taking more CPU, I am unable to log on to the
LPAR, and neither are other users.
Is there a way I can monitor/view the STCs of the affected LPAR from a
different one in the sysplex?
Secondly, how do I prioritize my TSO ID in that LPAR so that it can log on
and check/act to trap the issue?

My questions might sound foolish and/or inappropriate, but I can assure you they are
not just for fun.
I am eager to learn, if some help/experience can be shared.

TIA,
Amit

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: JES2 vs. JES3

2010-09-06 Thread Terry Draper
Having worked with both JES2 and JES3 (JES3 first), I would say it depends on 
what functionality you require.
 
JES3 has some multisystem capability that you may like. If there is no 
requirement for this, then definitely go JES2.
 
JES2 can handle multiple systems adequately, and JES2 has tended to get new 
support before JES3.
 
Many JES3 systems have been converted to JES2.
 
Someone starting fresh should definitely look to use JES2.
 
Now I will be shot down by the JES3 bigots. All I can say is that I have no axe to 
grind for either.


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +66 811431287

--- On Sun, 5/9/10, Thompson, Steve steve_thomp...@stercomm.com wrote:


From: Thompson, Steve steve_thomp...@stercomm.com
Subject: JES2 vs. JES3
To: IBM-MAIN@bama.ua.edu
Date: Sunday, 5 September, 2010, 23:13


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Clark Morris
Sent: Sunday, September 05, 2010 4:55 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: O/T IBM to Ship World's Fastest Computer Chip

SNIPPAGE

As someone who was in a field where you can't get a consensus on
whether JES2 is better than JES3 and who is a follower of
transportation issues (and a member of Transport Action Atlantic), I
doubt a reporter would be able to determine easily which side of an
argument is flat out wrong, even with some hours of research.

SNIPPAGE

I created a new thread out of this because I think this is a bit more
important a topic -- so why let it get lost in O/T IBM blah blah?

Why is JES2 better than JES3? Why is JES3 better than JES2?

Why would JES3 be preferred over JES2?

Regards,
Steve Thompson

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: WLM

2010-02-04 Thread Terry Draper
I would also look at the presentations on the IBM WLM web page.
These tell you a lot more than the manuals.
 
For production CICS I would always go with response time goals. I know it uses 
a bit more CPU, but you get better controls. For development, I would probably 
use velocity (especially if in same Sysplex as production). 


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Wed, 3/2/10, Kelman, Tom thomas.kel...@commercebank.com wrote:


From: Kelman, Tom thomas.kel...@commercebank.com
Subject: Re: WLM
To: IBM-MAIN@bama.ua.edu
Date: Wednesday, 3 February, 2010, 18:14


I am in the process of completely redesigning our WLM policy, so I'm
going through pretty much the same as you.  Although, I do have some
experience in designing one shortly after WLM appeared on the scene.
What I'm trying to get a handle on is new functionality and subsystems
that have been introduced since I originally set up a WLM.

What WLM manual are you looking at?  Definitely get the MVS Planning
Workload Management manual.  Also, I would recommend the System
Programmer's Guide to Workload Management Redbook. You might also want
to review any performance manuals available for whatever subsystems you
have (DB2, IMS, CICS, MQ, etc.).

As far as response time vs. velocity goals are concerned, response time
goals are a whole lot easier to deal with.  If you have CICS, for
example, you probably have a good idea how fast you want the
transactions to finish.  That and IMS are excellent candidates for
response time goals.  Also, response time goals stay the same between
operating system and hardware changes.  Velocity goals need to be
reevaluated and possibly changed.  However, long running tasks (long
batch jobs, forever running STCs, etc.) require velocity goals, but you
can set up a response time goal service class for some batch jobs.  For
example, in my first WLM I had set up a response time goal of 60%
within 15 minutes for short running jobs.  It worked well, but you do
need to know, and be able to control, your batch environment to do that.
Also, when setting up a response time goal, use percentage response
time, not average response time.  IMHO the average response time goal is
worthless. 

Tom Kelman
Enterprise Capacity Planner
Commerce Bank of Kansas City
(816) 760-7632

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of gsg
 Sent: Wednesday, February 03, 2010 11:15 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: WLM
 
 What are the benefits on controlling via a response time goal vs. a
 velocity
 goal?  Our system has been pretty constrained lately and I'm looking
for
 ways
 to improve it, but I'm not that familiar with WLM.  By the way, I am
 looking at
 the WLM manual was well.  I need to get the MVS Planning Workload
 Management one though.
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html


*
If you wish to communicate securely with Commerce Bank and its
affiliates, you must log into your account under Online Services at 
http://www.commercebank.com or use the Commerce Bank Secure
Email Message Center at https://securemail.commercebank.com

NOTICE: This electronic mail message and any attached files are
confidential. The information is exclusively for the use of the
individual or entity intended as the recipient. If you are not
the intended recipient, any use, copying, printing, reviewing,
retention, disclosure, distribution or forwarding of the message
or any attached file is not authorized and is strictly prohibited.
If you have received this electronic mail message in error, please
advise the sender by reply electronic mail immediately and
permanently delete the original transmission, any attachments
and any copies of this message from your computer system.
*

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: IBM countersues Neon over zPrime accelerator

2010-02-03 Thread Terry Draper
Everyone keeps talking about the impact on the IBM software bill.
 
I would be very worried about the impact on my non-IBM software bill. Even if 
IBM allowed this use of speciality engines (which I would not expect them to 
do), other software suppliers may not.
 
I think the zPrime product could be bad news for everyone, even the users of 
the product.

Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Wed, 3/2/10, Mark Zelden mark.zel...@zurichna.com wrote:


From: Mark Zelden mark.zel...@zurichna.com
Subject: Re: IBM countersues Neon over zPrime accelerator
To: IBM-MAIN@bama.ua.edu
Date: Wednesday, 3 February, 2010, 14:04


On Wed, 3 Feb 2010 09:03:33 +0100, R.S. r.skoru...@bremultibank.com.pl wrote:

Mark Zelden wrote:
[...]
 I don't have a stake either way, but if I were rooting for Neon and they
 won this battle, IBM would still be free to change the licensing rules or
 not charge less for special engines, or they could just change the code
 and break zPrime for Neon.   So the only benefit seems to be temporary for
 anyone using the software.  Long term, it could hurt everyone else.

Another scenario: Neon wins, IBM tries to break zPrime and is sued for
anti-trust practices. Then IBM LOSES the anti-trust trial and is obliged to keep
zPrime working. Effect: everyone can lower their HW and SW bills.


zPrime working doesn't do anyone any good if specialty engines don't
get a price break (IBM can't be forced to sell them cheaper).   That is
the whole point of them.  So the effect is more likely to be no more
specialty engines (why have them at the same price?).  That hurts
everyone who uses them today** that doesn't run zPrime.

** Everyone that uses them enough to get a price advantage.  It doesn't
help to purchase a specialty engine at 1/4 the price of a general engine
and only run it at 10% utilization.  But zAAP on zIIP in z/OS 1.11 (and the
support added to z/OS 1.9 and 1.10 via OA27495) help. 

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: DFSORT memory used paging

2010-01-19 Thread Terry Draper
My response is not directly to this posting, just my comments on the original 
question. I do not have the original post.
 
First, I have to ask what is more important to you: the SORTs or your online 
systems?
I will assume online.
 
We had problems with large sorts impacting our online work. CICS is usually storage 
protected, so it was not affected. We run WebSphere Application Server and this 
was badly affected.
WAS is not storage protected and uses a lot of memory, so it was a prime target 
for page steals. After the sort started we had high page-in rates for WAS. This 
was an issue.
 
We looked at the DFSORT options and decided we wanted to stop the page steals. We 
changed the EXPOLD parameter to zero; the default is MAX. This means that 
DFSORT now only tries to use available memory and does not steal unreferenced pages 
from other work. 
 
If you are looking to make SORT fly, then you may want to leave DFSORT free to steal 
pages. But we want our online systems to perform, and so DFSORT is a low 
priority.
 
At the time of our problem we were storage constrained - not paging, but a low 
available frame queue. We added more memory and now have a lot free. We still 
do not allow DFSORT to steal pages, though.
 
On paging, I come from the school that says I do not want to page. If I could 
restrict it to very low priority work, then maybe a little paging.
 
The cost of memory means that you get more benefit from adding memory than from 
messing around with the paging subsystem. Yes, I want a paging subsystem that 
can handle heavy paging, but I do not want to use it. It is there as insurance.
 
Also beware of free memory depletion when you take an SVC dump. This can hit you 
the same way.


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Tue, 19/1/10, Staller, Allan allan.stal...@kbm1.com wrote:


From: Staller, Allan allan.stal...@kbm1.com
Subject: Re: DFSORT memory used  paging
To: IBM-MAIN@bama.ua.edu
Date: Tuesday, 19 January, 2010, 13:19


Page out is asynchronous and will only affect paging performance to the
extent that the channel/device is not available for demand page-in.

snip
By %USE I meant page slots used, meaning increase in page out.

 Why does workflow drop from %use?

too much juice wasted on paging out/in.

The issue is not page config, but the increase in paging out/in and the drop in
workflow.
/snip

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



DB2 and PAV

2009-11-24 Thread Terry Draper
I have a situation where we are writing to Shark DASD from DB2. 
 
I know DB2 will start multiple write engines to write the data from the buffer 
pool. However, all the writes will go to the same volume.
 
My questions are about PAV. 
 
1. The DASD controller will serialise writes to the same extent - as specified 
in the Define Extent CCW. What does DB2 specify for the extent?
Is it the DASD extent, or just the area it is writing to? If the latter, what area is 
specified?
I cannot find this documented anywhere.
The many write engines will all be building the single table.
 
2. We may have many write engines started against the same DB2 table on a single 
volume.
There may be many of these. What is the maximum number of PAVs that WLM will 
give the volume?

The reason for the question is to evaluate the benefit (if any) of running 
multiple jobs which all contribute to the data DB2 will write to the 
single tablespace (i.e. dataset).

 
Terry Draper
zSeries Performance Consultant

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: WLM Imp1

2009-11-24 Thread Terry Draper
I see no technical reason not to put anything in importance 1. The only thing 
that matters is the relative importance. Importance 1 does not mean anything 
special.
 
HOWEVER, I can see why you would not want too much there. If you put many of 
your important workloads in importance 1 and then found that one workload was 
extra special, what would you do? 
I can understand designing around importance 2 through discretionary and then 
seeing what really has a strong business need for importance 1.
 
This is one of those "it depends" questions. There is no answer which is right 
for everyone.
Just understand the implications and make your own decisions. 


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Tue, 24/11/09, R Hey sys...@yahoo.com wrote:


From: R Hey sys...@yahoo.com
Subject: WLM Imp1
To: IBM-MAIN@bama.ua.edu
Date: Tuesday, 24 November, 2009, 8:23


Hi,

It's been recommended NOT to use Imp 1 in WLM.

My client has been using Imp1 for online CICS.

I’m planning to change it to use Imp2 by changing all Imp to Imp+1.

Can you think of any potential problems/issues?

TIA,
Rez 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: DB2 and PAV

2009-11-24 Thread Terry Draper
Bill and John,
   Thanks for your replies.
 
    You said that eight was the maximum for MA. Did you mean Multiple 
Allegiance, i.e. accesses from multiple systems? Does this limit also apply when 
all the I/Os are from one system?
 
   I understand how things work; it is just the undocumented bits that are 
difficult. It was easier to get these when I was in IBM doing this job.


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Tue, 24/11/09, John Baxter john.bax...@atcoitek.com wrote:


From: John Baxter john.bax...@atcoitek.com
Subject: Re: DB2 and PAV
To: IBM-MAIN@bama.ua.edu
Date: Tuesday, 24 November, 2009, 15:59


Bill is correct in that DB2 will minimize the extent sizes for write 
operations, therefore increasing the likelihood of multiple, parallel 
operations to the same device. But also remember that DB2 will try to maximize 
the effectiveness of write I/O's, endeavoring to externalize as many pages as 
possible in one I/O operation (which may contain many chained CCW's).

Another point to remember is that if the aliases are managed by WLM, a given 
DASD volume may become starved, and IOSQ builds up because of the long WLM 
process cycle. There are threshold controls for the BP's, and generally one 
tries to set the pageset level thresholds fairly low to discourage write bursts 
and maintain a steady trickle of writes. Obviously it would be ideal to strike 
a balance between write efficacy and keeping the write I/O's from overly 
bunching-up.

Another strong recommendation is to implement HyperPAV aliasing, which we found 
to almost totally mitigate the concerns mentioned above. These aliases are 
assigned to I/O operations as needed and released for reuse (within the same 
LCU or associated CSS, if configured) after the extent-level I/O has 
completed.  The maximum number of aliases assigned to any one base address can 
be very high (and vendor-dependent, I believe).

This area of DB2 performance management is challenging but extremely 
interesting and needs to pull together expertise from the DBAs, sysprogs and 
DASD specialists.

John Baxter

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Bill Fairchild
Sent: Tuesday, November 24, 2009 7:26 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DB2 and PAV

1.  DB2 tries very hard to minimize the size of the extent defined in the 
Define Extent CCW for each I/O that it does to a Shark-type DASD that supports 
Multiple Allegiance (MA).  E.g., if an I/O references only one track, then that 
channel program will have only that one track within the defined extent.  To 
verify, run GTF and trace I/Os to that one volume from DB2, then look at the 
defined extents that will appear in the Prefix CCW for SSCH trace records and 
in the Define Extent for I/O interrupt trace records.

2.  I don't know for sure, but I imagine that WLM will allow the device to have 
as many PAVs as its controller microcode support.  E.g, for the 2105s available 
in 2000 when I last looked at this issue, the maximum number of simultaneous 
I/Os allowed by the microcode to any one device was eight regardless of how 
many PAVs were assigned.  Eight was the maximum MA level.  The number of PAVs 
can be less than, equal to, or greater than the max MA number, but any more 
than the max will result in queued I/O requests if the workload produces 
simultaneous I/Os fast enough.

Bill Fairchild

Software Developer 
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4503 · Mobile: +1.508.341.1715
Email: bi...@mainstar.com 
Web: www.rocketsoftware.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Terry Draper
Sent: Tuesday, November 24, 2009 2:32 AM
To: IBM-MAIN@bama.ua.edu
Subject: DB2 and PAV

1. The DASD controller will serialise writes to the same extent - as specified 
in the define extent CCW. What does DB2 specify for the extent?
Is the the DASD extent or just the area it is writing to? If so what area is 
specified?
I cannot find this documented anywhere.
The many write engines will be trying to build the single table.

2. We may have many write engines started to the same DB2 table on a single 
volume.
There may be many of these.What is the maximum number of PAVs that WLM will 
give it?

Terry Draper
zSeries Performance Consultant

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


The information transmitted is intended only for the addressee and may contain 
confidential, proprietary and/or privileged material. Any unauthorized review, 
distribution or other use of or the taking of any action in reliance upon this 
information is prohibited. If you receive this in error, please contact

Re: DB2 and PAV

2009-11-24 Thread Terry Draper
Ted,
   I agree with you about SLAs etc.
 
   But this is an investigation into how we partition a huge tablespace that 
is reaching the size limit.
   The tablespace will be unavailable to the users during the copying, and we 
are trying to find the copy process that minimises the outage. The currently 
suggested process writes to each partition and fills it before going on to the 
next - hence the questions about concurrent write I/O.
 
   Sometimes you do need to get down into the details.

 
Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Tue, 24/11/09, Ted MacNEIL eamacn...@yahoo.ca wrote:


From: Ted MacNEIL eamacn...@yahoo.ca
Subject: Re: DB2 and PAV
To: IBM-MAIN@bama.ua.edu
Date: Tuesday, 24 November, 2009, 19:13


This area of DB2 performance management is challenging but extremely 
interesting and needs to pull together expertise from the DBAs, sysprogs and 
DASD specialists.


Too many people get hung up on the technical aspects.
Rather than worrying about:

How many write engines?
How is the Defined Extent used/sized?
How many (HIPER)PAVs are allocated/usable?
VSAM chaining?
ETC?

The real issue is:
Are my Service Levels being met?
   AND:
Is my resource consumption appropriate?

If the answer to the above two questions is YES -- relax.

If NO, then use your understanding of the technology to fix it.

-
Too busy driving to stop for gas!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Ron Ferguson's VSAM Book

2009-10-06 Thread Terry Draper
There is also an IBM ITSO Redbook called VSAM Demystified, published in 2003.
It has good ratings from reader feedback.
In practice I have not found VSAM too difficult to tune. You just need to 
research how it works first. I have got major improvements from a little work.

Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Tue, 6/10/09, Jim Marshall jim.marsh...@opm.gov wrote:


From: Jim Marshall jim.marsh...@opm.gov
Subject: Ron Ferguson's VSAM Book
To: IBM-MAIN@bama.ua.edu
Date: Tuesday, 6 October, 2009, 12:15 PM


Looked in my archives and came up with 

Virtual Storage Access Method
The Complete Source Book for VSAM File Structures 
By Ronald Ferguson
Seventh Printing 1992 
no code assigned as is done today

Ron used to give these to anyone who took his VSAM training when he ran SIS 
(Software Information Services). Many times at SHARE, when he was present at 
the Mainstar (the SIS follow-on firm) booth, they would have some. You may want to 
check with Mainstar whether they are still available. Much of the content is in the VSAM 
Manager software product manual.  

Jim Marshall  

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: db2 data sharing - group attachment

2009-09-30 Thread Terry Draper
Munif,
   DB2 applications can only connect to a DB2 subsystem in the same image.
   You cannot shut down one member of the DB2 data sharing group and still 
access DB2 data from that image.


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Wed, 30/9/09, Munif Sadek munif.sa...@gmail.com wrote:


From: Munif Sadek munif.sa...@gmail.com
Subject: db2 data sharing - group attachment
To: IBM-MAIN@bama.ua.edu
Date: Wednesday, 30 September, 2009, 8:29 AM


Hi

I have started two DB2 data sharing members on two different z/OS images in 
a Parallel Sysplex and can connect to them via the group attachment name (SSID) 
in a batch job. My problem is: if I shut down one of the members, I am not able 
to use the group attachment name on that z/OS image to *automagically* send 
my requests across to the other DB2 member on the other z/OS image.

What is the missing bit here, and how much work is involved to achieve this? We 
are a z/OS 1.9 and DB2 9 system.

Any pointers in the right direction is highly appreciated.

Best regards

Munif

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: What happens in a DS8000 Metro Mirror (PPRC-Sync) config when the relationship between a volume pair is broken?

2009-08-29 Thread Terry Draper
Jan,
   I suggest you look at the Redbook at this address:
 
 http://www.redbooks.ibm.com/abstracts/sg246787.html
 
   The book number is SG24-6787, title: DS8000 Copy Services for IBM System z.
   It is dated February 2009 and so is up to date. 

Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Sat, 29/8/09, Jan Vanbrabant jan.vanbrab...@telenet.be wrote:


From: Jan Vanbrabant jan.vanbrab...@telenet.be
Subject: What happens in a DS8000 Metro Mirror (PPRC-Sync) config when the 
relationship between a volume pair is broken?
To: IBM-MAIN@bama.ua.edu
Date: Saturday, 29 August, 2009, 1:33 PM


Hi,

1°
What happens in a DS8000 Metro Mirror (PPRC-Sync) config when the relationship 
between a volume pair is broken, say, because of network problems?
This network outage could, for example, last for hours.

How does the system keep track of the source updates and changes in the meantime, 
in order to resynchronize the volume pair later on?
With the IBM FlashCopy SE feature, there is something like the copy repository, 
where only the capacity needed to save pre-change images of the source data is 
allocated.

Is there also some kind of “copy repository” with Metro Mirror?
If yes, does it have to be sized? Or is the storage server capable of handling and 
managing this by itself?

Where can I find more info on this?
Have any benchmarks or tests been done in this respect?
I would appreciate it very much if you could point me to appropriate reading 
material. 

2°
I assume (dare I hope) that the resynchronization process can be (or is) 
prioritized the right way so as not to perturb the response time of the real, 
usually ongoing I/O?

Cheers, Jan



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Can CICS region share more than one processor

2009-08-20 Thread Terry Draper
Don,
   Prior to L8 TCBs, DB2 SQL requests executed on threads. Each thread was a 
separate TCB, so the QR TCB and each thread TCB could execute in parallel. 
  
Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Wed, 19/8/09, Don Deese don_de...@cpexpert.org wrote:


From: Don Deese don_de...@cpexpert.org
Subject: Re: Can CICS region share more than one processor
To: IBM-MAIN@bama.ua.edu
Date: Wednesday, 19 August, 2009, 3:48 PM


Hi Mohammad,

I don't understand the relevance of your question within the context of Dr. 
Merrill's posting.  Dr. Merrill was illustrating the concurrent execution of 
multiple TCBs (both the QR TCB and L8 TCBs) that would not have been possible 
prior to the OTE design.  It is somewhat irrelevant as to whether DB2 CPU time 
was charged to a QR TCB or to an application TCB in past.  The fact that the 
DB2 processing would have been executed in series off QR TCBs, but now executes 
in parallel with L8 TCBs is the important point that Dr. Merrill was making.  
Keep in mind that Dr. Merrill was responding to the OP query Can CICS region 
share more than one processor.

FWIW, I have data from CPExpert users that show 10, 20, 30 or more L8 TCBs 
executing concurrently and using more than 100% of CPU (namely, they are using 
multiple CPUs concurrently for more than 100% of the time).  This would not 
have been possible prior to the OTE design.

As Dr. Merrill pointed out, there are 22 TCB types with CICS/TS 4.1, and 
several of these TCB types can have multiple TCBs executing concurrently.  The 
number of current TCBs can be controlled by the MAXxxxTCBS in the SIT or in the 
JVMPROFILE Resource Definition.  There are, of course, limiting factors 
inherent in the environment (for example, CICS monitors  the amount of 
available MVS storage and will not attach new TCBs from the JVM TCB pool if 
storage is severely constrained).

Regards,

Don

**
Don Deese, Computer Management Sciences, Inc.
Voice: (804) 776-7109  Fax: (804) 776-7139
http://www.cpexpert.org
**



At 09:16 AM 8/13/2009, you wrote:
 Thanks Dr. Merrill for your illustrative example but I do have a question 
 about
 it. Since L8 TCBs are used to execute DB2 code as well, what part of 10,298
 seconds is for DB2 ? Since DB2 related code never executed on QR TCB
 anyway, that portion of CPU usage is moot for this discussion. The real
 question is how much of this CPU now runs on L8 TCB which used to run on QR
 TCB due to the aggressive OTE exploitation ?
 Regards
 Mohammad
 
 
 On Wed, 12 Aug 2009 15:18:23 -0500, Barry Merrill ba...@mxg.com wrote:
 
 In the old days, a CICS subsystem's capacity was limited by
 the amount of CPU TCB time needed for that single QR TCB.
 
 Based on my analysis when OTE was brand new, of the CPU time
 consumed by each of these new CICS TCBs, I planned this post
 to argue that going to OTE didn't help much, because most of
 the CICS CPU time was still being spent under the QR TCB.
 
 I could NOT have been more wrong!
 
 Analyzing new CICS/TS 4.1 Open Beta data from a VERY
 aggressive OTE exploiter site shows (from their
 SMF 110, subtype 2 Dispatcher Statistics segments,
 MXG CICDS and CICINTRV datasets):
 
 Total TCB CPU in Dispatcher Records  = 13,080 seconds
 Total TCB CPU in QR TCB              =  2,776 seconds
 Total TCB CPU in L8 TCB              = 10,298 seconds
 Total TCB CPU in all other TCBs      =      6 seconds
 
 Aha, you say, OTE still doesn't help; the CPU time just moved
 from the QR TCB to the L8 TCB, so the capacity limit just moved
 from one TCB to the other, right?
 
 Wrong again.
 
 While the QR TCB can attach only a single TCB, these new TCBs
 can attach multiple TCBs; in fact, the SMF data shows that
 the L8 TCB attached a maximum of 22 TCBs, each of which
 is a separate dispatchable unit.
 
 So, it REALLY does look like that these multiple OTE TCBs
 do eliminate the old one-TCB CICS capacity limitations,
 and does indeed spread your CICS time across MANY TCBs.
 
 (Total SRB time in the Dispatcher Records was only 65 seconds.)
 
 Barry Merrill
 
 Herbert W. Barry Merrill, PhD
 President-Programmer
 Merrill Consultants
 MXG Software
 10717 Cromwell Drive
 Dallas, TX 75229
 
  ba...@mxg.com
  http://www.mxg.com
                admin questions:       ad...@mxg.com
                technical questions:  supp...@mxg.com
  tel: 214 351 1966
  fax: 214 350 3694
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Can CICS region share more than one processor

2009-08-14 Thread Terry Draper
Tommy,
   You say you are using VSAM.
 
  There is a sub-task for some VSAM functions which is an option within a CICS 
address space. I do not think it takes a lot away from the main task, but make 
sure you are using this option.
 
   You could set up a TOR and route to multiple AORs; VSAM read-only 
files could be shared between the AORs. For update files create a FOR and 
ship the requests from the AORs to the FOR. You could ship all VSAM requests to 
the FOR if you want.
 
   This will add to the overall CPU but will split the load across several 
tasks. It should not require application code changes. You need to decide how 
to split the load between the AORs and implement in the TOR the appropriate 
routing. You can use CICSPLEX SM to control the routing or create your own 
routing exit.
 
   I thought most users use the TOR/AOR structure now to avoid your problem.
 
  I would also look at data tables to try to reduce the CPU load for VSAM 
accesses. 
 
  Or you could go out and buy a big engine z10.
 
Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Thu, 13/8/09, Tommy Tsui tommyt...@gmail.com wrote:


From: Tommy Tsui tommyt...@gmail.com
Subject: Re: Can CICS region share more than one processor
To: IBM-MAIN@bama.ua.edu
Date: Thursday, 13 August, 2009, 11:40 PM


Our data access for applications is VSAM only, without DB2. We have not
implemented OTE yet, which means we only have the QR TCB. As you know, we would
have to re-write many user programs to either switch to CICSplex with RLS or
DB2 access.

On Fri, Aug 14, 2009 at 1:01 AM, Terry Draperw...@btopenworld.com wrote:
 Tommy,

 Can I ask a couple of fundamental questions?

 What is the data access for the applications?
 Is it DB2, DL/1, VSAM or something else?
 If DB2 (and I think DL/1), these will already be running on threads, which 
 use their own TCBs, one per thread. If so I cannot understand your problem.

 Also, do you have a TOR and AOR structure? If not I suggest you go that way.


 Terry Draper
 zSeries Performance Consultant
 w...@btopenworld.com
 mobile:  +966 556730876

 --- On Wed, 12/8/09, Tommy Tsui tommyt...@gmail.com wrote:


 From: Tommy Tsui tommyt...@gmail.com
 Subject: Can CICS region share more than one processor
 To: IBM-MAIN@bama.ua.edu
 Date: Wednesday, 12 August, 2009, 2:44 PM


 Hi ,

 We hit a problem that our CICS cannot utilize more than one CPU
 processor, and IBM recommends our shop upgrade to CICSplex. Apart from this,
 is there any other way to solve this problem?


 any comment will be appreciated

 best regards

 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Can CICS region share more than one processor

2009-08-13 Thread Terry Draper
Tommy,
 
Can I ask a couple of fundamental questions?
 
What is the data access for the applications?
Is it DB2, DL/1, VSAM or something else?
If DB2 (and I think DL/1), these will already be running on threads, which 
use their own TCBs, one per thread. If so I cannot understand your problem. 
 
Also, do you have a TOR and AOR structure? If not I suggest you go that way. 


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Wed, 12/8/09, Tommy Tsui tommyt...@gmail.com wrote:


From: Tommy Tsui tommyt...@gmail.com
Subject: Can CICS region share more than one processor
To: IBM-MAIN@bama.ua.edu
Date: Wednesday, 12 August, 2009, 2:44 PM


Hi ,

We hit a problem that our CICS cannot utilize more than one CPU
processor, and IBM recommends our shop upgrade to CICSplex. Apart from this,
is there any other way to solve this problem?


any comment will be appreciated

best regards

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SDSF DA fields CPU/L/Z/

2009-08-05 Thread Terry Draper
David,
   On the SDSF DA panel the fields CPU/L/Z are as follows:
 
   CPU (SDSF help calls this MVS busy): this is the percentage of time the 
z/OS operating system thinks it is busy. This includes both the time the 
logical CP is running and the time the logical CP is not in a wait state but 
is waiting for PR/SM to dispatch it (because another LPAR is running).
 
  L (SDSF help calls this LPAR busy): this is the percentage of time the logical 
CP is actually running. This is the true CPU utilisation of the LPAR.
 
 Z (SDSF help calls this zAAP busy): this is the zAAP busy percentage, if you 
have one.
 
  To explain the first two, let me sketch a diagram:
 
     <---------------------- time ---------------------->
            A                  B                  C
     Logical CP running   ============   Logical CP running
 
   If during the whole time z/OS has work to run, MVS busy would be the whole 
time (A + B + C).
   LPAR busy would be the whole time less the ============ part, i.e. (A + C).
   The ============ part (B) is when another LPAR has control of the real CP.
 
    The bigger the difference between MVS busy and LPAR busy, the more your work 
is being delayed by other LPARs. If this is your main production system then you 
should look at increasing this LPAR's weight.
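 
   Purely as an illustration (my own made-up sketch in Java, not anything from 
IBM, and the interval values are invented), the two figures relate like this:
 
      // Illustrative only: made-up interval values, in seconds.
      public class LparBusyExample {
          public static void main(String[] args) {
              double interval  = 900.0; // length of the measurement interval
              double running   = 306.0; // logical CP dispatched on a real CP (A + C)
              double readyWait = 306.0; // z/OS had work, but PR/SM had given the CP to another LPAR (B)
 
              double lparBusy = 100.0 * running / interval;                // the SDSF "L" figure
              double mvsBusy  = 100.0 * (running + readyWait) / interval;  // the SDSF "CPU" figure
 
              System.out.printf("MVS busy = %.0f%%, LPAR busy = %.0f%%%n", mvsBusy, lparBusy);
              // Prints: MVS busy = 68%, LPAR busy = 34% - the same shape as the 68/34 in your display.
          }
      }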
 
   You owe me a pint for this quick course. Just joking.

Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Wed, 5/8/09, David Speake david.spe...@bcbssc.com wrote:


From: David Speake david.spe...@bcbssc.com
Subject: SDSF DA fields CPU/L/Z/
To: IBM-MAIN@bama.ua.edu
Date: Wednesday, 5 August, 2009, 12:02 AM


SDSF DA displays a line of information above the 'COMMAND INPUT' line.

SDSF DA SYSJ  (ALL)    PAG    0  CPU/L    68/ 34       LINE 1-1 (1)

Can anyone point me to an IBM manual for a citation as to exactly what 
CPU/L    68/ 34  in the above means. The Help panel shows

CPU/L/Z/  26/ 26/  0
  |      
Percentage of time the CPU is busy, MVS, LPAR and zAAP views. 

The zAAP view would, I guess, reflect busy % for all CPUs (if any)
designated as zAAP processors by PR/SM.
But what is the definition/distinction for CPU/L (MVS/LPAR) views?
I am looking at z/OS: Resource Measurement Facility Performance Management
Guide, B.4.2 Value of LPAR CPU management and I cannot form a good definition.


An explanation would be appreciated. A definition citation even more so.

David 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: WLM Service Class assignment Order

2009-06-05 Thread Terry Draper
Jacky,
    With response time goals in CICS, the CICS STC service class is only used 
at start-up. After that WLM builds its own groups to put CICS address 
spaces in. Depending on the CICS transactions within an address space and how 
they are achieving their goals, the CICS address space as a whole is put in 
the appropriate group. WLM can only manage CICS at the address space level.
What you find is that all your CICS address spaces move together if you 
distribute the same transactions across all your CICS regions. You can see this in 
RMF Monitor II, where all the AORs may have the same dispatching priority and maybe your 
TORs have another priority.

There is a presentation on the IBM WLM web site describing this.
 http://www-03.ibm.com/servers/resources/servers_eserver_zseries_zos_wlm_pdf_cidwlm_pdf_cidwlm.pdf

Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876





From: Jacky Bright jacky.bri...@gmail.com
To: IBM-MAIN@bama.ua.edu
Sent: Thursday, 4 June, 2009 2:04:14 PM
Subject: WLM Service Class assignment Order

Can anyone tell me how WLM decides which service class to assign to CICS
Subsystem ?

I have the following classification rules defined in the system.

For the type CICS, the following are the entries:

  Qualifier  Qualifier  Starting   Service  Report
# type       name       position   Class    Class
- ---------  ---------  ---------  -------  ------
1 SI         RTGCICS               CICS1TP  RTG3T
2 . TN       . M*                  CICS1TP  IASP
2 . TN       . V*                  CICS1TP  NDSAP
1 SI         R1%CICS               CICSSLA

and for the type STC, the following are the entries:

  Qualifier  Qualifier  Starting   Service  Report
# type       name       position   Class    Class
- ---------  ---------  ---------  -------  ------
1 SYG        ABCRL                 SYSSTC
2 . TN       . R1%CICS             CICSSTC  RCICS
2 . TN       . E%CICS              CICSSTC  ECICS
2 . TN       . R%CICS              CICSSTC  FCICS

In my SDSF DA panel I can see that for the R*CICS regions the service class
is CICSSTC.

But I am confused about whether there is any significance to the entry defined for
type CICS, i.e. the CICSSLA service class.

JAcky

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html




Re: Global mirroring

2009-06-04 Thread Terry Draper
We have synchronous from A to B and asynchronous from A to C.
Sounds like just what you want to do.


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Tue, 2/6/09, Michael Saraco michael.sar...@baer-consulting.com wrote:


From: Michael Saraco michael.sar...@baer-consulting.com
Subject: Global mirroring
To: IBM-MAIN@bama.ua.edu
Date: Tuesday, 2 June, 2009, 5:52 PM


The question I have is. Can I use Global mirroring to copy from site A to Site 
B 
and also at the same time copy from site A to site C? Or do you have to go 
from site A to B then from site B to site C?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: System data sets and 3390-27

2009-05-19 Thread Terry Draper
Brad,
   I have seen some responses to your question, which raised questions. 
 
I will ignore the RVA, which works differently with its virtualisation.
 
   The latest DASD does reserve all the space for a specified volume. So 
allocating a 1 cylinder data set on a mod 27 would waste nearly all the space. 
With some of the copy functions you can get away with less for the copy, but 
not the original volumes.
 
   The data is striped across the RAID DASD, so the real I/Os for a specific 
volume can be to different RAID DASD in parallel. So a mod 27 is not an issue 
here.
 
   One thing that would be an issue is if you did real RESERVEs and did not 
convert them to system-wide enqueues. You would be RESERVEing a lot of data 
away from other systems.
 
  My major concern is the actual performance of a mod 27, irrespective of the 
type of data.
 
   If you put a lot of active data on a mod 27 then it will need more PAVs 
than a mod 3 or mod 9. I know you could have WLM-managed dynamic PAVs, but the 
WLM addition of extra PAVs will lag behind an increase in I/O activity to a 
volume. The delays will be seen in IOSQ time. One solution is HyperPAV, which 
is only on the latest IBM DASD - it allocates PAVs from a pool as required 
rather than assigning them to specific volumes. I do not think EMC have HyperPAV 
and do not know if they have committed to it and if so when. 
 
I think I would play it safe for your new environment and put busy system data 
sets on smaller models than a 27.


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Mon, 18/5/09, Brad Wissink bjwi...@iastate.edu wrote:

From: Brad Wissink bjwi...@iastate.edu
Subject: System data sets and 3390-27
To: IBM-MAIN@bama.ua.edu
Date: Monday, 18 May, 2009, 8:39 PM

We are moving from a Shark with no PAV's to an EMC SAN with PAV's.  We are 
going to use 3390-27's.  I have read most of the archives on 3390-27's but I 
did not see anything on running your sysres or system data sets from a 27.  
We are thinking about going totally with 3390-27's, which means we must run 
with the sysres, JES checkpoint and spool, catalogues, RACF database, etc. on 
the 27's.  Is anyone doing that?  What do we need to watch out for?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Any Batch Pipes experience?

2009-03-20 Thread Terry Draper
I seem to remember that there was a JOB restructuring function available with 
Batch Pipes. I am not aware that it was used much, or if it's still around.
 
The problem with that product was that it relied on history. As soon as you 
change a JOB, for the next N (not sure how many) runs it does no optimization. 
This is while it relearns the new JOB structure. I think any JCL change to the 
JOB would stop the optimization. Thus the batch window will run longer for this 
period of relearning. Also there were several things it could not detect.
 
Me, I would go with a manual process to fix the big gains.
 
Terry Draper
zSeries Performance  Specialist

--- On Thu, 19/3/09, Spencer, Mike mike_spen...@bmc.com wrote:

From: Spencer, Mike mike_spen...@bmc.com
Subject: Re: Any Batch Pipes experience?
To: IBM-MAIN@bama.ua.edu
Date: Thursday, 19 March, 2009, 11:26 AM

Ah, to keep with the 80's theme, "way, dude" - if you have lots of
time on your hands.  Yes, the user could insert another JOB card, but with
thousands of jobs executing, and not being a consultant, I have better things to
do than waste my company's dollars trying to manually streamline my batch
processes.  
And of course there are no IO savings by inserting another JOB card.

MAINVIEW Batch Optimizer provides the ability to pipe around DB2 or IMS steps
to negate any negative impact.  It's all a very simple process to implement.
   

Michael Spencer
BMC Software

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of
Nemo
Sent: Wednesday, March 18, 2009 12:59 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Any Batch Pipes experience?

On Wed, 18 Mar 2009 07:45:09 -0500, Spencer, Mike wrote:

Batch Pipes can only move data between two different jobs.  It cannot 
move
data between steps because in a Batch Pipes world, there is no way to get
multiple steps running in parallel for the pipe to work.
 
no way?!?  All the user needs to do is insert another JOB card
(non-duplicate name is a plus but not a requirement if JES' duplicate jobs
are allowed to execute concurrently).  

That sounds like a way to me.  
 
 
MAINVIEW Batch Optimizer from BMC will run steps in parallel, entire 
jobs in
parallel, and optimizes QSAM and native VSAM I/O processing among other items.
 
Does BMC's MAINVIEW Batch Optimizer have a clue if more than one of those
parallel job steps are updating the same DB2 table (and thus potentially causing
damage)?  Some steps have implicit serialization requirements.  How does BO cope
with those?  
   

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to
lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at
http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Any Batch Pipes experience?

2009-03-20 Thread Terry Draper
I just did some research on the old faithful Google for Batchpipes.
 
I found in Wikipedia, in the History section:
 
BatchPipes Version 1 was developed in the late 1980s and early 1990s simply as 
a technique to speed up MVS/ESA batch processing. In 1997 the functionality of 
BatchPipes was integrated into a larger IBM product - SmartBatch (which 
incorporated two BMC Corporation product features: DataAccelerator and 
BatchAccelerator). However SmartBatch was discontinued in April 2000.
Subsequently BatchPipes Version 2 was released, incorporating BatchPipes 
Version 1 and some additional features from SmartBatch: BatchPipePlex and 
BatchPipeWorks. BatchPipes Version 2 is still a marketed IBM product.
 
It was SmartBatch that I was thinking of.



Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Fri, 20/3/09, Nemo plumbersar...@gmail.com wrote:

From: Nemo plumbersar...@gmail.com
Subject: Re: Any Batch Pipes experience?
To: IBM-MAIN@bama.ua.edu
Date: Friday, 20 March, 2009, 5:43 PM

On Fri, 20 Mar 2009 17:32:31 +, Terry Draper wrote:
 
I seem to remember that there was a JOB restructuring function available 
with Batch Pipes. Not aware that it was used much. Or if its still around.
 
The problem with that product was that it relied on history. As soon as
you 
change a JOB, for the next N (not sure how many) runs it does no 
optimization. This is while it relearns the new JOB structure. I think any JCL 
change to the JOB would stop the optimization. Thus the batch window will 
run longer for this period of relearning. Also there were several things it
could 
not detect.
 
 
You have confused IBM's Batch Pipes (BatchPipes/MVS) with
BMC's 
MAINVIEW Batch Optimizer.   BMC's BO relies on the job history, much as
you 
said.  BatchPipes relies on a one-time manual conversion and it is all smooth 
sailing from there.  (No history, no downstream relearning.)  
 
And, as the poster from Brazil stated - it works great.  
 
 
 
Me, I would go with a manual process to fix the big gains.

Me, too.  One and done.  
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SMF reporting question

2009-02-28 Thread Terry Draper
We use the RMF Spreadsheet Reporter. The customer does not have SAS or MXG.
 
It has lots of canned graphs for z/OS things, and it comes with RMF. We 
include a lot of these graphs in our monthly technical report to management. 
The tool is not user-friendly, but it does the job.
 
It will not address the "who is using what" question or CICS things. I have taken 
the type 30 record and built a cut-down flat file (no repeated sections) which I 
can report on easily. I used assembler - you get the record mappings with z/OS - 
so it is easier to use.
 
Certainly look at using RMF Spreadsheet Reporter. Then look elsewhere for the 
others.
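 
If you do end up reading the raw records off-host (as John mentions below - a 
binary download with the RDW kept), the record loop itself is simple. Here is a 
rough sketch in Java - entirely my own illustration, not a supported sample, so 
check the header offsets and spanned-record handling against the SMF manual:
 
    // Count SMF record types in a file downloaded with FTP "BINary" + "SITE RDW".
    // Sketch only: ignores spanned records (RDW segment descriptor).
    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;
    import java.util.TreeMap;
 
    public class SmfTypeCount {
        public static void main(String[] args) throws Exception {
            TreeMap<Integer, Integer> counts = new TreeMap<>();
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                while (true) {
                    int rdwLen;
                    try {
                        rdwLen = in.readUnsignedShort(); // RDW bytes 0-1: length including the RDW
                    } catch (EOFException e) {
                        break;                           // clean end of file
                    }
                    in.readUnsignedShort();              // RDW bytes 2-3: segment descriptor, ignored here
                    byte[] rec = new byte[rdwLen - 4];   // the record that follows the RDW
                    in.readFully(rec);
                    int type = rec[1] & 0xFF;            // SMFxRTY: record type byte
                    counts.merge(type, 1, Integer::sum);
                }
            }
            counts.forEach((t, n) -> System.out.println("type " + t + ": " + n));
        }
    }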


Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Thu, 26/2/09, David Betten bet...@us.ibm.com wrote:

From: David Betten bet...@us.ibm.com
Subject: Re: SMF reporting question
To: IBM-MAIN@bama.ua.edu
Date: Thursday, 26 February, 2009, 7:17 PM

I'm not that familiar with it but the RMF Spreadsheet Reporter might provide
you with some z/OS performance reporting.
I'm pretty sure it comes with RMF so it probably meets your requirement of
being something you already have.


Have a nice day,
Dave Betten
DFSORT Development, Performance Lead
IBM Corporation
email:  bet...@us.ibm.com
DFSORT/MVSontheweb at http://www.ibm.com/storage/dfsort/

IBM Mainframe Discussion List IBM-MAIN@bama.ua.edu wrote on 02/26/2009 
11:07:43 AM:

 I realize that most people likely use SAS for SMF reporting. We did this in 
 the past. Management has declared that SAS on the z is simply too expensive. 
 They also declared that installing the Windows version on our main user's PC 
 is also too expensive. End of discussion on that point.

 So, what else could be used? It must either be something that we already 
 have, such as COBOL, EasyTrieve Plus (without the SMF add on), REXX, HLASM, 
 ... . We cannot spend any hard money on this. Oh, it would be nice if it 
 were very CPU efficient, because we just downgraded our z9BC from a V02 to a 
 T02 in order to save on software costs. And it must be such that doing ad 
 hoc requests can be responded to quickly. Yes, I know, give me the world, 
 but don't spend any money.

 I am actually looking at downloading the raw SMF data to a Linux box (my 
 desktop) using BINary and SITE RDW. I did this for last week's SMF and had 
 about 14GiB of data. I know how to read this with Java. This may actually be 
 what I end up looking at doing. But I am the only person in my group who is 
 even mildly Java literate. The main performance person is not. And he 
 doesn't have access to my PC anyway. Of course, that is one reason that I'm 
 looking at Java. I have written Java in the past (minor application) which 
 truly was run anywhere. At least it ran, as compiled on the Linux box, on 
 32 bit Linux/Intel, 64 bit Linux/Intel, Mac OS/X, 32 bit Windows, and on the 
 z. I just transferred the jar file and ran it.

 Any thoughts or commiserations appreciated.

 --
 John

 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Z11 - Water cooling?

2009-02-02 Thread Terry Draper
Peter,
   You are correct that some of the 9672's had "water cooling". It started at 
the top of the range (I cannot remember which model). If I remember correctly it 
was used for all models on later ranges. I could find out if anyone wants to 
know.
 
   I put "water cooled" in quotes because it depends what you mean.
 
   There is the case of an enclosed water system that chills the air before it 
is passed over the CPUs. This can be called "water cooled", but no water passes 
to the CPU packaging. This is what appeared in the 9672's.
 
   This compares to the old water cooled processors, like 3090's, where the 
water actually flows through the CP module. I am not aware of anything like 
this being used on the latest processors.  
 
   It depends whether you would really call the first a water-cooled system.
 
   The rumour needs clarifying as to what exactly is being implied.
 
   If I have time I will research and give the full picture.
 
Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Mon, 2/2/09, Hunkeler Peter (KIUK 3) peter.hunke...@credit-suisse.com 
wrote:

From: Hunkeler Peter (KIUK 3) peter.hunke...@credit-suisse.com
Subject: Re: Z11 - Water cooling?
To: IBM-MAIN@bama.ua.edu
Date: Monday, 2 February, 2009, 8:17 AM

With the z9 EC, IBM used what they called "hybrid cooling".  


The first z/Architecture machines (zSeries 900) already had them.
Didn't remember if the latest 9672 also had MCU's but a quick
search showed that at least some of the 9672 G5 and G6 models
also had a closed loop liquid cooling system.

-- 
Peter Hunkeler
Credit Suisse

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
