Re: Execution Velocity

2012-03-20 Thread Gerhard Adam
We have a HOTBATCH service class defined with IMP=1, Execution 
Velocity of 90, with the CPU Critical flag turned on.  This service 
class has a jobclass assigned to it.  At any given time during our 
batch window, there may be up to two or three of these jobs running.


For a system with only 4 LPs, that's an extremely high velocity and 
probably isn't attainable.  You neglected to mention what your "actuals" 
were; those would be helpful in assessing what's taking place.


Bear in mind that if your velocity is essentially unachievable, the only 
thing that will happen is that WLM will set the skip clock and tend to 
pass over that service class as a receiver of help.
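
For reference, an execution velocity goal is evaluated from WLM's sampled 
states, roughly:

   velocity = (using samples) / (using samples + delay samples) x 100

so a velocity of 90 tolerates only about one delay sample for every nine 
using samples.  On a 4-LP box with other high importance work running, that 
leaves almost no room for delay.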


Bring the velocity into line with what the work can actually achieve [keeping 
in mind the other high importance level tasks that are also running].


It appears that you're trying to use velocity as a priority setting 
[which it isn't].  In addition, CPU Critical isn't likely to make 
much difference either, since it is highly unlikely that batch elapsed 
time is that heavily dependent on CPU.


Adam



Re: WLM Problems

2011-09-19 Thread Gerhard Adam
Yes.  The simplest way is to ensure that your production regions always have
a higher importance level than your development regions.  That way, their
goals are preferentially "protected" against lower importance regions.
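
For example (service class names and goals purely illustrative):

   CICSPROD   Importance 1   90% of transactions complete within 0.5 seconds
   CICSTEST   Importance 4   (or even a discretionary goal)

With an arrangement like that, a looping task in a development region is the 
first thing to lose access to the processor when the production goals come 
under pressure.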



Hello all, I have a question. I'm being told that there is no way to
keep a looping lower priority task in a development CICS region from
affecting the dispatching of a production CICS system. Is this really
true? I'm not the WLM guy, but I would have thought that there would be
some way to set it up so that test, dev, batch or online would not
affect a well-behaved production environment. Would someone please
enlighten me. 




Re: Enclave help

2011-07-05 Thread Gerhard Adam
Why not simply quiesce it?  This is readily available from SDSF.
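
For example, from the SDSF DA panel you can act on the job directly, or issue 
the equivalent MVS command (jobname hypothetical; verify the syntax at your 
release):

   RESET jobname,QUIESCE

A quiesced address space is essentially only dispatched when the system has 
nothing better to run, which curbs a runaway without juggling resource groups 
or zIIP configuration.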

>To try to resolve this (at least partially) I put it in 
>my SC with a RG cap of 1 SU, varied the zIIP off line to get it on the 
>GCP, then put the zIIP back online.  This seemed to slow it down, 
>although I'm still not sure why it worked as well as it did: I expected it 
>to bounce back to the zIIP and run away again in short order.  



Re: Real Storage Occupancy

2011-07-03 Thread Gerhard Adam
Let me ask an obvious question ... has the monitoring program been checked
to ensure that it isn't collecting or storing the data improperly
(thereby accumulating data)?

Many performance monitors will allow you to see the data that is contained
in the allocated storage, so if you are seeing an increase in allocated
frames, then you should be able to examine their contents to see whether it is
new or replicated data, etc. ... 

Also, have you validated the data reported by the REXX program against
another monitor, just to ensure no errors are being reported?

Adam



Re: Production MIPS

2011-06-17 Thread Gerhard Adam
Why not use the LSPR directly?  Even the SRM constant is derived from the
LSPR mixed workload values.

Adam

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of Ed Finnell
Sent: Friday, June 17, 2011 4:49 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Production MIPS

Cheryl Watson does this for a living. Think they're still at
http://www.watsonwalker.com/chart.html  The high-end numbers are
extrapolated, but the low end are pretty good, depending on barometric
pressure and entropy saturation...
 
In a message dated 6/17/2011 5:05:10 P.M. Central Daylight Time,
j...@fiteq.com writes:

MIPS of a typical production mainframe?





Re: JCL: IF and ABEND

2011-06-02 Thread Gerhard Adam
Maybe I missed something, but have you considered the IF/THEN/ELSE
statement?

You can code the abend condition as stepname.ABEND
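
A minimal sketch (step and program names are made up):

   //STEP1    EXEC PGM=MYPGM
   //CHKABND  IF (STEP1.ABEND) THEN
   //RECOVER  EXEC PGM=CLEANUP
   //         ENDIF

The RECOVER step runs only if STEP1 abends; an ELSE leg can handle the normal
path.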

Adam



Re: CPU utilization/forecasting

2011-04-16 Thread Gerhard Adam
>You assumed queue existence. Good assumption for z/OS and most other 
>systems, but has no meaning from CPU perspective.

A system without a queue has no delays.  At that point the only improvement
possible would be a faster architecture.


>Again, there are systems, where there are no such thing like DP (or they 
>are more or less similar), but those systems still have to do with CPU 
>and their workload. For example I believe there were no such gismo in 
>MS-DOS, but DOS used processor and the processor had some utilization, 
>usually between 0 and 100%.

Even Windows has a priority scheme, but that isn't really necessary for the
discussion.  If you want to be more specific, then you could simply refine
the statement: systems that have priorities will have different utilization
effects.  In the case where no specific priority arrangement exists, the
system simply behaves as if all work runs at a single priority.  There is no
real difference in the system behavior; only in the interaction of
competitors.

I would argue that it isn't likely that anyone posting on this forum is
doing a capacity plan for an MS-DOS based system.  

Adam



Re: CPU utilization/forecasting

2011-04-16 Thread Gerhard Adam
>Well, I disagree with such a definition. IMHO CPU busy is a "pattern" of 
>NOPs (No OPeration) and "usable" instructions. 1% busy simply means that 
>99% of cycles were filled with NOPs, and only 1% of cycles executed other 
>instructions.

You can disagree with it all you like, but a job that is dispatched on a CPU
is of no interest from a performance perspective.  What matters is how long
dispatchable work has to wait in the queue.

>> HOWEVER, this depends explicitly on the relative
>> dispatching priority of the work, so the 80% utilization would only be
>> relevant to the lowest priority work in the system.

>In fact dispatching priority denies you statement above or at least 
>distorts it.

No, because it means that any work at DP=255 will not be delayed by lower
priority work (beyond that work completing its dispatch interval).
Therefore it is completely erroneous to assume that work at that high
dispatching priority "sees" the overall utilization of the processor.  Its
dispatching priority ensures that only competitors at the same dispatching
priority are relevant in establishing delays.  A job at DP=245 will have to
compete with all units of work at higher dispatching priorities, and
therefore a higher utilization represents a higher probability of such lower
priority work being subject to queuing delays.

This is the basis by which workload manager takes actions in the first
place, to ensure that higher importance goals are assigned higher
dispatching priorities (if that's the primary delay) to ensure that their
goals are more readily met (regardless of the overall utilization of the
processor(s)).  The overall CPU utilization represents the ongoing demand,
while the relative utilization at each dispatching priority represents the
level of competition experienced by the workload.  
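
To put illustrative numbers on it: suppose the machine is 80% busy overall,
with 30% from work at DP=255, 25% at DP=250, and the remaining 25% below
DP=245.  A unit of work at DP=245 effectively competes against a 55% busy
processor, not an 80% busy one; only the lowest priority work in the system
"sees" the full 80%.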

Adam



Re: CPU utilization/forecasting

2011-04-15 Thread Gerhard Adam
>- what do you mean by average? I'll disclose a very big secret: the CPU is 
>always 100% busy or 0% busy (*). Always - meaning on every tick.
>So the shorter the period you take for the CPU measurement, the higher the 
>peaks you will get. If your periods are as short as a CPU cycle, then your 
>CPU usage is binary: 100% or 0%.
>So, what average would you like to take? 4H, 24H or 15 min.?

Bear in mind that the utilization in this case simply represents the
probability of the processor being busy when a unit of work becomes ready to
run.  So at 80% utilization this represents an 80% probability that a unit
of work will be delayed.  HOWEVER, this depends explicitly on the relative
dispatching priority of the work, so the 80% utilization would only be
relevant to the lowest priority work in the system.  SYSTEM service class
(at DP = 255) would experience virtually no utilization delays (beyond those
from competitors at the same dispatching priority), so that is all that will
need to be considered.  There are some minor considerations regarding low
priority work retaining control for the minimum dispatch interval, but that
shouldn't affect your view of averages.
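
As a very rough single-server (M/M/1) sketch that ignores multiple engines
and priorities: expected queue delay is about U/(1-U) service times, i.e. one
service time at 50% utilization, four at 80%, and nine at 90%.  That
non-linearity is why the peaks, and the lowest priority work, are where the
pain shows up.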

So, even if you were off in your growth predictions, it isn't likely that
high priority work will actually suffer.  It will always tend to be the
lowest priority work, so you will want to measure peaks since that's the
component that will be most impacted due to under-sizing.

Adam



Re: DD DUMMY allocate any BUFFERS?

2011-03-21 Thread Gerhard Adam
Have you checked the dump?

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of Stewart, David James
Sent: Monday, March 21, 2011 9:48 PM
To: IBM-MAIN@bama.ua.edu
Subject: DD DUMMY allocate any BUFFERS?

Hi

 

I cannot find this information on IBM

 

Does DD DUMMY cause any QSAM BUFFERS to be allocated for QSAM files? I.e.,
the default value of 5?

I'm trying to find out why we are getting an S878 on a job with REGION=0M on
the jobcard.

There are several other possible reasons, but I want to eliminate or
confirm this one as well.

 

 



Re: Difference between DISP=NEW and MOD for a PDS member?

2011-03-05 Thread Gerhard Adam
I'm not sure I'm interpreting this properly, but it sounds like you have a
product and want to support what your customers do when accessing a PDS
(even if they code it badly).  My confusion comes from your reference to the
JFCB, since it suggests that you are checking what they have coded instead
of simply overriding it with the proper disposition.  Since you have the
JFCB, just set the DISP and issue an OPEN TYPE=J and be done with it.  That
way it doesn't matter what they code, since their DISP shouldn't make any
difference anyway.

You don't have to explain it to them, nor do you have to justify your
actions.  You're simply taking the choice away from them regardless of what
they code and doing it properly.

>I've found the immediate problem. Where my code apparently really should be
>testing for NEW or MOD it was testing only for MOD, and as JCL apparently
>populates JFCBIND2 with NEW (x'C0') for both NEW and MOD my code was failing
>for both cases.



Re: Difference between DISP=NEW and MOD for a PDS member?

2011-03-04 Thread Gerhard Adam
I don't think it's a question of what to reply; rather, if you feel the
product is poorly documented or supported, then the place to be having this
conversation is with IBM.  I personally can't see any reason for coding
DISP=MOD or DISP=NEW when maintaining PDS members.  As for "customers can do
what they want" ... perhaps so, but not if they expect support.  I had one
programmer code DISP=(OLD,DELETE) to try to remove a member from a LNKLST
library, so I don't accept that argument (fortunately security prevented him
from deleting the entire library).  If they use the DISP improperly, then
you will see this kind of coding.

My point about poor coding practice is that the DISP is used to reflect the
state of an entire data set and not individual members.  So while DISP=NEW
and DISP=MOD are supported, there's no practical reason for their use, and it
creates the erroneous view that DISP processing operates against members.  

While it's not my place to tell you what to do, or how your organization
runs, I'm concerned that you've mentioned that you have a "bug", but haven't
mentioned what IBM's response is to this (perhaps you mentioned it already
and I simply missed it)?

I don't find that the MVS documentation is "scattered" as much as it is
redundant in numerous places.  I don't recall seeing conflicting or
contradictory documentation, so I'm not quite sure what you mean by that
statement.  From what I posted, the information was found exactly where it
should be.

If you don't mind ... what is the "bug" you're experiencing?  Is it behaving
in some unusual way or simply failing?

Adam



Re: Difference between DISP=NEW and MOD for a PDS member?

2011-03-04 Thread Gerhard Adam
Why should it make a distinction?  DISP=MOD and DISP=NEW behave exactly the
same way for a new data set also.

In fact, DISP=MOD doesn't make logical sense for a PDS, since it isn't
sequential data that is being appended, nor does it make sense for a
directory.  While the manual indicates that both forms work, they seem like
poor practice in actual coding.


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of Charles Mills
Sent: Friday, March 04, 2011 4:34 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Difference between DISP=NEW and MOD for a PDS member?

Yes. Notice that it makes no distinction between MOD and NEW for a member.
That is exactly why I asked the question.

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of Gerhard Adam
Sent: Friday, March 04, 2011 4:08 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Difference between DISP=NEW and MOD for a PDS member?

Has it occurred to anyone to just look at the JCL Reference manual?

" When you specify DISP=MOD or DISP=NEW for a partitioned data set (PDS) or
partitioned data set extended (PDSE), and you also specify a member name in
the DSNAME parameter, the member name must not already exist. If the member
name already exists, the system terminates the job.

When you specify DISP=OLD for a PDS or a PDSE, and you also specify a member
name in the DSNAME parameter, the data set must already exist. If the member
name already exists and the data set is opened for output, the system
replaces the existing member with the new member. If the member name does
not already exist and the data set is opened for output, the system adds the
member to the data set.

When you specify DISP=MOD for a PDS or a PDSE, and you do not specify a
member name, the system positions the read/write mechanism at the end of the
data set. The system does not make an automatic entry into the directory.

When you specify DISP=MOD for a PDS or a PDSE, and you do specify a member
name, the system positions the read/write mechanism at the end of the data
set. If the member name already exists, the system terminates the job.

When you specify DISP=SHR for a partitioned data set extended (PDSE) and
also specify a member name, then:

* If the member name exists, the member can have one writer or be shared
by multiple readers, or

* If the member name does not exist, the member can be added to the data
set. Thus, multiple jobs can access different members of the data set and
add new members to the data set concurrently -- but concurrent update access
to a specific member (or update and read by other jobs) is not valid."



Re: Difference between DISP=NEW and MOD for a PDS member?

2011-03-04 Thread Gerhard Adam
Has it occurred to anyone to just look at the JCL Reference manual?

" When you specify DISP=MOD or DISP=NEW for a partitioned data set (PDS) or
partitioned data set extended (PDSE), and you also specify a member name in
the DSNAME parameter, the member name must not already exist. If the member
name already exists, the system terminates the job.

When you specify DISP=OLD for a PDS or a PDSE, and you also specify a member
name in the DSNAME parameter, the data set must already exist. If the member
name already exists and the data set is opened for output, the system
replaces the existing member with the new member. If the member name does
not already exist and the data set is opened for output, the system adds the
member to the data set.

When you specify DISP=MOD for a PDS or a PDSE, and you do not specify a
member name, the system positions the read/write mechanism at the end of the
data set. The system does not make an automatic entry into the directory.

When you specify DISP=MOD for a PDS or a PDSE, and you do specify a member
name, the system positions the read/write mechanism at the end of the data
set. If the member name already exists, the system terminates the job.

When you specify DISP=SHR for a partitioned data set extended (PDSE) and
also specify a member name, then:

* If the member name exists, the member can have one writer or be shared
by multiple readers, or

* If the member name does not exist, the member can be added to the data
set. Thus, multiple jobs can access different members of the data set and
add new members to the data set concurrently -- but concurrent update access
to a specific member (or update and read by other jobs) is not valid."
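
In practical terms, then, the disposition describes the data set rather than
the member, and the normal way to add or replace a member is simply (data set
and member names hypothetical):

   //OUT      DD  DSN=MY.PDS(MEMBER1),DISP=OLD

which, per the DISP=OLD rules quoted above, replaces MEMBER1 if it exists and
adds it if it doesn't.  DISP=SHR behaves the same way for a PDSE.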



Re: 64 bit mode disabled

2010-12-02 Thread Gerhard Adam
>Of course it's not all due to a lot of executable code. It's much more
>complex than that. Like most things in life, there are mixtures of causes.

>Changing CICS is going to take a lot of time and effort, and some work has
>obviously already been done. I personally believe the priority of 64-bit
>exploitation should have been much higher, but I'm certain the development
>team has pressure on it from lots of sources.

Unfortunately, the problem is that even if IBM were to provide this
capability, the likelihood is that the overwhelming majority of customers
wouldn't use it.  Even in the example you cited, if this were available
tomorrow, I would bet that there would be no project on the schedule to
begin exploiting the feature and the customer would continue to run 800
regions.

Adam



Re: 64 bit mode disabled

2010-12-02 Thread Gerhard Adam
>This was confirmed to me by a very senior, knowledgeable person at a large
>manufacturing company in the Midwest. Originally, they had split CICS into
>multiple regions for functional and operational reasons, but the number
>grew from ten to over eight hundred because they kept running out of address
>space for programs.

I tend to remain skeptical that 800 regions is the result of excessive
executable program code.  

Adam



Re: 64 bit mode disabled

2010-12-01 Thread Gerhard Adam
>Perhaps you believe that IBM will *never* support code above the line.  I
>don't happen to think that they are that short-sighted.  We have already
>seen that the loader can load data-only CSECTs above the bar.  I suspect
>that there is more to come.

Depends on what you mean by "support code".  Certainly memory resident
modules/libraries might make sense.  User application code isn't worth the
effort, since there isn't anything that is remotely approaching the 2 GB
limit.  So, it is entirely possible that as an addition to an LPA-like
function that executable code would be supported.

However, I don't understand why any other application would be interested.
It isn't as if any executable code is so large that it can't live within the
2GB virtual area.  Where's the constraint on executable code?  Since data
can readily be stored in 64-bit storage and switching modes is readily
achieved, what exactly is IBM's gain in expending the effort to allow 64-bit
executable code beyond the novelty of doing it?

Adam



Re: 64 bit mode disabled

2010-12-01 Thread Gerhard Adam
>Huh? We're discussing how to get around that restriction.

The point is that there is nothing to "get around".  Executable code isn't
supported above 2GB for numerous reasons already mentioned.  Therefore the
middle 64-bits of the z/architecture PSW only contain zeros.  Compressing
the PSW doesn't incur the loss of any data.  Data loss would occur only if a
64-bit instruction address had to be preserved, but since that isn't valid,
there is nothing lost.

If you attempted it, then you would experience instruction address
truncation of the high order bits.

Once again, there's nothing to "get around", since it simply isn't allowed.



Re: 64 bit mode disabled

2010-12-01 Thread Gerhard Adam
Nothing like that is required.  My understanding is that when the PSW is
stored it is compressed down to an ESA/390 PSW format (check bit 12) and
expanded when it is reloaded.  There is no point at which an 8-byte PSW is
"active" in any sense.

>There are indeed many programs that inspect the 8-byte PSW, but not that many
>that manipulate the address portion. And those that do typically do something
>"reasonable" with the address, such as a) replacing it altogether with another
>address or b) adjusting the address by 2, 4 or 6 bytes (or the ILC) to cause an
>instruction to be re-executed or skipped. z/OS can probably compare the 8-byte
>PSW at re-dispatch time against a copy saved at interrupt time and, if it has
>changed, guess right 99% of the time as to what should be done to the 16-byte
>PSW to effect the intended change(s) just by following a few simple rules.



Re: 64 bit mode disabled

2010-12-01 Thread Gerhard Adam
>The same thing happened with 370 to XA, and probably with revolutions in
previous versions (I wasn't there for them).

Actually it wasn't the same, since the size of the registers/PSW didn't
change during that entire period.  Therefore, while their usage changed,
there were no issues related to storing such values.  Changing the
displacements and layout of control blocks is a major departure, and it seems
virtually impossible to maintain any kind of downward compatibility once such
a change is made.

>We were still struggling with VSCR long after MVS/XA was GA.

One problem was the inability of applications to exploit 31-bit, so VSCR
continued far longer than it needed to.  This isn't the case in z/OS.
Despite the caveat of claiming that we'll never need that much storage, the
reality is that 2GB is a phenomenally large amount of storage for executable
code.  So, other than consuming it for memory residency requirements, it is
not something there is a pressing need to change.

Adam 



Re: 64 bit mode disabled

2010-11-30 Thread Gerhard Adam
How would you branch to code above the 2GB bar, since none is allowed there?
The obvious problem being how you would even get it loaded up there.  If I
recall, the fundamental problem is that the PSW cannot be saved with an
address greater than 31-bit for the next instruction, since neither the TCBs
nor the RBs have a large enough area to store it.


>I have a vague memory of reading, here or on the assembler mailing list,
>that you would need to run disabled if you wanted to branch to code
>located above the bar.  I also have a vague memory that the Wrath of
>God would probably fall on you anyway if you tried it, disabled or not.



Re: 64 bit mode disabled

2010-11-30 Thread Gerhard Adam
If you were disabled it wouldn't be a very practical way to maintain data
buffers.

No, you don't have to be authorized, nor disabled.

> From what I understand running in 64 bit mode (SAM64)  you have to run
disabled  does the
> SAM64 instruction do that

  



Re: IEFBR14

2010-11-20 Thread Gerhard Adam
What would be the point of an eye-catcher?  So you can identify the module
in a dump?

Adam

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Martin Packer
Sent: Saturday, November 20, 2010 9:58 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: IEFBR14

Pathlength to, e.g., branch around an eyecatcher would be trivial compared to 
just getting started.




Re: IEFBR14

2010-11-20 Thread Gerhard Adam
Well with two instructions a single APAR would make the ratio 50%.  If it
only had one instruction at the time, it would have been 100%.  So I'm sure
the ratio is true, even if it is trivial.

Adam

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Ed Gould
Sent: Saturday, November 20, 2010 8:09 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: IEFBR14

--- On Fri, 11/19/10, Paul Gilmartin  wrote:

It is rumored that IEFBR14 has the highest ratio of APARs to
bytes of code of any MVS program supplied by IBM.

I don't know how to substantiate that.  I'd be more confident
that IEFBR14 has the highest ratio of lines of Friday LISTSERV
discussion to lines of executable code.

Just doing my part,
gil

Gil:
The only APAR that I can ever remember for IEFBR14 was zeroing out the return
code in reg 15. I know I was one of those people who asked for the APAR, and
that was a *LONG TIME AGO*. I am talking late 70's or early 80's. I know at
one time there was "talk" about having an eye catcher in the "module" but I do
not recall ever hearing where that ended up. Personally I think it's a waste
of time as we are talking 2 instructions. 
ed




  



Re: IEFBR14

2010-11-19 Thread Gerhard Adam
>just to let everyone know...ca7 problem..

What does that mean?



Re: IEFBR14

2010-11-19 Thread Gerhard Adam
IEFBR14 can't get an S0C4 from the instructions coded in it, since none of
them require addressability.  Therefore, you're looking at a modified version
of it that obviously has code added.
Adam



Re: Virginia DOT outage

2010-09-03 Thread Gerhard Adam
>The way I read the articles, there was mirroring and the failure of
>primary was made disastrous by the failure of the mirroring device. If
>this is the case, what are the probabilities of the same thing on IBM
>devices regardless of the operating system?

That's probably true.  After all, who would ever think to have more than one
backup?



Re: Virginia DOT outage

2010-09-03 Thread Gerhard Adam
>That works fine for files managed by a DBMS. What about ordinary PS/PO 
>datasets that may get updated several times between backup cycles?

They need to be backed up more frequently if they're that critical.  This
isn't rocket science.



Re: Virginia DOT outage

2010-09-03 Thread Gerhard Adam
>No argument but, as we all know, "Stuff" happens.

>Here on "The List", we all have the benefit of long experience and a 
>very high set of quality standards. Not every shop enjoys these attributes.

Sorry, but that's no excuse.  When someone sets themselves up as being the
outsourcer and is being paid in excess of $2 billion, there simply aren't
any excuses.  We're not talking about some small data center with limited
staff.  We're talking about a data center that feels that it is capable of
being in the business of performing IT services for others.

It is simply inexcusable.



Re: Virginia DOT outage

2010-09-03 Thread Gerhard Adam
>A file or database gets corrupted Tuesday evening and Wednesday
>morning review catches it.  Meanwhile further updates have been done.
>How simple is the recovery and damage limitation process?  This is
>just on scenario of failures that can take much time to fix.

The problem with all these "what if" scenarios is that they don't explain
how the problem occurred.  If a morning review can catch a problem, then
what was the condition that presented the previous evening?  Is it something
that could be detected by automation?  Is it something that requires manual
review to prevent its occurrence in the future?

The point isn't that problems can't occur, but what is being done to
mitigate their effects?

When a problem is understood, actions can be taken to prevent its
recurrence.  However, it seems clear that in this case, whatever problem
may have existed, it was ignored.  Nothing takes seven days to recover from
unless you've screwed up virtually everything that was there to
protect you.  



Re: Virginia DOT outage

2010-09-02 Thread Gerhard Adam
That was bad programming practice even then, especially in that timeframe,
when there's no question that the access method would've kicked back an
error condition for attempting to replace a duplicate record.  My point
here is that this is not a technology issue, but a "people not doing their
job" issue. 


>It happened in the late 1970's.

>
>>We had an incident in Illinois where a license plate number was 
>>re-assigned but the database wasn't updated because the plate already 
>>appeared in the database. An innocent man was killed by State Police 
>>because the previous holder of that number was a badly wanted felon that 
>>was characterized as "Armed and Dangerous". You should have seen how 
>>fast the fertilizer hit the Westinghouse! But the worst damage was 
>>already done.
>>
>
>>20-20 hindsight?
>>



Re: Virginia DOT outage

2010-09-02 Thread Gerhard Adam
>If the process seems to be working smoothly, who checks results? And how?

>We had an incident in Illinois where a license plate number was 
>re-assigned but the database wasn't updated because the plate already 
>appeared in the database. An innocent man was killed by State Police 
>because the previous holder of that number was a badly wanted felon that 
>was characterized as "Armed and Dangerous". You should have seen how 
>fast the fertilizer hit the Westinghouse! But the worst damage was 
>already done.

>20-20 hindsight?

You're kidding, right?  Does it sound reasonable that a database application
responsible for license plates was never tested to see how it behaved in the
event of a duplicate record?  How exactly does an application not test for
duplicates?

Once again, if you don't check results, then you can't know what's wrong.

As for how? ... well that depends on the possible errors that can occur.
Every problem has a cause, and I simply don't believe that some corruption
occurred without any indication whatsoever (in the backup or in the day to
day processing).  Something occurred someplace and it was missed.



Re: Virginia DOT outage

2010-09-02 Thread Gerhard Adam
>It turned out our contract with our service provider did not allow for the
testing back-ups.

>And, even though it's a best practice, if you don't pay for it, you don't
get it.

>An application problem, followed by a procedural problem still happens.

I completely understand, but those aren't problems (at least not in any
technical sense).  Quite frankly, if they don't pay for it, they deserve
everything that happens to them.

From the top comment, I can't even begin to imagine how such a contract is
negotiated.  It's worse than amateur hour.



Re: Virginia DOT outage

2010-09-02 Thread Gerhard Adam
>If something starts going wrong and doesn't get detected for a period
>of time, the backups will be contaminated from time of first failure.
>Getting things straightened out can be interesting.  

What can start going wrong that doesn't get detected?  Someone's not paying
attention.

These issues have been around for well over 40 years, and if they haven't
been addressed then shame on whoever's responsible.  It simply isn't
acceptable to say that "something got messed up".  This isn't 1975.



Re: Virginia DOT outage

2010-09-02 Thread Gerhard Adam
>Having been in on a couple of recovery actions due to major files
>getting fouled up, I can believe that things could take up to a week.

How do major files get fouled up with adequate backups?


>There also are some fiascos that real-time
>backup won't guard against.

Not true.  Especially when every company that does DR testing always comes
back proclaiming how successful the test was.



Re: Virginia DOT outage

2010-09-01 Thread Gerhard Adam
>In all those story links, there was no mention at all that their DR plan
was 
>even invoked.

>Maybe it wasn't deemed to be a significant enough failure to warrant 
>executing their DR plan ???

If a seven day outage isn't enough, what would it take?  A nuclear first
strike?  In my view, this is precisely what's wrong with most DR plans: they
don't actually plan for the likeliest cases.



Re: Virginia DOT outage

2010-09-01 Thread Gerhard Adam
>What we can learn here is to be more careful when using "modern" storage
>systems. They are more a computer system than storage today. And it doesn't
>matter whether you have a mainframe or a Nintendo server; both need data to
>work.

This has nothing to do with "modern" storage systems, and everything to do
with "modern" attitudes that have forgotten the lessons of the past.



Virginia DOT outage

2010-09-01 Thread Gerhard Adam
Regardless of the cause, doesn't this say more about the disaster recovery
scenario than anything else?

 

Adam

 




Re: Anyone Using z10 Large Pages in anger?

2010-08-15 Thread Gerhard Adam
>Sometimes you just have to take IBM's word for certain recommendations as they 
>are very difficult to measure outside a lab.  One example that comes quickly 
>to mind is CPENABLE.

True enough, however perhaps one of the reasons it's difficult to measure 
outside the lab is that there is no appreciable difference except in the lab.



Re: Anyone Using z10 Large Pages in anger

2010-08-15 Thread Gerhard Adam
>In theory, getting 1 Storage Obtain for 1M should be less overhead than 256 4K 
>versions. The virtual storage algorithms to get to the data are supposed to be 
>cheaper (fewer levels of hashing).

It is my understanding that the primary "benefit" is to extend the reach of the 
TLB so that effectively, address translation occurs at the segment rather than 
page boundary (in terms of TLB entries).

This seems like a difficult enough metric to obtain, and other than theoretical 
explanations, it doesn't seem likely to become easier.  It is all predicated on 
the notion that for many large memory applications, the concept of 
"super-pages" can increase the practical usefulness of the TLB without having 
to make the size any larger.

In that way a 2x256 way TLB implementation can suddenly span 512 MB instead of 
only 2 MB. 
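
The arithmetic: 512 TLB entries backing 4 KB pages cover 512 x 4 KB = 2 MB, 
while the same 512 entries backing 1 MB pages cover 512 x 1 MB = 512 MB, a 
256-fold increase in reach with no increase in the TLB itself.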



Re: Anyone Using z10 Large Pages in anger

2010-08-15 Thread Gerhard Adam
> We are still using large pages but have been doing it in steps over the 
> last  6 months (IPL with larger LFAREA, convert more WAS regions to 64-bit).

OK, I give up.  Why?  What is the benefit versus the cost?  Even the original 
literature suggested that it could cause performance degradation for some 
applications (although obviously not because of bugs).

So I'm curious.  Who is actually measuring this?  What is being measured?  And 
how did anyone determine that it would be beneficial?

Adam



Re: Anyone Using z10 Large Pages in anger?

2010-08-14 Thread Gerhard Adam
>Thus there is a hole in the RMF metrics (unless ALL large pages are being 
>used) covering unused large pages, and indeed, the total large pages present 
>on the system.

Actually my thought is also a "so what".  The other issues, the 4K to 1M 
split and coalesce and the need for an IPL to change LFAREA, are certainly 
legitimate.  However, this has nothing to do with the original point about 
metrics.

What is interesting is that since no one seems to have measured any 
performance benefit from large page support, the obvious question is: why use 
it at all?  That solves all the concerns in one stroke.



Re: Anyone Using z10 Large Pages in anger?

2010-08-14 Thread Gerhard Adam
>My current main concern is that once large pages are deployed, a complete 
>memory map of real memory will no longer be possible until the missing 
>metric(s) are added (and at what level of z/OS that would take effect/be 
>retro-fitted).  As things stand, this missing 'unused large pages' metric in 
>RMF SMF records will seriously mess with our real memory monitoring 
>capability (which is important to us, as we maintain everything in real in 
>production - well, attempt to!)

I'm not clear on what your concern is.  Is it your "monitoring capability" or 
is it the actual performance?  What is it you expect to monitor with these 
metrics?  Why should having an unused large page be any different from a 
currently unused segment?  If it's unused, then by definition it has no 
impact, unless you're suggesting that it restricts the available memory for 
other users.  

In short, I'm not clear on what you're monitoring with real memory maps nor why 
you care, but it's your installation ... 

Adam



Re: gssa started task - very high cpu utilization

2010-07-22 Thread Gerhard Adam
Since GSS is used to run the IMODs for the System Condition Monitor (SYSVIEW), 
its CPU usage will be directly related to how many are run and the frequency 
with which they run.  In addition, it would be useful to check whether you 
also have Insight/DB2 running, since that is often involved in higher CPU 
utilization.



> This is my first post to this list. We are experiencing very high MIPS usage 
> by a third party product - SYSVIEW/CA Common Services. Since I upgraded to 
> SYSVIEW 12.5, the GSSA (STC for Common Services) has been causing high spikes 
> in CPU during various times of the day. Of course I am working this issue 
> with the vendor, but I thought someone may have experienced this at their 
> site. We are running z/OS 1.10 - a sysplex of 10 LPARs, another with 7 and 
> another with 3. Some of the CPUs are z10s and a few are z9s. The previous 
> releases did not cause any of these issues. Maybe there is an APAR that 
> someone knows of? 



Could two parallel sysplex share one CF LPAR

2010-07-06 Thread Gerhard Adam
>Normally, one parallel sysplex uses its own CFs.

> I wonder whether two parallel sysplexes could share one CF LPAR 

That's the definition of a parallel sysplex.  Therefore when any images share 
the same coupling facility (CF), they are, by definition, part of the same 
sysplex.

In actuality, you aren't asking about sharing a coupling facility, but rather 
whether you can partition a coupling facility to segregate the operations of 
two separate parallel sysplexes.  The answer to that is no.



Re: Is it 5%

2010-06-25 Thread Gerhard Adam
>>So, what is the point, on today's fast machines, about worrying about the 
optimisation of the small stuff?

This statement makes no sense at any level.  What constitutes "user code"?  
Does that only apply to instructions that I've personally crafted?  If I 
read/write an unblocked file, is there no room for improvement simply because 
I cause an access method to perform an I/O hundreds of times more often than I 
need to?

More importantly, why distinguish between a service that a user would have to 
perform anyway and one that is provided by the system?  Does microcode count, 
or only hardware-implemented instructions, when classifying "user code"?

If I invoke a REXX built-in function is that user code or vendor code?

Beyond that, you're ignoring the fact that coding is rarely a singular issue; 
rather, it involves poor designs and implementations.

Even if this percentage could be corroborated based on some arbitrary 
definition of what delineates "user" from "system" code, the point is 
irrelevant.  Systems behave as an aggregate of their code and the requests 
they are expected to satisfy.  To treat any piece in isolation completely 
misses the point.

Adam



WLM question that I'm afraid I know the answer to

2010-06-24 Thread Gerhard Adam
Yes, just QUIESCE the job.

Adam



Re: SMF99 record

2010-06-02 Thread Gerhard Adam
> So to me, collecting them and storing them all the time is a waste since they
> are really only useful (for the most part) to IBM and for diagnostic purposes.
> Comparing SMF 99 to CICS and DB2 SMF records ... or SMF 30s and 70s is
> not a good comparison.  It's like keeping a trace on all the time (even though
> it might be low overhead) just because you may want it someday.

They certainly aren't used frequently, but they can be very useful in helping 
determine why certain actions were or weren't taken by WLM.  As for keeping a 
trace on all the time: in truth, that's irrelevant, since you aren't turning 
the trace off.  The only thing you're turning off is the actual collection of 
the data.  It's easier to have the data on hand when trying to resolve a 
particular WLM decision than it is to try to recreate the problem.
Adam



Re: U.S. house decommissions its last mainframe, saves $730,000

2009-10-11 Thread Gerhard Adam
Since when does the U.S. House of Representatives mean all of Washington, D.C.? 
 More importantly, since when is $730,000 much of a data center?



Sites running CA-SYSVIEW release 12.

2009-03-09 Thread Gerhard Adam
Was anything changed when SYSVIEW 12.0 was installed regarding the ISPF panels 
it is invoked from?  In other words, if it was moved to a sub-menu, then you 
could experience a problem like this.

This sounds like an ISPF navigation issue (since SYSVIEW doesn't actually have 
anything to do with the use of the = sign or semicolon in this context).

Adam

>We are having a problem when in CA-SYSVIEW, then entering the ISPF 
>jump key character '=' (equal sign) in combination with the TSO/ISPF 
>command separator character ';' (semicolon)
>example given  =ze;2

>which results in ISPF going to the primary options menu rather than 
>the application Ze  (for us it is the job scheduling package Zeke).


>I am looking for one or more sites that are running CA-SYSVIEW release 12 
>and might still have CA-SYSVIEW 11.6 laying around.



Re: Performance Question for your collective consideration

2008-12-23 Thread Gerhard Adam
The primary benefit of multiple engines is that they will reduce the 
probability of queueing for the processor, so if there are a sufficiently high 
number of ready tasks, then this can improve throughput (not necessarily 
response time).

However, in the scenario you've described the other configuration has engines 
which are about 60% faster, which means that each set of instructions will 
complete about 60% faster, allowing more work through the dispatcher in a 
shorter period of time.  Once again, queueing would be reduced.
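
(With equal total capacity, each of the 3 engines has 5/3 the capacity of one 
of the 5, i.e. it is roughly 60-70% faster, so a given single-threaded 
instruction stream finishes in about 60% of the elapsed time.)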

A show-stopper is if any of the single-TCB units of work would exceed the 
resources available on an engine (in the 5-way).  In this case, a faster 3-way 
is the only option (or splitting the unit of work to create more competing 
units).

On the other hand, it is highly unlikely that the long-running transactions 
would actually benefit from the faster engines since they are probably I/O 
bound which will tend to dilute any benefit of the faster engines.  Unless a 
unit of work is particularly CPU bound, then simply having faster engines isn't 
likely to make much difference.

In general, the guideline should be based on the level of ready tasks (i.e. 
potential for queuing), which will be more readily handled by multiple engines. 
 In addition, the larger the number of engines, the flatter the usage "curve" 
is since queuing occurs significantly later than with fewer engines.

It is important to note that simply having ready tasks is not a criterion for 
requiring more engines.  The point is that if there is a sufficiently large 
number of tasks that some units of work are continuously blocked (or there is 
workload that can benefit from parallelism), then the higher the number of 
engines, the better the throughput will likely be.  

I can't emphasize this enough: if there is no queuing, then there will be 
little or no difference in the performance of these two machines.  Also note 
that all systems will have queues, but the queuing must be a large enough 
component to impact the throughput or response time.  In other words, if the 
purpose is to run 10 transactions per second, then the 5 engine machine will 
only need to deliver 2 transactions per second per engine, while the 3-way 
will have to run about 3.3 transactions per second per engine.  If each of 
these transactions were to take 50 milliseconds to execute, then 900 
milliseconds per second would still be available on each engine for other 
work, even though we might well be in the queue waiting for access (the 
single engine, 5-way example).

We could also see this using a simplifieid Velocity calculation similar to what 
Workload Manager uses.  If there were 10 ready tasks in both configurations 
(and neglecting higher priority work), the balanced velocity on the 5-way 
processor would be 50%, while on the 3-way it would be 30%.  Both values would 
represent the same level of throughput.  (Of course in reality it would vary 
more than that because of higher priority tasks and varying number of ready 
tasks).
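
As a back-of-the-envelope illustration (this is only the arithmetic from the 
example above, not WLM's actual algorithm):

# Toy sketch of the two comparisons above; illustrative arithmetic only.
def balanced_velocity(engines, ready_tasks):
    # With equal-priority ready tasks competing for the CPs, velocity is
    # roughly using / (using + delay) = engines / ready tasks.
    return min(1.0, engines / ready_tasks) * 100

def per_engine_rate(total_tps, engines):
    # Transactions per second each engine must deliver.
    return total_tps / engines

for engines in (5, 3):
    print(engines, "engines:",
          round(per_engine_rate(10, engines), 1), "tps per engine,",
          round(balanced_velocity(engines, 10)), "% velocity at 10 ready tasks")
# 5 engines: 2.0 tps per engine, 50 % velocity at 10 ready tasks
# 3 engines: 3.3 tps per engine, 30 % velocity at 10 ready tasks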

Anyway ... my two cents


>...  Some of the transactions are coded in a way I saw many
>years ago that I characterize as running a batch job in a CICS.  I hold my
>breath while they run and pray the web side of the interactive workload
>doesn't back up or time out and cascade to a bigger issue.  These exist for
>historical reasons, which I am trying to be delicate about discussing.

>We have a TOR and a couple of FOR regions.  We have enough online related
>TCBs hot at a time that I lean more towards the 5 CPU machine, but some
>others here want the 3 CPU machine to push the single threaded stuff faster.



Re: Performance Question for your collective consideration

2008-12-19 Thread Gerhard Adam
> 
> If you had two machines, equal MIPS z10 BC boxes, would you want the box
> with 5 CPUs or the one with 3 CPUs?  Memory, etc all equal.
> 

Well, therein lies your problem.  They are NOT equal machines, and the 
comparison is incorrect because you're using that nonsense metric, MIPS.

If we use your example and simply said that the total machine configuration was 
600 MIPS, then the one machine would actually be a 5 x 120 MIPS machine and the 
other would be a 3 x 200 MIPS machine.  They would be quite different in the 
power available for any given set of instructions.

This is only one reason why MIPS is such a bad number to use and is generally 
so completely misunderstood.  The most obvious point is that if 600 MIPS were 
the power available, then it is clear that this is wrong since no single unit 
of work could actually use it.

Adam



Re: tracking UNDISP. TASKS

2008-08-21 Thread Gerhard Adam
RMF produces an interval report which shows the queue depth during that 
interval.


Also, if you look in the RMF Report Analysis manual you'll get some general 
guidelines about how to interpret these values.  It has been improved to 
account for the number of engines in your configuration.


Adam



RMF CPU report. IN READY tasks. You might also check out the OUT READY
queue.

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/ERBZRA50/CCONTENTS?SHELF=ERBZBK50&DN=SC33-7991-10&DT=20050721091849

Watch the wrap.


Filling in for a performance guy - I need to find out over time the
number of UNDISPATCHED TASKS on our overloaded z9's... a peak per
interval would be great.





Re: Going unsupported - time to fold?

2008-07-09 Thread Gerhard Adam
It might succeed, if management throws enough manpower and servers into 
it. But at what expense? Who's going to coordinate the implementation? 
Who's going to determine and map out all the interactions between datasets 
and processes? How will data be moved from one process to the next, 
perhaps on a different server? How many "administrators" will be required 
to maintain that whole enterprise (polite word for mess?) Are programs to 
be translated from COBOL to another language? Who's going to train staff? 
At what expense?


I believe that these are all valid questions that need to be answered 
before any prognosis of success or failure can be made.


In my admittedly limited experience, that server farm is going to take a 
significant increase in manpower just to maintain the status quo; finding 
and/or making a developer staff is also going to be a major headache. You 
might find people that know the languages, but they still need to learn 
about the business. And as others have already noted, security and legal 
issues will further muddy the waters.





All this may be true ...

The problem is not whether it is true or not, but whether it is really your 
responsibility to raise these questions.  It's one thing if your input is 
being solicited, but it becomes problematic when management is being 
questioned about expenditures and costs by subordinates.


As I said, these questions need to be answered, and ultimately they will be. 
However, all too often, the people raising concerns don't have the complete 
story (and are not involved in the decisions), so when they begin to push 
the issue they are perceived as stepping way over the line regarding both 
their authority and responsibility.


From the general tone of this discussion, we need to take a step back and 
stop assuming that we're the only people that care what happens in an 
organization. 





Re: WLM and skip clock

2008-07-09 Thread Gerhard Adam

In some past posts, while discussing WLM, someone introduced the 'skip 
clock' concept to describe the behaviour of WLM when goals are not reached, 
for instance because they are quite high and there isn't enough capacity to 
boost them.  In this case WLM ignores them (or leaves them with the achieved 
velocity) until the skip clock expires (so I was told).

Is this skip clock concept described somewhere (manuals, papers ...)?






From the WLM Services manual:


"They are also skipped when a potential receiver's skip clock hasn't 
expired. If a receiver is assessed and all actions to help it are rejected, 
a skip clock (counter) is set and the service class period will not be 
selected as a receiver again until the skip clock expires." 





Re: WLM and skip clock

2008-07-09 Thread Gerhard Adam

In some past posts, while discussing WLM, someone introduced the 'skip 
clock' concept to describe the behaviour of WLM when goals are not reached, 
for instance because they are quite high and there isn't enough capacity to 
boost them.  In this case WLM ignores them (or leaves them with the achieved 
velocity) until the skip clock expires (so I was told).

Is this skip clock concept described somewhere (manuals, papers ...)?


Check out the WLM Services manual in the Chapter "Using SMF Record Type 99". 
It's not discussed heavily, but it indicates its use.


Adam 





Re: Going unsupported - time to fold?

2008-07-08 Thread Gerhard Adam

FWIW, I pretty much agree with you.  I'm not terribly comfortable with
the "Let it fail" philosophy, though; I would feel obligated to try to
save the company/agency some pain if I could do it simply by pointing
out some potential red flags.



"Let it fail" is not a philosphy, but rather it is an attempt to let 
decision-makers be responsible for the decisions they make.  In the scenario 
described there is apparently a management decision and a project manager 
that believe that the z/series processor can be migrated in the next 6 
months.  It would appear that the onus is on them to deliver.


From their perspective, there is no "pain" from which to save them.  Any 
attempt to convince them otherwise will only reflect negatively on the 
"advice-giver".


Besides ... what makes everyone so sure that they're wrong?

Adam




Re: Going unsupported - time to fold?

2008-07-08 Thread Gerhard Adam
I hate to rain on anyone's parade, but all the posts notwithstanding, the 
problem here isn't failure, but the fear that it might actually succeed.


If the project is as doomed to failure as many have been saying, then so 
what!  Let it fail.  It might be a difficult time and a humbling experience 
for some, but in the end the original system will be vindicated.  However, I 
suspect that the real concern is that it might actually work.  Maybe not as 
well ... maybe without all the bells and whistles, but in the end if it does 
the job, then management will have been vindicated in the decision.


Doom and gloom is not a way to convince management of the proper course.  If 
the project can succeed, then you'd better get on board now, because the 
train is leaving the station.  If the project is doomed to fail, then that 
will also be apparent soon enough.




1) if by going unsupported - you lose the ability to apply fixes to
software that might have otherwise fixable security holes - are you
therefore in violation of HIPAA? S-OX? or, if you have government or
military contracts, in criminal violation of government-mandated data
privacy protection laws?


What does this even mean?  How would you define a "fixable security hole"? 
Does this mean I only have to have support for RACF (or pick your favorite)?




2) or at the very least - is leaving yourself vulnerable to un-repairable
software grounds for a share-holder (taxpayer in this case) lawsuit?


Not likely.  Since the only way for problems to surface is if you make 
changes, or exploit new features, then by definition you won't have 
unrepairable software if everything is frozen.  In the event that something 
did happen, it's already been pointed out that help can be obtained for a 
fee.  (Of course, none of this is applicable if "out of support" goes on for 
prolonged periods). 





Re: Practical jokes for mainframe systems programmers

2008-05-21 Thread Gerhard Adam

How about an operator command we wrote as follows:


$TSYS=HI

$HASP000  OK
$HASP999 System now in high speed

Adam




Re: JES2 DD Concatenation issue

2008-03-28 Thread Gerhard Adam

Paul Gilmartin wrote:

To my meager understanding, if JES2 is down you're SOL, whether
or not it ENQs (but is there a special case in which JES2 might
crash but fail to free the ENQS?)  By my ancient experience,
if JES2 is down, TSO is likewise SOL (or was it Roscoe, then?)



I'm a bit surprised that no one appears to be using the JES2 backup proc.

S JES2BKUP,JOBNAME=JES2,SUB=MSTR

(Where the name of the proc is whatever you'd like it to be).

A stripped down version that allows you to bring up JES2 regardless of the 
state of PROCLIBs would seem a reasonable action.


Adam 




Re: Is IT becoming extinct?

2008-03-24 Thread Gerhard Adam

What's much harder for both data processing and for users is to figure
out how to collect and use data that might give us that competitive
advantage - without spending more than the return.


Agreed.  But that's a question that's independent of technology.  That isn't 
to say that technology can't assist in this question, but it can't drive it.


Sometimes IT organizations forget that they didn't invent information, they 
simply are a means of managing it.


Adam 




Re: Is IT becoming extinct?

2008-03-24 Thread Gerhard Adam

More on Nicholas Carr from my favorite pundit.

Carr-ied away: http://www.issurvivor.com/ArticlesDetail.asp?ID=651

Carr-toonish engineering:
http://www.issurvivor.com/ArticlesDetail.asp?ID=652



There is also some truth in these articles, and they should be carefully 
considered.  One problem is that it appears that everyone is applying 
whatever interpretation they like to the term "IT" and then using it to 
support their particular point of view.


The author criticizes Carr using absurdity " Every business has access to 
the same everything as every other business -- the same technology, ideas, 
people, processes, capital, real estate ..."


This is patently false, since neither the ideas, people, capital, nor real 
estate are commodities in any sense implying equality.  Processes may or may 
not be identical, which doesn't really say much, but to trivialize this 
argument by suggesting that all these elements are on an equal par with 
technology for comparison is seriously disingenuous.  By the author's own 
admission, "Most internal IT organizations long ago changed their focus. 
They seldom develop. Mostly they configure and integrate purchased 
applications."  What is this if it isn't turning applications into 
commodities?


Don't get me wrong.  I'm not here to defend nor support Nicholas Carr.  What 
I am saying is that to dismiss some of these points out of hand is also 
wrong, and bears some scrutiny in assessing what is occurring within IT. 
(Disclaimer:  I am not a supporter of Nicholas Carr, nor am I familiar with 
his writings beyond those stated in these posts).


Consider this:

In the early years (decades) of computing, there was a strong incentive for 
companies to develop in-house applications because of the competitive 
advantage this could provide.  An idea could be developed and implemented 
that might completely blind-side a competitor and provide a significant 
business advantage.


Increasingly, this is no longer the case and we have seen a decline in the 
need for large development staffs with a significant portion of software 
being purchased from outside providers.  In other words, many applications 
have become commodities that no longer convey advantage.


Therefore to determine which direction Applications Development is taking 
within the term, "IT",  compare how many new systems and/or applications are 
being created in-house versus those that are purchased "off the shelf".


We could also consider what's happening with IT Operations, and it should be 
abundantly clear that there is a higher degree of automation and system 
tools which have been brought to bear, so this area has also shrunk to 
"commodity" levels.  In other words, in today's environment the operator 
also requires less expertise.


In systems programming, we have also seen greater consolidation of hardware 
resources and more software tools being made available to gain economies of 
scale.  Because of these changes, fewer people can support larger 
configurations.  The responsibilities have also become more specialized with 
the vendors providing a greater role in supporting systems than in previous 
decades.   Systems programmers (in many organizations) have become largely 
supplanted by systems administrators.


In all these cases, the argument can be made that IT has evolved to be 
functional with fewer individuals and less expertise (on hand, on a daily 
basis).  This doesn't mean that the expertise isn't required, but rather 
that it doesn't have to be on staff as a permanent position.  This is one 
reason for the rise of outside service providers.


Similarly, even though many application components are commodities, many 
other elements are also assumed, so resources need to be expended to live up 
to the expectation.  For example, in the past good response time was useful 
to improve productivity; in today's "commodity" environment it is an 
expectation that the customer has.  While it is a "commodity" it is also an 
expectation, so failure to provide expected services becomes a 
competitive disadvantage in today's IT world.  In the past a database might 
have provided advantage by allowing a corporation to access customer data 
more quickly than a competitor.  In today's environment, the database is 
"assumed" and failure to be able to access customer data is a liability.


Without reading too much into it, I would suggest that Nicholas Carr has a 
legitimate point when he says that IT can no longer be assumed to carry a 
business advantage.  In addition, it would appear that it really doesn't 
"matter" from a purely technical perspective.  However, like all the other 
technologies that business relies on, the advantage comes from providing a 
high quality level of services for "expected services" and deploying these 
"commodities" effectively to enhance the business environment.


It seems like everyone wants a black or white argument, in that IT either 
goes away completely, or

Re: Is IT becoming extinct?

2008-03-24 Thread Gerhard Adam
Predictors of the mainframe demise are probably of the same genre as 
those "experts" (who have probably never opened a science book) who are 
expounding the dangers of this "global warming" nonsense.




Why is it that "experts" that expound global warming are fools, but 
"experts" that denounce it are assumed correct?  Can I assume that your 
statement (on global warming) is based on your expertise and that you aren't 
simply repeating something you've heard?


My point is that expert opinions are just that ... opinions.  The article 
under discussion is flawed at so many levels that it's difficult to know 
where to start.


First, there is no mention of the mainframe, its demise, or anything 
relating to it in the article.


The article suggests that IT (as an organization of individuals) is verging 
on extinction, but then proceeds to suggest simply that someone else (i.e. 
non-technical personnel), can fulfill the role, or perhaps outsourcers?


If it's outsourcers, then the argument that "expertise" is no longer needed 
simply falls apart, since it is only a transfer of expertise to a different 
supplier.  However, if the argument is that computer technology is becoming 
simple enough so that anyone can support it, then we have to examine how 
realistic that position is.  I would suggest that more suppliers of 
"expertise" have surfaced because of smaller systems (i.e. "Geek Squad", 
etc.).  This would suggest that the simplicity of personal systems has 
failed and that (other than the power user), the home consumer now has a 
greater need of technical support.


Are we to believe that this situation will suddenly be resolved at the 
corporate level by those same users?


One of the most serious flaws in the article is to dismiss IT as a 
"commodity".  This is easy to say when systems are in place and functioning, 
but it does nothing to address the issue of new applications and the 
infrastructure needed for data management.  Are we to assume that systems 
design, data management, security, communications, etc are also commodities?


The truth is that IT services will not be eliminated, although how they may 
evolve (or mutate) is certainly beyond most of us.  As systems become 
"simpler" for the end-user, there is a higher degree of expertise required 
to deal with the complexities that deliver that "simplicity".  IT services 
have always evolved as the technology has changed.  Businesses have always 
tried to save money and sometimes these two entities have collided with 
harsh consequences.


One point that should be considered, is that much of the article doesn't 
suggest extinction, but rather a higher degree of competition with outside 
providers (either software or services).  This is certainly going to be the 
case, so anyone that thinks that IT will, or should, be "business as usual" 
is in for a rude awakening.  The biggest danger to most IT organizations is 
that they don't realize they're even in a competition and that will 
certainly cause them to go away in favor of their competitors.


Despite its failings, this article should serve as a wake-up call to anyone 
thinking that IT is simply a job you can go to for 30 years and then retire.


Anyway ... end rant

Adam




Re: Define Pagespace -- SWAP vs NOSWAP?

2008-03-20 Thread Gerhard Adam
... and logical swap is basically about quiescing work - rather than 
freeing up the memory.




But I'll bet that logically swapped address spaces are the first "losers" 
if page stealing has to kick in.


Not true.  Since stealing is no longer associated with address spaces, but 
rather global storage use, this particular bias can't occur.


Adam 




Re: Price of CPU seconds

2008-02-24 Thread Gerhard Adam
How to determine what the charge will be is both a political and an 
accounting issue.

With politics being the major one.

Resource-based chargeback is only understandable by IT, and not in every 
case.


I agree, however I'm surprised that no one has mentioned the obvious point 
that needs to be considered.  Chargeback shouldn't be based on usage, but 
rather on the capacity that has been reserved for the anticipated load. 
Every application ultimately contributes to the size of the configuration, 
and serves as the basis for the capacity installed.  Therefore, whether that 
capacity is used or not, it indicates the resources that have been set aside 
for a particular application and represents the resources which should be 
charged.


The amount of reserve capacity available is ultimately a political decision, 
but a user should be charged based on the resources that need to be 
maintained to provide their required level of performance and throughput. 
It doesn't matter whether the user actually uses all the resources all the 
time, since they must exist in the event that they are needed.


Obviously there may be many ways in which these various accounting processes 
can be implemented; however, the ultimate problem in most chargeback 
scenarios based on usage is that they presume that idle resources are 
somehow free (or worse, that they should be a distributed cost among all the 
users).
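
A minimal sketch of the idea, with invented figures (the point is only that 
the cost key is reserved capacity, not measured usage):

# Capacity-based chargeback sketch; all names and numbers are hypothetical.
annual_cost = 6_000_000      # total cost of the installed configuration
total_capacity = 1_000       # installed capacity units (e.g., MSUs)

# Capacity reserved to meet each application's required service level:
reserved = {"billing": 400, "claims": 250, "test": 100}

for app, units in reserved.items():
    charge = annual_cost * units / total_capacity
    print(f"{app}: ${charge:,.0f} per year")

# The 250 unreserved units are headroom: a sizing decision, not a free
# resource to be quietly spread across everyone's bill.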


Adam



Re: CA-SYSVIEW

2008-02-22 Thread Gerhard Adam

Janet:

Please contact me offline and I can give you a bit of information regarding 
this product.


Gerhard Adam

- Original Message - 
From: "Grine, Janet [GCG-PFS]" <[EMAIL PROTECTED]>

Newsgroups: bit.listserv.ibm-main
To: 
Sent: Friday, February 22, 2008 1:06 PM
Subject: CA-SYSVIEW


I am researching a product called CA-SYSVIEW.  Can anyone offer any
opinions on this product?  I did notice that Pat O'Keefe seems to like
it.  Anyone else?



Thanks in advance,



Janet (Jill) Grine










Re: Price of CPU seconds

2008-02-22 Thread Gerhard Adam

Yes, it is the question of the communication with Linux and NT people.
I wanted to explain to my colleagues why the 0.5% constant CPU usage for 
an idle server matters in a large z/OS system.
An argument would be if I could say: in a week it is "nearly" an hour, 
and an hour of CPU in a large system is about ... $ or ... Euro.


A much bigger question is whether the Linux and NT people have the ability 
to sustain 100% utilization and still meet performance objectives.


Until that point is understood, maximizing the usage of z/series will be 
completely foreign to them.


One other point to consider is that the argument you're making about how 
"expensive" 0.5% utilization is will actually backfire, since it creates the 
impression that the mainframe is unnecessarily expensive.


This also presumes that the Linux/NT people have an understanding about what 
constitutes 0.5% usage on a z/series versus their respective platforms.


Adam 




Re: Block Paging in z/OS V1.8

2008-02-21 Thread Gerhard Adam

z/OS 1.8 changed a lot of things in this area.  Page replacement, UIC
calculation and updating, no more physical swaps of ASIDs to AUX storage
to name a few.  So I'm not surprised you don't see blocked paging without
the physical swaps.



There hasn't been a lot of detail published about the z/OS 1.8 changes, but 
it seems that the changes in UIC (as well as the change to global 
management) would preclude block paging functions.  If I recall, the concept 
of block paging involved stealing pages from within an address space that 
shared the same UIC value.  Both of these concepts no longer exist (in that 
form), so it would seem that without some other change being indicated, 
block paging would no longer function as previously.


Adam 




Re: Price of CPU seconds

2008-02-20 Thread Gerhard Adam

OK, all you guys are right. There is no way to do charge-back
accounting. All the formulae are wrong regardless of what they are.


First of all no one said you can't do chargeback, but only that the 
simplistic solutions being proposed aren't accurate.


If you have a formula, I'd love to see it



Re: Price of CPU seconds

2008-02-20 Thread Gerhard Adam


Since what I have said is so ridiculous, why don't you take a crack at
answering the original question? Then we can all take pot shots at what
you say, pointing out that the example you used, based on a real system,
is absolutely _ [fill in the blank].



First of all, these comments weren't intended as a personal attack, and I 
never indicated that YOU were ridiculous, but rather that the proposed 
mechanism for costing CPU seconds was.


The primary reason they are ridiculous is that the question that you have 
posted cannot be answered in a simple list-server and to suggest otherwise 
truly is ridiculous.


The original question is wrong at so many levels, especially if one thinks 
that someone might actually be contemplating an I/T accounting/chargeback 
system based on an answer from this list.


This process is far too complex and requires far too many levels of 
management/accounting/technical involvement which is precisely why so many 
of these systems are simply ridiculous.


Adam 




Re: Price of CPU seconds

2008-02-19 Thread Gerhard Adam

That example stated that $800 per CPU/hr was the cost for a machine
(undefined as to number of CPUs, MSUs, etc.). It also did not state what
the system software cost, etc. It was a number used to give an example.
And it was about what the cost was for an Amdahl 5990-1400 w/ 2 Gig
C-Store and 2 Gig E-Store (and I can't tell you what circus bureau had
that cost per CPU hour, but I used to work in the facility).

So, the more processors you have to split the costs across, the lower
the CPU/hr charge may be. And those charges are based on the SMF
collected times (since that is what fed the accounting system).

And then there could be charges for I/Os, tape mounts, ATL mounts, etc.
(all things done by Circus Bureaus).

So it is not so ridiculous.



It is clearly ridiculous since the projected $7,000,000 (by your own 
definition) doesn't include peripherals.  Also, more processors doesn't 
SPLIT costs, it multiplies them since more power is presumably available for 
more work.  Since a single unit of work can only take advantage of a single 
engine, then costs should have no bearing.  Same service consumed 
regardless.




Re: Price of CPU seconds

2008-02-19 Thread Gerhard Adam

You're not going to have that overhead for only one day.  0.5% of a $10
Million computer is $50,000.


... This simple example is based on the notion of the computer costing
$10,000,000 every day.


No, it isn't.  It's based upon the total life cost and assumes that the 
processor

is kept until it has very little value.



Then why mention the $50,000 in the first place, since you didn't qualify it 
over some time span?  The point is that these types of numbers get discussed 
a lot when comparing costs, but without a timeline (AND some way of 
correlating computing power available per unit of cost) this is all simply 
nonsense.


Adam 




Re: Price of CPU seconds

2008-02-19 Thread Gerhard Adam

You're not going to have that overhead for only one day.  0.5% of a $10
Million computer is $50,000.  Of course, that ignores software costs and 
the

other things that have been mentioned.  To think only of CPU seconds
trivializes it.


Unfortunately these kinds of calculations are not only misleading, they are 
wrong.  This simple example is based on the notion of the computer costing 
$10,000,000 every day.  Whereas projecting this cost of $50,000 over the 
period of a year results in about $0.0015 per second in costs.  So, in order 
to consume 0.5% of the available computing power over a year, that also 
represents 0.5% of all the available time during that year, or 43.8 hours of 
computing time.


One of the other examples uses $0.22/second, which results in nearly 
$7,000,000 per year in costs.  So it's not too difficult to extrapolate that 
a 20-engine configuration would recover about $140,000,000 in costs, and so 
on up to 54 engines for the z9 processor.
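
Reproducing the arithmetic makes the scale plain (figures from the examples 
above; illustrative only):

# Sketch of the cost arithmetic above; illustrative figures only.
SECONDS_PER_YEAR = 365 * 24 * 3600            # 31,536,000

cost_per_second = 50_000 / SECONDS_PER_YEAR   # ~$0.0016 per second
hours_at_half_percent = 0.005 * 365 * 24      # 43.8 hours of CPU per year

per_engine_per_year = 0.22 * SECONDS_PER_YEAR # ~$6.9 million at $0.22/second
twenty_engines = 20 * per_engine_per_year     # ~$139 million

print(round(cost_per_second, 4), round(hours_at_half_percent, 1),
      round(per_engine_per_year / 1e6, 1), round(twenty_engines / 1e6))
# 0.0016 43.8 6.9 139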


These numbers and their calculations are ridiculous.  The notion of a single 
number metric for chargeback is every bit as ludicrous as suggesting there 
is a single-number performance value.



Adam 




Re: CPU time differences for the same job

2008-02-07 Thread Gerhard Adam

 3. You can run the same job multiple times to see what the
variability is for the particular job.


Agreed, that seems to be the only sensible solution, though not an
entirely satisfactory one.



Why would this be unsatisfactory?  On a z9 processor, 1.7 billion machine 
cycles (operations) can occur every second, for every engine.  What makes 
you think that you can even remotely begin to describe what "identical" 
means for any given unit of work.  The very nature of the interactions 
between programs practically requires variability rather than 
predictability.  Even if you thought you could isolate each event, there are 
many that will be unknown (not disclosed in the architecture), subject to 
variation due to the last function performed, and unmeasurable.  Why would 
you think that absolute single-measurement repeatability was possible?


As I stated before, there will be variations, but they aren't random. 
Similarly in all my years, it is no coincidence that most operations 
departments were quite comfortable in knowing roughly how long any given job 
was expected to run (I know there are/were exceptions).  The reason they 
knew this is because it was repeatable.  The fact is that an "identical" 
unit of work will operate around a range of values and can be measured. 
Once measured, then the variation can be analyzed and a reasonable 
probability can be defined to account for the work unit's run time.


If you want to know how a particular unit of work will behave, then you need 
to come up with a probability distribution function and run it repeatedly 
under controlled circumstances (a minimal sketch of this follows the list 
below).  This isn't simply some arbitrary point; it is absolutely required 
to be rigorous.  This concern about 
applications is ill-conceived if it is thought that a one-time measurement 
should produce reliable results.  If an improvement occurs, then the 
probability function will change and the results can be quantified.


In truth, I suspect most of this work hasn't actually been done anywhere to:


1.  Define "identical"
2.  Set up conditions to replicate as close to "identical" as possible the 
circumstances for a unit of work

3.  Measure the variation under controlled repeatable circumstances.
4.  Measure the variation under varying loads. (which negates "identical").
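
A minimal sketch of the repeated-measurement approach, assuming you have 
captured CPU seconds from multiple runs of the "same" job (the sample values 
below are hypothetical):

# Characterize a job's CPU time as a distribution, not a single number.
import statistics

cpu_seconds = [12.41, 12.58, 12.37, 12.71, 12.44, 12.52]  # hypothetical runs

mean = statistics.mean(cpu_seconds)
stdev = statistics.stdev(cpu_seconds)
cv = stdev / mean                 # coefficient of variation

# A "reasonable probability" band for the job's expected CPU consumption:
low, high = mean - 2 * stdev, mean + 2 * stdev
print(f"mean={mean:.2f}s  cv={cv:.1%}  expected range {low:.2f}-{high:.2f}s")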

Anyway ... my two cents

Adam



Re: CPU time differences for the same job

2008-02-06 Thread Gerhard Adam
I think people are making this unnecessarily complicated.  Variable does not 
mean random.  Jobs don't arbitrarily consume two or three times the 
resources they normally do for "identical" runs, so the question is simply 
one of how to account for normal variation.


There is no getting around the need for multiple runs to get a sense of how 
much variation occurs under similar loads.  Running benchmarks to evaluate 
the effects of "light" versus "heavy" loads are also necessary.  But more 
importantly I think we need to dispense with the silly notion that we have 
any concept of what "identical" is.


Given the large numbers of events occurring in modern computer systems, this 
is somewhat analogous to the notion of individual water droplets from a lawn 
sprinkler.  Ultimately we can agonize over the paths of individual water 
particles, and discuss all the elements that can impede or improve the range 
a droplet travels, but in the end we only want to get our lawn watered.


I don't want to offend anyone, but if you're worried about CPU microseconds 
and coding in high-level languages, I would suggest there is a fundamental 
disconnect and it makes me think you're not really serious.


In addition, if you don't have access to performance data, then regardless 
of the "demands" made by management, you're going to find it difficult to 
evaluate performance results.  Regardless of the politics or frustrations 
involved, performance cannot be derived psychically, but rather it involves 
simply crunching the numbers.  So you will need them .. audit requirements 
or not.




If variability in the measurement tool hides the improvement, is it
worth doing?  Maybe it would be in the production environment, but the
variability in the test environment makes it impossible to prove in
advance.


Adam 




Re: SRM constant

2008-01-10 Thread Gerhard Adam
"The same applies to the numeric value of the SRM constant (CPU services 
units per second), which does not change when WLM LPAR Vary CPU Management 
adds or removes a logical CP."


IBM Redbook Intelligent Resource Director pg 81 to reflect that SRM constant 
is NOT recalculated when IRD does the configure function.


Adam



No. It's not!
The SRM constant is constant.
It does not change during the life of the IPL.
-
Too busy driving to stop for gas!



Re: SRM constant

2008-01-10 Thread Gerhard Adam
"The SRM constant varies when the operator changes the number of logical CPs 
online, through the CONFIG command."


Quote from the IBM RedBook Intelligent Resource Director pg 56.

Adam


No. It's not!
The SRM constant is constant.
It does not change during the life of the IPL.
-
Too busy driving to stop for gas!



Re: SRM constant

2008-01-10 Thread Gerhard Adam

Additionally, if you vary engines online or offline (either manually or
using IRD), the SRM constant does not change.


If the LPs are varied online manually, the SRM constant is recalculated to 
reflect the new configuration.  This does not take place if IRD makes the 
change.


Adam 




Re: WLM question.

2007-12-17 Thread Gerhard Adam

So, my question is, suppose we are in a high CPU use situation. In fact,
the CPU is running 100% and some jobs are not receiving any CPU.
Everything is running behind in service. That is, the PI for all batch
work is >1.  I want to ensure that non-production work will receive CPU
cycles. What is really wanted is to say that production batch will
receive 60% of the cycles, Model office about 20% and test about 10%.
These percentages are not of the entire CPU, but only of the CPU "left
over" after works such as production CICS, JES2, NET, etc get what they
need.



I'm confused by your statement regarding "everything is running behind in 
service".  It seems to me that the first question that needs to be answered 
is "What are the real-world objectives the workload has to achieve?"  A PI > 
1 simply means that your defined goal hasn't been met, but it says nothing 
about the real-world experience seen by the end-user.  In other words, don't 
specify response time goals of 0.5 seconds when 0.7 seconds would do.


Your SYSTEM and SYSSTC service classes will do fine by themselves.  After 
that it is important to ensure that the goals for the other service classes 
reflect reasonable real-world objectives, rather than simply being numbers 
that look good on performance monitors.  If the higher importance level 
service class goals aren't too aggressive, then (in the absence of capacity 
problems), there will be enough resources left to ensure that everyone gets 
a small piece.


Using Resource Groups is NOT a good idea unless you're prepared to have 
higher importance work miss its goals in order to honor the minimums.  An 
approximation of the lower velocities can be calculated as:


Velocity ~= {[(Total number of LPs) * (Percent available)] / (Number of 
competing tasks)} * 100


The first term will approximate how much CPU resource is available after the 
higher importance work has taken its share, while the number of competing 
tasks is an approximation of how many work units are expected to be 
competing simultaneously.
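
A minimal sketch of that approximation (the numbers are hypothetical; 
"percent available" is the fraction of CPU left after the higher importance 
work):

# Rough velocity attainable by low-importance work; illustrative only.
def approx_velocity(total_lps, percent_avail, competing_tasks):
    # (CPU left over, spread across the competing work units) * 100
    return total_lps * percent_avail / competing_tasks * 100

# e.g. 4 logical CPs, 30% of the CPU left after higher importance work,
# and 12 batch work units expected to compete at once:
print(approx_velocity(4, 0.30, 12))   # 10.0 -> a velocity goal near 10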


In any case ... my two cents

Adam 




Re: Performance comparison: Hiperbatch, BLSR, SMB?

2007-11-09 Thread Gerhard Adam
Unfortunately I haven't looked up this stuff in a long time, so I might be 
wrong.  But IIRC, Hiperbatch is intended for sequential access and is 
counter-productive for random files.  Since it uses a Most Recently Used 
algorithm (instead of LRU), the intent was to ensure that the most recent 
access to a record was the most eligible for getting discarded from memory 
(since this represented the last reader of the data).


The whole point was to avoid having records discarded because of age just 
ahead of someone that was reading the file sequentially.
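
A toy sketch of the MRU-versus-LRU difference (illustrative only; this is 
not Hiperbatch's actual implementation):

# Two readers sweep the same file, one trailing the other.  LRU discards the
# oldest block; MRU discards the newest, protecting blocks the trailing
# sequential reader still needs.
def simulate(accesses, capacity, policy):
    cache, hits = [], 0              # most recently used key kept last
    for key in accesses:
        if key in cache:
            hits += 1
            cache.remove(key)
        elif len(cache) >= capacity:
            cache.pop(0 if policy == "LRU" else -1)
        cache.append(key)
    return hits

trace = [1, 2, 3, 1, 4, 2, 5, 3, 6, 4, 7, 5]   # leader interleaved with trailer
for policy in ("LRU", "MRU"):
    print(policy, simulate(trace, 3, policy), "hits")
# LRU gets 1 hit on this trace; MRU gets 5.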


Also, another point was that the I/O counts were unaffected since the 
application was unaware that it was using Hiperbatch, so that information is 
largely irrelevant.


Anyway ... here's hoping my memory isn't completely gone

Adam




We have a highly used randomly accessed read-only VSAM KSDS that is 
managed

by Hiperbatch during the Production batch window.  Unfortunately, some of
the jobs that use it are still seeing unacceptably high I/O counts and 
long

elapsed times.





Re: z/OS 1.9 Features summary

2007-11-05 Thread Gerhard Adam


3380 and 3390 are still valid for disk.  Tape blocksizes can be
larger.  I still believe that IBM needs to move to FBA.  It will take
5 - 10 years but it is ridiculous that the optimal blocksizes for both
3380 and 3390 VSAM make inefficient use of the track if you want the
CI to be in page multiples.  It is even dumber since the actual
spinning DASD is FBA.


This begs the question.  Tape, etc. are irrelevant when it comes to managing 
DASD blocksizes, which is what all the fuss was about.  What I'm curious 
about is, for all the points being raised, how many people actually have 
3380 and/or 3390's on the floor.  Even the point raised here about 
inefficient track usage suggests that the originator of the question has a 
sense of the physical space usage, which I suspect isn't true.


Adam 




Re: How to force initiator failure

2007-11-02 Thread Gerhard Adam

Bob:

Jim's right.  See below

P INIT,A=22
IEF404I INIT - ENDED - TIME=17.26.07
$HASP395 INIT ENDED
S INIT.INIT,,,JES2,SUB=JES2
$HASP250 INIT PURGED -- (JOB KEY WAS C13EA1E6)
IEA989I SLIP TRAP ID=X33E MATCHED.  JOBNAME=*UNAVAIL, ASID=0022.
$HASP100 INIT ON STCINRDR
IEF196I IGD100I VIO ALLOCATED TO DDNAME SYS3 DATACLAS ()
IEF695I START INIT WITH JOBNAME INIT IS ASSIGNED TO USER START2
, GROUP SYS1
$HASP373 INIT STARTED
IEF403I INIT - STARTED - TIME=17.26.10
$HASP309 INIT 1INACTIVE  C=AH


Adam
- Original Message - 
From: "Bob Stark" <[EMAIL PROTECTED]>

Newsgroups: bit.listserv.ibm-main
To: 
Sent: Friday, November 02, 2007 9:43 AM
Subject: Re: How to force initiator failure



Mark,

My reason for testing is that my automation code is trapping some messages
which allegedly are issued during an initiator failure, determining the 
failing
initiator number from stored data, and restarting the failed initiator 
once it

leaves the planet.

I want to do it a different way, so as to not have to store and maintain 
all of
that data, but I still need to determine the ASID to INIT # mapping. 
Without

an actual test, I'm sure that I'll get it wrong.

I like Sam's "spectacular suicide program" with the CALLRTM. Looks fun!



Re: z/OS 1.9 Features summary

2007-10-31 Thread Gerhard Adam

>How many different DASD geometries are being
encountered today under z/OS?

As inefficient as it may seem, IBM promised (a long time ago) that they 
would not change the geometry of disk.

So, it comes down to two BLKSIZES: DASD & Tape.


Exactly ... which is why I don't understand all the talk about different 
DASD sizes and blocking factors.




Re: z/OS 1.9 Features summary

2007-10-31 Thread Gerhard Adam

Sure they are trivial - until we move into an environment where there
are multiple DASD sizes with different optimal BLKSIZE needs.  The
programmer shouldn't care what disk his files are on - the systems
people should have the ability to quickly and easily move the files
depending on current needs.  When they move them, they should be able
to adjust the buffering without recompiling and changing JCL.



I don't understand what you're talking about.  In today's world there is no 
need for an application's programmer to know anything about BLKSIZE beyond 
what the installation demands.  Utilities can certainly move data between 
different geometries and handle the reblocking without intervention.  The 
JCL and program don't need to specify a blocksize since the DSCB provides 
such information.  Where exactly is all the effort?


Whether the programmer should care or not is irrelevant.  In most cases, 
they actually don't know, which means that your point has already been made. 
The programmer DOESN'T need to know.


Just a question, though ... How many different DASD geometries are being 
encountered today under z/OS?  I am curious about the environment you're 
suggesting.


Adam



Re: z/OS 1.9 Features summary

2007-10-30 Thread Gerhard Adam

I think I would like to suggest that maybe the list as you put it has
pretty much been there since day 1 of OS/360 (before Virtual Storage).  As 
an example, we had an IBM SE (yes, a real SE) work up a document (that was 
later published as an orange book) showing optimal blocksizes on tape AND 
buffering the tape, and how it would benefit the company, i.e. reduced run 
times, faster sorts, etc.  I believe that his document started development 
on IBM's SAM-E (the cost option that made chained scheduling the default 
and set the default BUFNO to 5).  We were among the first to buy 
and install the product (we had the black and blue marks to prove it). 
Yes, it was rough (although some had worse experiences than we did), but we 
stuck at it and got a real benefit from it.  It wasn't IBM's most shining 
moment, but we got real value from it, and to this day the benefit people 
have gotten from this product (now included in DFSMS) is real.  For 
example, we went from 10 hours processing our nightly batch window to half 
that amount.  Yes, we needed more storage, but the cut in run time saved us 
from buying a larger machine at the time (btw, this was at a different 
company).  So this did help out at many other companies.  No JCL DD 
changes; it installed and it worked.




Don't get me wrong.  I completely agree that there was tremendous benefit in 
making I/O more efficient, however in many cases there were significant 
trade-offs that had to be made.  I still remember many situations where we 
had an 80-100K partition (or region) available to run in, and you can bet 
that I wasn't going to waste that space by allocating 5 - 7K buffers (2314 
devices)  to improve a sequential file's performance.  There would have been 
virtually no room for the program code if the only concern was optimal 
blocking.  For selected jobs or high priority work that needed efficiency, 
they were generally given the largest amount of memory and CPU access, 
precisely for the reasons you stated.  However, I also remember many times 
having to specify BUFNO=1, just to make a program fit into the storage 
available.


Additionally, my point about virtual storage was based on the experience of 
having to examine which modules were loaded into memory by the OS to see 
which could be removed to avoid virtual storage constraint.  Things improved 
somewhat with the introduction of MVS, but even so, given the amount of 
real-storage available I think many people have forgotten that you couldn't 
have 100 CICS regions at 8 MB each running.  In today's environment, people 
have buffer pools defined that represent far more storage than was available 
for several systems in those days, so I/O optimization was something one had 
to be judicious about.





I am neutral on this issue, as the issues you cite are somewhat true but 
they are somewhat false.  Should it be transparent ... I am really mixed.  I 
have seen (somewhat) the PC side, and it leads to sloppy programming and 
frequently unpredictable results.  Not taking sides too much, I would 
rather have predictable results.


I understand what you're saying, but I guess to extend the thought a bit, my 
point is really that the more transparent things are made, the more we 
depend on someone else (developers?) to make the decisions for us.  In my 
mind this will usually result in significantly less flexibility, and will 
tend to give external parties the final say in what is considered "optimal". 
One of the significant benefits of the mainframe (z/OS) environment, is that 
an installation has numerous choices to exploit their particular situation 
and means of running a business, while many of the other platforms are quite 
rigid in the options available to adapt to differing circumstances.  I'm 
sure you've seen well-run, as well as poorly run data centers, but at least 
the choices and options were available.  All too often the alternatives (I'm 
thinking PCs here) are like the absurdity of experiencing an error or 
software failure and having that stupid pop-up box appear which allows for 
the singular option of specifying "OK".




Part of the cost is that there are certain rules that must be followed, 
and if they aren't followed, INCORRECT-OUT (as they say).  The PC side has 
claimed that small things should not bother programmers.  Well, up to a 
point.  LRECL and blocksize are (in my world) two different animals.  As 
others have put it, blocksize, it is agreed, should be (the majority of the 
time) irrelevant.  LRECL is not irrelevant; it is the basic, fundamental way 
units of data are presented to the programmer.


I agree.  My only point is that in many ways I think it's presumptuous of 
programmers to expect that they shouldn't have to know their chosen craft.


Adam


Re: z/OS 1.9 Features summary

2007-10-30 Thread Gerhard Adam

But SDB came too late: if it had been present in rudimentary
form, supplying a valid but nonoptimal BLKSIZE, in OS/360
release 1, coding BLKSIZE could always have been optional,
and much of the rough transition to present techniques could
have been avoided.


Boy, are you dreaming.  Maybe you don't recall, but even with Virtual Storage 
Constraint (VSC), and system programmers evaluating which modules could be 
eliminated from processor storage (LPA) so that demands could be met in CSA 
or Private ... the use of blocksizes for programs and early buffer pools (i.e. 
VSAM files, esp. with multiple strings) was quite significant, and the last 
thing I would have wanted was an operating system making arbitrary decisions 
about what was considered "optimal".  Maybe it was optimal for I/O, but it 
could've killed many a paging subsystem with excessive storage demand. 
Given the problems and constraints that had to be dealt with in early 
systems, I seriously doubt that SDB was at the top of anyone's list of 
issues that needed to be addressed as a priority item.


As for "rough transitions",  I have to wonder whether people that can't get 
BLKSIZE and LRECL straight in their minds are in any position to be 
designing or developing anything.  This is some of the most trivial of the 
trivial problems associated with data management.  This mentality of having 
the operating system do it, is precisely why people overload systems and 
wonder why throughput suffers, or why DBA's turn on every trace under the 
sun and wonder at the attendant overhead in database performance.


Like it or not, this is a technical field and such a trivial problem 
shouldn't even be fodder for discussion.


Adam



Re: z/OS 1.9 Features summary

2007-10-30 Thread Gerhard Adam

Though in the past I would agree with you, I no longer do. I've
graduated to a L clue stick for the programming staff with NO
RESULTS. The current crop seem to have the attitude of "it should work
the way that I think it should work and want it to work. I don't have
time to be bothered. I'm too busy as it is." I blame the Web and MS for
this "instant gratification" desire in IT and end-users.


Perhaps we should stop calling them programmers and considering them 
technical?  Maybe I'm being a bit too harsh, but perhaps part of the problem 
is that we still insist on giving people technical titles and 
responsibilities (from decades where they actually meant something) to 
people that are, at best, administrators.


I realize that there have always been problem individuals, or people with 
fewer than the desired skills.  But, in truth, I have never heard more 
whining from people that have access to the best tools and technology 
available, yet they can't seem to figure out how to code an IEBGENER job 
stream.  Even worse, there has never been more documentation available than 
there is today, yet the average "programmer" doesn't seem to realize that 
(just like books), PDFs don't read themselves.  In the past many people were 
accused of only copying the examples from manuals rather than reading them, 
yet in many ways I wish the current crop would be even that industrious.


I also recognize that there are exceptions to this rant out there and to 
them I apologize for my statements and wish them much luck.


My two cents

Adam



Re: z/OS 1.9 Features summary

2007-10-30 Thread Gerhard Adam

 Say that a new application requires you to add fields
to a file's records.  You add them at the end.  Why should you have to 
recompile all of the programs which don't use those fields?  Let the 
access method ignore the rest of the record when used for input.  Thus, 
only programs which use the new fields and those that write to the file 
would need updating.




Sorry, but this simply seems quaint.  What happens when a field increases in 
size, not at the end?  Why should we have separate implementations for 
application programs when system utilities can't utilize this function? 
The latter point matters because an incorrect LRECL specification on an 
IEBGENER or SORT could be disastrous.


If the intent is to provide a character-based file management structure like 
the one PCs have, then that is certainly possible, with all the attendant 
overhead of doing so.  A record-based management scheme requires that the 
application take some responsibility for designating what constitutes a 
record.  Instead of making the access method responsible for truncating data 
arbitrarily based on LRECL, how about we consider designing applications 
with some growth built in, using reserved fields?  The only reason programs 
need to be recompiled when fields are added is that the original application 
designers never planned for any changes, and personally I don't think it is 
the operating system's job to correct such poor practices.


Adam

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: z/OS 1.9 Features summary

2007-10-30 Thread Gerhard Adam

As to the BLKSIZE and LRECL parameters being archaic, they really are
not obsolete.

They are there for efficiency reasons.

Disk I/O is done on physical blocks, not at the record level, so the
larger the block size, the more efficient I/O is.  Each record read is
retrieved directly out of the program's I/O buffers, which match the
block size (2 buffers by default).  For even more efficient I/O one can
use BUFNO on the DD to increase the buffering.  Usually you want enough
buffering to match the data transfer size of the device the file is on
(cartridge drives are typically 64K, virtual tape drives 128K, and 3390s
are based on their track size, per model).  Having enough buffering to
return a full cylinder on DASD, or multiples of full tracks, is a good
way to go.


I beg to differ.  All the points being raised are indicative of needing to 
know what the underlying device geometry actually is in order to gain 
efficiency from I/O operations.  In truth, the physical geometry is largely 
unknown, and most of the data is actually returned from some level of cache, 
where blocksizes mean nothing (since gaps don't exist).  A programmer only 
needs to specify the LRECL and RECFM (which exist as stand-alone parms), so 
I don't understand why BLKSIZE is even discussed unless people simply aren't 
using the existing facilities.
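
In JCL terms, that means DDs like the following minimal sketch (dataset 
names hypothetical): code only RECFM and LRECL on a new dataset and let 
system-determined blocksize pick the BLKSIZE, and raise BUFNO where a long 
sequential read warrants it:

//NEWDATA  DD  DSN=MY.NEW.DATA,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE),
//            RECFM=FB,LRECL=80
//*  NO BLKSIZE CODED, SO SDB PICKS AN EFFICIENT VALUE
//*  (TYPICALLY HALF-TRACK BLOCKING ON 3390 DASD)
//BIGREAD  DD  DSN=MY.BIG.INPUT,DISP=SHR,BUFNO=30
//*  MORE BUFFERS THAN THE QSAM DEFAULT FOR A LONG SEQUENTIAL READ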


BTW.  IIRC, the default buffer number is 5 for sequential files.

Adam

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: VARY too many devices offline

2007-10-25 Thread Gerhard Adam
Where do these scenarios get dreamed up?  If you have a services issue, then
you can always fire someone if they aren't living up to the terms (or your
perception) of an agreement.  If you're wrong, then a lawsuit by the other
party may help clear things up.

If you suspect something criminal, by all means you can see if a District
Attorney is willing to go for it, otherwise you can pursue a civil suit.

The rest is simply nonsense.  Firing someone for a mistake is silly.
Despite the melodramatic points raised, even when someone is KILLED, the
individual responsible may be liable, but unless negligence or criminal
intent can be proven it's highly unlikely that they will be fired.  In fact,
from the legal perspective the individual making the mistake may not even be
considered responsible if they can prove that they weren't adequately
trained for the responsibility given to them.

Adam


  Yes, if they were stealing services or committing some act of 
fraud, maybe they should get fired.  

> A clue was given when it was said that SMF had been turned off. Now, if
> charge-back accounting was being done AND these consultants were in some
> way being charged for CPU time (and/or other resources) that they used
> then this was a form of theft or embezzlement/fraud.
>
> That being the case: depending on what corporate counsel advised, I
> would present them with a bill for 1.5 times their average use on a
> weekend, point out specific points in any contract that shows or states
> this is a material breach.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Are there tasks that don't play by WLM's rules

2007-10-13 Thread Gerhard Adam
>The reason was that there was a single job running in the test service
>class and many jobs running in production. In this case, the PI of the
>test service class was greater than the PI of the production one, so WLM
>gave CPU cycles to the single test job.

This wouldn't make any difference unless the higher importance work was
already meeting its goals.  The PI is tied to the importance level to
determine where help is needed.

>I don't have this particular problem with production CICS because I have
>marked the production CICS service class as CPU CRITICAL. This means
>that WLM will NEVER steal CPU cycles from CICS to help other, lower
>performance, work even if the CICS PI is smaller than the other work's
>PI.

This is also qualified by the importance level.  CPU cycles can certainly be
stolen to satisfy service classes at the same (or higher) importance levels.
If there are a lot of Importance 1 service classes, then your CICS would
routinely be tapped for dispatching priority.


>(PI is performance index. The smaller it is, the "better" the work in
>that service class is running. A PI of 1 means that the work is
>performing as specified in the service class. Less than 1 means that it
>is exceeding its performance specification. The larger the PI, the
>"worse" the service class is doing.)

Much more significant is whether WLM is "ignoring" the service class
because of impossible-to-reach goals.  This is a far more likely scenario
where higher importance work is not helped while lower importance work is.

Adam

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: India is outsourcing jobs as well

2007-09-27 Thread Gerhard Adam
As far as the numbers go, you couldn't survive anywhere in the United 
States on $1000/yr. Heck, the federal minimum wage is currently $5.85/hr 
which translates to $16,380 if you work 40 hours a week, fifty weeks per 
year. Here in California, In-N-Out burger (http://www.in-n-out.com/) 
starts people at $9.50/hr! That's $26,600/yr just to make hamburgers! 
(Really good ones though!)


I think you might need a new calculator.  40 hours for 50 weeks is 2,000 
hours.  So $5.85/hr is $11,700 annually, while $9.50/hr is $19,000.


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Using WLM to kill a job

2007-09-24 Thread Gerhard Adam
>>> Resource groups don't work, since everyone in the resource group gets
>>> capped by the one person hogging up the cpu. Using service class
>>> periods and bumping down to discretionary doesn't work, since even at
>>> discretionary you can chew up a lot of cpu (we have lots of cpu in the
>>> morning, but then we cap in the afternoon). We are trying to stop an
>>> individual from making our cap happen earlier in the day than we want.

>> Add another service class period with very low (or non-performing)
>> goals to the affected service class.  When the user(s) reach the
>> service unit limit they drop into the "last" period, and very little
>> or no service will be consumed.


>No, this is the same situation as the OP describes for discretionary. If
>his cpu is empty, this group of jobs will still be allowed to consume
>lots of CPU.


Maybe I just don't get it, but one job can't simply "chew up all the CPU".
When the service class is interrupted, the work will rotate among all the
competing tasks, so that enforcement of a cap doesn't unduly punish anyone.
In addition, if the intent is to reduce the high water mark for CPU, then
why does it matter where the usage originates?  10 low usage jobs vs. 1
high usage job ... it makes no sense.  If you cap a service class then you
will achieve your objective.  Any badly behaving unit of work needs to be
manually transferred to a different service class for special handling (or
it can be done through automation), but you can't be generic about it.

Adam

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: WLC, WLM and all the rest: how optimize them ? to Steve Samson - errata corrige

2007-09-24 Thread Gerhard Adam
>If it's 'suspended' by the skip clock, how is it managed during this
>period?  Is it not serviced at all?  Does it become discretionary?  What
>else?

It is NOT managed, nor is it eligible to receive help regardless of what its
importance level or performance index are.  Keep in mind that WLM will
choose a "receiver" and attempt to help bring the performance index into
line with the goals.  In this case, the service class being "skipped" will
not be chosen as a receiver and therefore won't obtain any help.

Adam

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: WLC, WLM and all the rest: how optimize them ? to Steve Samson - errata corrige

2007-09-21 Thread Gerhard Adam
>It's not clear to me if the whole service class will be ignored or just
>the JOB or task that cannot achieve its goal will be ignored though.


Yes, the entire service class is ignored by the Skip Clock setting.  What 
isn't so clear is the status of the resources owned by this service class. 
Since technically WLM does not take resources from a service class if that 
would cause it to miss its goals, I'm not clear on what happens to those 
resources when a goal can't ever be reached.

I suspect that the service class would retain its resources so as not to 
make its condition any worse ... my concern is that such a service class 
state would create a universal donor.  Any thoughts?

Adam

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: WLC, WLM and all the rest: how optimize them ? to Steve Samson - errata corrige

2007-09-21 Thread Gerhard Adam
>I suspected that the work was not set aside, from what I understood (and
>from the calculation I imagine WLM does), as this could complicate WLM or
>make WLM algorithms 'weak'.  So you say that there's a waste of system
>resources used to (try to) reach the given goal, as it will never be
>reached.

Sorry to disagree, but workloads that consistently have unreachable goals are 
ignored by setting the skip clock and are not examined as receivers until the 
skip clock expires.  This ensures that WLM doesn't waste time and resources 
trying to help work attempting to achieve impossible goals.

Adam

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: WLC, WLM and all the rest: how optimize them ?

2007-09-19 Thread Gerhard Adam
> That's why you need to be careful in selecting the percentile.  You want
> WLM to exclude the outliers when it makes its calculations.

True enough.  However, excluding outliers also means being statistically 
astute rather than picking large percentages because they produce low PIs. 
In particular, 50% (which has been suggested in some places) is ridiculous, 
since it represents a coin toss ... not a percentile ranking.

If work doesn't behave consistently, then it can't be grouped together 
regardless of what WLM allows you to define.

Also, let's consider what a sufficient number of jobs is.

A goal of 80% completing within 1 minute would require 7,200 jobs per day 
through the service class (minimum) to be a meaningful reflection of a 
performance goal.  In addition, we would expect the other 20% of those jobs 
(0.20 x 7,200 = 1,440) to MISS their goals unless there were spare 
resources.

Anyway ... my two cents

Adam

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: The future of IBM Mainframes [just thinking]

2007-09-09 Thread Gerhard Adam
>I suggest that a "very strong innate curiosity" is insufficient.  A good
>systems programmer must have a virtually insatiable curiosity.  I sometimes
>wonder whether any wholly sane individual would seek to go into the field of
>systems programming.

OK ... there's a lot of self-motivation that goes into being good in any 
reasonably challenging career.  However, let's not get too carried away with 
ourselves.

We don't leap tall buildings in a single bound, and we aren't faster than 
locomotives.

There are many, many fields that are far more stressful, and there are a 
significant number of careers that require considerably more knowledge (and 
responsibility) than systems programming, so let's keep things in perspective.

Adam

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: The future of IBM Mainframes [just thinking]

2007-09-09 Thread Gerhard Adam
>Ah, Consultant, which puts you in amongst those who would benefit if all
>IT work were outsourced.
>It may not be my business but it helps to know which side of the economic
>argument you're on.


I can't quite believe that you could truly be that naive.  However I do see 
your agenda, so there's not much point in continuing this dialogue.

Adam

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: The future of IBM Mainframes [just thinking]

2007-09-09 Thread Gerhard Adam
>Charter.net? and just who do you represent?

Who do I represent?  

Well, not that it's really any of your business, but I'm a consultant with 
35+ years as an MVS/zOS systems programmer. 

I don't believe there's any reason why I need to justify the use of a 
particular ISP to you.  However if you must know ... this is what works at the 
hotel I'm staying at while I'm on an assignment.

Adam

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

