The problem you describe was found by Diane Eppestine at Southwestern Bell
in 1987, and Bill Richardson of IBM SMF development created the DDCONS=NO
option, which eliminated the CPU time consumed when there were tens of
thousands of DD segments at step termination.
But the same problem would impact any long-running ASID, batch job, or
started task, that created many DD segments.
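
For reference, DDCONS(NO) is specified in the active SMFPRMxx member of
SYS1.PARMLIB. A minimal illustrative fragment (the surrounding parameters
are site-specific and omitted here):

```
/* SYS1.PARMLIB(SMFPRMxx) -- illustrative fragment only  */
ACTIVE                  /* SMF recording is active       */
DDCONS(NO)              /* skip DD consolidation at      */
                        /* step/job termination          */
```

A SET SMF=xx operator command (or an IPL) is needed before the changed
member takes effect.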

From the MXG Newsletters:

In 1987, Diane Eppestine at Southwestern Bell saw that whenever their
SAR job (a SYSOUT processing subsystem) was cancelled, the CPU went to
100% busy for 30-60 minutes.  Instruction traces found the "loop" was in
DD Consolidation.  SAR dynamically allocates a DD for each SYSOUT file
it processes; by the end of the week that step had over 75,000 DD
entries!  DD consolidation reads the first DD segment, scans the
remaining 74,999 segments for a match, reads the second and scans the
remaining 74,998 for a match, etc.  etc., etc., all at DPRTY=FE!  In
response to Diane's discovery, Bill Richardson, IBM SMF Development,
subsequently provided a new SMF option, DDCONS(NO), specified in
SYS1.PARMLIB(SMFPRMxx), so that you can disable this very unwise (in my
opinion) algorithm, and thereby eliminate its wasted CPITCBTM and
CPISRBTM CPU time.
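
The quadratic scan described above is easy to quantify. This is a small
sketch (not IBM's code, just the arithmetic implied by the newsletter)
counting the pairwise comparisons DD consolidation performs:

```python
def consolidation_comparisons(n: int) -> int:
    """Comparisons made by the pairwise DD-consolidation scan:
    segment 1 is checked against the other n-1, segment 2 against
    the remaining n-2, and so on, for n*(n-1)/2 in total."""
    return n * (n - 1) // 2

# The 75,000-DD SAR step from the story above:
print(consolidation_comparisons(75_000))  # 2,812,462,500 comparisons
```

At roughly 2.8 billion comparisons, all performed at step termination
and (as noted) at DPRTY=FE, 30-60 minutes of 100% CPU busy is not
surprising.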

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Ed Gould
Sent: Tuesday, August 14, 2012 7:54 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: reasons for using started task or batch job

Barry:

We had an STC that processed sysouts (some off-the-wall product that may not
be around now). If we didn't suppress the type 30s (I think; it's been a
while), whenever we would bring it down it would literally cause the SMF
address space to run at 100 percent CPU for hours writing out type 30
records. It took me weeks to figure it out.
Even trying to take a dump of the SMF address space was fun. I took a SALONE,
did some hunting and looking around, and found the type 30s were the culprit.
We needed them for accounting. Trying to dump the SMF datasets (IFASMFDP)
was hopeless when this was happening. We had to schedule extra OT for the
operators and they weren't happy.

Ed

Our operators hated being there, as they couldn't leave until the system came
down.

On Aug 14, 2012, at 11:39 AM, Barry Merrill wrote:

> Except for the existence of SMF 30 EXCP segments based on SMFPRMxx 
> options, (i.e., whether they exist in the Step term and Job term 
> records, where they cause problems for long running STCs, causing 
> virtual storage exhaustion, or whether they are only in the Interval 
> records, so they are still counted, and which is recommended for long 
> running STCs), I'm not aware of any other differences in the SMF data 
> created for a batch job versus a started task.
> Can someone educate me if there are other differences in data?
>
> I have a very old note that says that running a task as a Started Task 
> rather than as a JOB bypasses some validity checking, but that if all 
> of the STEPLIB datasets are Authorized, that same validity checking is 
> bypassed for a JOB.
> But I have no more details nor any proof, now.
>
> Barry Merrill
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

