On 10/20/2010 06:00 AM, Binyamin Dissen wrote:
On Wed, 20 Oct 2010 12:34:42 +0200 Marco Gianfranco Indaco
<mgind...@gmail.com> wrote:
:>Hi to all, it's probably a stupid question.
:>Which is the most expensive method to call a program in batch mode?
:>Using PGM=program within a submitted (and converted) JCL, or calling the
:>program from another one?
:>And what about data definition?
:>Allocating a file within a program, or from the JCL?
:>I.e., is it better to have 3 jobs that call 3 programs, or one program
:>that has 3 nested calls?
First of all, you have to define expense - the cost of people versus the
cost of processing.
Obviously a single TCB will use less CPU, but a mish-mosh of a program will
cost more people time should there ever be a problem or a need for a change.
--
Binyamin Dissen <bdis...@dissensoftware.com>
http://www.dissensoftware.com
Director, Dissen Software, Bar & Grill - Israel
If Programs A, B, C are really three independent processes that need to
run serially, then having A call B, which calls C, in one step really
complicates restart when a problem occurs (which eventually will happen)
in either B or C, and it makes the total process much more obscure to
those who follow and must maintain it. Internal calls should be
restricted to cases where it makes logical sense - performing some
action that is an integral part of what the calling program is doing.
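A hypothetical JCL sketch of the separate-steps approach (the job, program,
and dataset names are invented for illustration) - each program gets its own
step, so a failure in the second or third can be rerun from that step:

```jcl
//THREEJOB JOB (ACCT),'THREE STEPS',CLASS=A,MSGCLASS=X
//* Each program runs as its own job step. A restart after a
//* failure in STEPB or STEPC need not repeat STEPA.
//STEPA   EXEC PGM=PGMA
//OUTA    DD  DSN=MY.DATA.A,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(5,1)),DCB=(RECFM=FB,LRECL=80)
//* COND=(0,NE) skips the step if any prior step ended with RC > 0
//STEPB   EXEC PGM=PGMB,COND=(0,NE)
//INB     DD  DSN=MY.DATA.A,DISP=SHR
//STEPC   EXEC PGM=PGMC,COND=(0,NE)
//INC     DD  DSN=MY.DATA.A,DISP=SHR
```

By contrast, the single-step alternative (PGMA calling PGMB calling PGMC)
offers the restart scheduler only one unit to rerun: the whole thing.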
The start-up and termination cost of a job step is minor and occurs
only once per job step. If these programs are doing anything of
significance with any data of significance, the step overhead will be
unnoticeable compared with the actual processing of the data.
The part of file allocation that has significant cost is basically the
same whether the allocation is via JCL or dynamic allocation. Unless
you really need the flexibility of making the allocation conditionally
dependent on data in ways that are impossible with JCL, dynamic
allocation makes it more difficult for future maintainers to determine
all the input/output files for a process. In addition, there are some
things that either just can't be done or are exceedingly difficult to do
with dynamic allocation - like waiting until datasets that are in use
become available rather than failing allocation, or having concatenated
tape data sets that won't be active at the same time share a single
drive, or having automatic job restart managers like CA11 or ZEBB handle
dataset deletion and GDG adjustment on a job restart. Also, changing
file allocation parameters in JCL is trivial and cheap compared with
modifying and recompiling a program when those parameters are embedded
in code that uses dynamic allocation.
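As an illustration of something easy in JCL but awkward with dynamic
allocation, a hypothetical DD statement (dataset names invented)
concatenating two tape datasets that share one drive via unit affinity:

```jcl
//* Both tape datasets mount on the same drive, one after the
//* other, because the second DD specifies UNIT=AFF=TAPEIN.
//TAPEIN  DD  DSN=MY.TAPE.DS1,DISP=OLD,UNIT=TAPE
//        DD  DSN=MY.TAPE.DS2,DISP=OLD,UNIT=AFF=TAPEIN
```

Changing either dataset name here is a one-line JCL edit; buried in a
program's dynamic allocation parameter list, the same change would mean
a source modification, recompile, and relink.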
--
Joel C. Ewing, Fort Smith, AR jcew...@acm.org
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html