If portability is an issue, then we need ANSI stream I/O in all REXX
environments.  If we had that, we wouldn't need EXECIO for new code.
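
For reference, a minimal sketch of the ANSI stream I/O meant here, as
implemented in, e.g., Regina and ooRexx; the file name is just an example:

    /* ANSI stream I/O: read and display a file line by line. */
    file = 'input.txt'
    do while lines(file) > 0     /* while unread lines remain */
       say linein(file)          /* read and show one line    */
    end
    call lineout file            /* close the stream          */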


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


________________________________________
From: IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU> on behalf of 
Hobart Spitz <orexx...@gmail.com>
Sent: Sunday, September 19, 2021 12:53 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - 
Interpret or Value - Which is better?)

Thank you for your relevant and helpful comments.  They are very much
appreciated, as I omitted some topics.  I'll do my best to address them
here.  Pardon the lack of brevity.

Concerning EXECIO:
       Yes, the z/OS and z/VM EXECIO versions are mostly incompatible.
Once you have Pipes you don't need or want EXECIO.  Here's why:

   - Pipes can read from or write to either a DD or a dataset.  z/OS EXECIO
   can only use a DD.
   - TSO Pipes has a fileid format that is syntactically and semantically
   similar to that in CMS Pipes; it is used with a PDS/E allocated to a DD.
   - When a dataset is specified, Pipes defaults to OLD when writing.  This
   is something that z/OS access methods don't even check.  You could still
   accidentally shoot yourself in the foot, but unlike in JCL, in Pipes you
   have to explicitly override the default by coding SHR next to the
   dataset name, and I don't know why you would want to.  Consider the
   organization that doesn't protect itself from writing with a DISP=SHR in
   JCL, whether with education, standards, or exits.  OT:  This is why you
   should *always* put DISP= on the first line of a JCL DD and the DSN on a
   continuation.  Otherwise, if someone performs maintenance and accidentally
   ends up with DISP=SHR on an output DD, there may be no errors for a long
   time and it may run fine even in production.  That is, until the day that
   another process writes with SHR or reads while the dataset is in an
   inconsistent state.  There could be a lot of confusion.  I would be glad
   to hear from someone knowledgeable that I'm wrong on this and that I
   missed some access method change that has made the concern unnecessary.
   - Pipes lets you read member(s) through a DD with the name(s) specified
   in the pipe stage, or by feeding member names to certain stages.  With
   EXECIO, you must put the member name on the allocation.  If you've ever
   read a large number of members from a PDS/E with EXECIO, you know it
   takes forever.  You must go through TSO ALLOC command attach, enqueue,
   catalog search, VTOC search, directory search, and OPEN (to name those I
   know about), and then free and dequeue the entire allocation before
   starting the next member.  (OT:  ISPF services allow you to allocate a
   PDS/E once and then process multiple members.  What takes minutes with
   EXECIO takes seconds with ISPF and Pipes.)
   - With Pipes, you don't have to write a loop to process or build records
   in a stem or in the stack.  Since the records are readily available in
   the stream, you process them from there in any way that your application
   dictates.  (See the sketch just after this list for the contrast with
   EXECIO.)
   - Pipes does parallel enqueues, allocations, and opens via its COMMIT
   mechanism.  In a nutshell, during the initialization stage (commit level
   < 0), any stage can issue a negative return code.  That stops further
   regular processing and gives stages the opportunity to release any
   resources obtained.  This is similar to what happens with batch JOB JCL.
   Recovering from an enqueue, allocation, or open failure can be
   complicated with multiple instances of EXECIO.
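
For a concrete feel of the contrast, here is a minimal sketch.  The dataset
name is illustrative, and the Pipes line uses CMS-style spelling (the TSO
Pipes fileid and display-stage spellings differ slightly):

    /* EXECIO version: allocate, read into a stem, loop, free. */
    "ALLOC FI(INDD) DA('PROD.INPUT.DATA') SHR REUSE"
    "EXECIO * DISKR INDD (STEM rec. FINIS"
    do i = 1 to rec.0
       if pos('ERROR', rec.i) > 0 then say rec.i   /* select lines */
    end
    "FREE FI(INDD)"

    /* Pipes version: no stem, no explicit loop (CMS-style fileid). */
    'PIPE < PROD INPUT DATA | locate /ERROR/ | console'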

Concerning GLOBALV:

           I have looked at GLOBALV files and they do not appear to be too
difficult to read and write compatibly with REXX and/or Pipes.  IBM
documents the format.  SESSION values could be saved in a VIO dataset for
performance and transience.  Thus writing a GLOBALV equivalent for TSO
seems highly practical.  If there were no better option, one could write
logic for just those functions that are used by the CMS code being ported.
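
As a rough illustration only, here is a hypothetical fragment of such an
emulation.  The one-variable-per-record "name value" layout below is a
placeholder, not the documented GLOBALV format, and the GLOBVARS DD is
assumed to have been allocated earlier (e.g., to a VIO dataset for SESSION
values):

    /* SKETCH ONLY: hypothetical GLOBALV-style GET for TSO REXX.      */
    /* The "name value" record layout is a placeholder, not IBM's.    */
    GlobGet: procedure
       arg name
       "EXECIO * DISKR GLOBVARS (STEM g. FINIS"   /* read all values  */
       do i = 1 to g.0
          if word(g.i, 1) = name then return subword(g.i, 2)
       end
       return ''                 /* variable not found: null string   */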


Concerning checkpointing:

   - Checkpointing was originally implemented because large organizations
   had regular jobs that ran for hours.  If there was an abend and a long
   running job had to be rerun, whole sets of dependent jobs would be
   delayed, throwing weekly payroll, monthly payables, bill cycle
   receivables, etc. behind schedule.  The company could be put at risk of
   penalties for late payroll, interest charges for overdue payments,
   delayed receivables income, etc.  Because today's platforms are orders
   of magnitude faster, there are many fewer such jobs today, even given
   increased volumes.  Many checkpointing concerns are moot today.  (This
   includes, for example, barring temporary datasets in production because
   a job that now runs in 10 minutes or less might have to be restarted,
   when it would have run fine from scratch in the time it takes to review
   the problem, fix it, and set up the job for restart.  Too many people
   are copying JCL or following less than well documented standards that
   they and/or their management don't understand.)  The remaining long
   running jobs are prime candidates for being rewritten in Pipes.
   - Rewriting a long running JOB or UNIX piping command in Pipes can cut
   the run time dramatically.  A rule of thumb for the time saved is the total
   I/O time for all intermediate datasets used to pass data from step to
   step.  A one hour job could be reduced to 6 minutes.  So even if the legacy
   code does still need checkpointing on today's processors, as a pipe it
   probably won't.  YMMV.
   - If you actually do have to have a long running pipe, other than a
   non-terminating server (e.g. SMTP.  Yes, such things are done!), Pipes
   and REXX offer solutions.  None are for beginners, but they are doable
   if you know Pipes and your application's requirements.  The next bullets
   can be skipped if you care only that it can be done and don't care about
   the details; a short REXX sketch follows them.
      - You can set up the pipe to process a block of detail records, then
      terminate with a special rerun return code and update a
      RecordsProcessed variable in the calling program.  That return code
      tells the invoking REXX program to reissue the pipe.  The REXX
      program initializes the counter to 0, and the pipe is written with a
      DROP stage that skips already processed records.  (A Pipes-only
      solution is possible, but off-hand I think it would be more
      complicated and harder to understand and maintain.)
      - Alternatively, a DELAY stage could force an exit after running 30
      minutes, for example.  The return code and counter variable would be
      as above.  The advantage here is adaptation to changing ambient
      workloads, processing requirements, and hardware and software
      changes.
      - In the case of multiple keyed data sources, save the last
      successfully processed keys in a REXX variable or checkpoint file,
      and use the saved keys to start processing on all runs.  Count the
      keys processed.
      - For existing, and needed (see above), checkpointing requirements,
      you can save the same information as is being done now and, if it
      exists, use that when the pipe starts up.  At the checkpoint, update the
      information.  At termination, clear the information.
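
Here is a hedged sketch of the block-restart pattern above, in CMS-style
Pipes spelling.  The fileids and block size are illustrative, and for
simplicity it detects completion by comparing the count to the block size
rather than by a special return code:

    /* Reissue the pipe one block at a time, skipping finished records. */
    RecordsProcessed = 0                  /* nothing processed yet      */
    BlockSize = 10000                     /* records per pipe run       */
    do forever
       'PIPE < INPUT MASTER A',           /* read the detail records    */
          '| drop' RecordsProcessed,      /* skip ones already done     */
          '| take' BlockSize,             /* one block per invocation   */
          '| xlate upper',                /* stand-in for the real work */
          '| >> OUTPUT MASTER A',         /* append this block's output */
          '| count lines',                /* how many we handled        */
          '| var Handled'
       if rc <> 0 then leave              /* genuine failure: stop      */
       RecordsProcessed = RecordsProcessed + Handled
       if Handled < BlockSize then leave  /* input exhausted: done      */
    end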

Concerning Global Warming:

       I'd rather not go into this in detail, but some response seems
unavoidable.
       This may be a paraphrase:  "Everyone is entitled to their own
opinions and beliefs.  No one is entitled to their own facts and logic."
To hold onto beliefs and opinions in the face of contrary facts and logic
seems unwise, at best.

       I don't know that 40% is accurate.  There is so much misinformation
that it's hard to trust anything coming from or favoring the extremes.  It
would seem obvious that destroying credibility is counterproductive.  In
the case of COVID, anti-maskers and anti-vaxxers are getting sick and dying
at a higher rate than the overall population.  Who will be affected by the
climate crisis?  The poor, underprivileged, and/or minorities, I suspect.

       Why are we still giving tax breaks, subsidies, and/or hand-outs,
etc. to the same people who knew about global warming and hid the
information or who created an EV and then killed it?
       Driving a hybrid, PHEV, or EV reduces the demand for oil.
Paradoxically, that moderates the price of oil for people still driving
gas-only vehicles and encourages the deniers.  It is a conundrum for which
I cannot even speculate a solution.

       If an EV train locomotive (WabTec?) can be made and Musk could do so
for passenger vehicles, what's holding everybody else up?

       The fact of the climate crisis is a reality that is not affected by
opinions or beliefs.  The threat to many forms of life on our planet is
recognized by all legitimate scientists and all governments participating
in the IPCC.  If making Pipes available to all z/OS customers has a chance
of reducing global warming, even a little, then there is no reason not to
do so now.


Concerning EXECIO, Checkpointing, and JCL emulation together:

        In simple cases, when multiple datasets are being read and written
by a single pipe, if any enqueues or allocations fail, no records flow in
the pipe, and all builtin stages perform their termination logic.  The
last includes freeing all stage-acquired resources.  This is similar to
how JCL works, and it simplifies checkpointing.  With EXECIO, parallel
allocations, then OPENs, then reads are not usually done; ALLOC, EXECIO,
ALLOC, EXECIO, ALLOC, EXECIO, etc. is the practice for most programmers.
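
That serial practice typically looks like this (dataset names are
illustrative):

    /* The usual one-at-a-time pattern with EXECIO.                   */
    "ALLOC FI(IN1) DA('PROD.FILE1') SHR REUSE"
    "EXECIO * DISKR IN1 (STEM a. FINIS"      /* read the first file   */
    "FREE FI(IN1)"
    "ALLOC FI(IN2) DA('PROD.FILE2') SHR REUSE"
    "EXECIO * DISKR IN2 (STEM b. FINIS"      /* then the second, etc. */
    "FREE FI(IN2)"
    /* If the second ALLOC fails, the first file has already been     */
    /* read; any cleanup or restart logic is left to the programmer.  */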


            In the interest of transparency, I cannot say that all
potential issues raised here or elsewhere are wholly resolved.  The
potential benefits of using Pipes are so enormous that getting it into the
z/OS base should be IBM's top priority, bar none.  The more sites and
staff use and familiarize themselves with Pipes, the sooner techniques and
methods can be refined and shared.  If I had to guess, I would expect the
benefits of Pipes to be much greater than those of DB2, while occurring
over much less time, with less pain, with fewer constraints, and at less
cost.

           Yes, Pipes has a number of stages that interface with DB2.  Most
stages interface with most other stages, so you can process DB2 tables
against sequential files, VSAM files, and/or ISPF tables, for example.  To
paraphrase Melinda Varian, who also deserves kudos for her Pipes work:
With Pipes, as with another product from Denmark (Lego), most pieces work
with most other pieces.  The only real limit is your imagination.
Something else that Melinda suggested:  A GUI interface would speed
adoption greatly.  (So would a JCL-to-Pipes converter.  🙂  )

            It may be that Pipes would be good for all IBM platforms,
because UNIX piping lacks important features (efficiency, multiple
rejoinable streams, and predictability).  IMHO, z/OS is the next most
cost/effective candidate to get what z/VM has had for decades.

            One action item that I omitted:  Vote for the Pipes in the z/OS
Base requirements on ibm.com and share.org.

            I hope this helps.

            Where I have used PDS/E, I refer to both PDSs and PDSEs.

            Thanks.

OREXXMan
Would you rather pass data in move mode (*nix piping) or locate mode
(Pipes) or via disk (JCL)?  Why do you think you rarely see *nix commands
with more than a dozen filters, while Pipelines specifications commonly
run to hundreds of stages, and thousands of stages are not uncommon?
IBM has been looking for an HLL for program products; REXX is that language.


On Fri, Sep 17, 2021 at 8:54 PM Paul Gilmartin <
0000000433f07816-dmarc-requ...@listserv.ua.edu> wrote:

> On Fri, 17 Sep 2021 16:21:25 -0400, Steve Thompson wrote:
>
> >EXECIO is not supported the same between z/os and CMS. This is a pain.
> >
> Yes.  What are your specifics.  Mine include:
>
> o Lack of the VAR and, especially STRING, options, requiring extra code.
>   Is that an accommodation to syntactic limitations of the CLIST/TSO
> parser?
>
> o Requirement for the OPEN option in some cases.  I suspect this is a
>   feckless accommodation to a customer who complained that
>   EXECIO 0 DISKW fileid (FINIS created an empty file on TSO but not
>   on CMS.  That's merely the behavior of the underlying file system.
>
> o Lack of the PRINT, PUNCH, etc. forms.  That's a difference in the
>   Data Management conventions of the supporting OS.
>
> o Difference in the fileid syntax.  As in the previous bullet.  I've easily
>   dealt with that by dual-path.
>
> The page at <http://planetmvs.com/rexxanywhere/> lauds Rexx portability.
> I see it as an argument for the other side.
>
> -- gil
>




----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
