Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Mike Schwab
How about
//ddname DD DISP=(NEW,DELETE,DELETE),
// DCB=(DSORG=PS[only],RECFM=F/FA/FM/V/VA/VM/U?/UA?/UM?/[no B],BLKSIZE=n),
// PIPES=(PROGNAME,'100/32k-byte parm to select or modify records',
// ddname1,ddname2,...,ddnameN)
and have OPEN (ddname,OUTPUT) load the PIPES program into (sub)task memory
and set up PUT/WRITE to the DDNAME to call the program instead?
CLOSE would unload the PIPES program from (sub)task memory.
Would VSAM writes be accepted as if DSORG=PS?
Are any other parameters possible?

A pipes DDNAME could itself use a PIPES parameter for a subsequent program.
Of course, these ddnames could only be used by the EXEC PGM= program or one
of the pipes programs, and all pipes programs would remain in memory until
step end.  In case of abend/restart, the checkpoint would be taken in the
EXEC PGM= program, and updates/writes done in any pipes program after the
checkpoint would be abandoned.
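Tidied into a fuller step, the proposal might look like this.  Entirely
hypothetical: PIPES= is not an existing JCL parameter, and MYPROG, FILTPGM,
and the ddnames are made-up placeholders.

```jcl
//* Hypothetical syntax only: PIPES= does not exist in JCL today.
//* OPEN of OUTPIPE would load FILTPGM; each PUT/WRITE would drive
//* it, and FILTPGM in turn would write to DDOUT1/DDOUT2.
//STEP1    EXEC PGM=MYPROG
//OUTPIPE  DD  DISP=(NEW,DELETE,DELETE),
//             DCB=(DSORG=PS,RECFM=FB,LRECL=80,BLKSIZE=27920),
//             PIPES=(FILTPGM,'parm to select or modify records',
//             DDOUT1,DDOUT2)
```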

On Mon, Sep 20, 2021 at 9:12 PM Paul Gilmartin
<000433f07816-dmarc-requ...@listserv.ua.edu> wrote:
>
> On Mon, 20 Sep 2021 19:10:16 -0500, Mike Schwab wrote:
>
> >So, in a sense, instead of pipes, the programs could be modified so
> >that instead of outputting a record, call the next program passing the
> >record as input.
> >
> No.
> Rather than requiring every utility to be modified in order to use the
> capability, it should be provided at the access method level or lower,
> transparent, so the top programs are oblivious to it.
>
> And there should be provision for inserting filters between a producer
> and a consumer to convert incompatible data formats.
>
> And ... what am I thinking?
>
> -- gil
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?



Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Paul Gilmartin
On Mon, 20 Sep 2021 19:10:16 -0500, Mike Schwab wrote:

>So, in a sense, instead of pipes, the programs could be modified so
>that instead of outputting a record, call the next program passing the
>record as input.
>
No.
Rather than requiring every utility to be modified in order to use the
capability, it should be provided at the access method level or lower,
transparent, so the top programs are oblivious to it.

And there should be provision for inserting filters between a producer
and a consumer to convert incompatible data formats.

And ... what am I thinking?

-- gil



Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Phil Smith III
Hobart Spitz wrote, in part:

>This is a great comment.  I hadn't given that much thought to the question.

>Not to split hairs, but I didn't say MIPS, I said hardware.

>If I had to guess, MIPS usage might actually increase slightly, because the
>Pipes dispatcher has to switch between stages twice for every record that
>is passed.

 

Sure, just sayin' you'd want to be very clear about what you do mean.

 

I'm not quite sure what you mean by "more MIPS but less hardware", though?




Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Mike Schwab
So, in a sense, instead of pipes, the programs could be modified so
that instead of outputting a record, call the next program passing the
record as input.

Could something be developed similar to a SORTOUT exit that implements
this switch?

On Mon, Sep 20, 2021 at 4:27 PM Hobart Spitz  wrote:
>
> Phil;
>
> This is a great comment.  I hadn't given that much thought to the question.
>
> Not to split hairs, but I didn't say MIPS, I said hardware.
>
> If I had to guess, MIPS usage might actually increase slightly, because the
> Pipes dispatcher has to switch between stages twice for every record that
> is passed.  Access method overhead would drop.  Buffered access methods,
> in most cases, only have to increment the pointer into the block buffer,
> check for end-of-buffer and return to the caller.  I don't know for sure
> which is larger.  Maybe someone more knowledgeable than I can shed some
> light.
>
> I would say the real savings would be in elapsed run time and working set
> size.  Run time, due to eliminating something like 80-95% of I/O operations
> for intra-JOB datasets.  Working set reduction would save on real memory.
> (See below.)  Run time is probably more of a concern to customers,
> especially those with tight batch windows.  That said, working set size
> reduction would mean that processors would likely spend more, if not all,
> time pegged at 100% busy, because so many more address spaces (TSO and JOB)
> would be in a swapped-in and ready state than before.  Depending on what
> metrics the capacity planners are looking at, CPU sales might actually
> increase.  As I think about it more, if throughput increases, new data could
> be generated more quickly and other types of hardware could be more in
> demand during peak load times.  I just don't know enough to say for sure.
>
> Phil and others know what follows.
>
> For those who don't know, in the typical case, a record passes through all
> possible stages before the next record begins the same trip.  Each record
> stays in the working page set, at least partially, during the entire time.
> Parts that are referenced have a good chance of staying cache resident
> between stages.
>
> Think of it this way:  You can visualize UNIX piping as a series of
> hourglasses open at both ends and connected in a tower.  Each grain of sand
> must stop at every "stage" and wait its turn to go through the narrow
> opening at the waist of each hourglass.  In Pipes, most stages have no
> delay and it's like a single tall hourglass tube with only one narrow
> point.  My best guess is that Pipes, in this analogy, would have only
> 5%-15% of the narrow openings of an equivalent UNIX piping command, meaning
> that the data (sand) would flow 85-95% faster in the Pipes "hourglass".
>
>
> OREXXMan
> Would you rather pass data in move mode (*nix piping) or locate mode
> (Pipes) or via disk (JCL)?  Why do you think you rarely see *nix commands
> with more than a dozen filters, while Pipelines specifications commonly run
> to hundreds of stages, and thousands of stages are not uncommon?
> IBM has been looking for an HLL for program products; REXX is that language.
>
>
> On Mon, Sep 20, 2021 at 12:48 PM Phil Smith III  wrote:
>
> > Hobart Spitz wrote, in part:
> >
> > >The case *for *Pipes in the z/OS base.:
> >
> > > 2. Hardware usage would drop for customers.
> >
> >
> >
> > From IBM's perspective, that might not be a positive argument. It should
> > be - they're hopefully not fooling themselves that they have a lock on
> > enterprise computing any more, so anything that makes life more palatable
> > for the remaining faithful at low cost to IBM should be A Good Thing - but I
> > can easily imagine someone saying, "We estimate this will reduce MIPS
> > purchases by x%, that's bad, don't do it".
> >
> >
> >
> > Just sayin'.
> >
> >
> >
> > ...phsiii
> >
> >
> >
>



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?



Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Hobart Spitz
Phil;

This is a great comment.  I hadn't given that much thought to the question.

Not to split hairs, but I didn't say MIPS, I said hardware.

If I had to guess, MIPS usage might actually increase slightly, because the
Pipes dispatcher has to switch between stages twice for every record that
is passed.  Access method overhead would drop.  Buffered access methods,
in most cases, only have to increment the pointer into the block buffer,
check for end-of-buffer and return to the caller.  I don't know for sure
which is larger.  Maybe someone more knowledgeable than I can shed some
light.

I would say the real savings would be in elapsed run time and working set
size.  Run time, due to eliminating something like 80-95% of I/O operations
for intra-JOB datasets.  Working set reduction would save on real memory.
(See below.)  Run time is probably more of a concern to customers,
especially those with tight batch windows.  That said, working set size
reduction would mean that processors would likely spend more, if not all,
time pegged at 100% busy, because so many more address spaces (TSO and JOB)
would be in a swapped-in and ready state than before.  Depending on what
metrics the capacity planners are looking at, CPU sales might actually
increase.  As I think about it more, if throughput increases, new data could
be generated more quickly and other types of hardware could be more in
demand during peak load times.  I just don't know enough to say for sure.

Phil and others know what follows.

For those who don't know, in the typical case, a record passes through all
possible stages before the next record begins the same trip.  Each record
stays in the working page set, at least partially, during the entire time.
Parts that are referenced have a good chance of staying cache resident
between stages.

Think of it this way:  You can visualize UNIX piping as a series of
hourglasses open at both ends and connected in a tower.  Each grain of sand
must stop at every "stage" and wait its turn to go through the narrow
opening at the waist of each hourglass.  In Pipes, most stages have no
delay and it's like a single tall hourglass tube with only one narrow
point.  My best guess is that Pipes, in this analogy, would have only
5%-15% of the narrow openings of an equivalent UNIX piping command, meaning
that the data (sand) would flow 85-95% faster in the Pipes "hourglass".
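As a toy model of that record-at-a-time flow (plain REXX, not CMS/TSO
Pipelines; the records and stage routines are invented for illustration):
each record visits every stage before the next record starts, so at any
moment only one record is "in flight" and stays hot in cache.

```rexx
/* Toy illustration only -- ordinary REXX, not CMS/TSO Pipelines.  */
records.0 = 3
records.1 = 'alpha'; records.2 = 'beta'; records.3 = 'gamma'
Do i = 1 To records.0            /* one record at a time...        */
   rec = records.i
   rec = stage1(rec)             /* ...passes through every stage  */
   rec = stage2(rec)             /* before the next record starts  */
   Say rec                       /* final "consumer" stage         */
End
Exit
stage1: Return Translate(Arg(1))         /* upper-case the record  */
stage2: Return '>>' Arg(1)               /* tag it                 */
```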


OREXXMan
Would you rather pass data in move mode (*nix piping) or locate mode
(Pipes) or via disk (JCL)?  Why do you think you rarely see *nix commands
with more than a dozen filters, while Pipelines specifications commonly run
to hundreds of stages, and thousands of stages are not uncommon?
IBM has been looking for an HLL for program products; REXX is that language.


On Mon, Sep 20, 2021 at 12:48 PM Phil Smith III  wrote:

> Hobart Spitz wrote, in part:
>
> >The case *for *Pipes in the z/OS base.:
>
> > 2. Hardware usage would drop for customers.
>
>
>
> From IBM's perspective, that might not be a positive argument. It should
> be - they're hopefully not fooling themselves that they have a lock on
> enterprise computing any more, so anything that makes life more palatable
> for the remaining faithful at low cost to IBM should be A Good Thing - but I
> can easily imagine someone saying, "We estimate this will reduce MIPS
> purchases by x%, that's bad, don't do it".
>
>
>
> Just sayin'.
>
>
>
> ...phsiii
>
>
>



Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Seymour J Metz
It would not be difficult to use the API to bring in stream I/O under TSO, but 
supporting PAM and SAM is another matter.

Concurrency is another reason for porting ooRexx. As for z/VM, it's not your 
father's CMS.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Paul Gilmartin [000433f07816-dmarc-requ...@listserv.ua.edu]
Sent: Monday, September 20, 2021 10:48 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - 
Interpret or Value - Which is better?)

On Mon, 20 Sep 2021 14:20:04 +, Seymour J Metz wrote:

>If portability is an issue then we need ANSI stream I/O in all REXX 
>environments. If we had that we wouldn't need EXECIO for new code.
>
It's ironic.  The z/OS Rexx interpreter supports ANSI stream I/O, but only
under OMVS.  I wonder whether the API could start Rexx with the needed
function package in other environments.  And only for UNIX files, not Classic.

The z/OS Rexx compiler appears to support Stream I/O, but not for UNIX files.

Conway's Law.  An OMVS developer pleaded for directing message and TRACE
output to stderr (the Regina default) rather than stdout.  The Rexx designers
rebuffed that, even though the Rexx programming interface provides distinct
calls for those functions.

I've been RTFM.  It appears that the "other side" of DDNAME is SUBSYS=PIPE.
Would that work even in a single job step, with use of alternate DDNAMES as
necessary?

And concurrency.  ADDRESS ATTCHMVS is a deception.  It always WAITs for
subtask completion rather than allowing it to run in the background. Concurrency
would be enormously difficult in CMS; relatively easy in z/OS.

-- gil




Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Phil Smith III
Hobart Spitz wrote, in part:

>The case *for *Pipes in the z/OS base.:

> 2. Hardware usage would drop for customers.

 

From IBM's perspective, that might not be a positive argument. It should
be - they're hopefully not fooling themselves that they have a lock on
enterprise computing any more, so anything that makes life more palatable
for the remaining faithful at low cost to IBM should be A Good Thing - but I
can easily imagine someone saying, "We estimate this will reduce MIPS
purchases by x%, that's bad, don't do it".

 

Just sayin'.

 

...phsiii




Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Dave Jones
+1 for this idea.
DJ



How to use the FILTER command for OPERLOG in batch

2021-09-20 Thread ibmmain
Hi Rob,

Thanks for your help!

I have another question.


When we use the SDSF "FILTER" command on the OPERLOG, the output of
"FILTER" is the complete messages (for multi-line messages).

If we use "POS" or "INDEX" on the OPERLOG rows, how do we handle
multi-line messages?


Thanks a lot!


Jason Cai





Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Paul Gilmartin
On Mon, 20 Sep 2021 14:20:04 +, Seymour J Metz wrote:

>If portability is an issue then we need ANSI stream I/O in all REXX 
>environments. If we had that we wouldn't need EXECIO for new code.
>
It's ironic.  The z/OS Rexx interpreter supports ANSI stream I/O, but only
under OMVS.  I wonder whether the API could start Rexx with the needed
function package in other environments.  And only for UNIX files, not Classic.

The z/OS Rexx compiler appears to support Stream I/O, but not for UNIX files.

Conway's Law.  An OMVS developer pleaded for directing message and TRACE
output to stderr (the Regina default) rather than stdout.  The Rexx designers
rebuffed that, even though the Rexx programming interface provides distinct
calls for those functions.

I've been RTFM.  It appears that the "other side" of DDNAME is SUBSYS=PIPE.
Would that work even in a single job step, with use of alternate DDNAMES as
necessary?
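For reference, the conventional BatchPipes arrangement connects two
concurrently running jobs through a named pipe; whether the same trick works
between DDs of a single step is exactly the open question.  In this sketch
the subsystem name BP01, the dataset name, and the programs are all
assumptions on my part:

```jcl
//* Writer job -- assumed BatchPipes subsystem name BP01.
//WRITEJOB JOB ...
//STEP1    EXEC PGM=MYWRITER
//OUT      DD  DSN=MY.PIPE.DATA,SUBSYS=BP01,
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
//*
//* Reader job -- must run concurrently with the writer.
//READJOB  JOB ...
//STEP1    EXEC PGM=MYREADER
//IN       DD  DSN=MY.PIPE.DATA,SUBSYS=BP01
```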

And concurrency.  ADDRESS ATTCHMVS is a deception.  It always WAITs for
subtask completion rather than allowing it to run in the background. Concurrency
would be enormously difficult in CMS; relatively easy in z/OS.

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Seymour J Metz
If portability is an issue then we need ANSI stream I/O in all REXX 
environments. If we had that we wouldn't need EXECIO for new code.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3



From: IBM Mainframe Discussion List  on behalf of 
Hobart Spitz 
Sent: Sunday, September 19, 2021 12:53 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - 
Interpret or Value - Which is better?)

Thank you for your relevant and helpful comments.  They are very much
appreciated, as I omitted some topics.  I'll do my best to address them
here.  Pardon the lack of brevity.

Concerning EXECIO:
   Yes, the z/OS and z/VM EXECIO versions are mostly incompatible.
Once you have Pipes you don't need or want EXECIO.  Here's why:

   - Pipes can read from or write to either a DD or a dataset.  z/OS EXECIO
   can only use a DD.
   - TSO Pipes has a format that is syntactically and semantically similar
   to that in CMS Pipes.  It is used with a PDS/E allocated to a DD.
   - When a dataset is specified, Pipes defaults to OLD when writing.  This
   is something that z/OS access methods don't even check.  You could still
   accidentally shoot yourself in the foot, but it's not as easy as in JCL.
   In Pipes you have to explicitly override the default by coding SHR next
   to the dataset name, and I don't know why you would want to.  Consider
   the organization that doesn't protect itself from writing with DISP=SHR
   in JCL, whether with education, standards, or exits.  OT:  This is why
   you should *always* put DISP= on the first line of a JCL DD and the DSN
   on a continuation.  Otherwise, if someone performs maintenance and
   accidentally ends up with DISP=SHR on an output DD, it may run without
   errors for a long time, even in production.  That is, until the day
   another process writes with SHR or reads while the dataset is in an
   inconsistent state.  There could be a lot of confusion.  I would be glad
   to hear from someone knowledgeable that I'm wrong on this and that I
   missed some access-method change that has made the concern unnecessary.
   - Pipes lets you read member(s) through a DD, with name(s) specified in
   the pipe stage or fed to certain stages.  With EXECIO, you must put the
   member name on the allocation.  If you've ever read a large number of
   members from a PDS/E with EXECIO, you know it takes forever.  You must
   go through TSO ALLOC command attach, enqueue, catalog search, VTOC
   search, directory search, and OPEN (to name those I know about), and
   then free and dequeue the entire allocation before starting the next
   member.  (OT:  ISPF services allow you to allocate a PDS/E and then
   process multiple members.  What takes minutes with EXECIO takes seconds
   with ISPF and Pipes.)
   - With Pipes, you don't have to write a loop to process or build records
   in a stem or in the stack.  Since the records are readily available to
   the stream, you process them from there in any way that your application
   dictates.
   - Pipes does parallel enqueues, allocations and opens via its COMMIT
   mechanism.  In a nutshell, during the initialization stage (commit level <
   0), any stage can issue a negative return code.  It stops further regular
   processing and gives stages the opportunity to release any resources
   obtained.  This is similar to what happens with batch JOB JCL.  Recovering
   from an enqueue, allocation, or open failure can be complicated with
   multiple instances of EXECIO.
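To make the contrast concrete, here is the same record selection both ways,
from a REXX exec running under TSO.  Illustrative only: INDD and the dataset
name are placeholders, and the Pipes stage names are assumed from the CMS
dialect, so the exact TSO Pipelines spelling may differ.

```rexx
/* EXECIO: read a DD into a stem, then loop over it yourself.      */
"EXECIO * DISKR INDD (STEM line. FINIS"
Do i = 1 To line.0
   If Pos('ERROR', line.i) > 0 Then Say line.i
End

/* Pipes: the same selection as one pipeline -- no stem, no loop.  */
"PIPE < MY.INPUT.DATA | locate /ERROR/ | console"
```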

Concerning GLOBALV:

   I have looked at GLOBALV files and they do not appear to be too
difficult to read and write compatibly with REXX and/or Pipes.  IBM
documents the format.  SESSION values could be saved in a VIO dataset for
performance and transience.  Thus writing a GLOBALV for TSO seems highly
practical.  If there was no better option, one could write logic for just
those functions that are used by the CMS code being ported.


Concerning checkpointing:

   - Checkpointing was originally implemented because large organizations
   had regular jobs that ran for hours. If there was an abend and a long
   running job had to be rerun, whole sets of dependent jobs would be delayed,
   throwing weekly payroll, monthly payables, or bill cycle receivables, etc.
   behind schedule.  The company could be put at risk of penalties for late
   payroll, interest charges for overdue payments, or delayed receivables
   income, etc.  Because today's platforms are orders of magnitude faster,
   there are many fewer such jobs today, even given increased volumes.  Many
   checkpointing concerns are moot today.  (This includes, for example, barring
   temporary datasets in production because a job that now runs in 10 minutes
   or less might have to be restarted when it would have run fine from
   scratch.  It will take that time to review the problem, fix it and set up
   

Re: How to use the FILTER command for OPERLOG in batch

2021-09-20 Thread Rob Scott
The SDSF "FILTER" command normally applies to tabular panels like "DA" or "ST" 
and not browse panels (of which "LOG O" aka OPERLOG is one).

In 1995, SDSF added some special case code so that FILTER would work in 
TSO/ISPF with the "LOG O" panel; however, it appears this was never implemented 
into the REXX interface. We will take a look to see if there were architectural 
reasons why this is the case or whether this is just an omission (or a 
regression).

Local testing shows that the ISFFILTER special variable is ignored for "ISFLOG 
READ TYPE(OPERLOG)" function calls on z/OS 2.3 and 2.4.

In the meantime, I suggest you implement some sort of REXX exec filtering using 
functions like "POS" or "INDEX" on the OPERLOG rows that are returned in the 
ISFLINE stem variables.
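A minimal sketch of that workaround (the message ID is just an example, and
error handling is omitted):

```rexx
/* Filter OPERLOG in batch REXX via the SDSF API, per the          */
/* suggestion above.  Sketch only; no multi-line message handling. */
rc = isfcalls('ON')                      /* enable ADDRESS SDSF    */
Address SDSF "ISFLOG READ TYPE(OPERLOG)"
Do i = 1 To isfline.0                    /* returned OPERLOG rows  */
   If Pos('IEF450I', isfline.i) > 0 Then /* sample filter          */
      Say isfline.i
End
rc = isfcalls('OFF')
```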

Rob Scott
Rocket Software

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
ibmmain
Sent: 20 September 2021 08:54
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: How to use the FILTER command for OPERLOG in batch






Hi all,

SDSF provides use of the FILTER command for OPERLOG.  Could you tell us how
to use the FILTER command for OPERLOG in batch (REXX)?  Any
thoughts/comments/suggestions would be greatly appreciated.  Thanks a lot!

Best Regards,
Jason Cai





How to use the FILTER command for OPERLOG in batch

2021-09-20 Thread ibmmain
Hi all,

SDSF provides use of the FILTER command for OPERLOG.  Could you tell us how
to use the FILTER command for OPERLOG in batch (REXX)?  Any
thoughts/comments/suggestions would be greatly appreciated.  Thanks a lot!

Best Regards,
Jason Cai
