Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-24 Thread Hobart Spitz
Martin wrote:
>I'm not familiar with FANOUT but if it writes a record to, say, two
>destinations, it's got to copy one of them.

Actually not.  I know it sounds like magic, but there is still only one
copy.  I'll give you my best rendition of what I think is happening.  Don't
worry if you don't get the whole picture.  I had to work with Pipes for
many years before I understood this.

Pipes manages the storage for records.  When a new record is created (or
modified), Pipes allocates the space for it.  A pointer to that space is
what a stage gets when it asks for an input record.  When an input record
is no longer needed by a stage, Pipes is informed.  This is referred to as
"consuming" the record.  The record is not "consumed" as in grinding with
teeth and digesting; rather, this is how a stage tells Pipes that it's done
with the record.
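
Here is a minimal sketch of a REXX filter, using the stage commands of the
CMS Pipelines REXX interface (PEEKTO, OUTPUT, READTO), to make "consuming"
concrete:

   /* NULL filter: pass records through unchanged (a sketch) */
   signal on error               /* any nonzero RC from a stage command */
   do forever
      'peekto record'            /* look at the next input record;
                                    it is NOT yet consumed             */
      'output' record            /* write it to the primary output     */
      'readto'                   /* now consume: tell Pipes we're done */
   end
   error: exit RC*(RC<>12)       /* RC 12 means end of file: normal    */

The PEEKTO/OUTPUT/READTO order matters: the record is held until the
downstream stages have seen it, and only then consumed.  (As noted below,
output from a REXX stage does materialize a new copy; built-in stages just
pass the pointer.)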

Changes are made in new storage, at a new record address:  changing a
record means getting new storage and building a new record there, with the
changes.  The input record itself is not changed, and it can still be
inspected by other streams via FANOUT or any multiwrite stage.  When the
storage area of a record is no longer used by any stage, i.e., no stage
holds a pointer to it, that storage can be reclaimed by Pipes.

I don't know the exact mechanism, but that's it at a high level.  No matter
how many streams and stages use a record, there is only one copy. DUP, for
example, could just write the record over and over using the same pointer.
(I don't know that for a fact, but John Hartmann is a lot smarter than I.)
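
As a concrete illustration, here is a sketch of a multistream pipeline in
CMS-style syntax (the device drivers would differ under TSO):

   /* One stream of records feeds two legs; no data is copied (sketch) */
   'PIPE (end ?)',
      '< input file a',       /* read the records                       */
      '| f: fanout',          /* same record pointer to both streams    */
      '| count lines',        /* leg 1: count the records               */
      '| console',            /* ... and display the total              */
      '? f:',                 /* leg 2: fanout's secondary output       */
      '| > copy file a'       /* looks like a copy; same storage        */

Both legs see the very same record; new storage is allocated only when
some stage actually changes the data.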

The majority of Pipes built-in stages do not modify their input records.
(With REXX user-written stages, there is always a new copy on output, even
if no changes were made to the data.)


OREXXMan
Would you rather pass data in move mode (*nix piping) or locate mode
(Pipes) or via disk (JCL)?  Why do you think you rarely see *nix commands
with more than a dozen filters, while Pipelines specifications commonly run
to 100s of stages, and 1000s of stages are not uncommon?
IBM has been looking for an HLL for program products; REXX is that language.


On Thu, Sep 23, 2021 at 9:44 AM Paul Gilmartin <
000433f07816-dmarc-requ...@listserv.ua.edu> wrote:

> On Thu, 23 Sep 2021 09:30:43 +0100, Martin Packer wrote:
>
> >I'm not familiar with FANOUT but if it writes a record to, say, two
> >destinations, it's got to copy one of them.
> >
> It could be deferred; Copy-on-Write, optimizing for what Hobart earlier
> called the "typical case" of stages that don't modify the data.
> But incurring the complexity of a responsibility count.
>
>
> >From:   "Hobart Spitz"
> >Date:   23/09/2021 04:18
> >
> >>  I'm guessing the atypical case is a stage such as FANOUT which
> >> necessarily copies the data.
> >
> >Not sure what you mean by atypical.
> >
> I apologize; I trimmed your earlier mention of "typical case".
>
> -- gil
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-24 Thread Paul Gilmartin
On Fri, 24 Sep 2021 08:01:27 -0500, Hobart Spitz wrote:

>Mike wrote:
>>Could something be developed similar to a SORTOUT exit that implements
>this switch?
>BatchPipes fittings are like sort exits on steroids:  They can be applied
>to almost any DD, ...
>
SORTWKnn?

A problem posed earlier was to estimate the size of the sort.
I'm imagining a FANOUT to two pipelines:
o One | COUNT RECORDS to measure the size
o One | ELASTIC | DD:SORTIN

A critique of this idea is left as an exercise for the student.
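
Fleshed out as an exec, that might look something like this (a sketch; the
DD: spelling on the writer leg follows gil's notation and is assumed, not
checked against any Pipes level):

   /* Sketch of the estimate-while-feeding idea */
   'PIPE (end ?)',
      '< dd:input',           /* whatever feeds the sort (assumed DD)   */
      '| f: fanout',          /* same records down both legs            */
      '| count lines',        /* leg 1: the size estimate               */
      '| console',
      '? f:',                 /* leg 2 ...                              */
      '| elastic',            /* buffer so neither leg stalls the other */
      '| > dd:sortin'         /* ... on to the sort (spelling assumed)  */

(One starting point for the critique: COUNT emits its total only at end of
input, by which time ELASTIC may be buffering the entire file.)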

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-24 Thread Hobart Spitz
Phil, thanks again for the helpful feedback and the great food for thought.

Your point is well taken.

I'll try to break it down as best I can:  MIPS or CPU usage, main memory or
working set, and input/output operations.  (I'd be happy if someone else
has better information.)

MIPS is dependent on at least three things:  Arithmetic Logical Unit (ALU)
usage, instruction pipeline activity, and cache misses.  With Pipes, ALU
and instruction pipeline usage would be more intense, while cache misses
would be less so.  Many characters, ones that the Pipes stages never even
look at, never enter the cache or the ALU at all.  Some data can be
processed through thousands of stages and suffer only a few cache misses,
because stages are typically dispatched one after the other to process the
same record.  Compared to UNIX piping, where every character of a text file
must go through every stage, even if it is going to be ignored, Pipes is
much faster.  As Melinda Varian says (approximately):  This is an advantage
of file-structure-aware operating systems.  I would include Pipes as
systems software.

Main memory (i.e., non-cache) and working set affect how fast instructions
can be executed.  As with characters, records can enter and leave the
working set just a few times, and still be processed by thousands of
stages.  So the working sets of Pipes will mostly tend to be much lower.
Stages like TAKE and DROP can do their work without ever referencing a
single byte of the records being processed.  (With UNIX head and tail,
every character must be inspected by the CPU.)
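
For example, a sketch in CMS-style syntax (device drivers again differ
under TSO):

   /* Select records purely by position; contents never inspected */
   'PIPE < log file a',
      '| drop first 1',       /* discard a header record         */
      '| take 10',            /* keep the next ten records       */
      '| console'             /* display them                    */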

Finally, there is I/O, which is where Pipes and even UNIX piping really
shine.   The fastest input and output operations are the ones that never
take place.  Records moved from program to program in memory skip I/O.  I/O
generally involves some kind of mechanical movement, which is far slower
than the CPU, caches, or real memory.

Taken together, a set of Pipes stages, whether in a TSO PIPE command or in
a BatchPipes Fitting on a DD, will have a smaller average working set and
spend less time in the system than alternatives.  If we estimate costs as
WS*ER (working set size times elapsed residency), even a conservative 50%
reduction in each would mean about a 75% reduction in hardware usage.
(WS*0.5)*(ER*0.5) = WS*ER*0.25.

If you have been paying attention, I have purposely glossed over some
details.  Unreferenced characters may be cache loaded because they reside
on the same cache line as referenced characters.  Also, record length and
blocking factor, among other things, affect how fast data can reach main
memory, the caches, and then the ALU.  On average, I don't think these
two details change the relative effects of Pipes processing versus
conventional processing.

I suspect that UNIX piping was the inspiration for Pipes (a.k.a. CMS/TSO
Pipelines and BatchPipeWorks), both in suggesting in-memory "I/O" and in
inviting improvements to the concept.

I hope this helps.

OREXXMan
Would you rather pass data in move mode (*nix piping) or locate mode
(Pipes) or via disk (JCL)?  Why do you think you rarely see *nix commands
with more than a dozen filters, while Pipelines specifications commonly run
to 100s of stages, and 1000s of stages are not uncommon?
IBM has been looking for an HLL for program products; REXX is that language.


On Mon, Sep 20, 2021 at 8:10 PM Phil Smith III  wrote:

> Hobart Spitz wrote, in part:
>
> >This is a great comment.  I hadn't given that much thought to the
> question.
>
> >Not to split hairs, but I didn't say MIPS, I said hardware.
>
> >If I had to guess, MIPS usage might actually increase slightly, because
> the
> >Pipes dispatcher has to switch between stages twice for every record that
> >is passed.
>
>
>
> Sure, just sayin' you'd want to be very clear about what you do mean.
>
>
>
> I'm not quite sure what you mean by "more MIPS but less hardware", though?
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-24 Thread Hobart Spitz
Mike wrote:
>Could something be developed similar to a SORTOUT exit that implements
this switch?
BatchPipes fittings are like sort exits on steroids:  They can be applied
to almost any DD, can use most Pipes filters, do not require compiled code,
and they are not restricted to a single program.  I plan to say more
later.  In a nutshell:  I think BatchPipes, including the Pipe command,
should be in the z/OS base.  It would be the biggest enhancement to JCL
ever, and would be of much more interest to production-oriented management
who care about stability.
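
For readers who have not used it: the classic BatchPipes setup is two jobs
running concurrently, connected through a DD, roughly like this (a sketch;
BP01 is an assumed subsystem name and the DSN is hypothetical):

   //*  Writer job: produces records into the pipe
   //OUT      DD  DSN=HLQ.DAILY.PIPE,SUBSYS=BP01
   //*  Reader job: consumes them as they arrive; no disk I/O in between
   //IN       DD  DSN=HLQ.DAILY.PIPE,SUBSYS=BP01

A fitting goes further, letting Pipes filters run on the data as it flows
through such a DD.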


OREXXMan
Would you rather pass data in move mode (*nix piping) or locate mode
(Pipes) or via disk (JCL)?  Why do you think you rarely see *nix commands
with more than a dozen filters, while Pipelines specifications commonly run
to 100s of stages, and 1000s of stages are not uncommon?
IBM has been looking for an HLL for program products; REXX is that language.


On Mon, Sep 20, 2021 at 7:11 PM Mike Schwab  wrote:

> So, in a sense, instead of pipes, the programs could be modified so
> that instead of outputting a record, call the next program passing the
> record as input.
>
> Could something be developed similar to a SORTOUT exit that implements
> this switch?
>
> On Mon, Sep 20, 2021 at 4:27 PM Hobart Spitz  wrote:
> >
> > Phil;
> >
> > This is a great comment.  I hadn't given that much thought to the
> question.
> >
> > Not to split hairs, but I didn't say MIPS, I said hardware.
> >
> > If I had to guess, MIPS usage might actually increase slightly, because
> the
> > Pipes dispatcher has to switch between stages twice for every record that
> > is passed.  Access method overhead would drop.  Buffered access methods,
> > in most cases, only have to increment the pointer into the block buffer,
> > check for end-of-buffer and return to the caller.  I don't know for sure
> > which is larger.  Maybe someone more knowledgeable than I can shed some
> > light.
> >
> > I would say the real savings would be in elapsed run time and working set
> > size.  Run time, due to eliminating something like 80-95% of I/O
> operations
> > for intra-JOB datasets.  Working set reduction would save on real memory.
> > (See below.)  Run time is probably more of a concern to customers,
> > especially those with tight batch windows.  That said, working set size
> > reduction would mean that processors would likely spend more, if not all,
> > time pegged at 100% busy, because so many more address spaces (TSO and
> JOB)
> > would be in a swapped-in and ready state than before.  Depending on what
> > metrics the capacity planners are looking at, CPU sales might actually
> > increase.  As I think about it more, if thru-put increases, new data
> could
> > be generated more quickly and other types of hardware could be more in
> > demand during peak load times.  I just don't know enough to say for sure.
> >
> > Phil and others know what follows.
> >
> > For those who don't know, in the typical case, a record passes through
> all
> > possible stages before the next record begins the same trip.  Each record
> > stays in the working page set, at least partially, during the entire
> time.
> > Parts that are referenced have a good chance of staying cache resident
> > between stages.
> >
> > Think of it this way:  You can visualize UNIX piping as a series of
> > hourglasses open at both ends and connected in a tower.  Each grain of
> sand
> > must stop at every "stage" and wait its turn to go through the narrow
> > opening at the waist of each hourglass.  In Pipes, most stages have no
> > delay and it's like a single tall hourglass tube with only one narrow
> > point.  My best guess is that Pipes, in this analogy, would have only
> > 5%-15% of the narrow openings as an equivalent UNIX piping command,
> meaning
> > that the data (sand) would flow 85-95% faster in the Pipes "hourglass".
> >
> >
> > OREXXMan
> > Would you rather pass data in move mode (*nix piping) or locate mode
> > (Pipes) or via disk (JCL)?  Why do you think you rarely see *nix commands
> > with more than a dozen filters, while Pipelines specifications are
> commonly
> > over 100s of stages, and 1000s of stages are not uncommon.
> > IBM has been looking for an HLL for program products; REXX is that
> language.
> >
> >
> > On Mon, Sep 20, 2021 at 12:48 PM Phil Smith III  wrote:
> >
> > > Hobart Spitz wrote, in part:
> > >
> > > >The case *for *Pipes in the z/OS base.:
> > >
> > > > 2. Hardware usage would drop for customers.
> > >
> > >
> > >
> > > From IBM's perspective, that might not be a positive argument. It
> should
> > > be-they're hopefully not fooling themselves that they have a lock on
> > > enterprise computing any more, so anything that makes life more
> palatable
> > > for the remaining faithful at low cost to IBM should be A Good
> Thing-but I
> > > can easily imagine someone saying, "We estimate this will reduce MIPS
> > > purchases by x%, that's bad, don't do it".
> > >
> > >
> > >
> > > Just sayin'.
> > >
> 

Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-23 Thread Paul Gilmartin
On Thu, 23 Sep 2021 09:30:43 +0100, Martin Packer wrote:

>I'm not familiar with FANOUT but if it writes a record to, say, two
>destinations, it's got to copy one of them.
> 
It could be deferred; Copy-on-Write, optimizing for what Hobart earlier
called the "typical case" of stages that don't modify the data.
But incurring the complexity of a responsibility count.


>From:   "Hobart Spitz"
>Date:   23/09/2021 04:18
>
>>  I'm guessing the atypical case is a stage such as FANOUT which
>> necessarily copies the data.
>
>Not sure what you mean by atypical.  
> 
I apologize; I trimmed your earlier mention of "typical case".

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-23 Thread Seymour J Metz
Yes, checkpointing is increasingly important in a high volume world, but it is
also increasingly difficult. There is an OS facility for restarting from a
checkpoint, but it has significant restrictions and I wonder whether it has 
been used in the last half century.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Timothy Sipples [sipp...@sg.ibm.com]
Sent: Thursday, September 23, 2021 12:26 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: The Business Case for Pipes in the z/OS Base (was: Re: REXX - 
Interpret or Value - Which is better?)

I misplaced the original post, but somewhere in this thread someone
commented that checkpointing is less important. I think I disagree, so
just a quick comment from me.

Yes, absolutely, there's much more computing power and much better I/O.
There are also lots of efficiency gains -- much better compilers, for
example. However, if anything the data volumes and related requirements
are growing even faster. We've also seen recent, real world incidents
involving major organizations failing to meet batch processing deadlines
with serious consequences, in some cases to whole national economies. My
anecdotal observation is that checkpointing is becoming more important at
least on z/OS, not less. By sheer coincidence I'm having a technical
conversation this afternoon that (when you boil it down to its essence) is
"please implement a certain type of checkpointing."

I interpreted this particular remark as a side comment, not really
anything that genuinely affects whether pipes are useful in some cases.
Yes, pipes are useful. It's not necessary to bash checkpointing in defense
of pipes, or vice versa.

- - - - - - - - - -
Timothy Sipples
I.T. Architect Executive
Digital Asset & Other Industry Solutions
IBM Z & LinuxONE
- - - - - - - - - -
E-Mail: sipp...@sg.ibm.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-23 Thread Martin Packer
I'm not familiar with FANOUT but if it writes a record to, say, two 
destinations, it's got to copy one of them.

Cheers, Martin

Martin Packer

WW z/OS Performance, Capacity and Architecture, IBM Technology Sales

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker

Blog: https://mainframeperformancetopics.com

Mainframe, Performance, Topics Podcast Series (With Marna Walle): 
https://anchor.fm/marna-walle

Youtube channel: https://www.youtube.com/channel/UCu_65HaYgksbF6Q8SQ4oOvA



From:   "Hobart Spitz" 
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   23/09/2021 04:18
Subject:[EXTERNAL] Re: The Business Case for Pipes in the z/OS 
Base (was: Re: REXX - Interpret or Value - Which is better?)
Sent by:"IBM Mainframe Discussion List" 



Paul said:
>  I'm guessing the atypical case is a stage such as FANOUT which
>  necessarily copies the data.

Not sure what you mean by atypical.  FANOUT is typical in the respect that
it doesn't create an actual copy of the input record; it just looks as if
it does.  FANOUT, and other non-record-changing stages, pass the same input
record pointer to their downstream stage(s).  This is what makes Pipes so
efficient:  no working set expansion and less reloading of just-purged
cache data.



OREXXMan
Would you rather pass data in move mode (*nix piping) or locate mode
(Pipes) or via disk (JCL)?  Why do you think you rarely see *nix commands
with more than a dozen filters, while Pipelines specifications commonly run
to 100s of stages, and 1000s of stages are not uncommon?
IBM has been looking for an HLL for program products; REXX is that language.


On Wed, Sep 22, 2021 at 3:13 AM Martin Packer 
wrote:

> Conversely a pipe as input is not necessarily a good input medium for a
> sort. 10 years ago I contributed to a Batch Modernization Redbook on 
this,
> emphasising the need for BatchPipes input to DFSORT to be accompanied by 
a
> FILSZ / AVGRLEN estimate pair.
>
> Bringing it back to pipes, I wonder if it's feasible to tell a sorting
> stage (whether DFSORT (yes please Sri Hari) or otherwise) the input 
size.
> Otherwise we could have blow ups or bad performance at scale.
>
> BTW I'm all in favour of pipes as a first class citizen but note I have
> little influence in this regard.
>
> Cheers, Martin
>
> Martin Packer
>
> WW z/OS Performance, Capacity and Architecture, IBM Technology Sales
>
> +44-7802-245-584
>
> email: martin_pac...@uk.ibm.com
>
> Twitter / Facebook IDs: MartinPacker
>
> Blog: https://mainframeperformancetopics.com
>
> Mainframe, Performance, Topics Podcast Series (With Marna Walle):
> https://anchor.fm/marna-walle
>
> Youtube channel: https://www.youtube.com/channel/UCu_65HaYgksbF6Q8SQ4oOvA
>
>
>
> From:   "Paul Gilmartin" 
<000433f07816-dmarc-requ...@listserv.ua.edu>
> To: IBM-MAIN@LISTSERV.UA.EDU
> Date:   22/09/2021 03:50
> Subject:[EXTERNAL] Re: The Business Case for Pipes in the z/OS
> Base (was: Re: REXX - Interpret or Value - Which is better?)
> Sent by:"IBM Mainframe Discussion List" 

>
>
>
> On Tue, 21 Sep 2021 21:03:14 -0500, Mike Schwab  wrote:
>
> >If a SORT (or other similar temporary data store) program is one of
> >the pipe programs, when the EXEC PGM= program closes the output file
> >then the program holding the data needs to output the stored data to
> >output ddnames (pipe or output files).
> >
> Are you thinking of MS-DOS pseudo-"pipes" where the upstream program
> wrote a temporary file under-the-covers and the downstream program
> processed it?  A pipe in syntax only.  Even Windows is better nowadays.
>
> SORT is a bad conceptual example for Pipethink because SORT can't
> write its first output record until it has read its last input record.
> Better
> to envision a filter which re-formats log records from a long-running 
(or
> never-terminating) program, writing a file to be browsed with SDSF or
> tail -f in real time.
>
> -- gil
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
>
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU



Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-22 Thread Hobart Spitz
Paul said:
>  I'm guessing the atypical case is a stage such as FANOUT which
>  necessarily copies the data.

Not sure what you mean by atypical.  FANOUT is typical in the respect that
it doesn't create an actual copy of the input record; it just looks as if
it does.  FANOUT, and other non-record-changing stages, pass the same input
record pointer to their downstream stage(s).  This is what makes Pipes so
efficient:  no working set expansion and less reloading of just-purged
cache data.



OREXXMan
Would you rather pass data in move mode (*nix piping) or locate mode
(Pipes) or via disk (JCL)?  Why do you think you rarely see *nix commands
with more than a dozen filters, while Pipelines specifications commonly run
to 100s of stages, and 1000s of stages are not uncommon?
IBM has been looking for an HLL for program products; REXX is that language.


On Wed, Sep 22, 2021 at 3:13 AM Martin Packer 
wrote:

> Conversely a pipe as input is not necessarily a good input medium for a
> sort. 10 years ago I contributed to a Batch Modernization Redbook on this,
> emphasising the need for BatchPipes input to DFSORT to be accompanied by a
> FILSZ / AVGRLEN estimate pair.
>
> Bringing it back to pipes, I wonder if it's feasible to tell a sorting
> stage (whether DFSORT (yes please Sri Hari) or otherwise) the input size.
> Otherwise we could have blow ups or bad performance at scale.
>
> BTW I'm all in favour of pipes as a first class citizen but note I have
> little influence in this regard.
>
> Cheers, Martin
>
> Martin Packer
>
> WW z/OS Performance, Capacity and Architecture, IBM Technology Sales
>
> +44-7802-245-584
>
> email: martin_pac...@uk.ibm.com
>
> Twitter / Facebook IDs: MartinPacker
>
> Blog: https://mainframeperformancetopics.com
>
> Mainframe, Performance, Topics Podcast Series (With Marna Walle):
> https://anchor.fm/marna-walle
>
> Youtube channel: https://www.youtube.com/channel/UCu_65HaYgksbF6Q8SQ4oOvA
>
>
>
> From:   "Paul Gilmartin" <000433f07816-dmarc-requ...@listserv.ua.edu>
> To: IBM-MAIN@LISTSERV.UA.EDU
> Date:   22/09/2021 03:50
> Subject:[EXTERNAL] Re: The Business Case for Pipes in the z/OS
> Base (was: Re: REXX - Interpret or Value - Which is better?)
> Sent by:"IBM Mainframe Discussion List" 
>
>
>
> On Tue, 21 Sep 2021 21:03:14 -0500, Mike Schwab  wrote:
>
> >If a SORT (or other similar temporary data store) program is one of
> >the pipe programs, when the EXEC PGM= program closes the output file
> >then the program holding the data needs to output the stored data to
> >output ddnames (pipe or output files).
> >
> Are you thinking of MS-DOS pseudo-"pipes" where the upstream program
> wrote a temporary file under-the-covers and the downstream program
> processed it?  A pipe in syntax only.  Even Windows is better nowadays.
>
> SORT is a bad conceptual example for Pipethink because SORT can't
> write its first output record until it has read its last input record.
> Better
> to envision a filter which re-formats log records from a long-running (or
> never-terminating) program, writing a file to be browsed with SDSF or
> tail -f in real time.
>
> -- gil
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
>
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-22 Thread Martin Packer
Conversely a pipe as input is not necessarily a good input medium for a 
sort. 10 years ago I contributed to a Batch Modernization Redbook on this, 
emphasising the need for BatchPipes input to DFSORT to be accompanied by a 
FILSZ / AVGRLEN estimate pair.

Bringing it back to pipes, I wonder if it's feasible to tell a sorting 
stage (whether DFSORT (yes please Sri Hari) or otherwise) the input size. 
Otherwise we could have blow ups or bad performance at scale.

BTW I'm all in favour of pipes as a first class citizen but note I have 
little influence in this regard.

Cheers, Martin

Martin Packer

WW z/OS Performance, Capacity and Architecture, IBM Technology Sales

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker

Blog: https://mainframeperformancetopics.com

Mainframe, Performance, Topics Podcast Series (With Marna Walle): 
https://anchor.fm/marna-walle

Youtube channel: https://www.youtube.com/channel/UCu_65HaYgksbF6Q8SQ4oOvA



From:   "Paul Gilmartin" <000433f07816-dmarc-requ...@listserv.ua.edu>
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   22/09/2021 03:50
Subject:[EXTERNAL] Re: The Business Case for Pipes in the z/OS 
Base (was: Re: REXX - Interpret or Value - Which is better?)
Sent by:"IBM Mainframe Discussion List" 



On Tue, 21 Sep 2021 21:03:14 -0500, Mike Schwab  wrote:

>If a SORT (or other similar temporary data store) program is one of
>the pipe programs, when the EXEC PGM= program closes the output file
>then the program holding the data needs to output the stored data to
>output ddnames (pipe or output files).
>
Are you thinking of MS-DOS pseudo-"pipes" where the upstream program
wrote a temporary file under-the-covers and the downstream program
processed it?  A pipe in syntax only.  Even Windows is better nowadays.

SORT is a bad conceptual example for Pipethink because SORT can't
write its first output record until it has read its last input record. 
Better
to envision a filter which re-formats log records from a long-running (or
never-terminating) program, writing a file to be browsed with SDSF or
tail -f in real time.

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-21 Thread Paul Gilmartin
On Mon, 20 Sep 2021 16:26:59 -0500, Hobart Spitz wrote:
>...
>For those who don't know, in the typical case, a record passes through all
>possible stages before the next record begins the same trip.  Each record
>stays in the working page set, at least partially, during the entire time.
>Parts that are referenced have a good chance of staying cache resident
>between stages.
>
I'm guessing the atypical case is a stage such as FANOUT which necessarily
copies the data.

>...  My best guess is that Pipes, in this analogy, would have only
>5%-15% of the narrow openings as an equivalent UNIX piping command, meaning
>that the data (sand) would flow 85-95% faster in the Pipes "hourglass".
> ...
I suspect the cost of moving data is overwhelmed by the cost of process
switching.  And z/OS UNIX is probably the worst of all UNIXen because
its design hasn't been optimized for process switching.

(I wonder whether nowadays z/OS creates more address spaces for job
step initiation or for fork().  I'm confident that the design remains
optimized for the former, regardless.)

But remember that nowadays silicon is often cheaper than carbon.
With POSIX pipes I can:
407 $ ls | wc
  2   2  17
(got it right the first time.  A useless example, admittedly.)

What would that look like using JCL and BatchPipes, replacing "ls"
with LISTDS and "wc" with (utility of your choice)?  (I wouldn't get it
right the first time.)
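
For the record, one hedged stab at the non-Pipes TSO rendering, using
OUTTRAP in REXX to stand in for the pipe (the dataset name is only an
example):

   /* REXX under TSO: a rough moral equivalent of "ls | wc -l" */
   call outtrap 'line.'              /* capture command output  */
   "LISTDS 'SYS1.PARMLIB' MEMBERS"   /* the "ls" part           */
   call outtrap 'off'
   say line.0 'lines of output'      /* the counting stage      */

Which rather makes the point: that is a program, not a pipeline.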

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-21 Thread Mike Schwab
Yes, I was not aware of the subsystem idea and was just throwing out an idea.

On Tue, Sep 21, 2021 at 10:14 PM Paul Gilmartin
<000433f07816-dmarc-requ...@listserv.ua.edu> wrote:
>
> On Mon, 20 Sep 2021 22:30:32 -0500, Mike Schwab wrote:
>
> >How about
> >//ddname DD DISP=(NEW,DELETE,DELETE),
> >// DCB=(DSORG=PS[only],RECFM=F/FA/FM/V/VA/VM/U?/UA?/UM?/[no 
> >B]),BLKSIZE=n),
> >// PIPES=(PROGNAME,'100/32k byte parm to select or modify records',
> >// ddname1,ddname2,...,ddnameN),
> >and the OPEN (ddname,OUTPUT) loads the PIPES program in (sub)task memory)
> >and sets up the PUT/WRITE to DDNAME to call the program instead?
> >The close unloads the PIPES program in (subtask memory).
> >VSAM writes accepted like a DSORG=PS?
> >Any other parameters possible?
> >
> What manual did you find that in?  I don't see it in the JCL Ref.
>
> Are you making it up?  How is it preferable to SUBSYS=PIPE?  (BP01?
> I see each among various IBM pages.)
>
> >A pipes DDNAME can use a PIPES parameter for a subsequent program.
> >
> "subsequent"?  Not "concurrent"?  (But perhaps you mean lexically subsequent
> but temporally concurrent.)
>
> >Of course, ddnames can only be used by the EXEC PGM= or one of the
> >pipes programs.
> >And all pipes programs remain in memory until step end.
> >In case of abend / restart the checkpoint is taken in EXEC PGM= and
> >abandon updates / writes in any pipes program after the checkpoint.
> > ...
>
> -- gil
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-21 Thread Paul Gilmartin
On Mon, 20 Sep 2021 22:30:32 -0500, Mike Schwab wrote:

>How about
>//ddname DD DISP=(NEW,DELETE,DELETE),
>// DCB=(DSORG=PS[only],RECFM=F/FA/FM/V/VA/VM/U?/UA?/UM?/[no B]),BLKSIZE=n),
>// PIPES=(PROGNAME,'100/32k byte parm to select or modify records',
>// ddname1,ddname2,...,ddnameN),
>and the OPEN (ddname,OUTPUT) loads the PIPES program in (sub)task memory)
>and sets up the PUT/WRITE to DDNAME to call the program instead?
>The close unloads the PIPES program in (subtask memory).
>VSAM writes accepted like a DSORG=PS?
>Any other parameters possible?
>
What manual did you find that in?  I don't see it in the JCL Ref.

Are you making it up?  How is it preferable to SUBSYS=PIPE?  (BP01?
I see each among various IBM pages.)

>A pipes DDNAME can use a PIPES parameter for a subsequent program.
>
"subsequent"?  Not "concurrent"?  (But perhaps you mean lexically subsequent
but temporally concurrent.)

>Of course, ddnames can only be used by the EXEC PGM= or one of the
>pipes programs.
>And all pipes programs remain in memory until step end.
>In case of abend / restart the checkpoint is taken in EXEC PGM= and
>abandon updates / writes in any pipes program after the checkpoint.
> ...

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-21 Thread Paul Gilmartin
On Tue, 21 Sep 2021 21:03:14 -0500, Mike Schwab  wrote:

>If a SORT (or other similar temporary data store) program is one of
>the pipe programs, when the EXEC PGM= program closes the output file
>then the program holding the data needs to output the stored data to
>output ddnames (pipe or output files).
>
Are you thinking of MS-DOS pseudo-"pipes" where the upstream program
wrote a temporary file under-the-covers and the downstream program
processed it?  A pipe in syntax only.  Even Windows is better nowadays.

SORT is a bad conceptual example for Pipethink because SORT can't
write its first output record until it has read its last input record.  Better
to envision a filter which re-formats log records from a long-running (or
never-terminating) program, writing a file to be browsed with SDSF or
tail -f in real time.
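
Such a filter can be tiny; a sketch as a REXX stage (the timestamp format
is illustrative):

   /* Prefix each log record with the time it passed through */
   signal on error
   do forever
      'peekto rec'                /* next record from the producer */
      'output' time('L') rec      /* a new, re-formatted record    */
      'readto'                    /* consume the input record      */
   end
   error: exit RC*(RC<>12)        /* RC 12 = producer finally quit */

It writes each record downstream as it arrives, so the consumer sees the
re-formatted output in real time.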

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-21 Thread Mike Schwab
If a SORT (or other similar temporary data store) program is one of
the pipe programs, when the EXEC PGM= program closes the output file
then the program holding the data needs to output the stored data to
output ddnames (pipe or output files).

On Tue, Sep 21, 2021 at 5:25 AM Mike Schwab  wrote:
>
> Oh, and PIPES=(program,'parm',ddname1,ddname2,ddnameN) the ddname1 gets
> as input the one record written to the //ddname DD PIPES ddname.
>
> On Mon, Sep 20, 2021 at 10:30 PM Mike Schwab  wrote:
> >
> > How about
> > //ddname DD DISP=(NEW,DELETE,DELETE),
> > // DCB=(DSORG=PS[only],RECFM=F/FA/FM/V/VA/VM/U?/UA?/UM?/[no 
> > B]),BLKSIZE=n),
> > // PIPES=(PROGNAME,'100/32k byte parm to select or modify records',
> > // ddname1,ddname2,...,ddnameN),
> > and the OPEN (ddname,OUTPUT) loads the PIPES program in (sub)task memory)
> > and sets up the PUT/WRITE to DDNAME to call the program instead?
> > The close unloads the PIPES program in (subtask memory).
> > VSAM writes accepted like a DSORG=PS?
> > Any other parameters possible?
> >
> > A pipes DDNAME can use a PIPES parameter for a subsequent program.
> > Of course, ddnames can only be used by the EXEC PGM= or one of the
> > pipes programs.
> > And all pipes programs remain in memory until step end.
> > In case of abend / restart the checkpoint is taken in EXEC PGM= and
> > abandon updates / writes in any pipes program after the checkpoint.
> >
> > On Mon, Sep 20, 2021 at 9:12 PM Paul Gilmartin
> > <000433f07816-dmarc-requ...@listserv.ua.edu> wrote:
> > >
> > > On Mon, 20 Sep 2021 19:10:16 -0500, Mike Schwab wrote:
> > >
> > > >So, in a sense, instead of pipes, the programs could be modified so
> > > >that instead of outputting a record, call the next program passing the
> > > >record as input.
> > > >
> > > No.
> > > Rather than requiring every utility to be modified in order to use the
> > > capability, it should be provided at the access method level or lower,
> > > transparent, so the top programs are oblivious to it.
> > >
> > > And there should be provision for inserting filters between a producer
> > > and a consumer to convert incompatible data formats.
> > >
> > > And ... what am I thinking?
> > >
> > > -- gil
> > >
> > > --
> > > For IBM-MAIN subscribe / signoff / archive access instructions,
> > > send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> >
> >
> >
> > --
> > Mike A Schwab, Springfield IL USA
> > Where do Forest Rangers go to get away from it all?
>
>
>
> --
> Mike A Schwab, Springfield IL USA
> Where do Forest Rangers go to get away from it all?



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-21 Thread Mike Schwab
Oh, and PIPES=(program,'parm',ddname1,ddname2,ddnameN) the ddname1 gets
as input the one record written to the //ddname DD PIPES ddname.

On Mon, Sep 20, 2021 at 10:30 PM Mike Schwab  wrote:
>
> How about
> //ddname DD DISP=(NEW,DELETE,DELETE),
> // DCB=(DSORG=PS[only],RECFM=F/FA/FM/V/VA/VM/U?/UA?/UM?/[no 
> B]),BLKSIZE=n),
> // PIPES=(PROGNAME,'100/32k byte parm to select or modify records',
> // ddname1,ddname2,...,ddnameN),
> and the OPEN (ddname,OUTPUT) loads the PIPES program in (sub)task memory)
> and sets up the PUT/WRITE to DDNAME to call the program instead?
> The close unloads the PIPES program in (subtask memory).
> VSAM writes accepted like a DSORG=PS?
> Any other parameters possible?
>
> A pipes DDNAME can use a PIPES parameter for a subsequent program.
> Of course, ddnames can only be used by the EXEC PGM= or one of the
> pipes programs.
> And all pipes programs remain in memory until step end.
> In case of abend / restart the checkpoint is taken in EXEC PGM= and
> abandon updates / writes in any pipes program after the checkpoint.
>
> On Mon, Sep 20, 2021 at 9:12 PM Paul Gilmartin
> <000433f07816-dmarc-requ...@listserv.ua.edu> wrote:
> >
> > On Mon, 20 Sep 2021 19:10:16 -0500, Mike Schwab wrote:
> >
> > >So, in a sense, instead of pipes, the programs could be modified so
> > >that instead of outputting a record, call the next program passing the
> > >record as input.
> > >
> > No.
> > Rather than requiring every utility to be modified in order to use the
> > capability, it should be provided at the access method level or lower,
> > transparent, so the top programs are oblivious to it.
> >
> > And there should be provision for inserting filters between a producer
> > and a consumer to convert incompatible data formats.
> >
> > And ... what am I thinking?
> >
> > -- gil
> >
> > --
> > For IBM-MAIN subscribe / signoff / archive access instructions,
> > send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
>
>
> --
> Mike A Schwab, Springfield IL USA
> Where do Forest Rangers go to get away from it all?



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Mike Schwab
How about
//ddname DD DISP=(NEW,DELETE,DELETE),
// DCB=(DSORG=PS[only],RECFM=F/FA/FM/V/VA/VM/U?/UA?/UM?/[no B]),BLKSIZE=n),
// PIPES=(PROGNAME,'100/32k byte parm to select or modify records',
// ddname1,ddname2,...,ddnameN),
and the OPEN (ddname,OUTPUT) loads the PIPES program in (sub)task memory)
and sets up the PUT/WRITE to DDNAME to call the program instead?
The close unloads the PIPES program in (subtask memory).
VSAM writes accepted like a DSORG=PS?
Any other parameters possible?

A pipes DDNAME can use a PIPES parameter for a subsequent program.
Of course, ddnames can only be used by the EXEC PGM= or one of the
pipes programs.
And all pipes programs remain in memory until step end.
In case of abend / restart the checkpoint is taken in EXEC PGM= and
abandon updates / writes in any pipes program after the checkpoint.

On Mon, Sep 20, 2021 at 9:12 PM Paul Gilmartin
<000433f07816-dmarc-requ...@listserv.ua.edu> wrote:
>
> On Mon, 20 Sep 2021 19:10:16 -0500, Mike Schwab wrote:
>
> >So, in a sense, instead of pipes, the programs could be modified so
> >that instead of outputting a record, call the next program passing the
> >record as input.
> >
> No.
> Rather than requiring every utility to be modified in order to use the
> capability, it should be provided at the access method level or lower,
> transparent, so the top programs are oblivious to it.
>
> And there should be provision for inserting filters between a producer
> and a consumer to convert incompatible data formats.
>
> And ... what am I thinking?
>
> -- gil
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Paul Gilmartin
On Mon, 20 Sep 2021 19:10:16 -0500, Mike Schwab wrote:

>So, in a sense, instead of pipes, the programs could be modified so
>that instead of outputting a record, call the next program passing the
>record as input.
>
No.
Rather than requiring every utility to be modified in order to use the
capability, it should be provided at the access method level or lower,
transparent, so the top programs are oblivious to it.

And there should be provision for inserting filters between a producer
and a consumer to convert incompatible data formats.

And ... what am I thinking?

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Phil Smith III
Hobart Spitz wrote, in part:

>This is a great comment.  I hadn't given that much thought to the question.

>Not to split hairs, but I didn't say MIPS, I said hardware.

>If I had to guess, MIPS usage might actually increase slightly, because the
>Pipes dispatcher has to switch between stages twice for every record that
>is passed.

 

Sure, just sayin' you'd want to be very clear about what you do mean.

 

I'm not quite sure what you mean by "more MIPS but less hardware", though?


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Mike Schwab
So, in a sense, instead of pipes, the programs could be modified so
that, instead of outputting a record, they call the next program, passing
the record as input.

Could something be developed similar to a SORTOUT exit that implements
this switch?

On Mon, Sep 20, 2021 at 4:27 PM Hobart Spitz  wrote:
>
> Phil;
>
> This is a great comment.  I hadn't given that much thought to the question.
>
> Not to split hairs, but I didn't say MIPS, I said hardware.
>
> If I had to guess, MIPS usage might actually increase slightly, because the
> Pipes dispatcher has to switch between stages twice for every record that
> is passed.  Access method overhead would drop.  Buffered access methods,
> in most cases, only have to increment the pointer into the block buffer,
> check for end-of-buffer and return to the caller.  I don't know for sure
> which is larger.  Maybe someone more knowledgeable than I can shed some
> light.
>
> I would say the real savings would be in elapsed run time and working set
> size.  Run time, due to eliminating something like 80-95% of I/O operations
> for intra-JOB datasets.  Working set reduction would save on real memory.
> (See below.)  Run time is probably more of a concern to customers,
> especially those with tight batch windows.  That said, working set size
> reduction would mean that processors would likely spend more, if not all,
> time pegged at 100% busy, because so many more address spaces (TSO and JOB)
> would be in a swapped-in and ready state than before.  Depending on what
> metrics the capacity planners are looking at, CPU sales might actually
> increase.  As I think about it more, if thru-put increases, new data could
> be generated more quickly and other types of hardware could be more in
> demand during peak load times.  I just don't know enough to say for sure.
>
> Phil and others know what follows.
>
> For those who don't know, in the typical case, a record passes through all
> possible stages before the next record begins the same trip.  Each record
> stays in the working page set, at least partially, during the entire time.
> Parts that are referenced have a good chance of staying cache resident
> between stages.
>
> Think of it this way:  You can visualize UNIX piping as a series of
> hourglasses open at both ends and connected in a tower.  Each grain of sand
> must stop at every "stage" and wait its turn to go through the narrow
> opening at the waist of each hourglass.  In Pipes, most stages have no
> delay and it's like a single tall hourglass tube with only one narrow
> point.  My best guess is that Pipes, in this analogy, would have only
> 5%-15% of the narrow openings as an equivalent UNIX piping command, meaning
> that the data (sand) would flow 85-95% faster in the Pipes "hourglass".
>
>
> OREXXMan
> Would you rather pass data in move mode (*nix piping) or locate mode
> (Pipes) or via disk (JCL)?  Why do you think you rarely see *nix commands
> with more than a dozen filters, while Pipelines specifications are commonly
> over 100s of stages, and 1000s of stages are not uncommon.
> IBM has been looking for an HLL for program products; REXX is that language.
>
>
> On Mon, Sep 20, 2021 at 12:48 PM Phil Smith III  wrote:
>
> > Hobart Spitz wrote, in part:
> >
> > >The case *for *Pipes in the z/OS base.:
> >
> > > 2. Hardware usage would drop for customers.
> >
> >
> >
> > From IBM's perspective, that might not be a positive argument. It should
> > be-they're hopefully not fooling themselves that they have a lock on
> > enterprise computing any more, so anything that makes life more palatable
> > for the remaining faithful at low cost to IBM should be A Good Thing-but I
> > can easily imagine someone saying, "We estimate this will reduce MIPS
> > purchases by x%, that's bad, don't do it".
> >
> >
> >
> > Just sayin'.
> >
> >
> >
> > ...phsiii
> >
> >
> > --
> > For IBM-MAIN subscribe / signoff / archive access instructions,
> > send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> >
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Hobart Spitz
Phil;

This is a great comment.  I hadn't given that much thought to the question.

Not to split hairs, but I didn't say MIPS, I said hardware.

If I had to guess, MIPS usage might actually increase slightly, because the
Pipes dispatcher has to switch between stages twice for every record that
is passed.  Access method overhead would drop.  Buffered access methods,
in most cases, only have to increment the pointer into the block buffer,
check for end-of-buffer and return to the caller.  I don't know for sure
which is larger.  Maybe someone more knowledgeable than I can shed some
light.

I would say the real savings would be in elapsed run time and working set
size.  Run time, due to eliminating something like 80-95% of I/O operations
for intra-JOB datasets.  Working set reduction would save on real memory.
(See below.)  Run time is probably more of a concern to customers,
especially those with tight batch windows.  That said, working set size
reduction would mean that processors would likely spend more, if not all,
time pegged at 100% busy, because so many more address spaces (TSO and JOB)
would be in a swapped-in and ready state than before.  Depending on what
metrics the capacity planners are looking at, CPU sales might actually
increase.  As I think about it more, if thru-put increases, new data could
be generated more quickly and other types of hardware could be more in
demand during peak load times.  I just don't know enough to say for sure.

Phil and others know what follows.

For those who don't know, in the typical case, a record passes through all
possible stages before the next record begins the same trip.  Each record
stays in the working page set, at least partially, during the entire time.
Parts that are referenced have a good chance of staying cache resident
between stages.

Think of it this way:  You can visualize UNIX piping as a series of
hourglasses open at both ends and connected in a tower.  Each grain of sand
must stop at every "stage" and wait its turn to go through the narrow
opening at the waist of each hourglass.  In Pipes, most stages have no
delay and it's like a single tall hourglass tube with only one narrow
point.  My best guess is that Pipes, in this analogy, would have only
5%-15% of the narrow openings as an equivalent UNIX piping command, meaning
that the data (sand) would flow 85-95% faster in the Pipes "hourglass".


OREXXMan
Would you rather pass data in move mode (*nix piping) or locate mode
(Pipes) or via disk (JCL)?  Why do you think you rarely see *nix commands
with more than a dozen filters, while Pipelines specifications commonly run
to 100s of stages, and 1000s of stages are not uncommon?
IBM has been looking for an HLL for program products; REXX is that language.


On Mon, Sep 20, 2021 at 12:48 PM Phil Smith III  wrote:

> Hobart Spitz wrote, in part:
>
> >The case *for *Pipes in the z/OS base.:
>
> > 2. Hardware usage would drop for customers.
>
>
>
> From IBM's perspective, that might not be a positive argument. It should
> be-they're hopefully not fooling themselves that they have a lock on
> enterprise computing any more, so anything that makes life more palatable
> for the remaining faithful at low cost to IBM should be A Good Thing-but I
> can easily imagine someone saying, "We estimate this will reduce MIPS
> purchases by x%, that's bad, don't do it".
>
>
>
> Just sayin'.
>
>
>
> ...phsiii
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Seymour J Metz
It would not be difficult to use the API to bring in stream I/O under TSO, but 
supporting PAM and SAM is another matter.

Concurrency is another reason for porting ooRexx. As for z/VM, it's not your
father's CMS.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Paul Gilmartin [000433f07816-dmarc-requ...@listserv.ua.edu]
Sent: Monday, September 20, 2021 10:48 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - 
Interpret or Value - Which is better?)

On Mon, 20 Sep 2021 14:20:04 +, Seymour J Metz wrote:

>If portability is an issue then we need ANSI stream I/O in all REXX 
>environments. If we had that we wouldn't need EXECIO for new code.
>
It's ironic.  The z/OS Rexx interpreter supports ANSI stream, but only
under OMVS.  I wonder whether the API could start Rexx with the needed
function package in other environments.  And only for UNIX files, not
Classic.

The z/OS Rexx compiler appears to support Stream I/O, but not for UNIX files.

Conway's Law.  An OMVS developer pleaded for directing message and TRACE
output to stderr (the Regina default) rather than stdout.  The Rexx designers
rebuffed that.  Even though the Rexx programming interface provides distinct
calls for those functions.

I've been RTFM.  It appears that the "other side" of DDNAME is SUBSYS=PIPE.
Would that work even in a single job step, with use of alternate DDNAMES as
necessary?

And concurrency.  ADDRESS ATTCHMVS is a deception.  It always WAITs for
subtask completion rather than allowing it to run in the background. Concurrency
would be enormously difficult in CMS; relatively easy in z/OS.

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Phil Smith III
Hobart Spitz wrote, in part:

>The case *for *Pipes in the z/OS base.:

> 2. Hardware usage would drop for customers.

 

From IBM's perspective, that might not be a positive argument. It should
be-they're hopefully not fooling themselves that they have a lock on
enterprise computing any more, so anything that makes life more palatable
for the remaining faithful at low cost to IBM should be A Good Thing-but I
can easily imagine someone saying, "We estimate this will reduce MIPS
purchases by x%, that's bad, don't do it".

 

Just sayin'.

 

...phsiii


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Dave Jones
+1 for this idea.
DJ

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Paul Gilmartin
On Mon, 20 Sep 2021 14:20:04 +, Seymour J Metz wrote:

>If portability is an issue then we need ANSI stream I/O in all REXX 
>environments. If we had that we wouldn't need EXECIO for new code.
>
It's ironic.  The z/OS Rexx interpreter supports ANSI stream, but only
under OMVS.  I wonder whether the API could start Rexx with the needed
function package in other environments.  And only for UNIX files, not
Classic.

The z/OS Rexx compiler appears to support Stream I/O, but not for UNIX files.

Conway's Law.  An OMVS developer pleaded for directing message and TRACE
output to stderr (the Regina default) rather than stdout.  The Rexx designers
rebuffed that.  Even though the Rexx programming interface provides distinct
calls for those functions.

I've been RTFM.  It appears that the "other side" of DDNAME is SUBSYS=PIPE.
Would that work even in a single job step, with use of alternate DDNAMES as
necessary?

And concurrency.  ADDRESS ATTCHMVS is a deception.  It always WAITs for
subtask completion rather than allowing it to run in the background. Concurrency
would be enormously difficult in CMS; relatively easy in z/OS.

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-20 Thread Seymour J Metz
If portability is an issue then we need ANSI stream I/O in all REXX 
environments. If we had that we wouldn't need EXECIO for new code.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3



From: IBM Mainframe Discussion List  on behalf of 
Hobart Spitz 
Sent: Sunday, September 19, 2021 12:53 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - 
Interpret or Value - Which is better?)

Thank you for your relevant and helpful comments.  They are very much
appreciated, as I omitted some topics.  I'll do my best to address them
here.  Pardon the lack of brevity.

Concerning EXECIO:
   Yes, the z/OS and z/VM EXECIO versions are mostly incompatible.
Once you have Pipes you don't need or want EXECIO.  Here's why:

   - Pipes can read from or write to either a DD or a dataset.  z/OS EXECIO
   can only use a DD.
   - TSO Pipes has a format that is syntactically and semantically similar
   to that in CMS Pipes.  It is used with a PDS/E allocated to a DD.
   - When a dataset is specified, Pipes defaults to OLD when writing.  This
   is something that z/OS access methods don't even check.  You could still
   accidentally shoot yourself in the foot, but unlike in JCL, in Pipes you
   have to explicitly override the default by coding SHR next to the
   dataset name.  I don't know why you would want to.  Consider the
   organization that doesn't protect itself from writing with a DISP=SHR in
   JCL, whether with education, standards, or exits.  OT:  This is why you
   should *always* put DISP= on the first line of a JCL DD and the DSN on a
   continuation.  Otherwise, if someone performs maintenance and accidentally
   ends up with DISP=SHR on an output DD, for a long time there may be no
   errors and it may run fine even in production.  That is, until the day that
   another process writes with SHR or reads while the dataset is in an
   inconsistent state.  There could be a lot of confusion.  I would be glad to
   hear from someone knowledgeable that I'm wrong on this and that I missed
   some access method change that has made the concern unnecessary.
   - Pipes lets you read member(s) through a DD, with name(s) specified in
   the pipe stage or by feeding member names to certain stages.  With EXECIO,
   you must put the member name on the allocation.  If you've ever read a
   large number of members from a PDS/E with EXECIO, you know it takes
   forever.  You must go through TSO ALLOC command attach, enqueue, catalog
   search, VTOC search, directory search, and OPEN (to name those I know
   about), and then free and dequeue the entire allocation before starting the
   next member.  (OT:  ISPF services allow you to allocate a PDS/E once and
   then process multiple members.  What takes minutes with EXECIO takes
   seconds with ISPF and Pipes.)
   - With Pipes, you don't have to write a loop to process or build records
   in a stem or in the stack.  Since the records are readily available to the
   stream, you process them from there in any way that your application
   dictates.
   - Pipes does parallel enqueues, allocations and opens via its COMMIT
   mechanism.  In a nutshell, during initialization (commit level < 0), any
   stage can issue a negative return code.  That stops further regular
   processing and gives stages the opportunity to release any resources
   obtained.  This is similar to what happens with batch JOB JCL.  Recovering
   from an enqueue, allocation, or open failure can be complicated with
   multiple instances of EXECIO.

Concerning GLOBALV:

   I have looked at GLOBALV files and they do not appear to be too
difficult to read and write compatibly with REXX and/or Pipes.  IBM
documents the format.  SESSION values could be saved in a VIO dataset for
performance and transiency.  Thus writing a GLOBALV for TSO seems highly
practical.  If there were no better option, one could write logic for just
those functions that are used by the CMS code being ported.


Concerning checkpointing:

   - Checkpointing was originally implemented because large organizations
   had regular jobs that ran for hours. If there was an abend and a long
   running job had to be rerun, whole sets of dependent jobs would be delayed,
   throwing weekly payroll, monthly payables, or bill cycle receivables, etc.
   behind schedule.  The company could be put at risk of penalties for late
   payroll, interest charges for overdue payments, or delayed receivables
   income, etc.  Because today's platforms are orders of magnitude faster,
   there are many fewer such jobs today, even given increased volumes.  Many
   checkpointing concerns are moot today.  (This includes, for example, barring
   temporary datasets in production because a job that now runs in 10 minutes
   or less might have to be restarted when it would have run fine from
   scratch.  It will take that time to review the problem, fix it and set up
   

Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-19 Thread Paul Gilmartin
On Sun, 19 Sep 2021 11:53:18 -0500, Hobart Spitz wrote:
>...
>Concerning EXECIO:
>   - Pipes can read from or write to either a DD or a dataset.  z/OS EXECIO
>   can only use a DD.
>  
o Can that DD be allocated to a UNIX file, with FILEDATA any of BINARY, TEXT,
or RECORD?
o Is it savvy to files tagged with FILEDATA and/or CCSID?
o Does it restrict concurrent use of UNIX files as ISPF does?
  


>... -   I would be glad to
>   hear from someone knowledgeable that I'm wrong on this and that I missed
>   some access method change that has made the concern unnecessary.
>
As I mentioned earlier, the SPFEDIT member ENQs with DISP=SHR
avoid the need to lock an entire PDS in order to update a single member.


>   - Pipes lets you read member(s) through a DD and names(s) specified in
>   the pipe stage or by feeding member names to certain stages. 
>   you must put the member name on the allocation.  If you've ever read a
>   large number of members from a PDS/E 
>
Does this support mixed concatenations of PDS, PDSE and UNIX directories?
(It should.  This is intrinsic to BPAM.)  What happens if the UNIX files have
unlike tags?

Does Pipelines support the "other side" of DDNAMEs?  I.e., can a Pipelines
output connector be associated with an input DD of a Classic utility, such as
OLD and NEW of ISRSUPC, and can SYSUT2 of an arbitrary Classic utility be
associated with a Pipelines input connector?  I could do this with a Rube
Goldberg of SYSCALL PIPE, BPXWDYN, SYSCALL SPAWN,  ...
It would be nice to have it in an opaque package.

-- gil



Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-19 Thread Paul Gilmartin
On Sun, 19 Sep 2021 11:53:18 -0500, Hobart Spitz wrote:
>...
>Once you have Pipes you don't need or want EXECIO.  ...
>  
At times the effort of converting existing art outweighs the benefit.  And
POSIX pipes are portable to desktop systems; CMS/TSO Pipelines is not.

>   - Pipes can read from or write to either a DD or a dataset.  z/OS EXECIO
>   can only use a DD.

>   - When a dataset is specified, Pipes defaults to OLD when writing.  This
>   is something that z/OS access methods don't even check.  You could still
>   accidentally shoot yourself in the foot, but unlike in JCL, in Pipes you
>   have to explicitly override the default by coding SHR next to the
>   dataset name.  I don't know why you would want to.  
>
Pipes ought also to support the SPFEDIT ENQ, as FTP and NFS do, in order to
allow one job to safely update one member of a PDS while another job
processes a different member, and to allow creating different members
concurrently (supported by PDSE but not PDS).  With "1000s of stages"
a programmer might not easily check for this.

Is there any protection against FANOUT's having downstream stages that
destructively update the same data set?  Does Pipes do "ENQ RET=HAVE"?

-- gil



Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-19 Thread Hobart Spitz
Thank you for your relevant and helpful comments.  They are very much
appreciated, as I omitted some topics.  I'll do my best to address them
here.  Pardon the lack of brevity.

Concerning EXECIO:
   Yes, the z/OS and z/VM EXECIO versions are mostly incompatible.
Once you have Pipes you don't need or want EXECIO.  Here's why:

   - Pipes can read from or write to either a DD or a dataset.  z/OS EXECIO
   can only use a DD.
   - TSO Pipes has a format that is syntactically and semantically similar
   to that in CMS Pipes.  It is used with a PDS/E allocated to a DD.
   - When a dataset is specified, Pipes defaults to OLD when writing.  This
   is something that z/OS access methods don't even check.  You could still
   accidentally shoot yourself in the foot, but unlike in JCL, in Pipes you
   have to explicitly override the default by coding SHR next to the
   dataset name.  I don't know why you would want to.  Consider the
   organization that doesn't protect itself from writing with a DISP=SHR in
   JCL, whether with education, standards, or exits.  OT:  This is why you
   should *always* put DISP= on the first line of a JCL DD and the DSN on a
   continuation (see the JCL sketch after this list).  Otherwise, if someone
   performs maintenance and accidentally ends up with DISP=SHR on an output
   DD, there may be no errors for a long time and it may run fine even in
   production.  That is, until the day another process writes with SHR, or
   reads while the dataset is in an inconsistent state.  There could be a lot
   of confusion.  I would be glad to hear from someone knowledgeable that I'm
   wrong on this and that I missed some access method change that has made
   the concern unnecessary.
   - Pipes lets you read member(s) through a DD, with name(s) specified in
   the pipe stage or by feeding member names to certain stages.  With EXECIO,
   you must put the member name on the allocation.  If you've ever read a
   large number of members from a PDS/E with EXECIO, you know it takes
   forever.  You must go through TSO ALLOC command attach, enqueue, catalog
   search, VTOC search, directory search, and OPEN (to name those I know
   about), and then free and dequeue the entire allocation before starting the
   next member.  (OT:  ISPF services allow you to allocate a PDS/E once and
   then process multiple members.  What takes minutes with EXECIO takes
   seconds with ISPF and Pipes.)
   - With Pipes, you don't have to write a loop to process or build records
   in a stem or in the stack.  Since the records are readily available to the
   stream, you process them from there in any way that your application
   dictates (see the pipe sketch after this list).
   - Pipes does parallel enqueues, allocations and opens via its COMMIT
   mechanism.  In a nutshell, during initialization (commit level < 0), any
   stage can issue a negative return code.  That stops further regular
   processing and gives stages the opportunity to release any resources
   obtained.  This is similar to what happens with batch JOB JCL.  Recovering
   from an enqueue, allocation, or open failure can be complicated with
   multiple instances of EXECIO.
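
On the OT JCL point above, a sketch of the recommended DD layout (DD and
dataset names are placeholders):

   //*  DISP= first, where maintainers will see it; DSN on a continuation:
   //OUTDD   DD DISP=(NEW,CATLG,DELETE),
   //           DSN=HLQ.OUTPUT.DATA,
   //           SPACE=(TRK,(15,15)),
   //           LRECL=80,RECFM=FB

And a rough sketch of the no-loop, stem-delivery flavor; the stage operands
follow the CMS style and are purely illustrative, since the exact TSO
Pipelines syntax for naming a dataset or DD may differ:

   /* REXX: records flow straight into a stem; no EXECIO loop.  */
   "PIPE < HLQ.INPUT.DATA",         /* read a dataset (or a DD) */
      "| locate /ERROR/",           /* keep records with ERROR  */
      "| stem hits."                /* deliver them to stem HITS. */
   say hits.0 "matching records"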

Concerning GLOBALV:

   I have looked at GLOBALV files and they do not appear to be too
difficult to read and write compatibly with REXX and/or Pipes.  IBM
documents the format.  SESSION values could be saved in a VIO dataset for
performance and transiency.  Thus writing a GLOBALV for TSO seems highly
practical.  If there were no better option, one could write logic for just
those functions that are used by the CMS code being ported; a toy sketch
follows.
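
A toy sketch of the idea; the "name value" line format and the GLOBALV DD
name are invented here for illustration (a real port would parse the GLOBALV
disk format that IBM documents):

   /* REXX: load "name value" lines into compound variables GV.name */
   "EXECIO * DISKR GLOBALV (STEM G. FINIS"
   do i = 1 to g.0
      parse var g.i name value      /* first word = name, rest = value */
      upper name                    /* normalize the tail              */
      gv.name = value
   end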


Concerning checkpointing:

   - Checkpointing was originally implemented because large organizations
   had regular jobs that ran for hours. If there was an abend and a long
   running job had to be rerun, whole sets of dependent jobs would be delayed,
   throwing weekly payroll, monthly payables, or bill cycle receivables, etc.
   behind schedule.  The company could be put at risk of penalties for late
   payroll, interest charges for overdue payments, or delayed receivables
   income, etc.  Because today's platforms are orders of magnitude faster,
   there are many fewer such jobs today, even given increased volumes.  Many
   checkpointing concerns are moot today.  (This includes, for example, barring
   temporary datasets in production because a job that now runs in 10 minutes
   or less might have to be restarted when it would have run fine from
   scratch.  It will take that time to review the problem, fix it and set up
   the job for restart.  Too many people are copying JCL or following less
   than well documented standards that they and/or their management don't
   understand.)   The remaining long running jobs are prime candidates for
   being rewritten in Pipes.
   - Rewriting a long running JOB or UNIX piping command in Pipes can cut
   the run time dramatically.  A rule of thumb for the time saved is the total
   I/O time for all intermediate datasets used to pass data from step to
   step.  A one 

Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-17 Thread Paul Gilmartin
On Fri, 17 Sep 2021 16:21:25 -0400, Steve Thompson wrote:

>EXECIO is not supported the same between z/os and CMS. This is a pain. 
> 
Yes.  What are your specifics?  Mine include:

o Lack of the VAR and, especially, STRING options, requiring extra code.
  Is that an accommodation to syntactic limitations of the CLIST/TSO parser?

o Requirement for the OPEN option in some cases.  I suspect this is a
  feckless accommodation to a customer who complained that
  "EXECIO 0 DISKW fileid (FINIS" created an empty file on TSO but not
  on CMS.  That's merely the behavior of the underlying file system.

o Lack of the PRINT, PUNCH, etc. forms.  That's a difference in the
  Data Management conventions of the supporting OS.

o Difference in the fileid syntax.  As in the previous bullet.  I've easily
  dealt with that by dual-path (see the sketch below).
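
A minimal dual-path sketch (the fileid and DD name are placeholders):

   /* REXX: one source, two EXECIO dialects.                   */
   parse source opsys .                 /* 'CMS' or 'TSO'      */
   if opsys = 'CMS' then
      "EXECIO * DISKR MY FILE A (STEM REC. FINIS"
   else                                 /* MYDD preallocated   */
      "EXECIO * DISKR MYDD (STEM REC. FINIS"
   say rec.0 "records read"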

The page at  lauds Rexx portability.
I see it as an argument for the other side.

-- gil



Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-17 Thread Paul Gilmartin
On Fri, 17 Sep 2021 14:56:15 -0500, Hobart Spitz wrote:
>...
>   15. Is consistent with policies for combating global warming of
>   customers, vendors and the public.   ...
>  
OK.  But 40% of the U.S. electorate would consider that an argument "against".

Think of the environmental impact of cryptocurrency mining (GIYF).  Would
TSO Pipelines and/or ICSF mitigate that?

-- gil



Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-17 Thread Farley, Peter x23353
One additional "against" (until the problem is solved):

Knowledge of how to structure recovery from abends in one (or more) pipe
stages is not widespread in application circles.  Most batch recovery today
involves stopping any successor jobs (usually via scheduler dependency on 
successful completion of the abending job(s)), then resolving the program 
abend(s) and rerunning the failing job(s).

How does one structure a multi-hundred stage pipeline to enable easy and rapid 
recovery from an abend at any stage?  I've been in the mainframe programming 
business since 1972 and I would not know how to structure such a beast.  
Restarting from the beginning is not an option with the increasingly gargantuan 
volumes of data we must process and the tighter and tighter SLAs our customers
demand.

Peter

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Hobart Spitz
Sent: Friday, September 17, 2021 3:56 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: The Business Case for Pipes in the z/OS Base (was: Re: REXX - 
Interpret or Value - Which is better?)

IMHO, the Business Cases on Pipes in the z/OS Base are as follows.  (Pipes is 
already available as part of BatchPipes.)

The case *for* Pipes in the z/OS base:

   1. Development costs would drop for customers, vendors, and IBM, since
   everyone could use Pipes in their software.
   2. Hardware usage would drop for customers.  In addition to avoiding
   I/O, Pipes uses a record address-and-length descriptor.  A record can
   flow from stage to stage with the only data movement being for changed
   records.  Potential data needed by a stage could have already been in the
   working set and/or cache-loaded by the previous stage.  (A methodology for
   identifying the cost/benefits by JOB and application would allow the best
   to be reworked first.  Thus Pipes would pay for itself in the shortest
   amount of time.)
   3. Product efficiency for vendors (IBM and others) would improve.
   (Arguably it's the other side of the coin in #2.)
   4. Tight integration with REXX, CLIST and JCL.
   5. Portability to and from z/VM.  This breaks down differently for
   different groups:
  - Customers: Cheaper porting to/from z/OS.  (Porting to other IBM
  Series is expensive and time-consuming, AFAIK.)
  - Vendors:  Write once for both platforms.
  - IBM:  Rather than customers moving to non-IBM platforms, when z/OS
  or z/VM don't meet their needs, those customers would have another option
  to stay with IBM.
   6. You can process both RecFM F records and RecFM V records with the
   same stages.
   7. Pipes can be used on both EXEC and DD JCL statements.  This is
   primarily for REXX-a-phobes.  Pipe commands in REXX are amazing; I've used
   the combination on both z/OS and z/VM (and predecessors).  Pipes with
   CLISTs is almost as good, AFAIK.
   8. Increased competitiveness for IBM hardware and software.  This would
   especially apply to UNIX customers who have exceeded the capabilities of
   their platforms.
   9. CMS/TSO Pipes is better than UNIX piping, and REXX is better than C.
   With today's processors, C's performance advantage over REXX is not
   significant, and is dwarfed by the low developer productivity (your
   bullet, your gun, your foot) of C.  Strategically using Pipes with REXX
   can give better performance than UNIX piping and C.
   10. Since both C++ and Pipes are internally pointer based, you could get
   similar benefits by using OO techniques exclusively.  How are you going to
   convert a COBOL and JCL application to C++?  Not as easily as going to REXX
   and Pipes.
   11. z/OS is a file-structure-aware operating system.  It does not
   use embedded control characters in text files or impose record-boundary
   knowledge on binary files.  Pipes reads and writes byte stream data by
   converting to and from record address-and-length descriptor format.  This
   means that, in most cases, any sequence of stages perform equally well on
   RecFM F, RecFM V, byte stream text data and deblocked UNIX binary files.
   12. Addresses staff shortages due to baby-boomer retirements and CoViD
   impacts.
   13. Reduces batch window requirements.  With less I/O, JOBs finish
   faster.
   14. It's my understanding that there are people inside IBM who are
   behind Pipes going into the z/OS base.  Pipes is part of the z/VM base.
   Can z/OS be that far behind?
   15. Is consistent with the policies of customers, vendors, and the public
   for combating global warming.  Fewer CPU cycles wasted means less heat
   to be dissipated and less electricity to be produced.  The UNIX stream and
   C string models are obsolete in this light; every character must go through
   the CPU to get to the end of a record or string.   Not so in Pipes or SAM
   access methods.  Rarely do you see a UNIX command with more than a dozen
   filters; Pipes commands go into the 100s or 1000s of stages.  In general,
   the more stages the better the 

Re: The Business Case for Pipes in the z/OS Base (was: Re: REXX - Interpret or Value - Which is better?)

2021-09-17 Thread Steve Thompson
EXECIO is not supported the same between z/os and CMS. This is a pain. 

GLOBALV probably needs porting to solve other problems. I was tempted to do 
that on the last project I was on where we had REXX working between the two 
platforms. 

Sent from my iPhone — small keyboarf, fat fungrs, stupd spell manglr. Expct 
mistaks 


> On Sep 17, 2021, at 3:57 PM, Hobart Spitz  wrote:
> 
> IMHO, the Business Cases on Pipes in the z/OS Base are as follows.  (Pipes
> is already available as part of BatchPipes.)
> 
> The case *for* Pipes in the z/OS base:
> 
>   1. Development costs would drop for customers, vendors, and IBM, since
>   everyone could use Pipes in their software.
>   2. Hardware usage would drop for customers.  In addition to avoiding
>   I/O, Pipes uses a record address-and-length descriptor.  A record can
>   flow from stage to stage with the only data movement being for changed
>   records.  Potential data needed by a stage could have already been in the
>   working set and/or cache-loaded by the previous stage.  (A methodology for
>   identifying the cost/benefits by JOB and application would allow the best
>   to be reworked first.  Thus Pipes would pay for itself in the shortest
>   amount of time.)
>   3. Product efficiency for vendors (IBM and others) would improve.
>   (Arguably it's the other side of the coin in #2.)
>   4. Tight integration with REXX, CLIST and JCL.
>   5. Portability to and from z/VM.  This breaks down differently for
>   different groups:
>  - Customers: Cheaper porting to/from z/OS.  (Porting to other IBM
>  Series is expensive and time-consuming, AFAIK.)
>  - Vendors:  Write once for both platforms.
>  - IBM:  Rather than customers moving to non-IBM platforms, when z/OS
>  or z/VM don't meet their needs, those customers would have another option
>  to stay with IBM.
>   6. You can process both RecFM F records and RecFM V records with the
>   same stages.
>   7. Pipes can be used on both EXEC and DD JCL statements.  This is
>   primarily for REXX-a-phobes.  Pipe commands in REXX are amazing; I've used
>   the combination on both z/OS and z/VM (and predecessors).  Pipes with
>   CLISTs is almost as good, AFAIK.
>   8. Increased competitiveness for IBM hardware and software.  This would
>   especially apply to UNIX customers who have exceeded the capabilities of
>   their platforms.
>   9. CMS/TSO Pipes is better than UNIX piping, and REXX is better than C.
>   With today's processors, C's performance advantage over REXX is not
>   significant, and is dwarfed by the low developer productivity (your
>   bullet, your gun, your foot) of C.  Strategically using Pipes with REXX
>   can give better performance than UNIX piping and C.
