On Wed, 20 May 2020 08:38:07 -0400, you wrote:

>I was a bit surprised by the behavior Alain described as well, but the 
>description of the STEM stage says:
>   When APPEND is specified, writing starts with <stem>n where n
>   is one more than the value returned for <stem>0. 
>
>Stated more clearly, when APPEND is specified, only the appended elements 
>of the array are written to the output stream.

Well, sort of, but really nothing from the stem is written to the output
stream, regardless of whether APPEND or FROM is specified.  The input file
is copied to the output stream unchanged.

The part I think you imagined wrong was the same STEM stage both reading
from and writing to the stem.  STEM *either* reads from or writes to the
stem, depending on whether it's first in the pipeline.
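
For example, with TABLE1.0 already set to 2 (the stem contents and the
literal record here are just made up for illustration):

  PIPE stem table1. | cons
  PIPE literal gamma | stem table1. append | cons

The first pipeline reads TABLE1.1 and TABLE1.2 into the pipeline and
displays them; the second stores the record "gamma" as TABLE1.3 (bumping
TABLE1.0 to 3), and only that record reaches the console.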

What you envisioned would be done with two STEM stages, one to read and
one to write--and the timing might be a little tricky.  The obvious thing
would be to use PREFACE STEM to read the original stem before the stack:

  PIPE stack | stem table1. append | preface stem table1. | cons
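
In a REXX exec that might look something like this (a rough sketch I
haven't run; the stem contents and the stacked line are made up):

  /* Append whatever is on the program stack to TABLE1., show old + new */
  table1.0 = 2                    /* pretend the stem already holds two lines */
  table1.1 = 'alpha'
  table1.2 = 'beta'
  queue 'gamma'                   /* one line waiting on the program stack    */

  'PIPE stack | stem table1. append | preface stem table1. | cons'

  say 'TABLE1.0 is now' table1.0  /* 3 once the stacked line has been appended */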

But is that safe?  The doc for STEM says it commits to level 0 once it's
verified that the REXX environment exists.  If the STEM added by PREFACE
doesn't read the number of variables from TABLE1.0 before committing, then
the other stages could run first, and the STEM APPEND stage could add the
first line from the stack to the stem before that count is read.  If that
happened, that line would be doubled in the output (though not in the stem).

So it might be better to do it the other way around, and move STEM APPEND
inside a pipeline added by APPEND:

  PIPE stem table1. | append stack || stem table1. append | cons

(where doubling the pipe symbol escapes it, so it's passed as part of the
argument to APPEND)
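
In other words, the pipeline APPEND actually runs is

  stack | stem table1. append

and its output (the stacked lines, which are also stored into TABLE1. as
they pass through) follows the original stem contents on the way to CONS.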

Then you know the entire original stem contents will be read before STACK
and STEM APPEND are even added to the pipeline.

I suppose we could ask for a new keyword, maybe READ, to avoid this worry
and do it all in one stage:

  STEM stem. [APPEND|FROM next [READ [start]]] 

Read the stem into the pipeline, starting from index 1 or start, up to
stem.0 or next-1, and then save the input file in the stem starting at
stem.0+1 or next.
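
With that (entirely hypothetical) keyword, the whole exercise above would
collapse to one stage:

  PIPE stack | stem table1. append read | cons

which would first write the existing TABLE1. elements to its output, then
copy the stacked records through while appending them to the stem.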

I'm not sure what I'd expect when start > next, though:  Maybe fail, maybe
just skip reading the stem, maybe don't even copy records from input to
output until the index reaches start.

¬R
