That is the way PUTFILES works: with ADDPIPE.
It worked fine until it had to unravel a spool file carrying over 150
different CMS files: storage below the 16M line filled up.  The drawback is
indeed that with these ADDPIPEs all files remain open all the time (that is
why PUTFILES got a GROUPED option; with it, PUTFILES reverts to CALLPIPEs,
and each file is closed as soon as its records have been written).
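
A stripped-down sketch of that CALLPIPE idea (my own code, not the
actual PUTFILES source; it does one record per CALLPIPE just to keep
the example short, where PUTFILES with GROUPED would handle a whole
group at a time).  The point is that a CALLPIPE runs to completion
before control returns, so the >> stage ends right there and its file
is closed; stages chained in with ADDPIPE, and their open files, stay
around until the calling pipeline itself ends.  The FILE A filetype and
the "word 1 names the file" convention follow Rob's example below.

/* REXX filter sketch: append each record to the file named in word 1 */
signal on error
do forever
   'readto record'                          /* consume the next record */
   parse upper var record fn .              /* word 1 names the file   */
   'callpipe var record | >>' fn 'FILE A'   /* file closed on return   */
end
error: exit rc*(rc<>12)                     /* RC 12 is normal EOF     */
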
As for the original problem here: it is supposed to filter SCIF messages, so
I guess it sits in a long-running server, and keeping the files open until
the server stops is probably not a good idea.  So either EXECIO, or FILESLOW
with a FINIS every now and then, would be required.
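
The EXECIO route could look about like this (again a rough sketch of
my own, not code from the thread; the MESSAGE A filetype and the
"word 1 carries the file name" layout are only assumptions for the
example).  The FINIS option makes EXECIO close the file on every call,
and DISKW without a record number extends an existing file, so
messages for the same file simply accumulate while nothing stays open
between them:

/* REXX filter sketch: write each SCIF message, closing the file at once */
signal on error
do forever
   'readto record'                      /* next SCIF message             */
   parse var record fn text             /* split off the target filename */
   upper fn                             /* CMS file names are upper case */
   address command 'EXECIO 1 DISKW' fn 'MESSAGE A (FINIS STRING' text
end
error: exit rc*(rc<>12)                 /* RC 12 is normal EOF           */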

2009/10/22 Rob van der Heij <rvdh...@gmail.com>

> On Thu, Oct 22, 2009 at 9:56 PM, Hughes, Jim <jim.hug...@doit.nh.gov>
> wrote:
> > A sipping pipeline maybe.  Call this stage for each SCIF message.
>
> When you expect a fair number of messages for a relatively small
> number of files, it may be more fun to spawn a little prefix pipeline
> for each recipient:
>
> do forever
>  'peekto rec'
>  parse var rec fn .
>  'addpipe (end \)',
>      '\ *.input.0: | p: pick w1 == ,'fn', | >' fn 'file a',
>      '\ p: | *.input.0:'
> end
>
> The first record will be inspected to find the filename, and the
> "addpipe" re-configures the pipeline by inserting a few stages in
> front of this one. The "pick" will select the matching records and
> pass them to the next stage, which writes them to disk; only the
> rest of the records pass through to this stage. So the next record
> that "peekto" sees is the
> first one meant for another file, and the same "addpipe" chains in
> another pipe to divert those records, etc.
>
> A neat part of the trick is that the "peekto" will not consume the
> record, so it will still be in the pipeline by the time the addpipe
> inserts the "pick" and ">" stages. So it would still be able to flow
> there.
>
> Rob
>



--
Kris Buelens,
IBM Belgium, VM customer support
