On Wed, May 04, 2005 at 03:02:41PM -0500, Rod Adams wrote:
> John Macdonald wrote:
> 
> >The most common usage (and what people sometimes believe is
> >the *only* one) is as a generator - a coroutine which creates a
> >sequence of values as its "chunk" and always returns control
> >to its caller.  (This retains part of the subordinate aspect
> >of a subroutine.  While it has the ability to resume operation
> >from where it left off and so doesn't terminate as soon as it
> >has a partial result to pass on, it has the subordinate trait
> >of not caring who called it and not trying to exert any control
> >over which coroutine is next given control after completing a
> >"chunk").
> > 
> >
> [Rest of lengthy, but good explanation of coroutines omitted]
> 
> Question:
> 
> Do we not get all of this loveliness from lazy lists and the gather/take 
> syntax? Seems like that makes a pretty straightforward generator, and 
> even wraps it into a nice, complete object that people can play with.
> 
> Now, I'm all in favor of TMTOWTDI, but in this case, if there are no 
> other decent uses of co-routines, I don't see the need for AWTDI. 
> Gather/Take _does_ create a coroutine.
> 
> 
> If there are good uses for coroutines that gather/take does not address, 
> I'll gladly change my opinion. But I'd like to see some examples.
> FWIW, I believe that Patrick's example of the PGE returning matches 
> could be written with gather/take (if it were being written in P6).

Um, I don't recall what gather/take provides, so I may only be
addressing the limitations of lazy lists...

I mentioned Unix pipelines as an example.  The same concept
of a series of programs that treat each other as a data
stream translates to coroutines: each is a "mainline" routine
that treats the others as subroutines.  Take a simple
pipeline component, like, say, "tr".  When it is used in the
middle of a pipeline, it has command line arguments that
specify how it is to transform its data, and stdin and stdout
are connected to other parts of the pipeline.  It reads some
data, transforms it, and then writes the result.  Lather,
rinse, repeat.  A pipeline component program is easy to write
because it keeps its own state for the entire run but doesn't
have to worry about keeping track of any state for the other
parts of the pipeline.  This is just like a coroutine: since
a coroutine does not return at each step, it keeps its own
state; and since it simply resumes other coroutines, it does
not need to keep track of their state at all.

To change a coroutine into a subroutine, the replacement
subroutine has to be able, on each invocation, to recreate
its state to match where it left off, either by using private
state variables or by having the routine that calls it take
over the task of managing its state.  If pipeline components
were instead like subroutines rather than coroutines, then
whenever a process had computed some output data, instead of
using a "write" to pass the data on to an existing
coroutine-like process, it would have to create a new process
to handle that bit of data.

Using coroutines allows you to create the same sort of
pipeline within a single process; having each component
written as its own "mainline" that treats the others as data
sources and sinks to be read from and written to is very
powerful.  Lazy lists are similar to redirecting stdin from a
file at the head of a pipeline: that's fine if you already
have the data well specified.  But try writing a Perl shell
program that uses coroutines instead of separate processes to
handle pipelines, with a coroutine library to compose them;
that would be a much more complicated program to write using
subroutines instead of coroutines.
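
To make that concrete, here is a rough sketch of such a
pipeline in Python, whose generators are close enough in
shape to show the idea (the stage names and the particular
transforms are invented for illustration, not any proposed
Perl 6 API):

    import re
    import sys

    # A tr-like stage: read from whatever is upstream,
    # transform, and pass the result downstream.  Each stage
    # keeps its own state across chunks and knows nothing
    # about any other stage's state.
    def tr_stage(upstream, table):
        for line in upstream:
            yield line.translate(table)

    # A grep-like stage, just to have two components to compose.
    def grep_stage(upstream, pattern):
        for line in upstream:
            if re.search(pattern, line):
                yield line

    # Composed like the shell pipeline:  tr abc ABC | grep FOO
    pipeline = grep_stage(
        tr_stage(sys.stdin, str.maketrans('abc', 'ABC')),
        'FOO')
    for line in pipeline:
        sys.stdout.write(line)

Each stage is written as if it were the mainline, reading and
writing; only the composition at the bottom knows the shape
of the whole pipeline.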

The example of a compiler was also given: the lexer runs over
the input and turns it into tokens, the parser takes the
tokens and turns them into an internal pseudo-code form, the
optimizer takes the pseudo-code and shuffles it around into
pseudo-code that (one hopes) is better, the code generator
takes the pseudo-code and transforms it into Parrot machine
code, and the interpreter takes the Parrot machine code and
executes it.  They mostly connect together in a kind of
pipeline, but there can be dynamic patches to that pipeline.
A BEGIN block, for example, causes the interpreter to be
pulled in as soon as that chunk is complete, and if that code
includes a "use" it might cause a new pipeline of
lexer/parser/etc. to be set up to process an extra file right
now, while keeping the original pipeline intact to be resumed
in due course.  (This example also fits with Luke's
reservations about failing to distinguish clearly between
creating and resuming a coroutine: how are you going to start
a new parser if "calling" the parse subroutine will just
resume the instance that is already running instead of
creating a separate coroutine?)
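
Roughly, the create-vs-resume distinction looks like this
(again a Python sketch; "load" and the token handling are
stand-ins, not anything from the actual design):

    # Stand-in for fetching a used module's source text.
    SOURCES = {'Foo': 'x y z'}
    def load(module):
        return SOURCES[module]

    def lexer(text):
        for tok in text.split():
            yield tok

    # Each *call* to parser() creates a fresh suspended
    # instance; iterating it resumes that instance.  A nested
    # "use" can therefore spin up its own lexer/parser
    # pipeline while the outer one stays suspended mid-stream.
    def parser(tokens):
        for tok in tokens:
            if tok == 'use':
                module = next(tokens)
                for unit in parser(lexer(load(module))):
                    yield unit
            else:
                yield ('stmt', tok)

    for unit in parser(lexer('a use Foo b')):
        print(unit)

If "calling" parser always resumed the one already-running
instance, the nested pipeline above would be impossible.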

For many simple uses generators are exactly what you need,
but they have limits.  A more powerful coroutine mechanism can
easily provide the simple forms (and, I would expect, without
any serious loss of performance).
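
For what it's worth, Python's generators show the same
subsumption: one that only yields values out is the simple
generator form, and the very same machinery will also accept
a value on each resume, which is most of the way to a general
coroutine:

    # The simple form: values flow out only.
    def counter():
        n = 0
        while True:
            yield n
            n += 1

    # The fuller form: each resume can also pass a value in,
    # so data flows both ways across the suspension point.
    def running_total():
        total = 0
        while True:
            n = yield total    # suspend; resume via send()
            total += n

    rt = running_total()
    next(rt)             # prime: run to the first yield
    print(rt.send(5))    # 5
    print(rt.send(3))    # 8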
